Artificial Intelligence (AI)
NDSS 2024 Keynote - AI, Encryption, and the Sins of the 90s, Meredith Whittaker
This keynote will look at the connections between where we are now and how we got here. Connecting the “Crypto Wars”, the role of encryption and privacy, and ultimately the hype of AI… all through the lens of Signal.
Full text of Meredith's talk: https://signal.org/blog/pdfs/ndss-key...
Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry
in Futurism
A few years back, a writer in a developing country started doing contract work for a company called AdVon Commerce, getting a few pennies per word to write online product reviews.
But the writer — who like other AdVon sources interviewed for this story spoke on condition of anonymity — recalls that the gig's responsibilities soon shifted. Instead of writing, they were now tasked with polishing drafts generated using an AI system the company was developing, internally dubbed MEL.
"They started using AI for content generation," the former AdVon worker told us, "and paid even less than what they were paying before."
The former writer was asked to leave detailed notes on MEL's work — feedback they believe was used to fine-tune the AI which would eventually replace their role entirely.
The situation continued until MEL "got trained enough to write on its own," they said. "Soon after, we were released from our positions as writers."
TechScape: How cheap, outsourced labour in Africa is shaping AI English
in The Guardian
In late March, AI influencer Jeremy Nguyen, at the Swinburne University of Technology in Melbourne, highlighted one: ChatGPT’s tendency to use the word “delve” in responses. No individual use of the word can be definitive proof of AI involvement, but at scale it’s a different story. When half a percent of all articles on research site PubMed contain the word “delve” – 10 to 100 times more than did a few years ago – it’s hard to conclude anything other than that an awful lot of medical researchers are using the technology to, at best, augment their writing.
[…]
Hundreds of thousands of hours of work go into providing enough feedback to turn an LLM into a useful chatbot, and that means the large AI companies outsource the work to parts of the global south, where anglophonic knowledge workers are cheap to hire.
[…]
I said “delve” was overused by ChatGPT compared to the internet at large. But there’s one part of the internet where “delve” is a much more common word: the African web. In Nigeria, “delve” is much more frequently used in business English than it is in England or the US. So the workers training their systems provided examples of input and output that used the same language, eventually ending up with an AI system that writes slightly like an African.
And that’s the final indignity. If AI-ese sounds like African English, then African English sounds like AI-ese. Calling people a “bot” is already a schoolyard insult (ask your kids; it’s a Fortnite thing); how much worse will it get when a significant chunk of humanity sounds like the AI systems they were paid to train?
When ChatGPT founder had ‘no idea’ how to monetise product
in Mint
These people have no idea how computers work, how brains work, or how to define intelligence. They just believe that if they get enough transistors together, feed it enough data and the electricity requirements of a large industrialised nation, they will eventually create God. It's the ultimate cargo cult. They're drunk on their own snake oil. And they're among the wealthiest and most powerful people in the world, instead of being institutionalised for their own safety. It's so funny/scary.
The video shows Sam Altman in conversation with Connie Loizos. When Loizos asked Altman if he was planning to monetise his product, Sam Altman replied: “The honest answer is, we have no idea.”
Sam Altman further said that they had no plans to make any revenue. "We never made any revenue. We have no current plans to make any revenue. We have no idea how we may one day generate revenue," he said.
Speaking about the investors, Sam Altman said, “We have made soft promises to investors that once we build this sort of generally intelligent system, basically we will ask it to figure out a way to generate an investment return for you."
As the audience laughed, Sam Altman said, “You can laugh. It's all right. But it is what I actually believe is going to happen.”
AI really is smoke and mirrors
We are at a unique juncture in the AI timeline; one in which it’s still remarkably nebulous as to what generative AI systems actually can and cannot do, or what their actual market propositions really are — and yet it’s one in which they nonetheless enjoy broad cultural and economic interest.
It’s also notably a point where, if you happen to be, say, an executive or a middle manager who’s invested in AI but it’s not making you any money, you don’t want to be caught admitting doubt or asking, now, in 2024, ‘well what is AI actually, and what is it good for, really?’ This combination of widespread uncertainty and dominance of the zeitgeist, for the time being, continues to serve the AI companies, who lean even more heavily on mythologizing — much more so than, say, Microsoft selling Office software suites or Apple hawking the latest iPhone — to push their products. In other words, even now, this far into its reign over the tech sector, “AI” — a highly contested term already — is, largely, what its masters tell us it is, as well as how much we choose to believe them.
And that, it turns out, is an uncanny echo of the original smoke and mirrors phenomenon from which that politics journo cribbed the term. The phrase describes the then-high-tech magic lanterns of the 17th and 18th centuries and the illusionists and charlatans who exploited them to convince an excitable and paying public that they could command great powers — including the ability to illuminate demons and monsters or raise the spirits of the dead — while tapping into widespread anxieties about too-fast progress in turbulent times. I didn’t set out to write a whole thing about the origin of the smoke and mirrors and its relevance to Our Modern Moment, but, well, sometimes the right rabbit hole finds you at the right time.
We’re told AI neural networks ‘learn’ the way humans do. A neuroscientist explains why that’s not the case
in The Conversation
Neural nets are typically trained by “supervised learning”. So they’re presented with many examples of an input and the desired output, and then gradually the connection weights are adjusted until the network “learns” to produce the desired output.
To learn a language task, a neural net may be presented with a sentence one word at a time, and will slowly learn to predict the next word in the sequence.
This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves.
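To make the mechanism in that excerpt concrete, here is a minimal sketch (mine, not from the article) of the supervised next-word-prediction loop it describes: a single weight matrix nudged, example by example, toward the desired output. The toy corpus, model shape, learning rate, and epoch count are all illustrative assumptions.

```python
import numpy as np

# Toy training data: each word is the "input", the word that follows it is the "desired output".
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # connection weights: current word -> scores for every next word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for _ in range(200):
    for current, desired in zip(corpus[:-1], corpus[1:]):
        x = np.zeros(V)
        x[idx[current]] = 1.0                  # input: one-hot encoding of the current word
        probs = softmax(W @ x)                 # the network's guess at the next word
        target = np.zeros(V)
        target[idx[desired]] = 1.0             # the desired output from the training data
        W -= lr * np.outer(probs - target, x)  # adjust the weights toward the desired output

probs = softmax(W @ np.eye(V)[idx["sat"]])
print("most likely word after 'sat':", vocab[int(np.argmax(probs))])  # -> 'on'
```

The same loop, scaled up to billions of weights and trillions of words of text, is the “supervised learning” the article is contrasting with how people actually learn.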
AI hiring tools may be filtering out the best job applicants
for British Broadcasting Corporation (BBC)
"The problem [is] no-one knows exactly where the harm is," she explains. And, given that companies have saved money by replacing human HR staff with AI – which can process piles of resumes in a fraction of the time – she believes firms may have little motivation to interrogate kinks in the machine.
"One biased human hiring manager can harm a lot of people in a year, and that's not great. But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants" – Hilke Schellman
From her research, Schellmann is also concerned screening-software companies are "rushing" underdeveloped, even flawed products to market to cash in on demand. "Vendors are not going to come out publicly and say our tool didn't work, or it was harmful to people", and companies who have used them remain "afraid that there's going to be a gigantic class action lawsuit against them".
How should regulators think about "AI"?
On Thursday 9/28, I had the opportunity to speak at a virtual roundtable convened by Congressman Bobby Scott on "AI in the Workplace: New Crisis or Longstanding Challenge?". The roundtable was a closed meeting, but sharing our opening remarks is allowed, so I am posting mine here.
Losing the imitation game
The intersection of AI hype with that elision of complexity seems to have produced a kind of AI booster fanboy, and they're making personal brands out of convincing people to use AI to automate programming. This is an incredibly bad idea. The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.
But it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn. That holds true beyond the scope of any team, or project, or even company. Building those mental models is itself a skill that has to be learned. We do that by doing it, there's not another way. As people, and as a profession, we need the early career jobs so that we can learn how to do the later career ones. Giving those learning opportunities to computers instead of people is profoundly myopic.
Effective obfuscation
The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?
Dig a little deeper, and the rationalism and utilitarianism emerge. […]
The problem with removing the messy, squishy, human part of decisionmaking is that you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.