Artificial Intelligence (AI)

AI really is smoke and mirrors

by Brian Merchant 

We are at a unique juncture in the AI timeline, one in which it’s still remarkably unclear what generative AI systems actually can and cannot do, or what their actual market propositions really are — and yet one in which they nonetheless enjoy broad cultural and economic interest.

It’s also notably a point where, if you happen to be, say, an executive or a middle manager who’s invested in AI but it’s not making you any money, you don’t want to be caught admitting doubt or asking, now, in 2024, ‘well what is AI actually, and what is it good for, really?’ This combination of widespread uncertainty and dominance of the zeitgeist, for the time being, continues to serve the AI companies, who lean even more heavily on mythologizing — much more so than, say, Microsoft selling Office software suites or Apple hawking the latest iPhone — to push their products. In other words, even now, this far into its reign over the tech sector, “AI” — a highly contested term already — is, largely, what its masters tell us it is, as well as how much we choose to believe them.

And that, it turns out, is an uncanny echo of the original smoke-and-mirrors phenomenon from which that politics journo cribbed the term. The phrase describes the then-high-tech magic lanterns of the 17th and 18th centuries and the illusionists and charlatans who exploited them to convince an excitable and paying public that they could command great powers — including the ability to illuminate demons and monsters or raise the spirits of the dead — while tapping into widespread anxieties about too-fast progress in turbulent times. I didn’t set out to write a whole thing about the origin of ‘smoke and mirrors’ and its relevance to Our Modern Moment, but, well, sometimes the right rabbit hole finds you at the right time.

via Cory Doctorow

We’re told AI neural networks ‘learn’ the way humans do. A neuroscientist explains why that’s not the case

in The Conversation  

Neural nets are typically trained by “supervised learning”. They’re presented with many examples of an input and the desired output, and the connection weights are gradually adjusted until the network “learns” to produce the desired output.

To learn a language task, a neural net may be presented with a sentence one word at a time, and will slowly learn to predict the next word in the sequence.

This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves. 
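The training loop the excerpt describes — show an input, show the desired output, nudge the connection weights — can be made concrete with a toy sketch. This is a minimal illustration in plain NumPy, not any real system: the six-word sentence, the single weight matrix, and the `predict_next` helper are all my own assumptions for demonstration.

```python
# Toy sketch of "supervised learning" on a next-word task:
# each training pair is (current word, desired next word), and the
# weights are gradually adjusted until the network produces it.
import numpy as np

sentence = "the cat sat on the mat".split()
vocab = sorted(set(sentence))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # connection weights

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Supervised pairs: input word -> desired ("right") next word
pairs = [(idx[a], idx[b]) for a, b in zip(sentence, sentence[1:])]

for _ in range(500):                # many passes over the examples
    for x, y in pairs:
        p = softmax(W[x])           # predicted next-word distribution
        grad = p.copy()
        grad[y] -= 1.0              # cross-entropy gradient
        W[x] -= 0.5 * grad          # nudge weights toward the answer

def predict_next(word):
    return vocab[int(np.argmax(softmax(W[idx[word]])))]
```

After training, `predict_next("cat")` returns `"sat"` — the point being that the “right” answer was supplied at every step, which is exactly the supervision humans mostly do without.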

AI hiring tools may be filtering out the best job applicants

for British Broadcasting Corporation (BBC)  

"The problem [is] no-one knows exactly where the harm is," she explains. And, given that companies have saved money by replacing human HR staff with AI – which can process piles of resumes in a fraction of the time – she believes firms may have little motivation to interrogate kinks in the machine. 

"One biased human hiring manager can harm a lot of people in a year, and that's not great. But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants" – Hilke Schellman

From her research, Schellmann is also concerned screening-software companies are "rushing" underdeveloped, even flawed products to market to cash in on demand. "Vendors are not going to come out publicly and say our tool didn't work, or it was harmful to people", and companies who have used them remain "afraid that there's going to be a gigantic class action lawsuit against them".

How should regulators think about "AI"?

by Emily M. Bender 

On Thursday 9/28, I had the opportunity to speak at a virtual roundtable convened by Congressman Bobby Scott on "AI in the Workplace: New Crisis or Longstanding Challenge?". The roundtable was a closed meeting, but sharing our opening remarks is allowed, so I am posting mine here.


Losing the imitation game

by Jennifer Moore 

The intersection of AI hype with that elision of complexity seems to have produced a kind of AI booster fanboy, and they're making personal brands out of convincing people to use AI to automate programming. This is an incredibly bad idea. The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably.

Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.

But it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn. That holds true beyond the scope of any team, or project, or even company. Building those mental models is itself a skill that has to be learned. We do that by doing it, there's not another way. As people, and as a profession, we need the early career jobs so that we can learn how to do the later career ones. Giving those learning opportunities to computers instead of people is profoundly myopic.

Effective obfuscation

by Molly White 

The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?

Dig a little deeper, and the rationalism and utilitarianism emerge. […]

The problem with removing the messy, squishy, human part of decision-making is that you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.

via Dan Gillmor

Adobe is selling fake AI images of the war in Israel-Gaza

by Cam Wilson in Crikey  

Amid the flurry of misinformation and misleading online content about the Israel-Hamas war that’s circulating on social media, these images, too, are being used without disclosure of whether they are real or not.

A handful of small online news outlets, blogs and newsletters have featured “Conflict between Israel and Palestine generative AI” without marking it as the product of generative AI. It’s not clear whether these publications are aware the image is fake.