Artificial Intelligence (AI)

for the British Broadcasting Corporation (BBC)

"The problem [is] no-one knows exactly where the harm is," she explains. And, given that companies have saved money by replacing human HR staff with AI – which can process piles of resumes in a fraction of the time – she believes firms may have little motivation to interrogate kinks in the machine. 

"One biased human hiring manager can harm a lot of people in a year, and that's not great. But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants" – Hilke Schellman

From her research, Schellmann is also concerned that screening-software companies are "rushing" underdeveloped, even flawed, products to market to cash in on demand. "Vendors are not going to come out publicly and say our tool didn't work, or it was harmful to people", and companies that have used them remain "afraid that there's going to be a gigantic class action lawsuit against them".

by Emily M. Bender 

On Thursday 9/28, I had the opportunity to speak at a virtual roundtable convened by Congressman Bobby Scott on "AI in the Workplace: New Crisis or Longstanding Challenge?". The roundtable was a closed meeting, but sharing our opening remarks is allowed, so I am posting mine here.

by Jennifer Moore 

The intersection of AI hype with that elision of complexity seems to have produced a kind of AI booster fanboy, and they're making personal brands out of convincing people to use AI to automate programming. This is an incredibly bad idea. The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably.

Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.

But it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn. That holds true beyond the scope of any team, or project, or even company. Building those mental models is itself a skill that has to be learned. We do that by doing it; there's no other way. As people, and as a profession, we need the early career jobs so that we can learn how to do the later career ones. Giving those learning opportunities to computers instead of people is profoundly myopic.

by Molly White 

The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?

Dig a little deeper, and the rationalism and utilitarianism emerge. […]

The problem with removing the messy, squishy, human part of decision-making is that you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.

via Dan Gillmor
by Cam Wilson in Crikey  

Amid the flurry of misinformation and misleading online content about the Israel-Hamas war that’s circulating on social media, these images, too, are being used without disclosure of whether they are real or not.

A handful of small online news outlets, blogs and newsletters have featured “Conflict between Israel and Palestine generative AI” without marking it as the product of generative AI. It’s not clear whether these publications are aware it is a fake image.