Technology

Australia’s teen social media ban is a flop. But there’s no joy in ‘I told you so’

by Samantha Floreani in The Guardian  

Well said:

This week, it was revealed that despite the Australian government’s world-first teen social media ban, around seven in 10 children remain on major platforms. What’s more, the eSafety report also shows that there has been no notable change in cyberbullying or image-based abuse reported by children.

For a policy that was touted as the solution to keeping kids safe from harm online, this is a damning indictment of the ban’s effectiveness.

Who could possibly have predicted that this wasn’t going to work? Well, lots of people.

Countless experts were ignored, including those in the fields of digital wellbeing, digital rights advocacy, youth mental health and more than 140 academics and 20 Australian civil society organisations. Even the eSafety commissioner herself had doubts, and internally the government was aware of a lack of evidence to support the ban before they passed the legislation anyway.

[…]

Ultimately, the fundamental problem with age-gating is that it fails to address any of the root problems with our current online landscape – that is, the extractive business models and pernicious design features of mainstream tech companies. We all exist in a highly commercialised information ecosystem, rife with algorithmically amplified misinformation, scams, harmful content and AI slop. Children are particularly vulnerable to these issues but the reality is that it impacts everyone, even if you’re blissfully absent from Facebook or Instagram.

Not only is the social media ban working just as predicted (that is to say, it isn't) – what other, more effective alternatives might the Australian government have pursued instead of spending the better part of two years chasing this red herring? What if, instead of trying and failing to kick kids off social media, we focused our attention on the reasons why being online is so often detrimental in the first place?

Age verification is coming to search engines in Australia – with huge implications for privacy and inclusion

by Samantha Floreani in The Guardian  

If this is the first time you’re hearing about it, you’re not alone. Despite the significance of the changes, these latest rules are the result of industry codes, which differ from regular legislation. These codes don’t go through parliament. Instead, they’re developed by the tech industry and registered by the eSafety commissioner in a process called co-regulation. On one hand, this can be good: it can allow for more flexibility or technology-specific detail that is less appropriate in legislation. On the other: it creates a risk of industry co-option, and by bypassing parliamentary process, it can give an enormous amount of power to an unelected official (in this case, the eSafety commissioner).

Greens senator David Shoebridge has called the implications of age verification for search engines “staggering” and noted that “these proposals don’t have to go through an elected parliament and we can’t vote them down no matter how significant concerns are. That combined with lack of public input is a serious issue.”

The age verification policy development process has been littered with blunders that make a mockery of meaningful consultation and evidence-based policy development. It is particularly striking that these codes were drafted before the completion of the government’s $6.5m trial into the efficacy of age assurance. Later, the trial’s preliminary findings conceded the technology is not guaranteed to be effective, and noted “concerning evidence” that some technology providers were seeking to collect too much personal information.

While a government-commissioned survey on the teen social media ban found overwhelming support in theory, it also found most people have no idea what that means in practice, with many uncomfortable with the methods it might entail – such as biometric face scanning or handing over your credit card details. And while there was much fanfare around the social media ban, it’s not clear there is a social licence to extend this approach to search engines and beyond. It seems many people may be unpleasantly surprised.

What is driving the AI hype machine? — Cory Doctorow

in Al Jazeera  for YouTube  

This is a really good, succinct explainer for the people in your life who have no precise, coherent definition of "intelligence" beyond I know it when I see it (which is you, me, and everybody else), and/or a belief that computers are fundamentally magical (which appears to be most people in the world).


Artificial intelligence is routinely framed as unstoppable – a technology the world must adapt to, not question. But as companies invest hundreds of billions and the hype accelerates, scrutiny has fallen away. Cory Doctorow on who controls the story around AI and why past tech “revolutions” offer a warning.

Algorithm-based tool for home support funding is cruel and inhumane, Australian aged care workers warn

in The Guardian  

Mark Aitken, a registered nurse for 39 years who spent 16 years in aged care roles including assessing elderly people for support and funding, said he quit his job in regional Victoria just four months into using the tool.

[…]

“Eight times out of 10, the outcome was different to one that I would have recommended, or my colleagues would have recommended,” Aitken said.

It follows previous controversies over automated decision-making tools being used by the government, including the robodebt welfare scandal, and concerns about algorithm-driven disability funding through the NDIS.

The IAT user guide does not explain how the algorithm weighs risk, need or complexity, and Aitken said this information was never revealed to assessors.

When he asked at a government seminar about the evaluation framework, including what data was being collected, how accuracy would be assessed, and whether results would be publicly reported, he said he felt “shut down”.

“I left my job because I didn’t want to be part of a system that removed the ultimate decision-making about support from real, experienced people who care,” he said.

“The government valued the algorithm more than people with skills, intelligence and knowledge.”

He said some assessors began “gaming” the system, inputting information they knew would generate the level of care the person needed even if that information did not accurately reflect their situation.

“People shouldn’t have to put in fake information,” Aitken said. “I just started to feel like it was going to be another robodebt, I became very uncomfortable, and just felt the tool wasn’t ethical.”

via John Holmes

Thinking Through...The AI Con & Deconstructing the Hype

by Emily M. Bender, Alex Hanna for YouTube  

Most interviews with Emily and Alex have assumed quite a bit of prior knowledge. This one not so much, so it's a good explainer for laypersons:


Dr. Allison Lester sits down with Dr. Emily M. Bender and Dr. Alex Hanna, authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, for a conversation about what ChatGPT is, what it is pretending to be, and what we lose when we treat it like an all-knowing answer engine.

Together they ask: What is a large language model, actually? Why does “search engine” framing mislead people so quickly? What gets erased when we focus on convenience, from labor and surveillance to environmental cost?

They talk resistance, agency, and the classroom, including why banning is a dead end, how to protect learning without turning teaching into policing, and what it means to be human together in an era of synthetic text.

It is no longer safe to move our governments and societies to US clouds

by Bert Hubert 

Not only is it scary to have all your data available to US spying, it is also a huge risk for your business/government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And not only will everything then stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario, something like this has already happened.

Here and there, some parts of at least the Dutch government are deciding not to migrate EVERYTHING to the US (kudos to the government workers who are fighting for this!).

But even here, the details of Dutch policy are that our data will only ‘for now’ stay on our own servers. Experts are also doubtful whether it’s actually possible with the current “partial cloud” plan to keep the data here exclusively.

And then we come to the apparent reason why we are putting our head on Trump’s chopping block: “American software is just so easy to use”.

Personally, I don’t know many fans of MS Teams, Office, and Outlook. We are, however, very used to these software products. We’ve become quite good at using them.

But this brings us to the unbearable conclusion that we are entrusting all our data and business processes to the new King of America because we can’t be bothered to get used to a different word processor, or make an effort to support other software.

The cognitive and moral harms of platform decay

Platform decay is the phenomenon of major internet platforms, such as Google search, Facebook, and Amazon, systematically declining in quality in recent years. This decline in quality is attributed to the particular business model of these platforms and its harms are usually understood to be violations of principles of economic fairness and of inconveniencing users. In this article, we argue that the scope and nature of these harms are underappreciated. In particular, we establish that platform decay constitutes both a cognitive and moral harm to its users. We make this case by arguing that platforms function as cognitive scaffolds or extensions, as understood by the extended mind approach to cognition. It is then a straightforward implication that platform decay constitutes cognitive damage to a platform’s users. This cognitive damage is a harm on its own; however, it can also undermine cognitive capacities that virtue ethicists argue are necessary for developing a virtuous character. We will focus on this claim in regards to the capacity to pay attention, a capacity that platform decay targets specifically. Platform decay therefore also constitutes both cognitive and moral harm, which simultaneously affects billions of people.

Optus’s triple zero debacle is further proof of the failure of the neoliberal experiment

by John Quiggin in The Guardian  

A nice little potted history of Australian telecommunication privatisation failure:

A closer look at the record tells a different story. Technological progress in telecommunications produced a steady reduction in prices throughout the 20th century, taking place around the world and regardless of the organisational structure. The shift from analog to digital telecommunications accelerated the process. Telecom Australia, the statutory authority that became Telstra, recorded total factor productivity growth rates as high as 10% per year, remaining profitable while steadily reducing prices.

But for the advocates of neoliberal microeconomic reform, this wasn’t enough. They hoped, or rather assumed, that competition would produce both better outcomes for consumers and a more efficient rollout of physical infrastructure. […]

The failures emerged early. Seeking to cement their positions before the advent of open competition, Telstra and Optus spent billions rolling out fibre-optic cable networks. But rather than seeking to maximise total coverage, the two networks were virtually parallel, a result that is a standard prediction of economic theory. The rollout stopped when the market was fully opened in 1997, leaving parts of urban Australia with two redundant fibre networks and the rest of the country with none.

The next failure came with the rollout of broadband. Under public ownership, this would have been a relatively straightforward matter. But the newly privatised Telstra played hardball, demanding a system that would cement its monopoly position in fixed-line infrastructure. The end result was the need to return to public ownership with the national broadband network, while paying Telstra handsomely for access to ducts and wires that the public had owned until a few years previously.

Meanwhile the hoped-for competition in mobile telephony has failed to emerge. The near-duopoly created in 1991, with Telstra as the dominant player and Optus playing second fiddle, has endured for more than 30 years. 

Who does Woolworths’ tracking and timing of its workers serve? It’s certainly not the customers

by Samantha Floreani in The Guardian  

Fears about losing jobs to automation have become commonplace, but according to United Workers Union (UWU) research and policy officer Lauren Kelly, who researches labour and supermarket automation, rather than manual work being eliminated, it is often augmented by automation technologies. This broadens the concern from one of job loss to more wide-ranging implications for the nature of work itself. That is, she says, “rather than replace human workers with robots, many are being forced to work like robots”.

In addition to the monitoring tactics used on workers, supermarkets also direct their all-seeing eye towards customers through an array of surveillance measures: cameras track individuals through stores, “smart” exit gates remain closed until payment, overhead image recognition at self-serve checkouts assesses whether you’re actually weighing brown onions, and so on. Woolworths even invests in a data-driven “crime intelligence platform”, which raises significant privacy concerns, shares data with police and claims that it can predict crime before it happens – not just the plot of Minority Report but also an offshoot of the deeply problematic concept of “predictive policing”. Modern supermarkets have become a testing ground for an array of potentially rights-infringing technologies.

Samsung caught faking zoom photos of the Moon

in The Verge  

For years, Samsung “Space Zoom”-capable phones have been known for their ability to take incredibly detailed photos of the Moon. But a recent Reddit post showed in stark terms just how much computational processing the company is doing, and — given the evidence supplied — it feels like we should go ahead and say it: Samsung’s pictures of the Moon are fake. 

[…]

The test of Samsung’s phones conducted by Reddit user u/ibreakphotos was ingenious in its simplicity. They created an intentionally blurry photo of the Moon, displayed it on a computer screen, and then photographed this image using a Samsung S23 Ultra. As you can see below, the first image on the screen showed no detail at all, but the resulting picture showed a crisp and clear “photograph” of the Moon. The S23 Ultra added details that simply weren’t present before. There was no upscaling of blurry pixels and no retrieval of seemingly lost data. There was just a new Moon — a fake one.