By Samantha Floreani

Australia’s teen social media ban is a flop. But there’s no joy in ‘I told you so’

by Samantha Floreani in The Guardian  

Well said:

This week, it was revealed that despite the Australian government’s world-first teen social media ban, around seven in 10 children remain on major platforms. What’s more, the eSafety report shows that there has been no notable change in cyberbullying or image-based abuse reported by children.

For a policy that was touted as the solution to keeping kids safe from harm online, this is a damning indictment of the ban’s effectiveness.

Who could possibly have predicted that this wasn’t going to work? Well, lots of people.

Countless experts were ignored, including those in the fields of digital wellbeing, digital rights advocacy, youth mental health and more than 140 academics and 20 Australian civil society organisations. Even the eSafety commissioner herself had doubts, and internally the government was aware of a lack of evidence to support the ban before they passed the legislation anyway.


Ultimately, the fundamental problem with age-gating is that it fails to address any of the root problems with our current online landscape – that is, the extractive business models and pernicious design features of mainstream tech companies. We all exist in a highly commercialised information ecosystem, rife with algorithmically amplified misinformation, scams, harmful content and AI slop. Children are particularly vulnerable to these issues but the reality is that it impacts everyone, even if you’re blissfully absent from Facebook or Instagram.

The social media ban is working just as predicted (that is to say, it’s not). But what other, more effective alternatives might the Australian government have pursued while spending the better part of two years chasing this red herring? What if, instead of trying and failing to kick kids off social media, we focused our attention on the reasons why being online is so often detrimental in the first place?

Age verification is coming to search engines in Australia – with huge implications for privacy and inclusion

by Samantha Floreani in The Guardian  

If this is the first time you’re hearing about it, you’re not alone. Despite the significance of the changes, these latest rules are the result of industry codes, which differ from regular legislation. These codes don’t go through parliament. Instead, they’re developed by the tech industry and registered by the eSafety commissioner in a process called co-regulation. On one hand, this can be good: it can allow for more flexibility or technology-specific detail that is less appropriate in legislation. On the other, it creates a risk of industry co-option, and by bypassing parliamentary process, it can give an enormous amount of power to an unelected official (in this case, the eSafety commissioner).

Greens senator David Shoebridge has called the implications of age verification for search engines “staggering” and noted that “these proposals don’t have to go through an elected parliament and we can’t vote them down no matter how significant concerns are. That combined with lack of public input is a serious issue.”

The age verification policy development process has been littered with blunders that make a mockery of meaningful consultation and evidence-based policy development. It is particularly striking that these codes were drafted before the completion of the government’s $6.5m trial into the efficacy of age assurance. Later, the trial’s preliminary findings conceded the technology is not guaranteed to be effective, and noted “concerning evidence” that some technology providers were seeking to collect too much personal information.

While a government-commissioned survey on the teen social media ban found overwhelming support in theory, it also found most people have no idea what that means in practice, with many uncomfortable with the methods it might entail – such as biometric face scanning or handing over your credit card details. And while there was much fanfare around the social media ban, it’s not clear there is a social licence to extend this approach to search engines and beyond. It seems many people may be unpleasantly surprised.

Who does Woolworths’ tracking and timing of its workers serve? It’s certainly not the customers

by Samantha Floreani in The Guardian  

Fears about losing jobs to automation have become commonplace, but according to United Workers Union (UWU) research and policy officer Lauren Kelly, who researches labour and supermarket automation, rather than manual work being eliminated, it is often augmented by automation technologies. This broadens the concern from one of job loss to more wide-ranging implications for the nature of work itself. That is, she says, “rather than replace human workers with robots, many are being forced to work like robots”.

In addition to the monitoring tactics used on workers, supermarkets also direct their all-seeing eye towards customers through an array of surveillance measures: cameras track individuals through stores, “smart” exit gates remain closed until payment, overhead image recognition at self-serve checkouts assesses whether you’re actually weighing brown onions, and so on. Woolworths even invests in a data-driven “crime intelligence platform”, which raises significant privacy concerns, shares data with police and claims that it can predict crime before it happens – not just the plot of Minority Report but also an offshoot of the deeply problematic concept of “predictive policing”. Modern supermarkets have become a testing ground for an array of potentially rights-infringing technologies.