Technology

It is no longer safe to move our governments and societies to US clouds

by Bert Hubert 

Not only is it scary to have all your data available to US spying, it is also a huge risk for your business or government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And when everything does stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario: something like this has already happened.

Here and there, some parts of at least the Dutch government are deciding not to migrate EVERYTHING to the US (kudos to the government workers who are fighting for this!).

But even here, the fine print of Dutch policy is that our data will stay on our own servers only ‘for now’. Experts also doubt whether the current “partial cloud” plan can actually keep the data here exclusively.

And then we come to the apparent reason why we are putting our head on Trump’s chopping block: “American software is just so easy to use”.

Personally, I don’t know many fans of MS Teams, Office, and Outlook. We are, however, very used to these software products. We’ve become quite good at using them.

But this brings us to the unbearable conclusion that we are entrusting all our data and business processes to the new King of America because we can’t be bothered to get used to a different word processor, or make an effort to support other software.

The cognitive and moral harms of platform decay

Platform decay is the phenomenon of major internet platforms, such as Google Search, Facebook, and Amazon, systematically declining in quality in recent years. This decline is attributed to the particular business model of these platforms, and its harms are usually understood as violations of principles of economic fairness and as inconveniences to users. In this article, we argue that the scope and nature of these harms are underappreciated. In particular, we establish that platform decay constitutes both a cognitive and a moral harm to its users. We make this case by arguing that platforms function as cognitive scaffolds or extensions, as understood by the extended mind approach to cognition. It is then a straightforward implication that platform decay constitutes cognitive damage to a platform’s users. This cognitive damage is a harm on its own; however, it can also undermine cognitive capacities that virtue ethicists argue are necessary for developing a virtuous character. We focus on this claim with regard to the capacity to pay attention, a capacity that platform decay targets specifically. Platform decay therefore constitutes both a cognitive and a moral harm, one that simultaneously affects billions of people.

Optus’s triple zero debacle is further proof of the failure of the neoliberal experiment

by John Quiggin in The Guardian  

A nice little potted history of Australian telecommunication privatisation failure:

A closer look at the record tells a different story. Technological progress in telecommunications produced a steady reduction in prices throughout the 20th century, taking place around the world and regardless of the organisational structure. The shift from analog to digital telecommunications accelerated the process. Telecom Australia, the statutory authority that became Telstra, recorded total factor productivity growth rates as high as 10% per year, remaining profitable while steadily reducing prices.

But for the advocates of neoliberal microeconomic reform, this wasn’t enough. They hoped, or rather assumed, that competition would produce both better outcomes for consumers and a more efficient rollout of physical infrastructure.

[…]

The failures emerged early. Seeking to cement their positions before the advent of open competition, Telstra and Optus spent billions rolling out fibre-optic cable networks. But rather than seeking to maximise total coverage, the two networks were virtually parallel, a result that is a standard prediction of economic theory. The rollout stopped when the market was fully opened in 1997, leaving parts of urban Australia with two redundant fibre networks and the rest of the country with none.

The next failure came with the rollout of broadband. Under public ownership, this would have been a relatively straightforward matter. But the newly privatised Telstra played hardball, demanding a system that would cement its monopoly position in fixed-line infrastructure. The end result was the need to return to public ownership with the national broadband network, while paying Telstra handsomely for access to ducts and wires that the public had owned until a few years previously.

Meanwhile the hoped-for competition in mobile telephony has failed to emerge. The near-duopoly created in 1991, with Telstra as the dominant player and Optus playing second fiddle, has endured for more than 30 years. 

Who does Woolworths’ tracking and timing of its workers serve? It’s certainly not the customers

by Samantha Floreani in The Guardian  

Fears about losing jobs to automation have become commonplace, but according to United Workers Union (UWU) research and policy officer Lauren Kelly, who researches labour and supermarket automation, rather than manual work being eliminated, it is often augmented by automation technologies. This broadens the concern from one of job loss to more wide-ranging implications for the nature of work itself. That is, she says, “rather than replace human workers with robots, many are being forced to work like robots”.

In addition to the monitoring tactics used on workers, supermarkets also direct their all-seeing eye towards customers through an array of surveillance measures: cameras track individuals through stores, “smart” exit gates remain closed until payment, overhead image recognition at self-serve checkouts assesses whether you’re actually weighing brown onions, and so on. Woolworths even invests in a data-driven “crime intelligence platform”, which raises significant privacy concerns, shares data with police and claims that it can predict crime before it happens – not just the plot of Minority Report but also an offshoot of the deeply problematic concept of “predictive policing”. Modern supermarkets have become a testing ground for an array of potentially rights-infringing technologies.

Samsung caught faking zoom photos of the Moon

in The Verge  

For years, Samsung “Space Zoom”-capable phones have been known for their ability to take incredibly detailed photos of the Moon. But a recent Reddit post showed in stark terms just how much computational processing the company is doing, and — given the evidence supplied — it feels like we should go ahead and say it: Samsung’s pictures of the Moon are fake. 

[…]

The test of Samsung’s phones conducted by Reddit user u/ibreakphotos was ingenious in its simplicity. They created an intentionally blurry photo of the Moon, displayed it on a computer screen, and then photographed this image using a Samsung S23 Ultra. As you can see below, the first image on the screen showed no detail at all, but the resulting picture showed a crisp and clear “photograph” of the Moon. The S23 Ultra added details that simply weren’t present before. There was no upscaling of blurry pixels and no retrieval of seemingly lost data. There was just a new Moon — a fake one. 
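
For readers who want to recreate the setup, here is a minimal sketch of how such a detail-free test image could be prepared, assuming Python with the Pillow library; the file names, the 170×170 downscale, and the blur radius are illustrative choices, not details quoted from the article:

    # Prepare a Moon image with no recoverable surface detail, in the spirit of the Reddit test:
    # shrink a sharp Moon photo, blur it, then re-enlarge it for full-screen display.
    # Photographing the displayed result with the phone then shows whether the camera
    # "restores" detail that was never there.
    from PIL import Image, ImageFilter  # pip install pillow

    src = Image.open("moon_sharp.jpg")                           # hypothetical high-res Moon photo
    small = src.resize((170, 170))                               # downscale to destroy fine detail
    blurred = small.filter(ImageFilter.GaussianBlur(radius=4))   # blur whatever remains
    blurred.resize((1024, 1024)).save("moon_blurry_test.png")    # enlarge again for display

Any crater detail visible in the phone’s resulting “photograph” must then have been synthesized by the camera pipeline rather than captured through the lens.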

The Digital Packrat Manifesto

in 404 Media  

For more than two decades, I’ve been what some might call a hoarder but what I’ve more affectionately dubbed a “digital packrat.” Which is to say I mostly avoid streaming services, I don’t trust any company or cloud with my digital media, and I store everything as files on devices that I physically control. My mp3 collection has been going strong since the Limewire days, I keep high-quality rips of all my movies on a local media server, and my preferred reading device holds a large collection of DRM-free ebooks and PDFs—everything from esoteric philosophy texts and scientific journals to scans of lesbian lifestyle magazines from the 1980s.

Sure, there are websites where you can find some of this material, like the Internet Archive. But this archive is mine. It’s my own little Library of Alexandria, built from external hard drives, OCD, and a strong distrust of corporations. I know I’m not the only one who has gone to these lengths. Sometimes when I’m feeling gloomy, I imagine how when society falls apart, we packrats will be the only ones in our village with all six seasons of The Sopranos. At the rate we’re going, that might not be too far off.

Amazon is far from alone in this long-running trend towards eliminating digital ownership. For many people, digital distribution and streaming services have already practically ended the concept of owning and controlling your own media files. Spotify is now almost synonymous with music for some younger generations, having strip-mined the music industry from both ends by demonetizing more than 60% of the artists on its platform and pushing algorithmic slop while simultaneously raising subscription fees.

Of course, surrendering this control means being at the complete mercy of Amazon and other platforms to determine what we can watch, read, and listen to—and we’ve already seen that these services frequently remove content for all sorts of reasons. Last October, one year after the Israeli military began its campaign of genocide in Gaza, Netflix removed “Palestinian Stories,” a collection of 19 films featuring Palestinian filmmakers and characters, saying it declined to renew its distribution license. Amazon also once famously deleted copies of 1984 off of people’s Kindles. Fearing piracy, many software companies have moved from the days of “Don’t Copy That Floppy” to the cloud-based software-as-a-service model, which requires an internet connection and charges users monthly subscription fees to use apps like Photoshop. No matter how you look at it, digital platforms have put us on a path to losing control of any media that we can’t physically touch.

Power Cut

by Edward Zitron 

Microsoft has, through a combination of canceled leases, pullbacks on Statements of Qualifications, cancellations of land parcels and deliberate expiration of Letters of Intent, effectively abandoned data center expansion equivalent to over 14% of its current capacity.

[…]

The reason I'm writing in such blunt-force terms is that I want to make it clear that Microsoft is effectively cutting its data center expansion by over a gigawatt of capacity, if not more, and it’s impossible to reconcile these cuts with the expectation that generative AI will be a massive, transformative technological phenomenon. 

I believe the reason Microsoft is cutting back is that it does not have the appetite to provide further data center expansion for OpenAI, and it’s having doubts about the future of generative AI as a whole. If Microsoft believed there was a massive opportunity in supporting OpenAI's further growth, or that it had "massive demand" for generative AI services, there would be no reason to cancel capacity, let alone cancel such a significant amount.

[…]

Microsoft is cancelling plans to massively expand its data center capacity right at a time when OpenAI just released its most computationally-demanding model ever. How do you reconcile those two things without concluding either that Microsoft expects GPT-4.5 to be a flop, or that it’s simply unwilling to continue bankrolling OpenAI’s continued growth, or that it’s having doubts about the future of generative AI as a whole?

[…]

Generative AI does not have meaningful mass-market use cases, and while ChatGPT may have 400 million weekly active users, as I described last week, there doesn’t appear to be meaningful consumer adoption outside of ChatGPT, mostly because almost all AI coverage inevitably ends up marketing one company: OpenAI. Argue with me all you want about your personal experiences with ChatGPT, or how you’ve found it personally useful. That doesn’t make it a product with mass-market utility, or enterprise utility, or worth the vast sums of money being ploughed into generative AI. 

via Cory Doctorow

Dumping open source for proprietary rarely pays off: Better to stick a fork in it

in ZDNet  

At the UK's State of Open conference, Dawn Foster, director of data science for the CHAOSS Project, unveiled compelling evidence that forks -- community-driven alternatives to proprietary codebases -- are thriving. At the same time, companies that abandoned open-source principles face stagnant growth and disillusioned users.

[…]

At the event in London, James Governor, RedMonk's co-founder, said: "There is neither a share price rise for public companies nor revenue gains. There's no clear, 'Oh, we relicensed and got a hockey stick.' So, I think that if businesses are making these decisions, the expectation is that relicensing will be the special sauce that takes it to the next level. The numbers do not indicate that."

Simultaneously, Foster noted at the event that when companies closed their code, communities fought back with successful forks.

[…]

Foster's CHAOSS research also revealed that forks under neutral foundations have three times more organizational diversity than their proprietary counterparts. OpenSearch, for example, saw contributions from 45 organizations in its first year -- a stark contrast to Elasticsearch's single-vendor dominance.
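
To make concrete what “organizational diversity” is getting at, here is a rough sketch of one way to approximate it for any repository; it is only an illustration that uses commit-author email domains as a proxy for organizations, not the CHAOSS Project’s actual methodology:

    # Count distinct contributing organizations in a git repository,
    # approximating "organization" by the domain of each commit author's email.
    import subprocess
    from collections import Counter

    emails = subprocess.run(
        ["git", "log", "--format=%ae"],      # one author email per commit
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    orgs = Counter(e.split("@")[-1].lower() for e in emails if "@" in e)
    print(f"{len(orgs)} distinct organizations (by email domain)")
    for domain, commits in orgs.most_common(10):
        print(f"{domain:30} {commits} commits")

A single-vendor project run this way shows one domain dominating the commit counts; a foundation-hosted fork like OpenSearch spreads them across many.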

In other words, open-source forks are far more popular than their proprietary counterparts. Foster said users flock to forks to avoid vendor lock-in.

AI Personality Extraction from Faces: Labor Market Implications

The stupid use cases for AI just keep coming:

Human capital—encompassing cognitive skills and personality traits—is critical for labor market success, yet the personality component remains difficult to measure at scale. Leveraging advances in artificial intelligence and comprehensive LinkedIn microdata, we extract the Big 5 personality traits from facial images of 96,000 MBA graduates, and demonstrate that this novel “Photo Big 5” predicts school rank, compensation, job seniority, industry choice, job transitions, and career advancement. Using administrative records from top-tier MBA programs, we find that the Photo Big 5 exhibits only modest correlations with cognitive measures like GPA and standardized test scores, yet offers comparable incremental predictive power for labor outcomes. Unlike traditional survey-based personality measures, the Photo Big 5 is readily accessible and potentially less susceptible to manipulation, making it suitable for wide adoption in academic research and hiring processes. However, its use in labor market screening raises ethical concerns regarding statistical discrimination and individual autonomy.

via Cory Doctorow