Surveillance

Solving the supermarket: why Coles just hired US defence contractor Palantir

in The Conversation  

First, by inking this deal, Coles frames itself as future-forward and logistically driven. Groceries and grocery-store labour become just more data, like the hedge funds, healthcare systems, or immigrants that other Palantir clients coordinate.

Supermarkets have been under fire over the past year for increasing profit margins through a pandemic and cost-of-living crisis, and accused of underpaying workers.

The Palantir deal continues this extractive trajectory. Rather than paying workers more or passing savings on to customers, Coles has chosen to invest millions in technology that will “address workforce-related spend” as part of a larger effort to cut costs by a billion dollars over the next four years. Food (and the labour needed to grow, pack and ship it) is transformed from a human need to an optimisation problem.

Second, dependence. As my own research found, Palantir clients tend to enjoy the all-encompassing data and new features but also become dependent on them. Data mounts up; new servers are needed; licensing fees are high but must be paid.

Much like Apple or Amazon, Palantir’s services excel at creating “vendor lock-in”, a perfect walled garden which clients find hard to leave. This pattern suggests that, over the next three years, Coles will increasingly depend on Silicon Valley technology to understand and manage its own business. A company that sells a quarter of Australia’s groceries may become operationally reliant on a US tech titan.

Who does Woolworths’ tracking and timing of its workers serve? It’s certainly not the customers

by Samantha Floreani in The Guardian  

Fears about losing jobs to automation have become commonplace, but according to United Workers Union (UWU) research and policy officer Lauren Kelly, who studies labour and supermarket automation, manual work is often not eliminated but augmented by automation technologies. This broadens the concern from one of job loss to more wide-ranging implications for the nature of work itself. That is, she says, “rather than replace human workers with robots, many are being forced to work like robots”.

In addition to the monitoring tactics used on workers, supermarkets also direct their all-seeing eye towards customers through an array of surveillance measures: cameras track individuals through stores, “smart” exit gates remain closed until payment, overhead image recognition at self-serve checkouts assesses whether you’re actually weighing brown onions, and so on. Woolworths even invests in a data-driven “crime intelligence platform”, which raises significant privacy concerns, shares data with police and claims that it can predict crime before it happens – not just the plot of Minority Report but also an offshoot of the deeply problematic concept of “predictive policing”. Modern supermarkets have become a testing ground for an array of potentially rights-infringing technologies.

FBI Becomes Rent-A-Cops for CEOs

by Ken Klippenstein 

What’s especially creepy about conflating anti-corporate sentiment with terrorism is that it opens the door to spying on the American people. Counter-terrorism is literally the business of “pre-crime,” in which law enforcement and its intelligence arm seek to prevent hypothetical crimes of the future, even where no information exists to suggest any preparations. This is the post-9/11 standard that has become the norm when it comes to well-resourced terrorist organizations like al Qaeda and ISIS. But it should have no place against random shitposters online.

If it sounds like I’m exaggerating when I say there’s a new War on Terrorism, consider Attorney General Pam Bondi’s recent remark calling Molotov cocktails thrown at Teslas “Weapons of Mass Destruction.”

“We are not negotiating” with the vandals whom she has elsewhere deemed “terrorists,” Bondi also declared, as if she were speaking of airline hijackers bargaining to release hostages on an airplane. 

via Cory Doctorow

The New McCarthyism: LGBTQ+ Purges In Government Begin

by Erin Reed in Erin in the Morning  

In the early 1950s, a moral panic over gay people swept across America. LGBTQ+ individuals were cast as threats—vulnerable to blackmail, labeled “deviant sex perverts,” and accused of colluding with communist governments. Senator Joseph McCarthy, infamous for the Red Scare, pressured President Eisenhower into signing an executive order purging LGBTQ+ people from government service. With that signature, the campaign escalated rapidly—up to 10,000 federal employees were fired or forced to resign during what became known as the Lavender Scare, a purge far less widely taught than the Red Scare but even more devastating. The episode remains a lasting stain on U.S. history. And now, it appears we are witnessing its revival: 100 intelligence officials were just fired for participating in an LGBTQ+ support group chat—an internal network not unlike employee resource groups (ERGs) at most companies.

The firings stem from out-of-context chat logs leaked by far-right commentator Chris Rufo on Monday. Sources tell Erin in the Morning that the chat functioned as an ERG-adjacent LGBTQ+ safe space, where participants discussed topics like gender-affirming surgery, hormone therapy, workplace LGBTQ+ policies, and broader queer issues. Rufo, however, framed these conversations as evidence of misconduct, claiming that “NSA, CIA, and DIA employees discuss genital castration” and alleging discussions of “fetishes, kink, and sex.” To Rufo and his audience, merely talking about being transgender and the realities of transition is enough to be labeled “fetish” content.

Eisenhower and McCarthy would have killed for such an easily accessible list of LGBTQ+ federal employees—and the flimsy pretext to purge them.

Within a day of the chat logs’ release, Director of National Intelligence Tulsi Gabbard announced that all participants in the “obscene, pornographic, and sexually explicit” chatroom would be terminated.

Google is on the Wrong Side of History

for the Electronic Frontier Foundation (EFF)  

Google continues to show us why it chose to abandon its old motto of “Don’t Be Evil,” as it becomes more and more enmeshed with the military-industrial complex. Most recently, Google has removed four key points from its AI principles. Specifically, the principles previously stated that the company would not pursue AI applications involving (1) weapons, (2) surveillance, (3) technologies that “cause or are likely to cause overall harm,” and (4) technologies whose purpose contravenes widely accepted principles of international law and human rights.

Those principles are gone now.

In their place, the company has written that “democracies” should lead in AI development and companies should work together with governments “to create AI that protects people, promotes global growth, and supports national security.” This could mean that the provider of the world’s largest search engine–the tool most people use to uncover the best apple pie recipes and to find out what time their favorite coffee shop closes–could be in the business of creating AI-based weapons systems and leveraging its considerable computing power for surveillance.

via Cory Doctorow

Everyone knows your location: tracking myself down through in-app ads

After more than a couple dozen hours of trying, here are the main takeaways:

  1. I found a couple requests sent by my phone with my location + 5 requests that leak my IP address, which can be turned into geolocation using reverse DNS.
  2. Learned a lot about the RTB (real-time bidding) auctions and OpenRTB protocol and was shocked by the amount and types of data sent with the bids to ad exchanges.
  3. Gave up on the idea of buying my location data from a data broker or a tracking service, because I don't have a big enough company to qualify for a trial, or $10-50k to buy a huge database with the data of millions of people + me. Well, maybe I do, but such an expense seems a bit irrational. It turns out that EU-based people's data is almost the most expensive.

But still, I know my location data was collected and I know where to buy it! 
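
For readers unfamiliar with real-time bidding, here is a minimal, purely illustrative sketch of the kind of fields an OpenRTB 2.x bid request can carry when an in-app ad slot is auctioned. The field names follow the public OpenRTB specification; the app bundle, identifiers, and coordinates are hypothetical, not taken from the article.

```python
# Illustrative only: a skeletal OpenRTB 2.x-style bid request for an in-app ad slot.
# Field names follow the public OpenRTB spec; every value below is invented.
bid_request = {
    "id": "auction-123",
    "app": {
        "bundle": "com.example.weatherapp",   # hypothetical app showing the ad
        "publisher": {"id": "pub-456"},
    },
    "device": {
        "ip": "203.0.113.7",                  # device IP, enough for coarse geolocation
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable advertising ID
        "os": "Android",
        "geo": {"lat": 52.52, "lon": 13.405, "type": 1},  # GPS-derived lat/lon, if the app shares it
    },
    "user": {"id": "exchange-user-789"},
}

# An exchange typically fans a request like this out to many bidders at once, so the
# same IP, advertising ID, and coordinates can reach dozens of companies per impression.
for field in ("ip", "ifa", "geo"):
    print(field, "->", bid_request["device"][field])
```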

One Person One Price

by David Dayen in The American Prospect  

Today, the fine-graining of data and the isolation of consumers have changed the game. The old idiom is that every man has his price. But that’s literally true now, much more than you know, and it’s certainly the plan for the future.

“The idea of being able to charge every individual person based on their individual willingness to pay has for the most part been a thought experiment,” said Lina Khan, chairwoman of the Federal Trade Commission. “And now … through the enormous amount of behavioral and individualized data that these data brokers and other firms have been collecting, we’re now in an environment that technologically it actually is much more possible to be serving every individual person an individual price based on everything they know about you.”

Economists soft-pedal this emerging trend by calling it “personalized” pricing, which reflects their view that tying price to individual characteristics adds value for consumers. But Zephyr Teachout, who helped write anti-price-gouging rules in the New York attorney general’s office, has a different name for it: surveillance pricing.

“I think public pricing is foundational to economic liberty,” said Teachout, now a law professor at Fordham University. “Now we need to lock it down with rules.”

via Cory Doctorow

The Future Prize Laureate’s Speech

by Meredith Whittaker 

Acceptance speech upon receiving the 2024 Helmut Schmidt Future Prize:

Make no mistake – I am optimistic – but my optimism is an invitation to analysis and action, not a ticket to complacency.

With that in mind, I want to start with some definitions to make sure we’re all reading from the same score. Because so often, in this hype-based discourse, we are not. And too rarely do we make time for the fundamental questions – whose answers, we shall see, fundamentally shift our perspective. Questions like, what is AI? Where did it come from? And why is it everywhere, guaranteeing promises of omniscience, automated consciousness, and what can only be described as magic?

Well, first answer first: AI is a marketing term, not a technical term of art. The term “artificial intelligence” was coined in 1956 by cognitive and computer scientist John McCarthy – about a decade after the first proto-neural network architectures were created. In subsequent interviews McCarthy is very clear about why he invented the term. First, he didn’t want to include the mathematician and philosopher Norbert Wiener in a workshop he was hosting that summer. You see, Wiener had already coined the term “cybernetics,” under whose umbrella the field was then organized. McCarthy wanted to create his own field, not to contribute to Norbert’s – which is how you become the “father” instead of a dutiful disciple. This is a familiar dynamic for those of us acquainted with “name and claim” academic politics. Second, McCarthy wanted grant money. And he thought the phrase “artificial intelligence” was catchy enough to attract such funding from the US government, which at the time was pouring significant resources into technical research in service of post-WWII Cold War dominance.

Now, over the term’s more than 70-year history, “artificial intelligence” has been applied to a vast and heterogeneous array of technologies that bear little resemblance to each other. Today, as throughout that history, it connotes more aspiration and marketing than any coherent technical approach. And its use has gone in and out of fashion, in time with funding prerogatives and the hype-to-disappointment cycle.

So why, then, is AI everywhere now? Or, why did it crop up in the last decade as the big new thing?

To answer that question, we have to face the toxic surveillance business model – and the big tech monopolies that built their empires on top of this model.

via Meredith Whittaker

Opinion: Banning TikTok isn’t just a bad idea. It’s a dangerous one

by Evan Greer for CNN  

 As they hyperventilate about TikTok, US politicians are so eager to appear “tough on China” that they’re suggesting we build our very own Great Firewall here at home. There is a small but growing number of countries in the world so authoritarian that they block popular apps and websites entirely. It’s regrettable that so many US lawmakers want to add us to that list.

Several of the proposals wending their way through Congress would grant the federal government unprecedented new powers to control what technology we can use and how we can express ourselves – authority that goes far beyond TikTok. The bipartisan RESTRICT Act (S. 686), for example, would enable the Commerce Department to engage in extraordinary acts of policing, criminalizing a wide range of activities with companies from “hostile” countries and potentially even banning entire apps simply by declaring them a threat to national security. 

[…] 

The law is vague enough that some experts have raised concerns that it could threaten individual internet users with lengthy prison sentences for taking steps to “evade” a ban, like side-loading an app (i.e., bypassing approved app distribution channels such as the Apple store) or using a virtual private network (VPN). 

[…] 

A ban on TikTok wouldn’t even be effective: The Chinese government could purchase much of the same information from data brokers, which are largely unregulated in the US.

The rush to ban TikTok – or force its sale to a US company – is a convenient distraction from what our elected officials should be doing to protect us from government manipulation and commercial surveillance: passing basic data privacy legislation. It’s a matter of common knowledge that Instagram, YouTube, Venmo, Snapchat and most of the other apps on your phone engage in similar data harvesting business practices to TikTok. Some are even worse.

TikTok Threat Is Purely Hypothetical, U.S. Intelligence Admits

by Ken Klippenstein in The Intercept  

The relatively measured tone adopted by top intelligence officials contrasts sharply with the alarmism emanating from Congress. In 2022, Rep. Mike Gallagher, R-Wis., deemed TikTok “digital fentanyl,” going on to co-author a column in the Washington Post with Sen. Marco Rubio, R-Fla., calling for TikTok to be banned. Gallagher and Rubio later introduced legislation to do so, and 39 states have, as of this writing, banned the use of TikTok on government devices.

None of this is to say that China hasn’t used TikTok to influence public opinion and even, it turns out, to try to interfere in American elections. “TikTok accounts run by a [People’s Republic of China] propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022,” says the annual Intelligence Community threat assessment released on Monday. But the assessment provides no evidence that TikTok coordinated with the Chinese government. In fact, governments — including the United States — are known to use social media to influence public opinion abroad.

“The problem with TikTok isn’t related to their ownership; it’s a problem of surveillance capitalism and it’s true of all social media companies,” computer security expert Bruce Schneier told The Intercept. “In 2016 Russia did this with Facebook and they didn’t have to own Facebook — they just bought ads like everybody else.”

via Steven Zekowski