Today, the fine-graining of data and the isolation of consumers have changed the game. The old idiom is that every man has his price. But that’s literally true now, much more than you know, and it’s certainly the plan for the future.
“The idea of being able to charge every individual person based on their individual willingness to pay has for the most part been a thought experiment,” said Lina Khan, chairwoman of the Federal Trade Commission. “And now … through the enormous amount of behavioral and individualized data that these data brokers and other firms have been collecting, we’re now in an environment that technologically it actually is much more possible to be serving every individual person an individual price based on everything they know about you.”
Economists soft-pedal this emerging trend by calling it “personalized” pricing, which reflects their view that tying price to individual characteristics adds value for consumers. But Zephyr Teachout, who helped write anti-price-gouging rules in the New York attorney general’s office, has a different name for it: surveillance pricing.
“I think public pricing is foundational to economic liberty,” said Teachout, now a law professor at Fordham University. “Now we need to lock it down with rules.”
Surveillance
Acceptance speech upon receiving the 2024 Helmut Schmidt Future Prize:
Make no mistake – I am optimistic – but my optimism is an invitation to analysis and action, not a ticket to complacency.
With that in mind, I want to start with some definitions to make sure we’re all reading from the same score. Because so often, in this hype-based discourse, we are not. And too rarely do we make time for the fundamental questions – whose answers, we shall see, fundamentally shift our perspective. Questions like, what is AI? Where did it come from? And why is it everywhere, guaranteeing promises of omniscience, automated consciousness, and what can only be described as magic?
Well, first answer first: AI is a marketing term, not a technical term of art. The term “artificial intelligence” was coined in 1956 by cognitive and computer scientist John McCarthy – about a decade after the first proto-neural network architectures were created. In subsequent interviews McCarthy is very clear about why he invented the term. First, he didn’t want to include the mathematician and philosopher Norbert Wiener in a workshop he was hosting that summer. You see, Wiener had already coined the term “cybernetics,” under whose umbrella the field was then organized. McCarthy wanted to create his own field, not to contribute to Norbert’s – which is how you become the “father” instead of a dutiful disciple. This is a familiar dynamic for those of us acquainted with “name and claim” academic politics. Secondly, McCarthy wanted grant money. And he thought the phrase “artificial intelligence” was catchy enough to attract such funding from the US government, which at the time was pouring significant resources into technical research in service of post-WWII Cold War dominance.
Now, in the course of the term’s more-than-70-year history, “artificial intelligence” has been applied to a vast and heterogeneous array of technologies that bear little resemblance to each other. Today, as throughout that history, it connotes more aspiration and marketing than any coherent technical approach. And its use has gone in and out of fashion, in time with funding prerogatives and the hype-to-disappointment cycle.
So why, then, is AI everywhere now? Or, why did it crop up in the last decade as the big new thing?
To answer that question, we have to face the toxic surveillance business model – and the big tech monopolies that built their empires on top of this model.
As they hyperventilate about TikTok, US politicians are so eager to appear “tough on China” that they’re suggesting we build our very own Great Firewall here at home. There is a small but growing number of countries in the world so authoritarian that they block popular apps and websites entirely. It’s regrettable that so many US lawmakers want to add us to that list.
Several of the proposals wending their way through Congress would grant the federal government unprecedented new powers to control what technology we can use and how we can express ourselves – authority that goes far beyond TikTok. The bipartisan RESTRICT Act (S. 686), for example, would enable the Commerce Department to engage in extraordinary acts of policing, criminalizing a wide range of activities with companies from “hostile” countries and potentially even banning entire apps simply by declaring them a threat to national security.
[…]
The law is vague enough that some experts have raised concerns that it could threaten individual internet users with lengthy prison sentences for taking steps to “evade” a ban, like side-loading an app (i.e., bypassing approved app distribution channels such as the Apple store) or using a virtual private network (VPN).
[…]
A ban on TikTok wouldn’t even be effective: The Chinese government could purchase much of the same information from data brokers, which are largely unregulated in the US.
The rush to ban TikTok – or force its sale to a US company – is a convenient distraction from what our elected officials should be doing to protect us from government manipulation and commercial surveillance: passing basic data privacy legislation. It’s a matter of common knowledge that Instagram, YouTube, Venmo, Snapchat and most of the other apps on your phone engage in data-harvesting business practices similar to TikTok’s. Some are even worse.
The relatively measured tone adopted by top intelligence officials contrasts sharply with the alarmism emanating from Congress. In 2022, Rep. Mike Gallagher, R-Wis., deemed TikTok “digital fentanyl,” going on to co-author a column in the Washington Post with Sen. Marco Rubio, R-Fla., calling for TikTok to be banned. Gallagher and Rubio later introduced legislation to do so, and 39 states have, as of this writing, banned the use of TikTok on government devices.
None of this is to say that China hasn’t used TikTok to influence public opinion and even, it turns out, to try to interfere in American elections. “TikTok accounts run by a [People’s Republic of China] propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022,” says the annual Intelligence Community threat assessment released on Monday. But the assessment provides no evidence that TikTok coordinated with the Chinese government. In fact, governments — including the United States — are known to use social media to influence public opinion abroad.
“The problem with TikTok isn’t related to their ownership; it’s a problem of surveillance capitalism and it’s true of all social media companies,” computer security expert Bruce Schneier told The Intercept. “In 2016 Russia did this with Facebook and they didn’t have to own Facebook — they just bought ads like everybody else.”
“I thought it would be really funny if a stranger came over asking to do a poo,” explained Will. They never did, and about a year ago Will moved out.
Recently, Will had a look to see if Big Dumpers was still marked on Google Maps. It was. He was getting monthly emails about the performance of his business with information on how many people had viewed it or clicked to see its phone number.
But looking at the app’s listing for the “business”, Will spotted something that he didn’t find as funny. As it does for many other businesses, Google Maps showed a “Popular times” graph depicting how busy the location is, based on information provided by Google users who’ve agreed to let the app access their geolocation data. According to the graph, 9am on a Thursday was a busy time at Big Dumpers, but the place was completely empty later in the day.
What clicked in Will’s mind was that he had inadvertently created a public tracker of when people were in his share house — almost certainly without their knowledge. Will quickly marked his “business” as closed on Google, but the listing remained up afterwards.
After being informed of the exploit by Crikey, Jamieson O’Reilly, founder of Australian information security company DVULN, said that his review of Google’s technical material corroborated Will’s understanding of the situation.
“My gut tells me you could list any place as a business then if the residents had opted in to location services you could totally use it to measure someone’s patterns,” he said.
A few years ago, I was surprised to find out that my friend Peter Eckersley — a very privacy-conscious person who is Technology Projects Director at the EFF — used Gmail. I asked him why he would willingly give Google copies of all his email. Peter pointed out that if all of your friends use Gmail, Google has your email anyway. Any time I email somebody who uses Gmail — and anytime they email me — Google has that email.
Since our conversation, I have often wondered just how much of my email Google really has. This weekend, I wrote a small program to go through all the email I have kept in my personal inbox since April 2004 (when Gmail was started) to find out.
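The excerpt doesn’t include the script itself, so the snippet below is only a hypothetical sketch of the idea in Python: it walks a local mbox archive (the file path and the matching heuristic are assumptions, not the author’s actual method) and counts how many messages show a Google mail domain in their From or Received headers.

```python
# Hypothetical sketch, not the author's actual script: estimate what share of
# a locally archived inbox passed through Google's mail servers. Assumes the
# archive is a single mbox file; adjust the path and heuristics as needed.
import re
import mailbox

# Matches gmail.com/googlemail.com addresses and google.com relay hosts.
GOOGLE_PATTERN = re.compile(r"@(gmail|googlemail)\.com|\bgoogle\.com\b", re.IGNORECASE)

def summarize(mbox_path: str) -> None:
    total = 0
    via_google = 0
    for msg in mailbox.mbox(mbox_path):
        total += 1
        # Look at the sender plus the relay chain recorded by receiving servers.
        headers = " ".join(
            str(value)
            for name in ("From", "Received", "Delivered-To")
            for value in (msg.get_all(name) or [])
        )
        if GOOGLE_PATTERN.search(headers):
            via_google += 1
    share = 100.0 * via_google / total if total else 0.0
    print(f"{via_google} of {total} messages ({share:.1f}%) touched a Google mail domain")

if __name__ == "__main__":
    summarize("personal-inbox.mbox")  # hypothetical archive path
```

The author’s actual analysis was presumably more careful than this rough heuristic, but the gist is the same: you don’t have to use Gmail yourself for Google to end up holding a large slice of your correspondence.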
SAN FRANCISCO — Google has agreed to settle a $5 billion privacy lawsuit alleging that it spied on people who used the "incognito" mode in its Chrome browser — along with similar "private" modes in other browsers — to track their internet use.
The class-action lawsuit filed in 2020 said Google misled users into believing that it wouldn't track their internet activities while using incognito mode. It argued that Google's advertising technologies and other techniques continued to catalog details of users' site visits and activities despite their use of supposedly "private" browsing.
Just don't buy these cursed machines. Get a nice, big, dumb computer monitor.
If you bought a new smart TV during any of the holiday sales, there’s likely to be an uninvited guest watching along with you. The most popular smart TVs sold today use automatic content recognition (ACR), a kind of ad surveillance technology that collects data on everything you view and sends it to a proprietary database to identify what you’re watching and serve you highly targeted ads. The software is largely hidden from view, and it’s complicated to opt out. Many consumers aren’t aware of ACR, let alone that it’s active on their shiny new TVs. If that’s you, and you’d like to turn it off, we’re going to show you how.
Last month the government’s tender website, AusTender, published a contract between the Australian Signals Directorate (ASD) and ShadowDragon Holdings, LLC. The contract runs for two years and is valued at $563,040.
ShadowDragon Holdings is an American company that sells “open source intelligence software, unique datasets and training” to organisations including the United States Immigration and Customs Enforcement agency, as well as state police forces in New York and Michigan.
ShadowDragon’s products pull data from a range of public online platforms — reportedly more than “200 unique sources and datasets” — to make them searchable for its users.
The full list of sources isn’t published, but its promotional material names platforms including Facebook, Instagram, Telegram, YouTube, X, Google, Amazon, Tumblr, WhatsApp, LinkedIn, Reddit, 4Chan, Skype, Spotify, Twitch, Xbox network, PornHub, SoundCloud, Gab, Foursquare, Tripadvisor, Tinder, Etsy, PayPal, Flickr, Imgur, Disqus, eBay, GitHub, DeviantArt, Blogger, FetLife, BitChute, parenting forum BabyCenter, social network for Black people BlackPlanet and more.