Technology

in Electrek  

Over the years, Tesla has periodically offered cheaper vehicles with shorter ranges. Rather than building a new vehicle with a smaller battery pack, the automaker has instead used the same higher-capacity battery packs and software-locked the range.

Yesterday, we reported that Tesla stopped taking orders for the cheapest version of Model Y, the Standard Range RWD with 260 miles of range. Instead, Tesla started offering a new Long Range RWD with 320 miles of range.

Separately, CEO Elon Musk revealed that the previous Model Y Standard Range RWD was a software-locked vehicle – something that was suspected but never confirmed.

The CEO announced that Tesla plans to unlock the rest of the battery packs for an additional 40 to 60 miles of range:

'The “260 mile” range Model Y’s built over the past several months actually have more range that can be unlocked for $1500 to $2000 (gains 40 to 60 miles of range), depending on which battery cells you have.'

Musk said that Tesla is currently “working through regulatory approvals” to enable this upgrade offer.
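Purely as a sketch of the mechanism – Tesla hasn’t published how the lock is implemented, and every name and number below is invented – a software-locked pack amounts to a firmware-enforced cap on usable capacity, lifted when a paid flag is set:

```python
# Hypothetical illustration of a software-locked battery pack.
# Nothing here comes from Tesla; it only shows the concept: the cells'
# physical capacity is fixed, firmware enforces a lower usable cap, and a
# paid over-the-air upgrade flips a flag that removes the cap.
from dataclasses import dataclass

@dataclass
class BatteryPack:
    hardware_kwh: float     # physical capacity of the cells (invented figure)
    capped_kwh: float       # limit enforced in software (invented figure)
    unlocked: bool = False  # set True after the paid upgrade

    @property
    def usable_kwh(self) -> float:
        return self.hardware_kwh if self.unlocked else min(self.capped_kwh, self.hardware_kwh)

pack = BatteryPack(hardware_kwh=75.0, capped_kwh=62.0)
print(pack.usable_kwh)  # 62.0 -> the "260 mile" behavior
pack.unlocked = True    # the $1,500-$2,000 unlock Musk describes
print(pack.usable_kwh)  # 75.0 -> the extra 40-60 miles
```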

by danah boyd, Taylor Lorenz for YouTube

Lately, a moral panic has been brewing. People in the media, government, and across the internet are declaring that children are suffering an unprecedented mental health crisis and that smartphones and social media are to blame. But is this even true?

I talked to danah boyd, the top researcher on kids and social media use, about some of the problems that young people today are facing, why quick fixes like banning social media apps are never the answer, and what we can actually do to help younger generations.

by danah boyd 

I have to admit that it’s breaking my heart to watch a new generation of anxious parents think that they can address the struggles their kids are facing by eliminating technology from kids’ lives. I’ve been banging my head against this wall for almost 20 years, not because I love technology but because I care so deeply about vulnerable youth. And about their mental health. And boy oh boy do I loathe moral panics. I realize they’re politically productive, but they cause so much harm and distraction.

I wish there was a panacea to the mental health epidemic we are seeing. I wish I could believe that eliminating tech would make everything hunky dory. (I wish I could believe many things that are empirically not true. Like that there is no climate crisis.) Sadly, I know that what young people are facing is ecological. As a researcher, I know that young people’s relationship with tech is so much more complicated than pundits wish to suggest. I also know that the hardest part of being a parent is helping a child develop a range of social, emotional, and cognitive capacities so that they can be independent. And I know that excluding them from public life or telling them that they should be blocked from what adults value because their brains aren’t formed yet is a type of coddling that is outright destructive. And it backfires every time.

I’m also sick to my stomach listening to people talk about a “gender contagion” as if every aspect of how we present ourselves in this world isn’t socially constructed. (Never forget that pink was once the ultimate sign of masculinity.) Young people are trying to understand their place in this world. Of course they’re exploring. And I want my children to live in a world where exploration is celebrated rather than admonished. The mental health toll of forcing everyone to assimilate to binaries is brutal. I paid that price; I don’t want my kids to as well.

[…]

Please please please center young people rather than tech. They need our help. Technology mirrors and magnifies the good, bad, and ugly. It’s what makes the struggles young people are facing visible. But it is not the media-effects causal force that people are pretending it is.

by Edward Zitron 

Modern AI models are trained by feeding them "publicly-available" text from the internet, scraped from billions of websites (everything from Wikipedia to Tumblr to Reddit), which the model then uses to discern patterns and, in turn, answer questions based on the probability of an answer being correct.

Theoretically, the more training data that these models receive, the more accurate their responses will be, or at least that's what the major AI companies would have you believe. Yet AI researcher Pablo Villalobos told the Journal that he believes that GPT-5 (OpenAI's next model) will require at least five times the training data of GPT-4. In layman's terms, these machines require tons of information to discern what the "right" answer to a prompt is, and "rightness" can only be derived from seeing lots of examples of what "right" looks like.
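For intuition only (a toy of mine, not how GPT-4 or GPT-5 actually works): even the simplest statistical language model “answers” by emitting whatever continuation was most frequent in its training text, which is why its notion of “right” is only as good as the volume of examples it has seen.

```python
# Toy bigram model: predict the next word as whichever word most often
# followed the current one in the training text. A crude stand-in for
# "answering based on the probability of an answer being correct" – real
# LLMs are enormously more sophisticated, but share the property that
# probability estimates sharpen only as training examples accumulate.
from collections import Counter, defaultdict

def train(corpus: str) -> dict[str, Counter]:
    counts: dict[str, Counter] = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts: dict[str, Counter], word: str) -> str | None:
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

model = train("the sky is blue . the sky is blue . the sky is grey .")
print(predict(model, "is"))  # 'blue' – the majority continuation wins
```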

[…]

In essence, the AI boom requires more high-quality data than currently exists to progress past the point we're currently at – one where the outputs of generative AI are deeply unreliable. These models need many times more data than currently exists, at a time when algorithms are happily promoting and encouraging AI-generated slop, thousands of human journalists have lost their jobs, and others are being forced to create generic search-engine-optimized slop. One (very) funny idea posed by the Journal's piece is that AI companies are creating their own "synthetic" data to train their models, a "computer-science version of inbreeding" that Jathan Sadowski calls Habsburg AI.
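The degradation is easy to demonstrate with a toy statistical analogy (mine, not the Journal’s or Sadowski’s): fit a distribution to data, sample “synthetic” data from the fit, refit on those samples, and repeat. Each generation compounds sampling error and loses tail information, which is the essence of model collapse.

```python
# Toy "Habsburg AI" / model-collapse demo: a Gaussian repeatedly refit on
# samples drawn from its own previous fit. The estimates random-walk away
# from the true distribution and the variance tends to decay – information
# is lost every generation that trains on synthetic rather than fresh data.
# A statistical analogy, not a claim about any specific LLM.
import random
import statistics

mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(1, 11):
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]  # model's own output
    mu = statistics.fmean(synthetic)      # refit the "model"...
    sigma = statistics.stdev(synthetic)   # ...on that output alone
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
```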

via Zinnia Jones
by Meredith Whittaker 

Acceptance speech upon receiving the 2024 Helmut Schmidt Future Prize:

Make no mistake – I am optimistic – but my optimism is an invitation to analysis and action, not a ticket to complacency.

With that in mind, I want to start with some definitions to make sure we’re all reading from the same score. Because so often, in this hype-based discourse, we are not. And too rarely do we make time for the fundamental questions – whose answers, we shall see, fundamentally shift our perspective. Questions like, what is AI? Where did it come from? And why is it everywhere, guaranteeing promises of omniscience, automated consciousness, and what can only be described as magic?

Well, first answer first: AI is a marketing term, not a technical term of art. The term “artificial intelligence” was coined in 1956 by cognitive and computer scientist John McCarthy – about a decade after the first proto-neural network architectures were created. In subsequent interviews McCarthy was very clear about why he invented the term. First, he didn’t want to include the mathematician and philosopher Norbert Wiener in a workshop he was hosting that summer. You see, Wiener had already coined the term “cybernetics,” under whose umbrella the field was then organized. McCarthy wanted to create his own field, not to contribute to Norbert’s – which is how you become the “father” instead of a dutiful disciple. This dynamic will be familiar to anyone acquainted with “name and claim” academic politics. Secondly, McCarthy wanted grant money, and he thought the phrase “artificial intelligence” was catchy enough to attract such funding from the US government, which at the time was pouring significant resources into technical research in service of post-WWII Cold War dominance.

Now, in the course of the term’s more than 70-year history, “artificial intelligence” has been applied to a vast and heterogeneous array of technologies that bear little resemblance to each other. Today, as throughout, it connotes more aspiration and marketing than any coherent technical approach. And its use has gone in and out of fashion, in time with funding prerogatives and the hype-to-disappointment cycle.

So why, then, is AI everywhere now? Or, why did it crop up in the last decade as the big new thing?

To answer that question, we have to face the toxic surveillance business model – and the big tech monopolies that built their empires on top of this model.

via Meredith Whittaker
by Edward Zitron 

While I’m speculating, the timing of the March 2019 core update, along with the traffic increases to previously suppressed sites, strongly suggests that Google’s response to the Code Yellow was to roll back changes that had been made to maintain the quality of search results.

A few months later, in May 2019, Google would roll out a redesign of how ads are shown in Google’s mobile search, replacing the bright green “ad” label and URL color on ads with a tiny bolded black note that said “ad,” leaving the link otherwise identical to a regular search link. I’d guess that’s how Search started hitting its numbers after the Code Yellow.

In January 2020, Google would bring this change to the desktop, which The Verge’s Jon Porter would suggest made “Google’s ads look just like search results now.”

Five months later, a little over a year after the Code Yellow debacle, Google would make Prabhakar Raghavan the head of Google Search, with Jerry Dischler taking his place as head of ads. After nearly 20 years of building Google Search, Ben Gomes would be relegated to SVP of Education at Google. Gomes, who was a critical part of the original team that made Google Search work and who has been credited with establishing the culture of the world’s largest and most important search engine, was chased out by growth-hungry managerial types led by Prabhakar Raghavan, a management consultant wearing an engineer costume.

in The Verge  

Microsoft is starting to enable ads inside the Start menu on Windows 11 for all users. After testing these briefly with Windows Insiders earlier this month, Microsoft has started to distribute update KB5036980 to Windows 11 users this week, which includes “recommendations” for apps from the Microsoft Store in the Start menu.

“The Recommended section of the Start menu will show some Microsoft Store apps,” says Microsoft in the update notes of its latest public Windows 11 release. “These apps come from a small set of curated developers.” The ads are designed to help Windows 11 users discover more apps, but will largely benefit the developers that Microsoft is trying to tempt into building more Windows apps.

by Meredith Whittaker 

This keynote will look at the connections between where we are now and how we got here. Connecting the “Crypto Wars”, the role of encryption and privacy, and ultimately the hype of AI… all through the lens of Signal.

Full text of Meredith's talk: https://signal.org/blog/pdfs/ndss-key...

for Watson Institute for International and Public Affairs  

America’s military-industrial complex has been rapidly expanding from the Capital Beltway to Silicon Valley. Although much of the Pentagon’s budget is spent on conventional weapons systems, the Defense Department has increasingly sought to adopt AI-enabled systems. Big tech companies, venture capital, and private equity firms benefit from multi-billion-dollar defense contracts, and smaller defense tech startups that “move fast and break things” also receive increased defense funding. This report illustrates how a growing portion of the Defense Department’s spending is going to large, well-known tech firms, including some of the most highly valued corporations in the world.

Given the often-classified nature of large defense and intelligence contracts, a lack of transparency makes it difficult to discern the true amount of U.S. spending diverted to Big Tech. Yet research reveals that the amount is substantial, and growing. According to the nonprofit research organization Tech Inquiry, three of the world’s biggest tech corporations were awarded approximately $28 billion combined from 2018 to 2022: Microsoft ($13.5 billion), Amazon ($10.2 billion), and Alphabet, Google’s parent company ($4.3 billion). This paper found that the top five contracts to major tech firms between 2019 and 2022 had contract ceilings totaling at least $53 billion combined.

From 2021 through 2023, venture capital firms reportedly pumped nearly $100 billion into defense tech startup companies — an amount 40 percent higher than the previous seven years combined. This report examines how the Silicon Valley startups, big tech firms, and venture capital firms that benefit from classified defense contracts will create costly, high-tech defense products that are ineffective, unpredictable, and unsafe – all on the American taxpayer’s dime.

by Robin Berjon 

User agents are pieces of software that represent the user, a natural person, in their digital interactions. Examples include Web browsers, operating systems, single-sign-on systems, or voice assistants. User agents hold, due to the role they play in the digital ecosystem, a strategic position. They can be arbiters of structural power. The overwhelming majority of the data that is collected about people, particularly that which is collected passively, is collected through user agents, at times with their explicit support or at least by their leave. I propose to lean on this strategic function that user agents hold to develop a regime of fiduciary duties for them that is relatively limited in the number of actors that it affects yet has the means to significantly increase the power of users in their relationships with online platforms. The limited, tractable scope of software user agency as a fiduciary relationship provides effective structural leverage in righting the balance of power between individuals and tech companies. 

via Cory Doctorow