Alive to our era of genuine existential danger – from climate breakdown to nuclear war to sky-rocketing inequality and unregulated AI – but financially and ideologically committed to deepening those threats, contemporary far-right movements lack any credible vision for a hopeful future. The average voter is offered only remixes of a bygone past, alongside the sadistic pleasures of dominance over an ever-expanding assemblage of dehumanized others.
And so we have the Trump administration’s dedication to releasing a steady stream of real and AI-generated propaganda designed solely for these pornographic purposes. Footage of shackled immigrants being loaded on to deportation flights, set to the sounds of clanking chains and locking cuffs, which the official White House X account labeled “ASMR”, a reference to audio designed to calm the nervous system. Or the same account sharing news of the detention of Mahmoud Khalil, a US permanent resident who was active in Columbia University’s pro-Palestinian encampment, with the gloating words: “SHALOM, MAHMOUD.” Or any number of homeland security secretary Kristi Noem’s sadism-chic photo ops (atop a horse at the US-Mexico border, in front of a crowded prison cell in El Salvador, slinging a machine gun while arresting immigrants in Arizona …).
The governing ideology of the far right in our age of escalating disasters has become a monstrous, supremacist survivalism.
It is terrifying in its wickedness, yes. But it also opens up powerful possibilities for resistance. To bet against the future on this scale – to bank on your bunker – is to betray, on the most basic level, our duties to one another, to the children we love, and to every other life form with whom we share a planetary home. This is a belief system that is genocidal at its core and treasonous to the wonder and beauty of this world. We are convinced that the more people understand the extent to which the right has succumbed to the Armageddon complex, the more they will be willing to fight back, realizing that absolutely everything is now on the line.
Our opponents know full well that we are entering an age of emergency, but have responded by embracing lethal yet self-serving delusions. Having bought into various apartheid fantasies of bunkered safety, they are choosing to let the Earth burn. Our task is to build a wide and deep movement, as spiritual as it is political, strong enough to stop these unhinged traitors. A movement rooted in a steadfast commitment to one another, across our many differences and divides, and to this miraculous, singular planet.
TESCREAL
The rise of end times fascism
in The Guardian

The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
in First Monday

The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.