The Alignment Battle We Already Lost
Misalignment is the human condition; resilience is the only way forward
When people talk about “AI alignment,” they usually mean the fear that some future super-intelligence will escape our control and either remake the world in alien ways or eradicate us outright in pursuit of an unknowable goal. But that framing is a sales pitch: it sells the idea that one of these alignment researchers will stumble upon the magic algorithm that solves the alignment problem, all while raising capital and distracting from the real issue. The default state of reality is misalignment—and we’ve already lost the alignment battle against meta-intelligent beings.
Corporations and governments are our first artificial intelligences. They are emergent meta-intelligences with their own egos, memory, ideology, theoretical immortality (there is no expected death date), and agency—and their goals are often not aligned with individual human goals.
AI researchers seem obsessed with the question of how to ensure the AI isn’t lying to us, but we haven’t figured out how to identify when a government, corporation, or sociopathic individual is lying to us—and we are constantly being deceived by these meta-intelligences.
You don’t need to imagine a runaway optimizer strip-mining the Earth for resources—mining companies already do that. You don’t need to speculate about an AI hijacking our attention—social networks have been running that experiment for years, and the results are in. And you don’t need to imagine Skynet triggering Armageddon; human governments are already triggering global warfare.
We built Moloch long before we built machine learning. Humanity has always created gods and systems, then carried out misaligned deeds in their name, sacrificing one another for power, status, or survival. The next couple of decades will be more of the same.
The Human Alignment Problem
The danger isn’t only corporate and government misalignment; it’s us. A 2021 meta-analysis found that 4.5% of adult humans exhibit psychopathic traits, and those traits cluster in positions of power where ruthlessness is rewarded. These individuals are optimizers too, and their goals can deviate significantly from collective survival and benefit.
Before AI converts the Earth and its inhabitants into computronium, we will face endless existential threats from humans and corporations already misaligned with humanity. A cult leader with access to CRISPR could, for only a few hundred dollars, design a pathogen with genocidal potential. An executive optimizing for profit can drive ecological collapse. A state willing to destabilize the infosphere for advantage can ignite civil or even global warfare. All of these are more immediate and more likely than a runaway super-intelligence within the next decade.
You don’t need AGI to end the world; you just need one misaligned human, and the bar of effort drops lower every year. The real alignment problem is us.
The Hype of Alignment
It’s worth pointing out that AI alignment is now an industry in its own right. Former OpenAI engineers are founding companies valued in the billions with ostensibly no plan for aligning AI—because the real solution requires solving a complex array of global human problems. Just as AI companies hype existential unemployment (some claimed two years ago that software engineering would be extinct by today, a prediction that turned out laughably false), alignment labs hype existential risk. They are selling something: policy influence, books, podcasts, consulting contracts, conferences, seed funding, and government grants.
This doesn’t make alignment research worthless, but it does put it in context. The real alignment problem is not a clean technical puzzle waiting for a breakthrough; it is a sprawling social, cultural, and institutional mess. And paradoxically, it may be AI itself that helps us navigate it. The resilience strategies we need (cultural antibodies, pathogen monitoring, disinformation defense, adaptive governance) are themselves complex alignment problems, and AI is one of the few tools that can scale to help.
Resilience in a Misaligned World
If misalignment is the default state of the world, then the real task is not perfection but resilience: the ability to absorb shocks, adapt, and endure. And this is not a single algorithm that an AI alignment company is going to one-shot if we pause AI to allow them to catch up—it’s a tangle of complex problems that will take all of us to solve:
1. Layered Defense
Redundancy so no single point of failure cascades. Imagine a city grid designed like a living organism: if one substation fails, neighboring ones reroute power automatically, the way capillaries bypass a clot. Instead of a cascading blackout, only a single block goes dark for a few hours.
Honeypots: create large targets that can absorb an attack, contain threats, and isolate biological, digital, or memetic warfare—cutting infected devices off from the wider network.
Generally, design systems to fail small, not catastrophically (see the sketch below).
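To make fail-small concrete, here is a minimal Python sketch of an “N-1 contingency” check on a toy grid. Every node name is a hypothetical stand-in; the question it answers is simply: for each single substation failure, which loads go dark? In a resilient layout the answer is never more than one block.

```python
from collections import deque

# Toy grid: substations as an undirected adjacency map; "plant" is the supply.
GRID = {
    "plant":   {"north", "south"},
    "north":   {"plant", "east", "west"},
    "south":   {"plant", "east", "west"},
    "east":    {"north", "south", "block_a"},
    "west":    {"north", "south", "block_b"},
    "block_a": {"east"},
    "block_b": {"west"},
}

def reachable(grid, source, removed):
    """Nodes still fed from `source` with substation `removed` offline."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in grid[queue.popleft()] - {removed} - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

def n_minus_1_report(grid, source="plant"):
    """For every single-substation failure, list which loads go dark."""
    for failed in sorted(grid):
        if failed == source:
            continue
        dark = set(grid) - {failed} - reachable(grid, source, failed)
        print(f"{failed} fails -> dark: {sorted(dark) or 'none'}")

n_minus_1_report(GRID)
```

In this toy topology, losing any single substation darkens at most one block; a design review would flag any failure whose dark set is larger.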
2. Cultural Antibodies
Media literacy and cognitive hygiene to resist manipulation. School curricula where children practice recognizing deepfakes and manipulative memes the way they once practiced handwriting, learning to control the tools rather than be controlled by them.
Fact-checking systems that build on truth axioms and trace claims back to their sources, producing a reputation ledger of fraudsters and lies (see the sketch below).
Deal with our growing mental health crisis before it kills us. This may be the most lethal challenge we face, because the IQ required to end humanity drops every year.
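As a sketch of what a source-tracing reputation ledger might look like (the source names and the update rule are hypothetical, not a real system), each fact-check verdict updates the track record of every source in a claim’s provenance chain:

```python
from collections import defaultdict

# Each source accumulates a verified track record.
ledger = defaultdict(lambda: {"true": 0, "false": 0})

def record_verdict(provenance, claim_was_true):
    """`provenance` is the chain of sources a claim was traced through."""
    key = "true" if claim_was_true else "false"
    for source in provenance:
        ledger[source][key] += 1

def reputation(source):
    """Laplace-smoothed share of a source's claims verified as true."""
    r = ledger[source]
    return (r["true"] + 1) / (r["true"] + r["false"] + 2)

# Two fact-checks trace claims back through their sources.
record_verdict(["wire_service", "local_paper"], claim_was_true=True)
record_verdict(["troll_farm", "aggregator_x"], claim_was_true=False)

for source in sorted(ledger):
    flag = "  <- low trust" if reputation(source) < 0.5 else ""
    print(f"{source}: {reputation(source):.2f}{flag}")
```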
3. Institutional Armor
Governance that adapts quickly instead of ossifying. We could pass laws via a git repository: open-source fork, edit, merge request. Use software-engineering practices to commit and merge single changes to the legal and policy codebase one at a time, instead of cramming massive bills full of unrelated riders through long sessions of verbal argument. This allows rapid change, but it also isolates each change entering the system so we can detect bugs and eradicate bad code (see the first sketch below).
Design institutions knowing a minority will exploit loopholes.
Fix the peer-review and scientific-publication problem. In 2012, Amgen researchers found they could reproduce the findings of only 6 out of 53 "landmark" preclinical cancer research papers—an 89% failure rate. A large review of psychology found a 65% failure rate, and replication effect sizes in cancer research run 85% smaller than in the original publications. Yet these findings still get cited as if they were solid science, warping the foundation. We need a reputation ledger for research. It could be blockchain-based, or as simple as GPG signatures on a directed acyclic graph: replication would enhance credibility, invalidation would downgrade reputation, and science would self-correct in real time (see the second sketch below).
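A toy sketch of the lawmaking-as-version-control idea above, with hypothetical sections and checks standing in for a real review pipeline: each patch touches exactly one section, merges only if its checks pass, and reverts cleanly when a bug is found.

```python
from dataclasses import dataclass

# The "legal codebase" and its commit history.
legal_code = {"speed_limit": "65 mph", "data_privacy": "opt-out"}
history = []  # applied patches, newest last

@dataclass
class Patch:
    section: str    # exactly one section per patch -- no riders
    old: str
    new: str
    rationale: str

def checks_pass(patch):
    """CI for law: the patch must target a real section and match its base."""
    return legal_code.get(patch.section) == patch.old

def merge(patch):
    if not checks_pass(patch):
        print(f"rejected: {patch.section} ({patch.rationale})")
        return
    legal_code[patch.section] = patch.new
    history.append(patch)
    print(f"merged: {patch.section} -> {patch.new}")

def revert_last():
    """Because each change is isolated, backing it out is one operation."""
    patch = history.pop()
    legal_code[patch.section] = patch.old

merge(Patch("data_privacy", "opt-out", "opt-in", "default-protect users"))
revert_last()  # a detected "bug" in the law rolls back cleanly
```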
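And a minimal sketch of the research ledger itself (paper IDs, scores, and propagation weights are hypothetical choices; a real ledger would attach signed attestations, e.g. GPG, to every edge): papers form a citation DAG, a failed replication cuts a paper’s score, and the downgrade propagates, attenuated, to everything built on top of it.

```python
# Papers form a DAG via citations; scores start at full credibility.
papers = {
    "P1": {"cites": [], "score": 1.0},      # the "landmark" finding
    "P2": {"cites": ["P1"], "score": 1.0},  # builds on P1
    "P3": {"cites": ["P2"], "score": 1.0},  # builds on P2
}

def propagate(paper_id, factor):
    """Scale a paper's score, then pass a damped factor to its citers."""
    papers[paper_id]["score"] *= factor
    damped = 1 + (factor - 1) * 0.5  # downstream effect at half strength
    for pid, paper in papers.items():
        if paper_id in paper["cites"]:
            propagate(pid, damped)

def record_replication(paper_id, succeeded):
    # Success boosts credibility; failure cuts it and automatically
    # marks everything downstream as shakier.
    propagate(paper_id, 1.2 if succeeded else 0.4)

record_replication("P1", succeeded=False)  # the landmark fails to replicate
for pid, paper in papers.items():
    print(f"{pid}: score {paper['score']:.2f}")
```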
4. Technological Counterbalance
AI used as a shield: anomaly detection, pathogen sequencing, disinformation filtering. Imagine an AI assistant that simply tells you, “Warning: the person you are speaking with is engaging in psychopathic manipulation tactics. Here’s how to disarm and disempower their techniques…”—or that quietly flags that a viral article traces back to a troll farm, or that a new drug trial cites research that has already been invalidated. Instead of drowning in noise, you get signal (see the sketch below).
Innovation directed toward resilience tools rather than only profit engines—there’s big profit in having a good defense against annihilation.
Transparency in emerging tech so misuse is visible, not hidden. Make AI open again.
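As a toy sketch of the shield idea (the patterns and advice strings below are hypothetical placeholders; a real system would use a trained classifier rather than regexes), here is a filter that scans a message for known manipulation tactics and surfaces a warning instead of raw noise:

```python
import re

# Known manipulation tactics: a detection pattern plus a counter-move.
TACTICS = {
    "urgency":     (r"\b(act now|last chance|immediately)\b",
                    "Slow down; artificial deadlines short-circuit judgment."),
    "guilt trip":  (r"\b(after all i('ve| have) done|you owe me)\b",
                    "Obligation framing; restate the request on its own merits."),
    "gaslighting": (r"\b(you('re| are) imagining|that never happened)\b",
                    "Reality denial; check your own records before conceding."),
}

def shield(message):
    """Return (tactic, advice) pairs detected in a message."""
    return [(tactic, advice)
            for tactic, (pattern, advice) in TACTICS.items()
            if re.search(pattern, message, re.IGNORECASE)]

msg = "That never happened, and you owe me. Act now or lose everything."
for tactic, advice in shield(msg):
    print(f"warning: {tactic} detected. {advice}")
```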
The Reframe
The alignment problem was never about a future AGI. If we cannot align ourselves, how can we expect to align our machines?
The point is not to dream of solving alignment once and for all. The point is resilience: firebreaks, redundancies, cultural antibodies, and institutions that survive misalignment.
The real future doesn’t belong to the companies selling AI job-destruction hype or alignment doom. It belongs to those who accept misalignment as the human condition, and still find ways to endure, adapt, and flourish.
