Security Theatre Is a Liability
Why Overcomplicated “Protection” Breaks Safety, and Why Usable Control Is the Only Adult Option
Keywords
information security; security theatre; usability; overengineering; human risk; device continuity; key migration; authentication; threat modelling; compliance incentives; resilience; user control
Introduction: The industry that sells safety while breeding risk
Security, as practised by its modern priesthood, has drifted from engineering into pageantry. The rituals are elaborate, the vocabulary is holy, and the believers are paid well to mistake the incense for proof. Threats are inflated into folklore, then paraded to justify another layer of “protection” that nobody asked for and nobody can use without swearing. The result is not safety. It’s a stage show where the props are expensive and the audience is told that discomfort is the same thing as being defended.
The central fraud is simple: complication is treated as virtue. If a control is awkward, it must be strong. If a process is slow, it must be safe. If the user can understand it without a laminated flowchart, it must be naïve. So the system accretes steps like barnacles: passwords chained to codes chained to devices chained to fresh accounts, each one a new opportunity for misconfiguration, confusion, and the oldest exploit in the world — a human trying to get on with their life. The security crowd then stands back, admiring its own labyrinth, and calls the inevitable workarounds “user error,” as if ordinary people were supposed to become monks of credential management just to read their email.
This is not a harmless aesthetic preference. Every extra hoop widens the attack surface, not only in software but in behaviour. The more tedious the safe path becomes, the more predictable the unsafe path becomes. People reuse secrets because they’re drowning in them. They disable protections because they’re being punished by them. They store keys in places they shouldn’t because the system refuses to treat them like owners. And then, when the breach arrives — often through the glittering complexity itself — the same professionals lecture the victims about “awareness” while polishing the very controls that produced the mess.
So the argument here is not about whether security matters. It does, in the same way gravity matters. The argument is about what security is for. It is meant to keep control in the hands of the rightful user and preserve continuity of identity, data, and access under normal strain. That is the job. Not to produce majestic diagrams. Not to impress auditors. Not to create a museum of locked doors where the only people who can move are the curators.
This essay will therefore use one test — brutal, mundane, and non-negotiable. Does the system keep ordinary users safe in ordinary life? Not in a laboratory fantasy where every person has three spare devices, an IT department, and time to chant through a ten-step recovery liturgy. In real life: when a phone breaks, when a device is upgraded, when travel and work and family happen without asking permission from a security policy. Can a user migrate cleanly to an identical model without their world being shattered into fifty re-enrolments and broken keys? Can they clone their working state without begging a dozen servers for forgiveness? If the answer is no, then the “security” is theatre — the kind that sells tickets, not protection.
The frightening part is not that the industry gets this wrong occasionally. It is that it gets it wrong systematically, because the incentives reward spectacle. Pain is marketed as proof. Complexity is sold as seriousness. The user’s control is treated as a risk to be managed rather than the asset to be protected. And that inversion keeps breeding the very insecurity the industry claims to prevent. The show goes on. The breaches do too.
Security theatre: what it is and why it thrives
Security theatre is the industry’s favourite con: make security look harder than the attacker’s job while making the user’s life a slow-motion wreck. It isn’t protection. It’s choreography. It’s the belief that if you pile enough locks onto a door, you never have to admit the wall is made of plasterboard. The theatre lives in rituals masquerading as safeguards: the extra prompts, the constant re-authentication, the “security questions” that strangers can answer after a lazy scroll through public crumbs, the ever-tightening password rules that produce exactly two outcomes — reuse or abandonment. The badge of honour is friction. If the user suffers, the system is “serious.” If it works smoothly, it must be “weak.” That is not engineering. That is superstition with a budget line.
The “more steps = more secure” pattern is a reflex, not a thought. Something fails, and instead of tracing why it failed, security bolts on another hoop. Phishing succeeds? Add a factor. Users trip over the factor? Add another. Workflows break? Add training. Training doesn’t stick? Add policy. Policy spawns bypasses? Add monitoring. Monitoring creates noise? Add dashboards. Each addition is celebrated as maturity, even when it widens the attack surface, multiplies misconfiguration, and turns ordinary tasks into obstacle courses. In theatre-logic, a control is judged by how imposing it looks in a slide deck, not by whether it reduces risk in the world where people actually exist.
It thrives because the incentives are crooked, not because the threats are divine. Compliance frameworks reward visibility over outcome. Audits don’t pay you for a system that quietly resists failure; they pay you for artefacts, checkboxes, and rituals you can enumerate like commandments. Optics are cheap and defensible. A team can’t easily prove “nothing happened because we designed something simple and robust.” It can prove “look at all these controls we added,” and that keeps the quarterly theatre review glowing.
Fear of blame is the second engine. Nobody gets fired for adding one more gate. People get fired for failing without a gate they can point to afterwards. So the rational, self-preserving move is endless accretion: pile on controls until the system resembles a bureaucratic hedgehog, then call the inevitable brittleness “someone else’s problem.” Responsibility is personal, inconvenience is outsourced, and theatre becomes the safest career strategy.
Vendors grease the skids. They don’t sell calm, dependable design; they sell “security platforms” stuffed with toggles, alerts, integrations, and subscriptions. Complexity is marketed as sophistication because it justifies the invoice and locks you into dependence. A simple system is a threat to their business model. A system no one understands is a recurring revenue stream.
Career status finishes the job. Complexity creates a priesthood. The harder the maze, the more valuable the guides. Theatre lets security people posture as guardians of a sacred realm ordinary users must never touch without supervision. Once that hierarchy forms, it defends itself. The hoops stop being design choices and become identity. Remove them and you’re not fixing security — you’re insulting the class that built its power on the maze.
So theatre becomes policy the same way weeds become a garden: neglect, vanity, and incentives that reward appearances. Organisations feel virtuous because controls are visible. Security teams feel important because users are trapped. Vendors feel rich because the trap needs maintenance. And users learn the only lesson the system teaches: if the safe path is unbearable, bypass it. That is how theatre thrives. That is how it metastasises into policy. That is how policy manufactures breach.
Complexity as a vulnerability factory
Every extra control is sold as another brick in the fortress. In reality it’s another window in the wall. You don’t “add security” by stapling steps onto a process; you add surface area — more code, more configuration, more state, more dependencies, more places for human hands to slip. Each new factor, connector, identity broker, policy engine, and “helpful” dashboard isn’t a shield. It’s an invitation to misconfigure something at 2 a.m. and then pretend it was inevitable. The attacker doesn’t have to beat all your controls. They only need the weakest one you forgot you installed last quarter because some committee liked the logo.
Misconfiguration is the predictable child of over-control. The more knobs a system has, the more likely someone will turn the wrong one or leave it half-turned. Security stacks now resemble a badly wired switchboard: layers of conditional rules, exceptions for executives, silent bypasses for legacy apps, “temporary” holes for migrations that never end, and shadow settings that only one person in the building remembers. The organisation calls this complexity “defence in depth.” The attacker calls it “free money.” The user calls IT and gets told to reboot, because no one understands what’s actually wired to what anymore.
Then comes the cascade effect, the part the theatre crowd never audits because it makes them look incompetent. One link breaks — a token expires, an SSO provider hiccups, a device gets replaced, a certificate rotates badly, a password manager doesn’t sync, a policy update flips a bit — and suddenly the user is locked out of their own life. Not because an attacker arrived with a crowbar, but because your Rube Goldberg machine tripped on itself. So the user does what humans always do when trapped: they find a way round. They forward files to personal accounts. They keep old devices alive as unofficial backups. They reuse passwords because the system demands thirty of them. They screenshot recovery codes and stick them in places that make your blood run cold. They stop updating because updates mean new hoops. The broken link forces unsafe detours, and the detours become the real breach channel. You built a “secure” motorway and then blocked it with toll booths until everyone started driving through the fields.
Fragile systems don’t fail when villains arrive; they fail under normal use. Under upgrades, resets, travel, fatigue, urgency, and the small disasters of ordinary life. The security industry keeps designing like everyone is a full-time operator with spare hardware, perfect memory, and a monk’s schedule. Real users are none of those things. When your control stack can’t survive a phone swap, a laptop crash, or a missed step in a ten-stage recovery rite, you don’t have security. You have a high-latency breach generator that only needs time and friction to do its work.
In short: complexity doesn’t merely fail to secure. It actively manufactures insecurity, at scale, by turning routine continuity into chaos and predictable human coping into your biggest vulnerability. If you want systems that stay safe, stop building museums of locks. Start building something that can be used without bleeding.
Humans are not the problem; they are the environment
Security people love to talk about “the human factor” the way medieval doctors talked about bad humours: as a convenient explanation for anything they don’t want to admit they engineered badly. The truth is less flattering. Humans are not a bug in the system. They are the system’s operating environment. Designing security while treating ordinary human behaviour as a defect is like designing a boat while resenting water. You can do it, but you’ll look ridiculous right up to the moment you sink.
Behaviour under friction is not random. It is brutally predictable. Give someone a clear, fast, reliable route and they will take it. Give them a maze, and they will tunnel under it. When the “secure path” is a pilgrimage through logins, device prompts, expiring codes, mysterious lockouts, and recovery rites that require a calendar and a spare life, users don’t become more careful. They become more desperate. Desperation is not a moral failure; it is a design outcome. It’s the part of your system you shipped without writing it down.
Overcomplication trains bypasses the way a bad road trains potholes into a new map. Shadow IT blooms because the official tool is slower than the work and twice as fragile. People spin up personal accounts, unapproved apps, and informal channels because they need the job done today, not after your quarterly security review. Passwords get reused because you demanded dozens of them with incompatible rules, then scolded anyone who dared to write them down. So they do the only rational thing left: they compress. One password serves many accounts. One memory carries an entire corporate kingdom. Your policy didn’t create security; it created a monoculture.
Unsafe storage follows the same physics. If recovery is painful, users will make recovery painless. That means screenshots of backup codes sitting in photo galleries. Paper notes stuffed in wallets. Tokens pasted into emails to “save for later.” USB sticks labelled “keys” because the system refused to respect continuity. Old devices kept running long past their safe life because replacing them is a bureaucratic blood sport. Files forwarded to personal accounts because the corporate gatekeepers turned normal access into a hostage negotiation. These aren’t edge cases. They are what happens when you build a security system that punishes ownership.
Then the industry has the gall to call this “user error.” That phrase is less diagnosis than confession. It means: the design failed, so we’re blaming the people who suffered it. If a control requires constant perfect attentiveness, infinite time, and a memory like an elephant on amphetamines, it is not a control. It is a trap. And when the trap bites, the designer doesn’t get to play innocent while lecturing the victim about vigilance.
The arrogance is structural. Security teams often think they’re combating adversaries. In practice, they’re combating users. The user is treated as a suspect to be contained rather than the owner to be protected. Control is confiscated “for safety.” Continuity is broken “to prove identity.” Every normal action becomes a test of devotion. The result is predictable: users stop trusting the system, and once trust dies, compliance becomes theatre too. They do the dance when watched and go round the back door when they aren’t. Congratulations — you have formally converted your workforce into a distributed bypass network.
Real security isn’t built by demanding heroism from ordinary people. It’s built by assuming they are tired, busy, fallible, and still the rightful owners of their data and devices. If your model can’t survive a bad day — a phone swap, a forgotten token, a deadline, a flight, a broken laptop — then the model is not secure. It’s just brittle. And brittle systems don’t fail because humans are imperfect. They fail because they were designed by people who thought perfection was a prerequisite for safety.
So stop treating humans as the problem to be disciplined. Treat them as the environment to be designed for. Make the safe path the easy path. Make continuity normal. Make recovery sane. Do that and you won’t need sermons about awareness, because the system will finally align with the species using it.
Usability as the first security control
Security that can’t be used is not security. It’s a museum exhibit: impressive from a distance, useless in the rain. The foundational principle is embarrassingly simple, which is precisely why the industry keeps avoiding it: safe must be easy, or it will not be used. If the secure path requires a spreadsheet, a second device, a calm afternoon, and a priest to interpret the error messages, then the secure path is fiction. People will take the route that lets them complete the task, because tasks are how life stays stitched together. You can rage about “policy” all you like; policy doesn’t outrank reality.
The security establishment treats usability like a garnish — a nice-to-have flourish for the product team, a bit of paint on an otherwise righteous machine. That’s backward. Usability is a security property in the same way structural integrity is a property of a bridge. If a bridge looks magnificent but collapses when people walk on it, nobody praises the design for its moral seriousness. They call it a hazard and fire the engineers. Yet in security, we keep applauding systems that fail under ordinary use and then blame the pedestrians for not levitating.
Why is usability security? Because it governs behaviour. And behaviour is the battlefield. A control that users can’t follow reliably is not a control; it’s an unpredictable failure mode. The more complicated a process is, the more likely it is to be done wrong, skipped, postponed, or bypassed. Confusion doesn’t just inconvenience people — it creates attack vectors. When the secure option is obscure or painful, people route around it. They reuse passwords because the system demanded too many. They disable protections because the protections disable them. They stash keys badly because recovery is cruel. They delay upgrades because upgrades are punishment. None of that is a bug in the user. It’s a bug in the system’s design, and attackers dine on it nightly.
Control and clarity reduce risk because they reduce improvisation. Give the user a clear, dependable way to manage their identity, devices, and keys, and you shrink the space where desperate hacks are born. A person who can migrate a device cleanly isn’t forced to keep insecure relics alive. A person who can recover access with one strong authorisation step isn’t tempted to stockpile secrets in unsafe places. A person who understands what the system is asking will comply without resentment, because compliance stops feeling like tribute.
Clarity also disciplines the security team, which is an underrated benefit. If you must explain a control in plain language to a tired human at speed, you’re forced to confront whether the control is actually doing anything. This is why theatre hates usability: usability shines a harsh light on empty rituals. When the only justification for a step is “because security,” usability exposes it as superstition. The step either protects something in a measurable way, or it doesn’t deserve to exist.
This is not an argument for loosening standards. It is an argument for making standards liveable. Security should be like good infrastructure: strong where it matters, invisible where it doesn’t, and aligned with how people actually move through the world. The industry’s obsession with discomfort as proof has produced brittle systems and predictable bypasses. Usability flips that. It makes the safe path the natural path. It keeps ownership in the hands that are supposed to hold it. And it cuts risk by removing the very conditions that breed human workarounds, misconfiguration, and quiet sabotage.
In short: if your security design requires heroic users, it is insecure by definition. Build for ordinary people doing ordinary things under ordinary stress, and you won’t need to lecture them — you’ll finally be protecting them.
Case study: phone migration and the absurdity of modern security
Here is the real-world expectation, the one any sane person carries without needing a committee to validate it: if I buy the same model of phone, running the same operating system, I should be able to move from old to new as a true digital clone. Not a scavenger hunt. Not a spiritual journey. A clone. My apps, my settings, my keys, my identity, my working life — all of it should transfer cleanly and just work. The device changed; the owner didn’t. Continuity is the point of owning technology rather than renting headaches.
Now look at what we actually get. Migration today is a bureaucratic obstacle course dressed up as safety. First, a dozen accounts you didn’t remember existed because you only signed up to stop your phone nagging you. Then repeated re-authentication for each little kingdom inside the phone: the app store, the bank, the authenticator, the cloud vault, the messaging apps, the “security” suite, the work profile, the private profile, the vendor’s profile, and whatever else some product manager bolted on since last time. Every service wants to re-test your identity as if a new rectangle of glass means you’ve become a different human.
Half the apps don’t restore state properly. Some demand fresh device registration. Some throw tantrums because the secure element isn’t the “original” one. Some decide your perfectly legitimate migration looks like fraud because their threat model is a paranoid hallucination written by people who never leave the office. Keys disappear into the ether. Tokens expire mid-transfer. Two-factor setups require new enrolment but the old device must remain alive to approve the enrolment, which is a wonderful joke if your old phone died on impact with the floor five minutes ago. The system that was supposed to protect your life then refuses to let you back into it.
And we’re told this is “secure.” It isn’t. It’s fragile, punitive, and it manufactures risk. Because when you make the normal path unbearable, you train people to do abnormal things. Users start keeping old devices alive as unofficial lifeboats, even when those devices are unpatched antiques. They postpone upgrades because they can’t afford a day of authentication purgatory, so they remain on outdated software longer than any attacker would ever need. They turn off protections just to get through the transfer, then never turn them back on. They copy secrets into unsafe places because they don’t trust the migration to preserve them. They screenshot recovery codes. They email themselves tokens. They dump app data into whatever backup method seems least likely to betray them at the crucial moment. Shadow backups bloom like mould in the corners of your “secure” house.
What’s deliciously obscene is that none of this is caused by attackers. It’s caused by designers who decided ownership was suspicious and continuity was optional. The migration pain is not a side effect; it’s a policy choice disguised as virtue. And every one of those choices creates a bigger, juicier attack surface than the risk it was meant to address. The user didn’t become safer. The user became cornered. Cornered users improvise. Improvisation is where breaches are born.
So this case study is not about convenience. It’s about basic security reality. A system that cannot survive routine device replacement without shattering identity and scattering keys is not “high security.” It’s high drama. It forces unsafe workarounds, delays critical updates, and turns ordinary continuity into a fertile breeding ground for compromise. If the industry wants to keep calling that protection, it can do so only in the same way a pickpocket calls a crowd “job security.”
What sane migration looks like
A sane migration model starts by acknowledging a fact so obvious it embarrasses the people who ignore it: if the phone changes and the owner doesn’t, continuity is the default state. The job of security is to protect that continuity, not smash it into shards and call the wreckage “verification.” So the design is simple, hard, and adult: one strong authorisation moment, then full state continuity. Not fifty petty interrogations. One real gate, then the transfer.
Here’s the high-level shape. The user initiates a clone from old device to new device of the same class and operating system. The system pauses once and demands a serious proof of control — a proof that matches the stakes of transferring a life. That proof can be a hardware passkey, a secure element handshake between devices, or a formal attestation bound to the user’s identity and the device lineage. Pick your poison, but make it strong and make it singular. The user presents the proof, the system accepts it, and trust is established for the migration session.
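To make that single gate concrete, here is a minimal sketch in Python, assuming an Ed25519 device key and the widely used `cryptography` package. The function names (`issue_challenge`, `open_migration_session`) and the flow are illustrative placeholders, not any vendor’s real API; the structural point is one fresh challenge, one device-bound signature, one session token that covers the whole transfer.

```python
# A minimal sketch of the single migration gate: one fresh challenge, one
# device-bound signature, one session. Names and flow are illustrative only.
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_challenge() -> bytes:
    """Server side: a fresh, single-use challenge for this migration attempt."""
    return secrets.token_bytes(32)


def prove_control(device_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    """Old device: sign the challenge with its enrolled, hardware-backed key."""
    return device_key.sign(challenge)


def open_migration_session(enrolled_pub: Ed25519PublicKey,
                           challenge: bytes, proof: bytes) -> bytes:
    """Server side: accept exactly one strong proof, then authorise the whole
    transfer session instead of re-interrogating the user app by app."""
    try:
        enrolled_pub.verify(proof, challenge)  # raises if the proof is bad
    except InvalidSignature:
        raise PermissionError("migration proof rejected")
    return secrets.token_bytes(32)  # opaque token covering the whole migration


# Illustrative flow: enrol a device key once, then pass the single gate once.
device_key = Ed25519PrivateKey.generate()  # stands in for a secure element
challenge = issue_challenge()
session_token = open_migration_session(
    device_key.public_key(), challenge, prove_control(device_key, challenge))
```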
After that moment, the system does what it should have been doing all along: it copies the state. Keys, tokens, credentials, app data, settings, identities — the whole working fabric, moved cleanly and verifiably. If something must be re-wrapped for the new secure element, fine; do it invisibly. If certain secrets require re-derivation, do it within the authorised session. The point is that the user doesn’t re-prove themselves to every petty fiefdom inside the phone. The authorisation is global for the migration because the risk is global, and you already paid for it with the single serious gate.
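Under the same assumptions (Python with the `cryptography` package, placeholder names), here is a sketch of that invisible re-wrapping step: an app’s data key is unwrapped from the old device’s wrapping key and re-wrapped under the new one inside the authorised session, with no further prompts to the user. In reality the wrapping keys would live inside each secure element and never leave it; here they are ordinary bytes purely for illustration.

```python
# A sketch of in-session key re-wrapping. old_se_key and new_se_key stand in
# for wrapping keys that would really live inside each device's secure element.
import secrets

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def wrap(wrapping_key: bytes, data_key: bytes, context: bytes) -> bytes:
    """Envelope-encrypt an app's data key under a device wrapping key."""
    nonce = secrets.token_bytes(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, data_key, context)


def unwrap(wrapping_key: bytes, blob: bytes, context: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(wrapping_key).decrypt(nonce, ciphertext, context)


def rewrap_for_new_device(old_se_key: bytes, new_se_key: bytes,
                          blob: bytes, context: bytes) -> bytes:
    """Runs once per secret inside the authorised migration session; the user
    is never prompted again and the plaintext key is never written anywhere."""
    return wrap(new_se_key, unwrap(old_se_key, blob, context), context)


# Illustrative flow with throwaway keys and a made-up app identifier.
old_se_key = AESGCM.generate_key(bit_length=256)
new_se_key = AESGCM.generate_key(bit_length=256)
context = b"app:mail:data-key:v1"          # binds the blob to its purpose
app_data_key = secrets.token_bytes(32)

blob_on_old_device = wrap(old_se_key, app_data_key, context)
blob_on_new_device = rewrap_for_new_device(old_se_key, new_se_key,
                                           blob_on_old_device, context)
assert unwrap(new_se_key, blob_on_new_device, context) == app_data_key
```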
Biometrics and extra factors belong after trust is established, not as toll booths scattered across the road. Use fingerprint or face checks to streamline local access on the new device, to add convenience, to reduce shoulder-surfing or casual theft. Great. But don’t weaponise them to block continuity. Don’t turn routine ownership into a gauntlet. Extra factors should be like seatbelts: quietly present, instantly helpful, not a ritual humiliation you must endure to start the engine.
This model shrinks risk because it shrinks chaos. One strong proof is easier to protect than a dozen weak ones. One authorised session is easier to audit than thirty re-enrolments spread across random apps. And continuity removes the desperate improvisation that theatre systems manufacture. You stop training users to hoard secrets in unsafe places. You stop trapping them into keeping old devices as relic lifeboats. You stop making upgrades feel like a hostile takeover of their own identity.
In short: migration security should be a single, decisive act of authorisation followed by faithful continuity. Anything else is not “high security.” It’s insecurity wearing a badge and collecting applause.
Risk, not superstition
Security collapses the moment it stops thinking in risk and starts thinking in demons. The professional superstition goes like this: if a threat is possible, you must build for it as if it were probable. That is how you get systems designed for a Hollywood heist while failing at Tuesday afternoon. Possibility is infinite; probability is the adult filter. When you refuse to use it, you don’t become cautious. You become ridiculous — and you drag everyone else into the circus with you.
Probable threats are the boring ones: lost devices, weak recovery flows, phishing that works because the user is exhausted, misconfigurations because the stack is too complex for its own good, stale software because upgrades are painful, and credential sprawl because identity has been fragmented into a hundred petty kingdoms. Fantasy threats are the edge-case nightmares security committees love to clutch: the omniscient attacker with your phone, your face, your spare device, your exact timing, and the patience to navigate a maze you can’t even explain to your own staff. Yes, such a creature is possible. So is a meteor through your data centre. But if you design as if the meteor is scheduled, you will spend a fortune to make ordinary life unworkable — and still get wiped out by a cigarette you forgot to stub out.
Designing for edge paranoia punishes everyone because it turns normal behaviour into suspicious behaviour. It forces every user to pay the cost of a threat that almost never materialises, and the payment isn’t abstract. It’s friction, confusion, delay, and the predictable bypasses that follow. People stop upgrading because upgrades break access. They store secrets badly because recovery is hostile. They reuse passwords because you’ve drowned them in them. They route around official tools because official tools hurt. The control meant to stop the rare nightmare ends up breeding the common failure. The guard dog spends the day biting the owner’s legs while the burglar walks in through the back window you forgot existed.
Worse, edge paranoia doesn’t even catch what it’s hunting. The fantasy threat model becomes so intricate that the actual system turns brittle. You add layers to stop a unicorn and accidentally create ten real holes: mis-set permissions, stale certificates, tangled SSO dependencies, silent exceptions, and a user base trained to distrust and evade you. The attacker doesn’t need to be mythical. They just need to be awake and patient while your own complexity does half the work for them.
Risk-based security re-anchors the field to reality. Threat modelling is not a vibe; it is a discipline of trade-offs. You ask: what is the asset, who wants it, how likely are they to try, what tools do they actually have, and what failure modes are most damaging under normal use? Then you build controls that reduce those risks without creating larger ones. Every control has a cost — in user friction, operational burden, and attack surface. Pretending otherwise is how you end up with a fortress that collapses under its own weight.
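One way to make those trade-offs visible, sketched below with entirely made-up figures, is to score each proposed control by the loss it plausibly prevents against the loss its predictable bypasses introduce. The numbers are illustrative only; the discipline lies in being forced to write both columns down before adding another gate.

```python
# A toy illustration of risk-based trade-offs; every figure is made up.
# Net benefit = annualised loss the control prevents
#             - annualised loss its predictable bypasses introduce.
controls = [
    # (control,                          loss_prevented, bypass_loss_added)
    ("phishing-resistant login",               120_000,            5_000),
    ("single-gate device migration",            40_000,            2_000),
    ("30-day forced password rotation",         10_000,           60_000),
    ("third recovery questionnaire",             3_000,           25_000),
]

for name, prevented, added in sorted(controls,
                                     key=lambda c: c[1] - c[2], reverse=True):
    net = prevented - added
    verdict = "worth building" if net > 0 else "theatre"
    print(f"{name:35} net {net:>9,}  {verdict}")
```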
Trade-offs are not a weakness. They’re the only honest way to engineer systems for humans. The goal isn’t to be invulnerable in an imagined apocalypse. The goal is to be resilient in the ordinary world where devices break, people forget things, attackers pick the easiest route, and life keeps moving whether your policy approves or not. Security that doesn’t respect probability is not security. It’s ritual. And ritual has never stopped a breach; it only makes the post-mortem sound pious.
The incentives that keep security stupid
If security were judged on outcomes, half the industry would be forced to take up honest work. Instead, it’s judged on spectacle. Standards and audits don’t reward “quietly safe.” They reward visible friction — the kind you can count, screenshot, and staple to a compliance binder. An auditor can’t easily certify that a system is secure because users are rarely trapped into bypassing it. But they can certify that you’ve imposed MFA on everything including the kettle, that passwords rotate on a schedule nobody can survive, that access requires a pilgrimage through forms and approvals. In other words, they certify pain. Pain is legible. Safety is subtle. So the system evolves toward what is legible, not what is safe. The audit trail becomes the product, and actual security is a side hobby.
Then come the vendors, smiling like undertakers at a wedding. They sell boondoggles with the same pitch every time: complexity equals maturity. The more sliders, dashboards, integrations, and “AI-driven insights” you get, the more secure you must be — because look how much you paid. Vendors don’t profit from a system that is simple enough to run without them. They profit from a system that needs a certified interpreter. So they package complication as virtue, then call your dependency “enterprise-grade support.” The uglier truth is that they’re renting you a maze and charging extra for the map. If a product reduces friction to the point where users stay on the safe path naturally, it threatens the vendor’s reason to exist. So the market keeps producing decorative armour that weighs more than the soldier.
Internal teams complete the triangle. Security departments gain power by becoming gatekeepers. If access is easy, they’re invisible. If access is painful, they’re indispensable. Bureaucratic friction becomes a currency: every approval chain, every mandatory exception, every obscure control is leverage. It allows the team to say “no” without needing to be right, because the cost of arguing is higher than the cost of surrendering. This veto power feels like importance, so it gets defended with religious zeal. The user’s autonomy is treated as a threat to be managed. The organisation learns to fear its own people more than its actual adversaries, because internal politics are louder than external breaches — right up until the breach arrives through the workaround your politics created.
Put these incentives together and the stupidity becomes inevitable. Audits reward theatre, vendors monetise theatre, and internal teams consolidate authority through theatre. Nobody in that chain is rewarded for removing nonsense. Everyone is rewarded for adding it. The result is a security culture that treats subtraction as heresy and usability as weakness. And the users — the owners, the people the whole edifice is supposedly protecting — are left navigating a system designed less to keep them safe than to keep everyone else employed and preening.
That is why security stays stupid. Not because the people are uniquely foolish, but because the machine pays them to be.
Principles for security that actually works
Start with the principle the theatre crowd can’t stand to hear: user control and continuity are non-negotiable. Security exists to keep rightful ownership intact — of identity, devices, data, keys, access. If the user cannot move their working state to a replacement device, recover after a failure, or maintain continuity without begging ten separate systems for mercy, then security has failed at its first duty. A system that treats its owner as a suspect is not “protecting” anything; it is confiscating control and calling the confiscation virtue. Real security protects continuity because continuity is what makes people safe in real life. Break it, and you don’t prove anything except your contempt for the people who pay for the system.
Second: fewer steps with stronger guarantees beats layered rituals. One solid gate, properly designed and properly audited, is worth more than a dozen flimsy ones stacked like papier-mâché armour. Every extra step is another chance to misconfigure, another chance to phish, another chance for the user to give up and improvise. The security industry keeps building obstacle courses and calling them defences. It is the same mistake every bad bureaucrat makes: confusing movement with progress. Strong security is decisive and clean. It asks for proof once, at the right moment, with the right strength, then gets out of the way. Anything else is just you charging rent for access to someone else’s life.
Third: measure outcomes, not intentions. If a control causes bypass, the control is a vulnerability. Full stop. You don’t get to wave a policy document and claim innocence while your design trains people into unsafe behaviour. If users are reusing passwords, storing keys badly, running shadow tools, delaying upgrades, or keeping dead devices alive as lifeboats, that’s not “non-compliance.” That’s a red flare from reality telling you your control is hostile to normal use. In engineering, a design that reliably produces failure is called a defect. In security, it’s too often called “best practice.” Stop judging controls by how righteous they look and start judging them by what they do to breach rates and user behaviour.
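As a toy illustration of measuring outcomes rather than intentions, the snippet below compares a bypass rate before and after a control rollout. The counts are hypothetical; in practice they would come from real access logs and incident reports, but the test is the same: if the rate went up, the control is the vulnerability.

```python
# A toy check on outcomes rather than intentions: did the new gate lower the
# rate of unsafe workarounds, or raise it? The counts here are hypothetical.
def bypass_rate(bypasses: int, attempts: int) -> float:
    return bypasses / attempts if attempts else 0.0


before = bypass_rate(bypasses=180, attempts=10_000)    # quarter before rollout
after = bypass_rate(bypasses=1_450, attempts=10_000)   # quarter after rollout

if after > before:
    print(f"bypass rate {before:.1%} -> {after:.1%}: the control is a vulnerability")
else:
    print(f"bypass rate {before:.1%} -> {after:.1%}: the control earns its keep")
```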
Fourth: design for normal life first, adversaries second. Not because adversaries don’t matter, but because normal life is where systems actually live or die. Devices break. People travel. Batteries die. Children scream. Deadlines land. Memory fails. A security model that only works for a fully refreshed operator sitting in a lab with spare hardware and unlimited time is a toy. The attacker doesn’t need genius if the system collapses under ordinary stress. Build something that survives the Tuesday problems — migrations, recoveries, routine access — and you have already defeated most real-world threats. Build for the apocalypse first and you’ll breed the everyday breaches that actually happen.
Taken together, these principles are not radical. They are simply what security would look like if it remembered its job. Protect ownership. Preserve continuity. Use strong, minimal gates. Measure reality. Design for humans in motion. Do that, and the theatre dies because it has nothing left to sell. Good. It should die.
Conclusion: Stop building locks for museums and start building doors for homes
The modern security field has achieved a remarkable inversion: it makes systems less secure by making them unliveable. It has turned protection into a pilgrimage and called the blisters “best practice.” It multiplies gates until the owner of the house has to sleep outside, then pins the eviction notice on the user’s forehead and calls it “user error.” Breaches, when they arrive, stroll in through the holes punched by this nonsense: the shadow backups, the reused secrets, the unpatched relics kept alive because upgrades feel like amputation. The industry points at the attacker with one hand and hides its own handiwork with the other.
All of this is avoidable. Not with more theatre, not with thicker policy binders, not with another vendor dashboard, but with a standard so simple it offends the people who’ve built careers on complexity: security must be something users can actually keep without heroics. If staying safe requires perfect memory, infinite patience, and a free afternoon every time a device changes, then your “security” is just an elaborate way of pushing people into failure. A system that collapses under a phone swap is not resilient. It’s decorative.
Real security is brutally practical. It keeps control in the hands it belongs to. It preserves continuity through the boring disasters of ordinary life. It uses one strong gate where a coward would build ten flimsy ones. It respects probability instead of worshipping fantasy. It treats usability as a structural requirement, not a marketing flourish. And above all, it assumes humans will remain human — tired, distracted, busy, fallible — and still worthy of owning their own machines.
The rest is vanity. Locks built for museum pedestals, polished and useless, admired by committees and ignored by reality. Doors built for homes are dull by comparison. They open. They close. They lock when they must and get out of the way when they don’t. They do their job so quietly that nobody writes a white paper about them. That should be the ambition. If a security system can’t reach that standard — if people can’t live with it and stay safe at the same time — then it is not security at all. It’s just an expensive superstition, and nobody should pretend to believe in it.