The Algorithm Is Watching: How AI Surveillance Is Gutting the Fourth Amendment
Your phone knows where you slept last night. An algorithm already decided whether that matters.
In 1967, the Supreme Court told us the Fourth Amendment “protects people, not places.” It was a revolutionary idea — that constitutional privacy doesn’t stop at your front door, that the government needs a warrant to eavesdrop on your phone call from a public phone booth. For decades, that principle felt sturdy enough. Police needed warrants to tap your phone. They needed probable cause to search your car. The system wasn’t perfect, but the basic architecture held: the government couldn’t pry into your private life without convincing a judge it had a good reason.
That architecture is now collapsing — not because anyone repealed the Fourth Amendment, but because the surveillance technologies that police use in 2026 are so fundamentally different from anything the courts have encountered that the old rules have become almost meaningless. Artificial intelligence hasn’t just made surveillance faster or cheaper. It has changed what surveillance is. And the legal frameworks we rely on to protect our privacy haven’t caught up. They may never catch up, not without a fundamental rethinking of what constitutional privacy means in an age of algorithmic analysis.
This isn’t a distant hypothetical. It’s happening right now, in your city, probably on your block.
The Surveillance Machine You Don’t See
Consider what a modern police department can do today, without a warrant, without probable cause, and in most jurisdictions without any legal constraint whatsoever.
Predictive policing algorithms ingest millions of data points — arrest records, 911 calls, social media posts, weather patterns, even moon phases — and generate heat maps that direct officers to specific neighborhoods before any crime has occurred. In some cities, these systems have evolved from predicting where crimes will happen to predicting who will commit them. Chicago’s Strategic Subject List — sometimes called the “heat list” — assigned risk scores to individuals based on their social networks, arrest histories, and associations with victims of violence. You could land on that list without ever being charged with a crime. And once you’re on it, the consequences are real: increased police contact, heightened scrutiny, and the quiet accumulation of a digital record that follows you indefinitely.
The bias problem isn’t a bug — it’s baked into the architecture. These algorithms are trained on historical crime data, which reflects decades of racially disparate policing. Feed a machine learning system data from a city where Black neighborhoods were policed more aggressively, and it will dutifully learn that Black neighborhoods are where the crime is. The algorithm then directs more officers to those neighborhoods, generating more arrests, producing more data that confirms the original pattern. It’s a feedback loop that launders human prejudice through the veneer of mathematical objectivity.
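The loop is easy to reproduce. Below is a minimal simulation sketch (the districts, rates, and allocation rule are all hypothetical, not drawn from any deployed system): two districts with identical true crime rates, one of which starts with a heavier arrest record. Because patrols follow recorded arrests and arrests follow patrols, the inherited disparity never washes out.

```python
import random

random.seed(0)

# Two districts with IDENTICAL true crime rates; district A simply
# starts with more recorded arrests (the legacy of heavier past policing).
TRUE_CRIME_RATE = 0.05
TOTAL_PATROLS = 100
arrests = {"A": 200, "B": 100}  # the biased historical record

for year in range(10):
    total = sum(arrests.values())
    for district in list(arrests):
        # "Predictive" allocation: patrols track past arrest counts.
        patrols = TOTAL_PATROLS * arrests[district] / total
        # Recorded arrests scale with patrol presence, not with any real
        # difference in crime, so the initial disparity self-perpetuates.
        encounters = int(patrols * 20)
        new_arrests = sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(encounters))
        arrests[district] += new_arrests
    print(f"year {year}: {arrests}")
```

Run it and the recorded gap between the two districts widens year over year, even though nothing about the underlying reality differs.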
Facial recognition systems scan crowds in real time, matching faces against databases that, in the case of Clearview AI, contain billions of photographs scraped from social media without anyone’s consent. Clearview’s pitch to law enforcement was breathtaking in its ambition: upload a photograph of anyone, and the system would return every publicly available image of that person on the internet, along with links to the pages where those images appeared. The company built this capability by scraping Facebook, Instagram, YouTube, and Venmo — in violation of those platforms’ terms of service — and then marketed it to police departments across the country.
The technology has already produced wrongful arrests — Robert Williams, a Black man in Detroit, was arrested in front of his family based on a faulty facial recognition match. He isn’t alone. Studies have consistently shown that these systems perform worst on the people who are most frequently subjected to them: Black and brown faces, women, and young people. Joy Buolamwini and Timnit Gebru’s landmark “Gender Shades” study found that commercial facial recognition systems had error rates of up to 34.7% for dark-skinned women, compared to 0.8% for light-skinned men. When these systems are deployed in communities that are already over-policed, the consequences of those errors fall disproportionately on the people who can least afford them.
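The published error rates translate directly into disparate harm once you account for who gets searched. A back-of-the-envelope sketch: the error rates are the Gender Shades figures for commercial gender classification; the search volumes are hypothetical illustrations.

```python
# Error rates from Buolamwini & Gebru's "Gender Shades" (2018),
# measured on commercial gender-classification systems.
error_rate = {"dark-skinned women": 0.347, "light-skinned men": 0.008}

# Hypothetical search volumes, reflecting heavier deployment in
# over-policed communities (illustrative numbers only).
searches = {"dark-skinned women": 20_000, "light-skinned men": 10_000}

for group, rate in error_rate.items():
    expected = rate * searches[group]
    print(f"{group}: ~{expected:,.0f} expected errors "
          f"across {searches[group]:,} searches")
```

Same technology, roughly two orders of magnitude more expected errors for one group, before any human bias enters the picture.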
Automated license plate readers photograph every vehicle that passes, logging the time, date, and location. A single camera captures thousands of plates per hour. Networked together across a metropolitan area, they produce a comprehensive record of where every car in the city has been, day after day, week after week. Companies like Vigilant Solutions maintain databases containing billions of plate scans, accessible to law enforcement agencies across the country. The retention policies are stunning — some agencies keep plate data for years, creating a retroactive surveillance capability that allows investigators to reconstruct your movements long after the fact.
Social media monitoring adds another layer. The Department of Homeland Security and FBI have used social media surveillance tools to track activists, monitor protests, and flag individuals whose online expression is deemed concerning. These programs don’t just watch public posts — they use natural language processing and sentiment analysis to interpret tone, detect “radicalization” markers, and map social networks. Combined with facial recognition and location data, they can identify who attended a protest, what they posted about it, and who they associate with — a comprehensive dossier assembled without any judicial oversight.
And then there’s data aggregation — the quiet engine that makes all of it work. Fusion centers and platforms like Palantir pull information from government databases, commercial data brokers, social media platforms, and surveillance feeds, then use AI to stitch it all together into detailed profiles of individuals. Your grocery store loyalty card, your fitness tracker, your browsing history, your Venmo transactions, your location pings — each one seemingly trivial on its own, but together they paint a portrait of your life that is more detailed than anything a team of private investigators could assemble in a year. A 2012 Senate investigation found that fusion centers had produced “irrelevant, useless, or inappropriate” intelligence while routinely infringing on civil liberties — and the programs have only expanded since then.
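Mechanically, the aggregation step is trivial, which is part of the danger. Here is a toy sketch in which every source name, field, and record is invented for illustration: records that are innocuous one at a time, keyed to a shared identifier, merge into a profile that no single source contains.

```python
from collections import defaultdict

# Invented records from unrelated sources, each trivial on its own.
sources = {
    "plate_reader": [{"id": "p-1", "seen": "clinic parking lot, Tue 09:14"}],
    "loyalty_card": [{"id": "p-1", "bought": "prenatal vitamins"}],
    "fitness_app":  [{"id": "p-1", "route": "ends at a church, Sundays"}],
    "payment_app":  [{"id": "p-1", "paid": "union local dues, monthly"}],
}

# The "fusion" step: join everything on the shared identifier.
profiles = defaultdict(dict)
for source, records in sources.items():
    for rec in records:
        profiles[rec["id"]][source] = {k: v for k, v in rec.items()
                                       if k != "id"}

# The composite implies health status, religion, and union membership:
# facts present in no single stream, only in their combination.
print(profiles["p-1"])
```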
None of these capabilities requires a warrant. In most jurisdictions, none of them requires any legal process at all.
The Doctrine That Can’t Keep Up
How did we get here? The short answer is that Fourth Amendment law was built for a world that no longer exists.
The foundational framework dates to Katz v. United States in 1967, which established that the Fourth Amendment protects any situation where a person has a “reasonable expectation of privacy.” For decades, that test was the primary tool courts used to determine whether government surveillance constituted a “search” requiring a warrant. But the test has a circularity problem that has become fatal in the digital age: what counts as a “reasonable” expectation of privacy is shaped by the very surveillance practices the test is supposed to constrain. The more the government surveils us, the less reasonable it becomes to expect that we aren’t being surveilled. The test contains the seeds of its own destruction.
Then there’s the third-party doctrine — the legal principle that you forfeit your Fourth Amendment protection over any information you “voluntarily” share with a third party. When the Supreme Court announced this rule in the 1970s, the relevant “third parties” were your bank and your telephone company. The information you shared with them was limited and specific: your financial transactions, the numbers you dialed. The doctrine made a kind of intuitive sense in that context. If you told your banker something, you couldn’t claim it was still secret.
But apply that logic to the digital world and the doctrine devours itself. In 2026, virtually every human activity generates data that is transmitted to and stored by third-party service providers. Your phone shares your location with your carrier, your apps, and your operating system — continuously, automatically, and largely without your knowledge. Your email provider has every message you’ve sent. Your smart speaker has a record of every command you’ve uttered in your living room. Under a strict reading of the third-party doctrine, you have no Fourth Amendment protection over any of this information, because you “voluntarily” shared it with a third party. The doctrine written for a world of bank statements and phone bills now threatens to eliminate constitutional privacy protection for the most intimate details of modern life.
The Supreme Court recognized this problem in Carpenter v. United States in 2018, holding that the government’s acquisition of historical cell-site location information — records showing where your phone has been — constitutes a Fourth Amendment search, even though that data is held by your wireless carrier. The decision was important. It acknowledged that digital surveillance is categorically different from its analog predecessors. Chief Justice Roberts wrote that cell-phone tracking provides “an intimate window into a person’s life, revealing not only his particular movements, but through them his familial, political, professional, religious, and sexual associations.”
But Carpenter was deliberately narrow. The Court emphasized, repeatedly, that it was deciding only the specific question before it. It said nothing about real-time tracking, nothing about facial recognition, nothing about predictive policing, nothing about data aggregation, and nothing about AI. Lower courts have been left to guess at how Carpenter’s reasoning applies to the explosion of surveillance technologies that have emerged since 2018 — and, predictably, they have reached wildly inconsistent conclusions. Some federal circuits read Carpenter expansively, extending its logic to other categories of digital data. Others treat it as tightly limited to historical cell-site location records. The result is a patchwork of conflicting rules that leaves both citizens and law enforcement without clear guidance.
And here’s the deeper problem that Carpenter didn’t touch: the case involved surveillance that collected information. The cell-site records showed where Timothy Carpenter’s phone had been. The constitutional analysis focused on the comprehensiveness and duration of that tracking. But AI surveillance does something qualitatively different from collection. It creates new information. It takes your purchasing records and infers your pregnancy. It takes your location data and infers your religion. It takes your browsing history and infers your mental health status. Whether the Fourth Amendment has anything to say about this inferential process — this generation of knowledge that you never disclosed to anyone — is perhaps the most important open question in constitutional law, and no court has answered it.
What Makes AI Different
Here’s what matters: AI surveillance isn’t just traditional surveillance done faster. It represents a qualitative transformation in what the government can learn about you, and how.
AI generates knowledge that doesn’t exist in the raw data. Traditional surveillance collects information — it records what you said, photographs where you went, logs who you called. AI surveillance creates new information through inference: a machine learning model can predict a pregnancy from purchasing patterns before the shopper has told anyone, infer political affiliation or sexual orientation from social media activity, and forecast from past movements where a person will be tomorrow. This inferential capacity has no real precedent in the history of surveillance. The government isn’t just learning what you’ve done; it’s generating predictions about who you are and what you’ll do — predictions you never disclosed to anyone, because they existed only as latent patterns in data you didn’t even know you were producing.
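The mechanics of that inferential step are mundane, which makes them easy to miss. A minimal sketch using scikit-learn, with entirely fabricated features, labels, and training data: innocuous purchase counts become a prediction about an undisclosed, intimate fact.

```python
from sklearn.linear_model import LogisticRegression

# Fabricated toy data: monthly purchase counts of
# [unscented lotion, zinc supplements, cotton balls].
X_train = [[0, 0, 1], [1, 2, 0], [3, 4, 2],
           [0, 1, 0], [4, 3, 3], [1, 0, 0]]
y_train = [0, 0, 1, 0, 1, 0]  # 1 = later-confirmed pregnancy (toy labels)

model = LogisticRegression().fit(X_train, y_train)

# The inference: a probability about something the shopper never
# disclosed, conjured from data she never thought of as revealing.
shopper = [[3, 3, 2]]
print(f"P(pregnant) = {model.predict_proba(shopper)[0][1]:.2f}")
```

Nothing in that pipeline is exotic; the privacy harm comes from what the output asserts, not from how it was computed.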
AI aggregates across every dimension of your life. A single surveillance tool — a license plate reader, a facial recognition camera, a phone location ping — might reveal relatively little about you in isolation. But AI systems don’t operate in isolation. They integrate data from dozens, sometimes hundreds of sources, assembling a composite picture of your life that is far more than the sum of its parts. Research has demonstrated that metadata alone — the records of who you communicated with and when, stripped of any content — can predict personality traits, mental health conditions, and substance use patterns with startling accuracy. Purchasing data can reveal your religion, your health conditions, and your political beliefs. When these data streams are combined and analyzed by AI systems, the result is a form of surveillance so comprehensive that the concept of “practical obscurity” — the idea that your privacy is protected in part by the sheer difficulty of assembling scattered information about you — ceases to exist.
AI chills the exercise of constitutional rights. Empirical research has documented what many of us intuitively sense: pervasive surveillance changes behavior. After the Snowden revelations in 2013, searches for terrorism-related articles on Wikipedia dropped significantly — not because people stopped being curious, but because they became afraid of being watched. When people know that facial recognition cameras are scanning protest crowds, they stay home. When people know that social media monitoring tools are flagging “suspicious” speech, they self-censor. The chilling effect is real, measurable, and deeply corrosive to the freedoms of speech, association, and assembly that the First Amendment protects and the Fourth Amendment was designed to safeguard.
The Framework We Need
The existing doctrinal toolkit — the reasonable expectation of privacy test, the third-party doctrine, and the mosaic theory (the idea, drawn from the concurrences in United States v. Jones, that long-term aggregated surveillance can amount to a search even when each individual observation would not) — cannot handle these challenges. Each was designed for a different technological era, and each fails in specific, identifiable ways when confronted with AI surveillance.
What we need is a new framework — one that evaluates the constitutional significance of surveillance practices based on the features that actually make AI surveillance dangerous. I’d propose evaluating AI surveillance along three dimensions:
Inferential depth: How much new knowledge does the surveillance technique generate beyond the raw data it collects? A license plate reader that checks plates against a stolen vehicle database performs a simple comparison — low inferential depth. A system that analyzes your purchasing history to predict your health conditions or political beliefs generates knowledge that goes far beyond the underlying data — high inferential depth. The deeper the inferences, the greater the intrusion on privacy, and the stronger the constitutional protection should be.
Aggregative scope: How comprehensive is the data collection? A single traffic camera at one intersection reveals little. A network of thousands of cameras, integrated with license plate readers, facial recognition, and cell-phone tracking, reveals everything. The broader the aggregation, the more the surveillance resembles the kind of comprehensive monitoring that Carpenter recognized as constitutionally significant.
Autonomy implications: To what extent does the surveillance practice chill the exercise of constitutionally protected freedoms? Surveillance of political protests, religious gatherings, or journalistic activity strikes at the heart of First Amendment freedoms. Surveillance practices that operate pervasively and visibly — like public facial recognition — impose a generalized chilling effect on everyone, not just the targets. The greater the autonomy implications, the greater the constitutional concern.
These three dimensions produce a graduated spectrum of protection. Surveillance practices that score high on all three — like comprehensive data aggregation platforms that generate intimate inferences about entire populations in ways that chill political activity — would require a full warrant supported by probable cause. Practices that score high on one or two dimensions might require intermediate protections. And practices that score low across the board would face no additional constitutional constraints.
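To make the composition concrete, here is an illustrative sketch in which the scores, thresholds, and example practices are my own inventions for exposition, not doctrine:

```python
from dataclasses import dataclass

@dataclass
class Practice:
    name: str
    inferential_depth: int  # 0 (raw comparison) to 2 (deep inference)
    aggregative_scope: int  # 0 (single sensor) to 2 (city-wide fusion)
    autonomy_impact: int    # 0 (negligible chill) to 2 (chills protected activity)

def required_process(p: Practice) -> str:
    """Map a practice's three-axis profile to a tier of required process.
    Thresholds are illustrative placeholders, not doctrine."""
    high_axes = sum(score == 2 for score in
                    (p.inferential_depth, p.aggregative_scope, p.autonomy_impact))
    if high_axes == 3:
        return "warrant supported by probable cause"
    if high_axes >= 1:
        return "intermediate protection (e.g., court order)"
    return "no additional constitutional constraint"

for practice in (
    Practice("stolen-plate hotlist check", 0, 0, 0),
    Practice("30-day networked ALPR location history", 1, 2, 1),
    Practice("fusion platform with protest face-matching", 2, 2, 2),
):
    print(f"{practice.name} -> {required_process(practice)}")
```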
This framework isn’t a radical departure from existing law. It’s a structured application of the principles that Katz, Jones, and Carpenter established, adapted to the specific challenges that AI presents. And it’s designed to be technology-neutral — to evaluate surveillance practices based on their functional capabilities rather than their specific technical architecture, so that the framework doesn’t become obsolete every time a new tool is developed.
Why Courts Can’t Do It Alone
Even the best judicial framework isn’t sufficient by itself. Constitutional litigation is slow, reactive, and limited to the specific facts of each case. By the time a surveillance practice works its way through the courts, it may have been in use for years, affecting millions of people.
We need legislation. Congress should enact comprehensive federal rules governing law enforcement’s use of AI surveillance — including a prohibition on purchasing personal data from commercial brokers to circumvent constitutional protections (which is exactly what some agencies are already doing, buying from Acxiom and LexisNexis the kind of data they’d need a warrant to collect directly). State and local governments should continue experimenting with targeted moratoria on the most dangerous technologies — as San Francisco and several Massachusetts cities have done with facial recognition — and community oversight requirements for surveillance procurement. And we need independent oversight bodies with real expertise, real authority, and real teeth. The Privacy and Civil Liberties Oversight Board was a start, but its jurisdiction is too narrow and its enforcement powers too weak to address the full scope of AI surveillance.
Algorithmic transparency matters, too. When a person is arrested, detained, or flagged for investigation based on an AI system’s output, they should have the right to know that an algorithm was involved and to challenge its reliability. Right now, many of these systems are shielded by trade secret claims — the companies that build them argue that disclosing how they work would reveal proprietary information. The result is that defendants are convicted, suspects are detained, and communities are surveilled on the basis of algorithmic processes that no one outside the vendor can examine. That’s incompatible with basic principles of due process.
The European Union’s AI Act and General Data Protection Regulation offer instructive models — not because American law should replicate European law, but because they demonstrate that democratic societies can impose meaningful constraints on AI without sacrificing public safety. The choice between security and privacy is, in most cases, a false one. The RAND Corporation’s evaluation of Chicago’s predictive policing program found no evidence that it reduced gun violence. Independent studies of facial recognition technology have documented error rates far higher than vendors claim. The benefits of unconstrained AI surveillance are often illusory; the costs are not.
The Window Is Closing
Here’s what keeps me up at night: every day that passes without meaningful legal constraints on AI surveillance is a day in which the infrastructure of algorithmic monitoring becomes more deeply embedded in the routine operations of law enforcement. Technologies get purchased, contracts get signed, databases get built, institutional practices get established. Reversing these developments becomes exponentially harder over time. The window for establishing meaningful constraints — while the technology is still relatively early and the institutional practices surrounding it are still taking shape — is narrowing.
And without a constitutional floor — a clear baseline of protection that applies everywhere — we face a race to the bottom. Jurisdictions compete to offer law enforcement the most permissive surveillance environment, and communities that lack the political power to resist bear the brunt. The neighborhoods where facial recognition cameras go up first are not the neighborhoods where city council members live. The people who end up on predictive policing lists are not the people who write the procurement contracts. AI surveillance doesn’t fall equally on everyone, and neither does the absence of legal protection.
The danger isn’t dramatic. It’s incremental. No single deployment of a facial recognition system, no individual use of a predictive policing algorithm, no particular instance of data aggregation seems to pose a fundamental threat. But the cumulative effect is the construction of a surveillance apparatus that is fundamentally incompatible with democratic freedom. History teaches that the greatest threats to liberty come not from sudden usurpations but from the slow, steady accumulation of government power — each individual step seemingly harmless, the aggregate transformative.
Justice Brandeis warned nearly a century ago that “the progress of science in furnishing the Government with means of espionage is not likely to stop with wire-tapping.” He urged the Court to recognize that constitutional protections must evolve to meet new threats. His warning was prescient beyond anything he could have imagined.
The Fourth Amendment embodies a choice made at the founding: that the security of a free society depends not on the government’s capacity to observe everything, but on its willingness to respect the boundaries of individual autonomy. The question we face now is whether that choice still means anything — whether “We the People” retain the capacity to define the terms on which the government may monitor, analyze, and act upon the intimate details of our lives.
A society in which every movement is tracked, every communication analyzed, every association mapped, and every deviation flagged is not a free society, regardless of the efficiency with which its surveillance apparatus operates. The technology is here. The law is not. The time to act is now — before the algorithm has already decided the question for us.
This essay is adapted from “Algorithmic Surveillance and the Eroding Fourth Amendment: Redefining Constitutional Privacy Protections in the Age of Artificial Intelligence,” a forthcoming law review article.