What banning AI surveillance should look like, at a minimum
Some practices banned outright, others facing higher scrutiny, and the rest at least transparent and optional
I previously called on Congress to ban AI surveillance because of its heightened potential to manipulate people, for both commercial and ideological ends. Essentially, we need an AI privacy law.
Yet Congress has stalled on general privacy legislation for decades, even in moments of broad public focus on privacy, like after the Snowden revelations and the Cambridge Analytica scandal. So, instead of calling for another general privacy bill that would encompass AI, I believe we should focus on an AI-specific privacy bill.
Many of the privacy frameworks floated over the years for general privacy regulation could be repurposed to apply more narrowly to AI. For example, one approach is to enumerate broad consumer AI rights, such as rights of access, correction, deletion, portability, notice, transparency, opt-out, and human review, with clear processes to exercise those rights. Another approach is to create legally binding duties of care and/or loyalty on organizations that hold AI data, requiring them to protect consumers' interests regarding this data, such as by minimizing it, avoiding foreseeable harm, and prohibiting secondary use absent consent or necessity.
There are more approaches out there, and they are not mutually exclusive. While I have personal thoughts on some of them, my overriding goal is to get something, anything useful passed, so I remain framework-agnostic. However, whatever framework Congress adopts, I believe certain fundamentals are non-negotiable:
Ban a set of clearly harmful practices. Start with what (I hope are) items of universal agreement, like identity theft, deceptive impersonation, and unauthorized deepfakes. The key is explicitly defining this as a category so that we can then debate the politically harder cases, like personalized pricing and predictive policing (both of which I think should also be banned).
Practices near the ban threshold should face higher scrutiny. For example, if we can’t manage to outright ban the use of AI to assist law enforcement decisions, at the very least this type of use should always be subject to human review, reasonable auditing procedures, and the like. Using AI for consequential decisions, like loan approvals, or to process sensitive data, like health information, should at least fall in this category. And many practices within this category, especially with regard to consumer AI, should be explicitly opt-in.
Make everything else transparent and optional. Outside the bright-line bans and the practices subject to higher scrutiny, any other AI profiling must be transparent and at least come with the ability to opt out, with only highly limited exceptions where opt‑outs would defeat the purpose, like legal compliance. Consumers also need meaningful transparency, including prominent disclosures that make clear when you are interacting with an AI system. That means not just generic data-collection notices or language folded into existing privacy policies, but plain-language explanations shown (or spoken) prominently at the time of processing, detailing what AI systems are inferring and deciding.
States must maintain authority to strengthen, not undermine, federal minimums. I wrote a whole post about why, the gist being that AI is changing rapidly, the federal government doesn’t react to these changes quickly enough, and states have shown they will act, in both AI and privacy.
Finally, these protections won't stifle progress. Some oppose any AI regulation because they believe it will hinder AI adoption or innovation. On innovation, privacy offers a good analogy: despite fears that a “patchwork” of state privacy laws would wreak havoc on innovation by going too far, those laws haven’t. Innovation hasn’t stalled, and neither have Big Tech privacy violations. On adoption, the backlash against AI is real and rising, and smart regulation can help build the trust necessary for sustained AI adoption, not hinder it. We can get the productivity benefits of AI without the privacy harms.