Individual Privacy as a Social Necessity for the Technological Age
Legal, Ethical, and Civic Reflections
CPA, CA, CISA, CGEIT, CISSP, CITP, CFF, CIPT, CIA
Abstract
In a society increasingly driven by data extraction and algorithmic modeling, privacy is no longer just a personal issue. It has evolved from a civic cornerstone based on philosophical, religious and classical legal thought, to a critical personal, psychological, social, and political issue mired in our contemporary techno-corporate reality. There is a hidden cost to current information asymmetry that urgently calls for a renewed legal vision of privacy rooted in equity and supported by an updated legal conception of tort, consent, civil rights and “the self.”
Law at a Crossroads
Privacy is too often described in static terms: the right to be left alone, the ability to control one’s data, the freedom from intrusion. But in the digital age, these precepts fall short. Today, privacy is ambient, behavioral, predictive, and structural. It no longer depends on direct observation; it can be, and is, inferred, aggregated, monetized, and sold.
For legal practitioners, particularly those concerned with civil rights, class harms, and democratic integrity, this shift represents a troubling, insidious undermining of the social compact. The legal frameworks that once protected privacy are lagging far behind the technological apparatus that manipulates it. To remedy this, a reframing of the law is required: privacy must be recognized as foundational civic infrastructure, without which the legal and moral order itself begins to fray.
Privacy as a Common Good
At its core, privacy enables people to think freely, explore identity, engage in dissent, and form beliefs without constant scrutiny. It underpins:
- Autonomy: enabling decision-making free from coercion.
- Moral agency: fostering spaces for internal development.
- Pluralism: allowing dissenting or unpopular views to incubate without retaliation.
These are not secondary concerns. As Neil Richards puts it, privacy enables the “intellectual privacy” required for democracy to function (Richards, Why Privacy Matters1). When people believe they are watched, they conform. When they conform, society calcifies.
Surveillance typifies this situation – and not just accidental, incidental, or even law-enforcement-justified surveillance. In the modern world, surveillance is a system, designed to entice, entrap, and then capture the viewer-cum-subject. Shoshana Zuboff’s work in The Age of Surveillance Capitalism2 highlights the underlying systemic logic: platforms monetize behavioral surplus, and prediction markets feed on the quantification of private behavior. This activity, in turn, encourages:
- Opacity: platforms hide what is being collected and inferred.
- Abstraction: harm becomes indirect and diffused.
- Normalization: users accept tracking as the cost of participation.
We should understand that contemporary privacy harms are not the result of rogue actors or bad design. They are structural features of a socio-economic model that rewards data collection and analysis.
Most dishearteningly, psychologists now observe that surveillance operates not only externally; it becomes internalized. People self-censor and expression narrows. The sense of self is reconfigured to anticipate the external gaze. This is what legal scholar Julie Cohen3 refers to as the erosion of the “habitats of privacy” necessary for personhood. Without such habitats, the law cannot protect what no longer has the psychic room to exist.
And so, the legal system struggles.
Why the Law is Losing
In the United States, privacy law still largely depends on a “notice and choice” model, wherein users are presumed to have consented after clicking through unread terms. In these circumstances, meaningful consent becomes a fiction. Worse, these forms of “consent” can sometimes be used as a legal cudgel to intimidate or dissuade the poorly resourced from any legal action at all. Opt-out provisions were abandoned a decade or more ago by more privacy-protective jurisdictions. For example, the GDPR mandates that consent must be opt-in: Article 4(11) requires a clear affirmative action, and Recital 32 states that “silence, pre-ticked boxes or inactivity should not… constitute consent.”
Consider the practical challenges facing current legal frameworks. Tort law requires identifiable victims with particularized injuries, yet algorithmic harms, which can arise from data/model bias, mis-training, model poisoning, and a variety of other causes, are diffuse and probabilistic. Class actions demand commonality among plaintiffs, but each person’s data profile and resulting treatment is unique. And the injury itself resists traditional legal categorization. As Danielle Citron4 and others have observed, contemporary privacy harms often fall outside traditional injury models. When someone is algorithmically sorted out of housing, employment, or insurance based on inferred traits, where is the remedy?
At a more prosaic level, courts are still trying to quantify the damage an individual suffers when their data is lost or their privacy breached. In many cases, the privacy damages awarded are either nominal or based on an arbitrary estimate of possible future harm. In some instances, however, courts have imposed privacy awards based on what a judge or jury feels is a suitable punitive sanction. Punitive damages, however, are not privacy damages, only a proxy for them.
In In re Anthem Inc. Data Breach Litigation, 162 F. Supp. 3d 953 (N.D. Cal. 2016), the court approved a $115 million settlement under which class members each received compensation despite the difficulty of proving actual damages. In Ari v. Insurance Corporation of British Columbia, 2025 BCCA 131, the BC Court of Appeal affirmed an award of $15,000 per class member in a case where the privacy breach had been committed by a former employee. In both cases, the court recognized the inherent value of personal information, as well as the risk of future harm, but did not directly address the value of privacy per se. Additional large Canadian privacy class action settlements are listed in Appendix 1.
Information Asymmetry and Civic Dignity
When companies or states know more about individuals than individuals know about those institutions, an imbalance forms. This is more than unfair. It is anti-democratic.
Helen Nissenbaum5 calls for “contextual integrity”: data should flow in ways consistent with social norms. But today, norms are outpaced by technical affordances. Predictive analytics create identities that individuals never chose, and these digital shadows follow them—in housing, credit, and even criminal justice. The result? A social order where data governs discretion.
And then, there is artificial intelligence. AI represents a frontier where privacy, psychology, and personality blur. Generative systems engage users in emotionally resonant dialogues. These are not traditional search engines or data forms. They are interactive inference machines. Users share vulnerabilities. The system builds models. We have already seen multiple, tragic results from this interaction paradigm. In February 2024, a 14-year-old Florida teenager died by suicide after developing an emotional attachment to an AI chatbot on Character.AI, with his mother filing a lawsuit alleging the platform’s lack of safeguards contributed to his death.6
Unlike a doctor or a lawyer, an AI system operates under no Hippocratic oath, no confidentiality doctrine, no informed consent, no professional accountability. And yet the system may appear to understand the user’s emotional triggers, anxieties, and aspirations better than any friend or therapist. The risks are surely profound – risks to our identities, our relationships, our cognitive skills, our social skills, our mental health, and our wellbeing. Without realizing it, we offload our inner worlds onto computational actors. Our private selves merge with machines.
Whither the Law?
Legal doctrines must evolve if they are to address new threats to privacy. To address the challenge of inferred harms, courts and legislators might recognize a new tort of “algorithmic inference injury,” defined as:
“The use of automated systems to make adverse determinations about an individual based on inferred, derived, or predicted characteristics, where such determinations result in the denial of opportunities, benefits, or fair treatment, regardless of whether the inferences are accurate.”
Legal standing would be established upon showing (1) subjection to an algorithmic decision-making process, (2) an adverse outcome, and (3) evidence that inferred characteristics influenced the decision. Damages could include both economic losses and dignitary harms, with statutory minimums to address proof difficulties. This is a similar approach to that taken by the Europeans in their efforts to hold large language model manufacturers to account. The European AI Liability Directive7, had it moved forward, would have created a rebuttable ‘presumption of causality’ to ease the burden of proof for victims seeking to establish damage caused by an AI system.
A more comprehensive approach has been proposed by Gaertner (2025) – the concept of “psychographic sovereignty”. This represents the principle that individuals possess an inherent right to control how their psychological attributes, behavioral patterns, and inferred mental states are modeled, stored, and utilized by third parties. Embodying this concept in law has not yet been attempted.
And Finally – Design, Friction, and Resistance
We should not confuse economic inevitability with structural or social invincibility. Human systems can be restructured. Friction can be reintroduced to slow things down, so that data capture is no longer automatic. Interfaces can be redesigned to inform rather than nudge. Architecture can be made “moral,” or systems put in place to mitigate corporate and institutional immorality. Giving up should not be an option. Privacy is not nostalgia. It is survival.
Lawyers, in particular, have a role in preserving privacy as an exclusive human space, both as trustees of the social compact and guardians of institutional trust. If privacy is the soil in which liberty and dissent grow, then its defense cannot be a luxury. It is an obligation.
Privacy is not a screen. It is a sanctuary.
1. Richards, Neil M. Why Privacy Matters. Oxford University Press, 2022.
2. Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.
3. Cohen, Julie E. “What Privacy Is For.” Harvard Law Review, vol. 126, no. 7, 2013.
4. Citron, Danielle Keats. Hate Crimes in Cyberspace. Harvard University Press, 2014.
5. Nissenbaum, Helen. Privacy in Context. Stanford Law Books, 2010.
6. Garcia v. Character Technologies Inc., Case No. 2024-CA-007262 (Fla. Cir. Ct. Oct. 22, 2024).
7. Work on the directive was recently (2024) suspended in apparent reaction to American anti-AI regulation pressure. See: Artificial Intelligence Liability Briefing — EU Legislation in Process, https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf
Appendix 1: Selected Canadian Privacy Class Action Cases with Substantial-Value Settlements
| Case Name | Jurisdiction | Citation | Judgment | Comments |
|---|---|---|---|---|
| Insurance Corporation of British Columbia v. Ari | British Columbia | 2025 BCCA 131 (under the Privacy Act) | $15,000 per class member | Damages without proof of harm |
| Granger v. Ontario | Ontario | 2024 ONSC 6503 (under the Canadian Charter of Rights and Freedoms) | $7.267 million aggregate | Charter s. 8 breach; vindication and deterrence damages; improper DNA collection |
| Bannister v. Canadian Imperial Bank of Commerce | Ontario | 2021 ONSC 2927 (Ontario tort of breach of privacy) | $23 million settlement | Data breach at major banks (CIBC/BMO and Simplii Financial) |
| Boulay v. Fédération des Caisses Desjardins du Québec | Quebec | 2022 QCCS 2301 (settlement approval under the Quebec Code of Civil Procedure) | $200.9 million settlement | Largest Canadian banking settlement |
Mr. Gaertner can be reached at jerry@bizcomgrp.com