Is freedom of thought at risk in the digital age? Social media, evolving neuro-technologies and the algorithmic environment increasingly complicate how we form thoughts and interfere with our ability to make decisions in autonomous, self-directed ways. New and emerging therapeutic technologies can benefit individuals, but as scholars from a range of disciplines have already documented — among them, Ruha Benjamin, Virginia Eubanks, Cathy O’Neil and Shoshana Zuboff — these same technologies hold the potential to manipulate private thoughts and thereby harm individuals, democratic institutions, markets and society more broadly. In that light, there is a strong prima facie case for protecting freedom of thought. But exactly how far should the law go in insulating people from ideas, practices and technologies that might influence or manipulate their thinking? Where should we draw the line between legitimate influence and objectionable manipulation?
Those questions are at the centre of CIGI’s project on freedom of thought, led by Susie Alegre and Aaron Shull. In their special report, Freedom of Thought: Reviving and Protecting a Forgotten Human Right, Alegre and Shull grapple with them in constructive and illuminating ways. They make a compelling case for the urgency of protecting the “forgotten right” of freedom of thought as a counterweight to the business models and geopolitical realities that incentivize cognitive interference. Assessing the nature of threats posed to cognition by new and emerging technologies, they advocate for a “new legal test” to provide clarity “as to when the use of a technology will contravene the right to freedom of thought and relevant factors to assess when the lines are crossed,” and identify institutions to guide global efforts to enact, interpret and enforce it. In these and other ways, they draw attention to and emphasize the importance of freedom of thought in making the digital age more hospitable to human well-being.
At the same time, Alegre and Shull interpret and defend freedom of thought in a way that could leave individuals with limited room to make their own decisions about what should or should not be allowed to influence their thoughts. By focusing primarily on how to protect people in the current technological environment, Alegre and Shull risk diminishing the central importance of individual agency — the freedom and capacity to lead a self-directed life. Few people want to leave regulation of the current technological environment to profit-oriented techno-libertarians. Freedom of thought requires protection. But we should be careful not to adopt protections that undermine the very thing we want to protect: namely, freedom.
Freedom of Thought and Its Enemies
Alegre and Shull offer a rich account of threats to freedom of thought in the current and future technological age to motivate their case for better protection. They highlight how advances in functional magnetic resonance imaging (fMRI) have the potential to visualize brain functions and how developments in brain-computer interfaces (BCIs) open the possibility for information transfer between brains and external devices — both of which have potentially beneficial and harmful applications. Additionally, they raise concerns about the possibility of “data-driven persuasive technologies” morphing into “harmful instrument[s] capable of eroding societal resilience, democratic institutions and individual agency.”
Some observers might balk at the fact that Alegre and Shull have mainly pointed to potential, rather than demonstrated, harm from emerging technologies. Until there is evidence of harm, some say, there is at best a weak case for interfering with innovation. That critique is shortsighted. We should applaud Alegre and Shull for trying to anticipate harm with a view to enacting laws and regulations that would help us avoid or mitigate it. As they note, current efforts to address risks are “mismatch[ed] to the speed of technological developments.” What we need is anticipatory regulation that “outpaces the technological threat to human rights.”
Anticipating and addressing potential harms is tricky work. Not making the attempt is equally fraught. In his book The Social Control of Technology, published in 1980, David Collingridge observed that it is easier to regulate technologies when they are new and still developing — largely because they do not yet reflect sunk costs and entrenched interests — but hard to know exactly how to regulate them because of substantial uncertainty about their potential effects. “When change is easy,” he writes, “the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”
Alegre and Shull ease the “Collingridge dilemma” somewhat by examining threats to freedom of thought using a three-pronged (albeit partly implicit) framework:
- First, they anticipate potential harm by extrapolating from harms that have already occurred. Having witnessed how social media algorithms damage adolescent mental health and accelerate ethnic conflict, for example, we need not stretch far to see how other brain- and thought-related technologies might generate similar or greater harms.
- Second, they highlight an underlying political economy of technology development that often prioritizes profit over people. “A great deal of money is vested in manipulation,” they note, and therefore it is not unreasonable to expect future violations of freedom of thought that serve the interests of those who own and control key technologies.
- Finally, they point to US intelligence about China’s efforts to engage in psychological warfare using “subliminal messaging, deep fakes, overt propaganda, and public sentiment analysis…to influence an adversary’s cognitive functions.” If the United States sees this as a threat, then it is a good bet the threat is real — and perhaps practised by more state and non-state actors than the Department of Defense is ready to admit.
Altogether, the case that certain new and emerging technologies pose threats to freedom of thought and human well-being is strong. The precise features of an anticipatory protective strategy may be unclear, but the need for one is not.
An Absolute Right?
Having established that there are existing and emerging threats to freedom of thought, Alegre and Shull turn to the task of distinguishing between what they call “lawful influence” and “unlawful manipulation.” In doing so, they zero in on the central challenge in the discourse about freedom of thought. While attempts to coerce or deceive us into changing our minds or altering our thoughts might be unlawful because morally objectionable, less intrusive and more transparent efforts to persuade or influence us might be morally unobjectionable. Many people accept and sometimes welcome efforts to influence their thoughts in positive ways, whether through dialogue, psychological or pharmaceutical therapies, entertainment or other means. In that case, freedom of thought demands interpretive nuance.
Can we interpret, enact and enforce freedom of thought in ways that protect people from objectionable manipulation without limiting their opportunities to be influenced and shaped by ideas, therapies and technologies that they believe would make their lives better? How free should individuals be to decide for themselves what stimuli and sources should be allowed to shape their thinking?
While Alegre and Shull acknowledge the distinction between legitimate influence and objectionable manipulation and advocate for a legal test that upholds the difference, they appear to sketch the line in a way that risks undermining human agency. They hold that freedom of thought is an “absolute right,” view interference as “morally repugnant” and maintain that there is or should be “an absolute prohibition on anything that interferes with our right to inner freedom.” They take exception to legal tests that suggest consent may be a relevant factor in assessing whether freedom of thought has been violated, arguing by analogy, as Alegre did in her 2022 book, Freedom to Think: The Long Struggle to Liberate Our Minds, that “just as it will never be lawful to sell ourselves into slavery or to consent to torture, we cannot effectively consent to the deprivation of our right to freedom of thought, now or for the future.”
Much more should be done to protect individuals from objectionable interference by malicious or even merely self-interested actors. But the key is protection against objectionable interference. Many people are open to myriad forms of persuasion, or even to manipulation under certain conditions — including advertising (which appeals to emotion), fantasy and roleplay (which are kinds of deception), mind-altering drugs (which can disable or circumvent cognitive faculties), and other psychological and pharmaceutical therapies. Consenting to some of these influences is not morally analogous to selling oneself into slavery or consenting to torture.
Thought is an inherently social and relational phenomenon that requires external input and engagement (dimensions of which have been explored in the work of Rogers Brubaker and Charles Taylor, among others). How we protect freedom of thought should be about empowering individuals to make their own decisions about what enters and exits their brains, erring on the side of a wide, rather than too narrow, set of options, while ensuring that options are presented in transparent and agency-respecting ways. In short, the line between legitimate influence and objectionable manipulation should be drawn with a conception of agency at the centre.
An Agency-Centred Approach
What would an agency-centred approach to distinguishing between legitimate influence and objectionable manipulation look like? We can get some leverage by examining the strengths and weaknesses of two other criteria often invoked to make the distinction between legitimate and objectionable influence: deliberative autonomy and consent (for example, in American legal scholar Cass R. Sunstein’s “Fifty Shades of Manipulation”). Legitimate persuasion slips into objectionable manipulation — and thus violates freedom of thought — according to these criteria if it tries to disable or bypass our deliberative faculties and denies meaningful opportunities to provide or withhold consent. Or at least this appears to be the case.
While each criterion captures part of what distinguishes legitimate persuasion from objectionable manipulation, underlying both is a conception of agency that helps us make better sense of the issues. Through an agency lens, we begin to see that some instances of influence can be freedom-enhancing, and that well-intentioned efforts to protect people from all manipulation can be paternalistic and agency-diminishing. In that case, laws and regulations that aim to protect freedom of thought — specifically, freedom from manipulation — should not prohibit manipulation per se, but instead manipulation that undermines agency.
Deliberative Autonomy
A normative conception of deliberative autonomy holds that people should be treated as rational agents capable of assessing evidence, weighing reasons and reaching informed conclusions. Freedom of thought is violated, on this view, when other agents try to disable, work around or “bypass” an individual’s deliberative autonomy. Coercion, lying and deception are objectionable, for example, because they aim to bypass rational faculties. By contrast, providing facts or having a discussion are fine, because these acts do not undermine or circumvent agents’ deliberative capacities.
Consider advertising. Many ads appeal to emotions and target “the biases of fast thinking” rather than directly engage people’s rational, deliberative faculties. Because they aim to work around deliberative faculties, they appear to violate freedom of thought. At the same time, so long as advertisers avoid false claims and subliminal techniques, and so long as those to whom ads are targeted know that they are ads, one might say that such marketing does not undermine deliberative autonomy in an important sense: advertising appeals to non-rational features of psychology, but not without a heads-up to the recipient that they are being served an ad. Our deliberative faculties are still available to us, and we can think critically about what we are seeing, even as the ads appeal to our baser selves.
Moreover, some people want to have their rational faculties disabled or bypassed, at least temporarily. From reading fiction to consuming mind-altering drugs or floating in sensory deprivation tanks, many people consider vacations from their deliberative faculties part of a good life. It would be hard to say that people are somehow less free when such opportunities are available — except, perhaps, when the vacations become permanent absences, such as with addictions. It is too simple to say that freedom of thought is violated whenever deliberative autonomy is bypassed. We also need to consider who gets to say when it can happen.
Informed Consent
Informed consent is prominent in efforts to distinguish between legitimate and objectionable influence. On this view, whether something counts as objectionable or unobjectionable manipulation depends on whether agents have opportunities to learn about relevant options and make choices about whether to participate. Informed consent plays a central role in many spheres: in health care, where patients have authority to provide or withhold consent for treatments; in finance, where clients seeking services can agree or refuse to allow institutions access to their credit history; and in interpersonal relationships, where rights to bodily autonomy empower individuals to consent to or refuse physical interactions with others.
Freedom of thought is violated on this view when people are denied opportunities to provide or withhold informed consent to have their thoughts accessed or altered. Coercion undermines freedom of thought, for example, because it compels someone to believe or do something whether they want to or not. Similarly, agents who are deceived cannot provide meaningful consent because they have not been adequately informed about the features and consequences of options. By contrast, merely providing factual information to people or having a discussion are not objectionable from the point of view of consent. In fact, they are key elements of the very notion of informed consent.
Tech companies often point to agreements that permit them to control users’ information flows or even influence their behaviour. Facebook’s now infamous “social contagion” experiments, for example, which sought to influence users’ emotional states and the prevalence (although not the substance) of their voting behaviour, were defended on the grounds that users offered blanket consent to such activities when they accepted Facebook’s user agreement. Yet, as Zuboff and others have noted, user agreements do not offer evidence of informed consent, because they are too long and complicated for users to understand and meaningfully sign. Zuboff points to research by Aleecia M. McDonald and Lorrie Faith Cranor conducted in 2008 — long before clicking through user agreements became a daily activity in the digital age — which showed that a “reasonable reading of all the privacy policies that one encounters in a year would require 76 full working days.” No company or society that relies on this “click-wrap” understanding of informed consent respects agency.
Thus, while the concept of informed consent seems critical in distinguishing between legitimate persuasion and objectionable manipulation, its practical utility is compromised by contemporary commercial and legal practice. But notice that the very reasons we find contemporary user agreement and consent practices deficient point to the importance of agency.
Enhancing Agency
Where does this leave us? While deliberative autonomy and informed consent provide us with important clues for distinguishing between objectionable and unobjectionable manipulation, a robust notion of agency is at the foundation of both and offers a better lens for thinking about freedom of thought.
Coercion, lying and deception are usually objectionable not simply because they bypass deliberative autonomy and undermine informed consent. They are (usually) objectionable because they fail to treat people as agents — as responsible individuals with equal right and responsibility to shape their own lives. Providing factual information and engaging in rational, reason-driven discussion, by contrast, are permissible not simply because they support deliberation, respect well-being and allow for informed consent. They are permissible because they are consistent with treating individuals as agents.
But respecting agency means recognizing that people can choose to be persuaded or even manipulated (by advertising, entertainment, pharmaceuticals and therapeutics, for example), even when we think such choices might diminish their well-being. Philosophers from John Stuart Mill to Ronald Dworkin have argued that treating people as agents who are primarily responsible for their own lives means we often need to err on the side of permitting people to make seemingly unwise choices rather than always protecting them from themselves. Alegre and Shull seem to want to err on the other side of the equation — insulating people from certain kinds of influence, even at the risk of treating them paternalistically.
Does this leave us without any tools to protect people from objectionable manipulation — from experiments, activities and attacks that fail to respect their agency or that undermine it? Not at all. There are precedents in advertising regulation and consumer protection that could guide agency-centred regulation to protect freedom of thought, including requiring truth in advertising, prohibiting certain techniques (for example, subliminal ads) and protecting vulnerable audiences (for example, children). There is also a range of emerging laws and regulations being applied to the use of algorithms — including requirements related to transparency and explainability. In the case of digital therapeutics — software-based products and technologies to assist with diagnosing, preventing and treating a variety of health conditions — we can adapt existing risk assessment procedures and regulations that currently apply to pharmaceuticals, health technologies and psychological services.
There is work to be done to better protect freedom of thought in the digital age. But we are not starting from scratch. While we have not yet reached the right balance in the digital age, liberal democracies have long track records of adopting regulations that create space for reasonable adults to make choices about their own lives in choice environments that are not intolerably littered with snake oil and snake oil salespeople. As we design technologies and develop regulations for the digital age that respect and protect freedom of thought, our touchstone should be agency. Objectionable manipulation should be regulated, but not all manipulation is objectionable.
Author’s Note
I thank Graeme Moffat for helping me work out some initial ideas, and Creig Lamb and Aaron Shull for comments on earlier versions of those ideas.
This essay is a response to the CIGI special report Freedom of Thought: Reviving and Protecting a Forgotten Human Right by Susie Alegre and Aaron Shull.
Copyright © 2024 by The Centre for International Governance Innovation