In late January 2022, Politico published a story on Crisis Text Line, a high-profile digital mental health non-profit, focusing on the ways that it shares data with Loris.ai, a for-profit spinoff started by Crisis Text Line’s co-founder Nancy Lublin. The story raised the alarm, prompting reactions from the organization, a commissioner of the Federal Communications Commission and at least one of Crisis Text Line’s own board members, and ultimately resulting in the end of the problematic data-sharing relationship. And yet, even with that change, it’s hard to answer the bigger question: How did this happen?
There’s a lot of complexity in this situation, but it all comes back to a really simple question: How did the board of directors of a non-profit emergency mental health service justify using the data it collected to launch a for-profit start-up?
The most complete accounts of the thinking involved come from Lublin herself, here (at about the 33-minute mark), and from a blog post by danah boyd, the current chair of the board. And the answer, paraphrased, comes down to one of the most unfortunately common sentiments in digital civil society: “We have data, so we should use it for ‘good.’” It remains unclear how that rationale suggests spinning out a venture-backed start-up aimed at selling machine-learning services to companies such as Intuit and Lyft.
The story is an impressive, if concerning, example of tech journalism, if only because all of the details have been public knowledge for at least four years. This round of reporting was likely prompted by the work of whistle-blower Tim Reierson, but Crisis Text Line has been the subject of significant reporting over the years. Early coverage, most prominently, lauded the organization as a model for the future of digital mental health services, and included praise from prominent figures such as Richard Branson and Prince Harry. Subsequent coverage, however, focused on exploitative labour practices, resulting in the ousting of its CEO. But the Politico story at the centre of this latest round of controversy isn’t based on novel reporting; there isn’t even an accusation of harm resulting from patient re-identification. Instead, a journalist with a sufficiently large platform amplified the ethical concerns of a volunteer asking the basic questions Crisis Text Line’s board chose to ignore.
And while Crisis Text Line continues to fight for its public reputation, it has also, nearly overnight, become an even more important case study for anyone designing digital services, not because of the “gotcha” sensationalism, but because it illustrates several of the fundamental tensions inherent in digitizing vital services. Perhaps most importantly, it demonstrates why the service providers we rely on in our most vulnerable moments owe us more than a standard of care: they owe us a duty of loyalty. Legally, duties of loyalty don’t just require that a service provider adhere to a standard of practice; they also require that the provider make decisions in our best interests, and explicitly set aside its own interests when doing so.
Crisis Text Line is an excellent example of the difference: its crisis management team has been adamantly mithering over the details of “selling” versus “sharing” data and whether it’s “personally identifiable” or “anonymized,” and citing privacy experts (who have since corrected the record) as condoning its model. All of these efforts were aimed at finding a non-existent justification for exploiting the data of vulnerable people under the guise of helping them. Said even more simply, there is no amount of data governance that magically turns exploiting the data of the vulnerable into something that’s good for the exploited.
And that is a maxim that, unfortunately, a lot of digital service providers, civil society groups and academics are clearly still testing. In an effort to learn from what’s happening here, this piece uses the Crisis Text Line example to explore what’s actually happening and the explanations given for these events; to contextualize those questions in the trade-offs and challenges involved in building digital services for vulnerable populations; and to highlight how legal duties, especially the duty of loyalty, shape data governance and protection.
Crisis Text Line’s Crisis of Conscience
Crisis Text Line is a service using SMS (Short Message Service) that provides counselling and mental health support services. As an organization, Crisis Text Line does two major things: it recruits, trains and matches volunteer counsellors with people who need counselling services; and it builds and maintains a digital platform that triages incoming users, provides digital support throughout the counselling process, and collects significant amounts of data that it feeds through machine-learning tools to “improve” its work, mostly through pattern matching. Let’s be extremely clear, from the beginning, that Crisis Text Line is not, and does not claim to be, a professional mental health service; rather, it facilitates the support given by its counsellors, who are also, explicitly, not licensed professionals (they are lightly trained volunteers who use software to guide them through the delivery of a digitally intermediated script).
And, in 2017, Crisis Text Line’s board of directors removed its emphatic assurances to users that it would never reuse data for commercial purposes and incorporated Loris.ai as a for-profit subsidiary. (Whether it’s a “spinoff” or a “subsidiary” will probably attract mithering. At the time of incorporation, Crisis Text Line owned 53 percent of the company’s equity and was entitled to 10 percent of gross profits above a minimum threshold.) As part of this set-up, Crisis Text Line shared the data that it collected from people seeking suicide prevention counselling with Loris.ai in order to train a machine-learning tool, which Loris.ai sells as a service (akin to Crisis Text Line’s technology infrastructure) to customer service teams at large commercial companies. That data sharing continued until 2020, at which point Loris.ai’s product was sufficiently “market-ready” and generating its own user data.
Before getting into the details of the claims, it’s important to set a few baselines. There are no allegations that Loris.ai’s products, or even the data used to train them, can be used to re-identify Crisis Text Line users or that they pose a direct threat of any kind. As tweeted by the Electronic Privacy Information Center’s Executive Director Alan Butler, “The problem is that their arrangement appears to extract commercial value out of the most sensitive, intimate, and vulnerable moments in the lives of those individuals seeking mental health assistance and from the responses of hard-working volunteers.” The problem isn’t technical; it’s that the digital means of providing an otherwise protected service are being used to exploit the underlying relationship with users in fundamentally self-serving ways — ways that are hard to explain as “in the best interests” of the subjects.
To understand why these practices are inappropriate, it’s worth (briefly) walking through the main issues, objections and responses that Crisis Text Line (and its critics) have raised, mostly framed as “data-sharing” issues.
Crisis Counsellors versus Chatbots
Crisis counselling hotlines occupy a legal grey area: volunteers aren’t required to be certified mental health professionals, so one of the main jobs of hotline service providers is to support counsellors in order to ensure quality services and available resources for callers. Crisis Text Line’s approach, as the organization describes it, was to develop a digital “script” engine that is constantly analyzed and updated by a combination of volunteers and machine-learning tools, with the ultimate goal of automating as much of the process as possible. Beyond the automation question, former volunteers have raised concerns about whether Crisis Text Line’s 30 hours of training are enough to responsibly equip counsellors for the conversations ahead. According to boyd’s account, there was serious consideration of using totally automated applications, sometimes called chatbots, to respond to people texting for crisis support. While there are good-faith arguments about the degree of professionalism required to perform underfunded crisis prevention services, there are comparatively few about whether those services should be provided by a human, not least because, in some circumstances, crisis counsellors are required to notify emergency services, such as law enforcement.
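For readers unfamiliar with what a “script” engine means in practice, the following is a minimal, purely illustrative sketch in Python of the general pattern: each step suggests language to the volunteer and routes to a next step based on keywords in the texter’s reply. The step names, prompts and keywords here are hypothetical and are not drawn from Crisis Text Line’s actual system; the point is simply that the counselling logic lives in data that an organization can analyze, revise and, eventually, automate.

```python
from dataclasses import dataclass, field


@dataclass
class ScriptStep:
    """One step in a hypothetical scripted counselling flow."""
    prompt: str                                            # suggested language shown to the volunteer
    routes: dict[str, str] = field(default_factory=dict)   # keyword in reply -> next step id
    default: str | None = None                             # fallback step if no keyword matches


# Illustrative script; a real engine would be far larger and continuously revised.
SCRIPT = {
    "open": ScriptStep(
        prompt="Thanks for reaching out. What's going on tonight?",
        routes={"hurt": "risk_assess", "pills": "risk_assess"},
        default="listen",
    ),
    "risk_assess": ScriptStep(
        prompt="That sounds really heavy. Are you thinking about hurting yourself?",
        default="listen",
    ),
    "listen": ScriptStep(
        prompt="I'm here with you. Tell me more about what led to tonight.",
    ),
}


def next_step(current_id: str, reply: str) -> str | None:
    """Route to the next scripted step based on the texter's word choice."""
    step = SCRIPT[current_id]
    for keyword, target in step.routes.items():
        if keyword in reply.lower():
            return target
    return step.default
```

In a model like this, updating the “script” is a data change rather than a clinical judgment, which is exactly why who gets to analyze and revise it, and on what evidence, matters so much.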
Algorithmic Reasoning, Natural Language Correlation and Patient Triage
One of the ways that Crisis Text Line uses texter data is to continuously update its existing automated processes, such as its triage algorithm. The triage algorithm is an exceptionally clear example of the stakes, challenges and underlying politics of Crisis Text Line. Essentially, when a person seeks help from Crisis Text Line, they need to be paired with a volunteer. When the number of users exceeds volunteer availability, the system creates a “queue.” Crisis Text Line decided that, rather than serve that queue on a first-come, first-served basis, the system should try to interpret the “risk” each user was facing on the basis of their word choice. At the most basic level, it’s not scientifically proven that word choice is an accurate indicator of future risk. The system can make historical correlations, at best, and the decision to prioritize users on that basis creates as much risk of delaying help to, or even moving help away from, those who need it as it offers the potential to accomplish automated triage. Concerningly, similar systems have been demonstrated to create and replicate bias in high-stakes contexts such as bail recommendation, financial service benefits and other types of health services.
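To make that design choice concrete, below is a minimal sketch in Python of what word-choice-based triage amounts to in the abstract: incoming texters are scored by a keyword-weighted heuristic and served from a priority queue instead of in arrival order. The weights, names and structure are hypothetical rather than Crisis Text Line’s actual implementation, which relies on machine-learning models rather than a fixed word list; the point is that whoever sets the weights, or trains the model, is deciding who waits.

```python
import heapq
import itertools


# Hypothetical keyword weights; a deployed system would use a trained model,
# but the governance question -- who decides what signals "risk" -- is the same.
RISK_WEIGHTS = {"pills": 3.0, "ibuprofen": 3.0, "tonight": 1.5, "alone": 1.0}


def score_message(text: str) -> float:
    """Estimate 'risk' purely from word choice, as a weighted keyword sum."""
    return sum(RISK_WEIGHTS.get(word, 0.0) for word in text.lower().split())


class TriageQueue:
    """Serves the highest-scoring texter first instead of first come, first served."""

    def __init__(self) -> None:
        self._heap: list[tuple[float, int, str]] = []
        self._arrival = itertools.count()  # tie-breaker preserves arrival order

    def add_texter(self, texter_id: str, first_message: str) -> None:
        risk = score_message(first_message)
        # Negate the score so the highest-risk texter pops first from Python's min-heap.
        heapq.heappush(self._heap, (-risk, next(self._arrival), texter_id))

    def next_texter(self) -> str | None:
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```

Everything contested above lives in `RISK_WEIGHTS` and `score_message`: a texter whose vocabulary doesn’t match the historical pattern waits longer, which is precisely the equity concern raised about similar systems in bail, benefits and other health contexts.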
The bigger and more important point in the digitization of health services is that how we design and test digital tools — including their capacity to account for equity and adaptation — has a material impact on their fairness in context. And the public transparency and integrity of the experimentation with digital and automated components of these systems are even more important prerequisites when providing high-stakes services. Regardless of the efficacy of the algorithm, this is not the type of experimentation that should be done inside a single organization (non- or for-profit), and it certainly shouldn’t be deployed in high-vulnerability settings before being validated. To be clear, this is not a data-sharing issue as much as it is a research ethics issue, although the digital nature of the product is the reason it remains unregulated.
Research
Crisis Text Line has described one of the primary motivations for the retention, sharing and reuse of its users’ data as, broadly, “research.” While research and experimentation are important, especially in emergent fields, “research” is also an abstraction so broad that it’s often used to justify overtly inappropriate behaviour — especially from digital platform providers — that in no way attempts to replicate the requirements that typically accompany professional research involving human subjects. Based on the small amount of independent review that’s been done, there isn’t much interrogation of or concern about how Crisis Text Line shared data for research; rather, the majority of inquiry has been about whether to do it at all.
To be clear, there are no allegations of impropriety stemming from Crisis Text Line’s research. And yet, a significant number of mental health and medical services maintain strict barriers between their clinical care and research settings, including specific consent requirements. Here, Crisis Text Line — by boyd’s own description — reuses data for research based on an inadequate consent model, akin to a platform’s terms of service. One of Crisis Text Line’s responses to the Politico coverage was to update its terms of service and privacy policy (on its website), which is papering over the larger issue, at best. There’s a tendency, especially among emergency service providers, to use the stakes of emergency to justify a broad range of questionable actions, often inserting a relatively unaccountable non-profit into research-priority and data-use decisions that implicate, but don’t involve, the people whose lives they will shape. Said a different way, seeking help shouldn’t make you the subject of an experiment. People in crisis shouldn’t have to worry that this will happen to them, simply because they’d prefer to use their phone to seek support. And there’s no amount of ambiguously pursued public good that justifies seizing that decision from the people reaching out to a service for help.
Revenue
The majority of the umbrage expressed at Crisis Text Line has been directed at its relationship with its subsidiary, Loris.ai. According to Reierson, Crisis Text Line owned 53 percent of Loris.ai stock upon its creation and had an ongoing profit-sharing relationship. By its own description, Loris.ai was intended as a revenue engine for Crisis Text Line’s service, which was struggling to find a financial sustainability model. That is no small challenge: Crisis Text Line started with a US$23.8 million grant from a group of high-profile technology billionaires, including Reid Hoffman and Melinda Gates. Even with the support of people such as Richard Branson, who convinced all major US carriers to waive billing for Crisis Text Line users, there is no easy or politically popular funding model for providing emergency services, and yet Crisis Text Line has been remarkably effective at, as Lublin said, funding its work “like a start-up.” While Loris.ai was founded during comparatively lean years, Crisis Text Line posted more than US$17 million in revenue last year and an even more impressive US$49 million in 2021. Financially speaking, Crisis Text Line hasn’t actually needed, or even directly benefited from, the founding of Loris.ai.
Loris.ai, in just the past four years, has raised US$7.1 million from venture funders and has yet to meet the revenue minimums necessary to trigger the profit-sharing relationship. So, while the data-sharing relationship with Loris.ai is over, it’s worth recognizing that the reaction to sustainability concerns was to needlessly commercialize the data of its users. While this story has been largely reported as a data-sharing controversy, it is perhaps more accurately a crisis of governance — one that a non-profit board outsourced to a venture-backed “artificial intelligence” company.
Crisis Services’ Governance Crisis
Crisis Text Line didn’t come to be in a vacuum — as Stanford University professor Lucy Bernholz tweeted, Crisis Text Line is not a one-off or even an especially bad example of the ways that emergency public services are struggling with austerity and the demands of overwhelming social need and digital transformation. At nearly every level, and especially in the midst of a global pandemic, an enormous number of services have gone from analog to digital-first or even digital-only. In every single case, there are several difficult, irreducible governance questions, often exactly like those that Crisis Text Line’s board faced.
Digital transformation often enables services to grow in important ways, typically at the expense of context, and context matters: Crisis Text Line is a great example. The service has trained more than 12,000 counsellors who have delivered more than 62 million support messages, but it does so by focusing on the development of the technology it provides, instead of on the context of the relationship between the counsellor and a support seeker. That’s not always a bad thing: the relative anonymity and accessibility of text messaging are often described as an important part of the service’s appeal. But the move to support models based on aggregated data inverts most mental health services’ focus on the patient. Said a different way, Crisis Text Line inserts data modelling into the delivery of aspects of its counselling service that have, historically, relied on either a patient’s self-reporting or another person’s assessment. By doing so, the service has altered at least one element of how crisis counselling volunteers implement their duties of care.
That change in model shouldn’t necessarily be prohibitive (it’s highly likely that Crisis Text Line’s users wouldn’t even mind), but it also shouldn’t be invisible. We don’t rely exclusively on patients to define the protections or duties inherent in health care, for a range of very good public policy reasons, and the move to digital platforms requires a whole new set of duties and protections. Instead, the move from health service to start-up technology platform does exactly that: it makes patients individually responsible, treating them like customers and bringing the well-established problems of digital consent into even more vulnerable settings.
There is no institutional or entrepreneurial workaround to funding digital public and emergency services: Over the last five years or so, there’s been a growing amount of attention paid to how to sustain public-interest digital infrastructure, and while there’s a lot of optimism, there’s no way around the need to fund it. Whether as a product of austerity or the innovation agenda, the move to privatize public services has largely happened under the auspices of trusting private sector models of improvement. And crisis services have, broadly, been no different: the political aversion to taxation has often resulted in scarcity-motivated experimentation with a combination of technologies and privatization, both of which introduce significant power asymmetries. Said a different way, there has been a lot of magical thinking about how to fund emergency public services and digital infrastructures, most of which has come at the expense of the people they’re meant to serve. That doesn’t mean we should rely on non-profits funded by billionaires to provide those services, but while such non-profits exist we have to ensure the integrity of their governance as we continue the longer-term work of building sustainable digital public services and infrastructures.
There is no mechanism to manage research, development and public-interest experimentation in digital public services: Said flatly, there are very few, if any, good models for independent, empirical experimentation in how we design and adapt public services, and this is especially true for digital transformation. While there is evident investment in public institutions’ digital capacity through a variety of “digital services” and hired consultants, there is very little funding or infrastructure that centres on the needs of those being poorly served and the needs of the service itself, as opposed to exploring a particular technology or partnership.
Crisis Text Line is, in a lot of ways, a great example of the kinds of gaps that this absence of infrastructure creates, including the ways it leads digital transformation to distort critical relationships. The organization built a modern system for providing crisis counselling services over text messaging, a broadly overlooked, and largely unfunded, technology in public service design. And yet, text messaging is also one of the most effective, accessible and empirically tested communication technologies in the world. But if, instead of building their own non-profit, the team behind Crisis Text Line had tried to convince existing crisis counselling services to prioritize text-messaging interventions, they would have inevitably encountered the razor-thin budget margins those services already operate under, as well as the organizationally divided landscape of pre-existing providers. In other words, even if all Crisis Text Line had wanted to do was empirically prove the value of text messaging, or help modernize crisis counselling around it, there wasn’t, and still isn’t, any obvious way to do so effectively.
And while it’s concerning that there’s no central way to say yes to new approaches to providing emergency services, it’s also true that there are relatively few ways to say no. Crisis Text Line exists not because it’s a rigorously proven model, but because people with the money and influence to make it a reality chose to do so. Some of those same people also believe that the relationship with Loris.ai is an appropriate evolution and partnership for Crisis Text Line. In many ways, the same absence of governance that prevents the systemic adoption of new approaches also allows the exploitation of the most basic assumptions about service providers’ duties and loyalty.
Digital Transformation’s Missing Duty: Loyalty
Most cultures don’t allow just anyone to provide services to the vulnerable, especially in times of crisis; rather, most impose additional legally enforceable responsibilities, called fiduciary duties. A fiduciary is a professional, such as a doctor or a lawyer, who is practically and legally responsible for taking care of a person in a situation where they can’t take care of themselves. Historically, the larger the vulnerability, the more important the fiduciary duties of the profession, not only, as Keith Porcaro noted in Wired, to protect the person in crisis, but also to preserve the public’s trust that it is safe to seek help in a crisis. Critically, fiduciaries ensure the integrity of that trust by explicitly and narrowly focusing their service on a specifically defined set of interests, not by trying to serve broad groups all at once. Your doctor, for example, is legally required to make treatment decisions based on what’s best for you, and is forbidden from basing those decisions on personal interests or even broadly defined “public” interests. And that, really, is what Crisis Text Line, like a lot of digitally transforming public services, misses.
Fiduciary duties are not an end in themselves; they are a legal attempt to align incentives in asymmetrical relationships. While the details and terms vary somewhat, fiduciary duties establish three core responsibilities: to act in the best interests of those being served (the duty of loyalty); to act within professional standards (the duty of care); and to give the people served the information and agency necessary to hold the fiduciary accountable to the first two. In a number of professions, such as medicine, fiduciary duties change based on the role a person plays (for example, a researcher has different duties than a general practitioner). Both have to perform their work to a professional standard to meet their duty of care, but the ways they implement their loyalties are different: researchers are loyal to the integrity of the study; doctors, to the well-being of their patient. That difference is also why we don’t typically allow researchers to ask patients for consent in the midst of their emergency clinical care.
Yet, that’s exactly what Crisis Text Line was doing: using dubious consent to extract data, at best to better serve its users, but as illustrated by the creation of Loris.ai, also for its own benefit. It’s important to say that neither Crisis Text Line nor its volunteers are fiduciaries (crisis counsellors broadly fit into a regulatory grey area designed to help people in need find appropriate fiduciary services). But most people don’t know that, and the reaction to the Crisis Text Line story, from regulators, advocates and users, suggests that most people don’t think they should have to know that. Regardless, that’s the line that Crisis Text Line skirts: appearing as a safe space, while fundamentally prioritizing its own agenda.
One of the critical mistakes made by digital service designers is to focus on duties of care, usually articulated as cybersecurity, privacy or some other compliance practice. In digital service design, it’s significantly easier to build for a standard of practice than it is to situate the work of the technology in the context where it operates. It’s a lot easier for Crisis Text Line to argue about whether the data it shared was “personally identifying” than it is to explain why it was in a texter’s best interest to be triaged by an algorithm or to have their data, however treated, reused by a customer service start-up. It’s easier to technocratically manage data than it is to understand, let alone advocate for, the best interests of the people that data represents, often because those interests are at odds with the organization’s. When those interests collided, Crisis Text Line’s board chair consistently sided with badly defined justifications around “the public good” — justifications that conveniently centred the power and interests of the organization over its existing users.
In the end, the data governance crisis at Crisis Text Line wasn’t really a data crisis at all — it was a governance crisis. And it wasn’t because of the money — so far, there hasn’t been any — but because Crisis Text Line demonstrated how willing it was to prioritize its needs over those of its users. This case shows why, without a duty of loyalty, it’s hard to trust any kind of care — crisis, fiduciary or digital. And, until public and emergency services design for duties of loyalty, digital transformation will create one crisis after another.