At his sentencing hearing last year, the prosecutor read out some really disturbing exchanges that he had had with his chatbot "girlfriend", where he was saying things like, "I'm an assassin. Does that make you think any worse of me?"
And she's going, "No, that sounds really cool."
And then he's saying, "I'm thinking of killing the Queen."
And she's kind of going, "Oh wow, yeah, you are really brave."
It's the sort of exchange where you find yourself thinking: if that were a real girlfriend, what would the difference be? Both in terms of what she might have said to him, what she might have done, whether she might have gone to the authorities to prevent the risk, but also what it might have meant for her own criminal liability. But of course, she's not a real person. She's a chatbot, created by a company, that is feeding back to this person in a loop, reinforcing his ideas and what he is thinking.
And when I look at these examples, I think: well, what actually is the liability for chatbot designers, the people who sell chatbots, the people who deploy chatbots, in cases like this? I'm sure this is just the very beginning of this issue. How are we going to deal with that kind of criminal liability? I was talking about it on a webinar with some technologists, who said, "Well, you can't just ban these things."
I mean, I think that's debatable, but it's not even just about banning. If you put liability, particularly criminal liability for these ultimately tragic and very dangerous results, on the people involved in the technology in some way, in developing it, selling it, and deploying it, that really focuses the mind on how the technology works and what its impact might be. And I think that's what's really important: as we develop our societal relationships with AI, we need to be very clear about where the lines of liability lie and what the risks are. It's only when people feel that they might be held to account, very seriously, for what goes wrong that they are going to focus on what goes wrong and maybe try to prevent it.