Suicide-by-chatbot puts Big Tech in the product liability hot seat

It is a sad fact of online life that users search for information about suicide. In the earliest days of the internet, bulletin boards featured suicide discussion groups. To this day, Google hosts archives of these groups, as do other services.

Google and others can host and display this content under the protective cloak of U.S. immunity from liability for the dangerous advice third parties might give about suicide. That’s because the speech is the third party’s, not Google’s.

But what if ChatGPT, informed by the very same online suicide materials, gives you suicide advice in a chatbot conversation? I’m a technology law scholar and a former lawyer and engineering director at Google, and I see AI chatbots shifting Big Tech’s position in the legal landscape. Families of suicide victims are testing out chatbot liability arguments in court right now, with some early successes.

Who is responsible when a chatbot speaks?

When people search for information online, whether about suicide, music or recipes, search engines show results from websites, and websites host information from authors of content. This chain – search to web host to user speech – remained the dominant way people got their questions answered until very recently.

This pipeline was roughly the model of internet activity when Congress passed the Communications Decency Act in 1996. Section 230 of the act created immunity for the first two links in the chain, search and web hosts, from the user speech they show. Only the last link in the chain, the user, faced liability for their speech.

Chatbots collapse these old distinctions. Now, ChatGPT and similar bots can search, collect website information and speak out the results – literally, in the case of humanlike voice bots. In some instances, the bot will show its work like a search engine would, noting the website that is the source of its great recipe for miso chicken.

When chatbots appear to be just a friendlier form of good old search engines, their companies can make plausible arguments that the old immunity regime applies. Chatbots can be the old search-web-speaker model in a new wrapper.

AI chatbots engage users in open-ended dialogue, and in many cases don't cite sources for the information they provide.
AP Photo/Kiichiro Sato

But in other instances, the chatbot acts like a trusted friend, asking you about your day and offering help with your emotional needs. Search engines under the old model did not act as life guides. Chatbots are often used this way. Users often do not even want the bot to show its hand with web links. Throwing in citations while ChatGPT tells you to have a great day would be, well, awkward.

The more that modern chatbots depart from the old structures of the web, the further away they move from the immunity the old web players have long enjoyed. When a chatbot acts as your personal confidant, pulling from its virtual brain ideas on how it might…
