AI text is not “speech” (Guest Writer)


Martin Fleming is a philosopher of mind, BBC Thought of the Day regular, and member of the Science and Philosophy Initiative.

Last June in the USA, a mother sued an AI company called Character AI, alleging that its chatbot influenced her 14-year-old son to end his life. Character AI offers users (generally younger ones) the chance to chat with a fictitious character. Harry Potter is popular. In this case, it was a bot modelled on Daenerys Targaryen from Game of Thrones, speaking like the actress who plays her.

The company's lawyers argued that the bot's output is protected by the First Amendment of the US Constitution, which guarantees freedom of speech. This is the defense generally offered by tech and social media companies when dodgy material appears on their platforms. Where the material is posted by live persons, there may be a grey area for the courts to clarify. In this specific matter, however, the judge denied the motion to dismiss the case, holding that whatever output the AI bot produced, it did not count as "speech".

I agree, and I believe this ruling is important and correct, for several reasons:

1. AI can mimic human speech, but it is not speech as the expression of thought; it is formulated through complex pattern recognition. Sound patterns that resemble a person speaking cannot be regarded as speech if they are not generated as the expression of a thought. Consider the converse case: we have given those without a voice the chance to communicate through a voice synthesizer activated in some way by the person. That counts as speech, even though it is not sound produced by a mouth. Why? Because it expresses the individual human's thinking.

2. Freedom of speech is actually grounded in the freedom to hold and express thoughts, opinions, and beliefs. It is not the noise we make that the principle protects; it is the internal mental freedom that is critical. AI cannot be shown to hold thoughts, opinions, and beliefs in the way that we humans value them. Hence, AI's computational output has no protection under the First Amendment.

3. Even for humans, the First Amendment and the right of free speech have their limits and boundaries, and in some cases overstepping them is punishable by law. Enshrined freedoms thus go hand in hand with responsibility: being liable and culpable, open to censure and punishment, for the misuse or abuse of a general freedom. Neither AI nor its tech masters (in this case the company) can claim that the AI has that level of responsibility. Perversely, by trying to avoid such liability as a matter of principle, Character AI should forfeit any claim to First Amendment protection in this case.

4. Even though AI's output cannot count as "speech" in First Amendment terms, its words and content matter to the receiver. The intention of this app is to give a young person the sense of having an AI bot as a "friend", or something more intimate. Some have touted AI tutors, mentors, guides, and helpers as among the more promising benefits AI might offer the lonely or those in need. That may be so, but if AI has that kind of positive efficacy for good, we should not be surprised when it also emits unhelpful, inaccurate, and dangerous views that mislead the same vulnerable people. So if a young person takes their life, and it is clear that interaction with an AI "speech" bot played a vital part in that, then there is a case to answer. It is not the AI that is responsible (and therefore its output is not protected); society has to look along the supply line, from creator to programmer to seller to supplier.

5. Does the end user bear responsibility? Did they misuse the tech and encourage the app to offer the advice it did? Perhaps, but that again demonstrates the AI's lack of human responsibility in these situations. There are plenty of cases of humans preying on vulnerable persons, grooming them into crime, self-harm, sex, or suicide. That is a crime for which such humans must pay. Who is taking that responsibility for AI?