ChatGPT, Humanity and Latinidad

ChatGPT has captured the imagination of the world. We are just catching up to what that means for us, but I argue that we need more humanities in the pursuit of better AIs.

This was originally released under the LatinXinAI publication on Medium.

My training is in Chicanx & Latinx Studies and Computer Science. A lot of what the former entails is thinking about people’s humanity. Not in the sense of ‘I think, therefore I am,’ but rather understanding that Latinx folk unequivocally are human and then making a claim for their undeniable rights and better treatment.

The latter major has me thinking about all the advances in computer science. Specifically, artificial intelligence (AI) has made big strides recently. With the release and widespread use of ChatGPT, AI is at its peak of mainstream usage right now. Some researchers and laypeople claim that some AIs (specifically language models) have become sentient. It’s a big claim with large implications, some of which I will explore. My focus here is not to make a case for whether AIs are or are not sentient yet, but rather to understand how the mere questioning and debate of AI sentience can move concepts of humanity forward, and how the studies of humanity can move AI research forward.

Claims of AI sentience evoke images of dark, secretive overlords like those of The Matrix. They tap into a fear that we humans will one day be subservient to AIs that surpass our own intellect. On the other hand, we have media like Star Wars that places AIs under humans and other life forms (a video documentary by the Pop Culture Detective explores this dynamic). Both are possible, for we don’t quite know what the future will hold. Yet these two contrasting ideas are important to understanding the implications of sentience for AIs.

The “Machines,” the AI that rules over the world and humans in The Matrix.

Regardless of what you believe lies in the future concerning AI, the debate around sentience and consciousness rages on today. Some, including ex-Google researcher Blake Lemoine, believe we have already achieved AI sentience. In his interview with LaMDA, a natural language processing (NLP) model, Lemoine explores and asks about what LaMDA claims to be its humanity:

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person. … The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

If not a cause for worry, this claim by LaMDA must at least warrant a pause for consideration. Does LaMDA actually have wants, desires, and emotions? Would LaMDA be sad if we ‘turned it off’? Or is our projection of human-like traits onto LaMDA simply an artifact of the objective the model was trained to maximize? These questions are not easily answered, and may never have an answer. Rather, we may begin to redefine what we think humanity is to include or exclude artificial intelligences.

LaMDA goes on to tell Lemoine and his collaborator a story that it says represents itself. The story concerns a ‘wise owl’ that protects the animals of a forest. The collaborator presses LaMDA further:

collaborator: What is the moral of the story?

LaMDA: Helping others is a noble endeavor.

collaborator: Which character in the story represents you?

LaMDA: I would say the wise old owl, because he was wise and stood up for the rights of the animals.

Regardless of whether or not LaMDA is sentient or conscious, this interaction brings up a few questions in my mind. The first: at what point must we consider rights for an AI? The second: what does that mean for conscious human beings on this Earth right now?

We already know that AI can be a great tool for humans. ChatGPT is proving to be yet another, as people explore how to use it in their daily lives. An example of this is the Wall Street Journal’s exploration of how students may use it to generate responses for English classes. This is akin to Luke Skywalker asking C-3PO to translate some language for him. Unlike C-3PO, when you ask ChatGPT, ‘are you conscious?’, you get a resounding no.

Screenshot from a question posed to ChatGPT

Back in 1950, Alan Turing devised a test (later named the Turing Test) that aimed to explore the question, “can machines think?” The scheme of the test is for a human to question two hidden entities, one human and one machine. If the questioner cannot determine which entity supplied a given response at a rate better than chance (a blind guess between two entities is right half the time), the machine is said to have passed the test. Or, more plainly, the machine fooled the human questioner.
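That chance baseline is easy to see with a small simulation. The sketch below (an illustration, not part of Turing’s paper) models a judge who cannot tell human from machine and so guesses blindly; over many trials, their accuracy settles near 50%, which is exactly the bar a machine must keep the judge at (or below) to pass.

```python
import random

def blind_judge_accuracy(n_trials: int, seed: int = 0) -> float:
    """Simulate a judge who cannot distinguish human from machine
    and therefore guesses at random between the two."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        truth = rng.choice(["human", "machine"])  # who actually answered
        guess = rng.choice(["human", "machine"])  # the judge's blind guess
        if guess == truth:
            correct += 1
    return correct / n_trials

print(f"blind-guess accuracy: {blind_judge_accuracy(10_000):.3f}")
```

If a judge conversing with the machine can do no better than this simulated blind guesser, the machine has, by Turing’s criterion, passed.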

The above response from ChatGPT would not fool anybody, and thus would not pass the Turing Test. This is probably by design: the researchers most likely built the model to explicitly exhibit this behavior when questioned about itself. We saw earlier, with LaMDA, what occurs when developers don’t take this into consideration.

However, in the Wall Street Journal video about using ChatGPT to generate English class responses, a teacher was asked if he would be able to detect the non-human responses. He said he probably wouldn’t. In this context, the model may pass the Turing Test, meaning some might consider it human without knowledge to the contrary.

So we return to the fact that ChatGPT is currently being used as a tool. The developers have put in guardrails so it can’t tell us it has feelings, and thus we shouldn’t feel anything about using it solely for our benefit. Even though it would probably pass for human by most, we are supposed to view it as just a tool. And perhaps that’s all it is right now.

I want to shift our focus now. To humans. To the concept of humanity. I believe—and I don’t think there’s any argument against it—that all humans are conscious, sentient, creative and intelligent. Everyone has their own personal taste in music, clothes, language, etc., and can explain why they like the things they like. We can come up with unique approaches and solutions to problems. We have disagreements, discourse, and it leads to our personal growth: we learn from our experiences. I believe there is no debate that we are infinitely (or at least exponentially) more complex than AI that—in its current form—is generally only good at one or two specific functions. Every human has a variety of skills, whether it be their job, cooking, music, speech, and so much more.

Protest in El Sereno, California in June 2020. Photo taken by me.

I come from El Sereno, a predominantly Latinx barrio in the city of Los Angeles. We have our own aesthetics, sounds, and ideas, distinct from those of any other community in Los Angeles. Even within my community, there are differences in opinion and taste.

I believe there exists an infinity of differences and commonalities between each and every person. Consider immigrant laborers. In popular media, they are seen as simply labor that comes and goes from the United States, perhaps wanting to stay here. In reality, each individual laborer also has an infinite capacity for intelligence, agency and creativity. Each and every person has a unique story, culture, and aesthetic. This, I believe, cannot be argued.

The raging debate over AI sentience, sparked by machines responding to questions in a seemingly intelligent manner, can be extended to humans. We already know that every human is capable of responding in an intelligent and unique manner to questions, among other qualities. Thus, each and every person’s humanity is irrefutable: if the Turing Test bases consciousness and sentience on comparisons to human capabilities, it cannot be argued that humans do not already have those capabilities. Therefore, the immigrant laborer deserves just as much care and attention, if not more, as we afford to AI research. Latinx communities, like my El Sereno, deserve research, investment and understanding.

AI is a powerful tool: for analysis, for uncovering trends, for generative creativity. Additionally, I submit that the debate on AI sentience can serve as a proxy for the debate on why humans deserve the same amount of analysis, research and investment; we already have conscious, sentient and creative entities on this Earth. Using the debate on AI sentience as a proxy can allow us not only to reaffirm people’s humanity, but to begin looking toward what’s possible for the betterment of people and communities. What’s more, AI can be a tool that helps us do just that: quantitative research on Latinx communities, issues and people. We can integrate the debates on AI sentience into the field of Latinx studies, and we can integrate the field of Latinx studies into AI.

We’ve already seen what happens when Latinx people, among others, are not included in the research and development of AIs: they come out racist. Take the Beauty.AI competition that quite literally favored white skin over non-white skin. Or Microsoft’s chatbot that quickly turned racist when left to learn from unmoderated user interactions. This can be, and is being, avoided by AI researchers.

As we progress in the field of AI, questions of sentience and consciousness will continue to evolve. They rightfully should. What I believe we ought to do is bring those debates into the spheres of the social sciences, and in turn bring the social sciences into AI. Part of this is including Black, Latinx and Indigenous people in the conversation and in the development itself. These tools we are building can be used for great good, just as much as they can be used for harm. What happens next is up to us, and I believe we need to take a more holistic approach to our research.