Creating Human-Like Chatbots

by Leila Okahata

How can we make large language models more human-like? How far do we go? What does it really mean to be “like a human”? “I think the most distinguishing feature of our species is our creative ability to use tools,” said Jonathan May, principal scientist at USC Viterbi’s Information Sciences Institute (ISI). “We’re not the strongest species, the fastest species, nor the only species that uses tools, but we’re very good at using and developing tools. It’s our superpower.”

From the printing press to smartphones to artificial intelligence, humans have created machines for convenience and connection, and large language models (LLMs), a class of AI systems that process and generate natural language, are among the newest of these innovations. Throughout 2023, ISI explored the possibilities of creating chatbots that are more conversational and human-like.

Personalized Tutor

Emmanuel Dorley, a former postdoc at ISI, built life-like characters for K-12 tutoring systems. Motivated by the underrepresentation of underserved communities in American STEM education, Dorley and his colleagues at the INVITE Institute (INclusive and innoVative Intelligent Technologies for Education) brainstormed how AI tutors could be more personalized and supportive for young learners.

The team is creating a customization toolkit for learners to tailor their agent’s physical appearance and augment the computerized tutor to be more conversational and observant. “We want agents that can engage more naturally with the student. If a student feels frustrated or tired, we want the agent to motivate them to keep going,” Dorley said. “This requires understanding context and generating very specific language and feedback at a very specific time. To do that, we need human-like agents.”

Personalized Autofill

Chatbots are often designed to speak to you, but what if they could speak for you, in your writing style and tone? May, who is also a research associate professor at the Thomas Lord Department of Computer Science, investigated whether chatbots can adopt a user’s persona based on what that user has previously written.

By generating responses similar in content and spirit to what the user would write, autofill can become more of a personal representative than just an assistant. “It won’t completely proxy users, but it’s convenient to have an auto-response that’s in your voice,” May said. “If the model is good at understanding the way you respond given a particular input, then you would be able to push the tab button more often and save time.”
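
One way to sketch the idea (an illustrative assumption, not May’s actual system): feed a chat model a few of the user’s past messages so its suggested reply lands in their voice. The OpenAI client and model name below stand in for whatever chat-completion API is available.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A few of the user's past messages, used purely as style examples.
    past_messages = [
        "hey! totally down for coffee thursday, does 3ish work?",
        "running a bit late -- be there in 10, promise",
    ]

    incoming = "Are you free to review the draft tomorrow morning?"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Suggest a reply written in the same voice as these "
                        "past messages:\n" + "\n".join(past_messages)},
            {"role": "user", "content": incoming},
        ],
    )
    print(response.choices[0].message.content)  # a draft reply "in your voice"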

Automated Dungeon Master

Telling an interactive, narrative story, a critical job of the Dungeon Master in the role-playing, decision-based game Dungeons & Dragons, is usually thought of as a creative domain where humans excel and technology fails. But by building AI that understands and anticipates how people act based on their motivations, beliefs and desires, a compelling automated Dungeon Master may be possible, said Jay Pujara, a research assistant professor at the Thomas Lord Department of Computer Science and an ISI principal scientist.

“To make AI more human, it needs to think about us: what we want, what we’re going to do and the world we live in,” Pujara said. “This project has taught us that any good, engaging conversation requires thinking about the person you’re talking to.”

Without human-like capabilities, conversations between AI and people tend to be robotic and unreliable. For the projects above to come to life, these chatbots need to learn the unstated: common sense.

How Do We Make Bots More Human-Like and How Far Do We Go?

Pujara, who is also the director of the Center on Knowledge Graphs at ISI, created the Commonsense Knowledge Graph. Knowledge graphs (KGs) consist of people, places, things and ideas, all connected by their relationships to one another.

“For example, if we want to represent John Lennon as a member of The Beatles in a KG, we would have an entity called ‘The Beatles’, an entity called ‘John Lennon’, and then a link between them,” Pujara said. Creating a Commonsense KG for chatbots requires a forest of data encompassing billions of branches of human conversations, decisions and concepts.
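
To make that structure concrete, here is a minimal sketch of Pujara’s example as a graph in Python. The networkx library and the relation label “member_of” are illustrative choices, not details of ISI’s Commonsense Knowledge Graph.

    import networkx as nx

    # Entities become nodes; their relationship becomes a labeled edge.
    kg = nx.DiGraph()
    kg.add_edge("John Lennon", "The Beatles", relation="member_of")

    # Reading the link back out of the graph.
    relation = kg["John Lennon"]["The Beatles"]["relation"]
    print(f"John Lennon --{relation}--> The Beatles")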

Regardless of how much data a chatbot digests, can it truly achieve human-like intelligence and reasoning? Mayank Kejriwal, a principal scientist at ISI and research assistant professor at the Daniel J. Epstein Department of Industrial & Systems Engineering, is unsure. Kejriwal tested whether LLMs could make bets but found no convincing evidence that they could make decisions when faced with uncertainty.
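
For a concrete sense of what such a bet asks of a model (an illustrative, textbook-style example, not an item from Kejriwal’s study): deciding under uncertainty means weighing a sure payoff against the expected value of a gamble.

    # A coin flip pays $100 on heads and $0 on tails; the alternative is $40 guaranteed.
    p_heads = 0.5
    expected_gamble = p_heads * 100 + (1 - p_heads) * 0  # 50.0
    sure_thing = 40

    # A rational decision-maker picks whichever option has the higher expected value.
    choice = "gamble" if expected_gamble > sure_thing else "sure thing"
    print(f"EV of gamble: ${expected_gamble:.2f} vs. ${sure_thing} guaranteed -> take the {choice}")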

Humans, however, are not the gold standard either, Kejriwal added. We are biased, irrational and imperfect, so would a “human-like” chatbot also carry these characteristics? Sometimes we also want technology to be superhuman, but AI is only as good as the human-made data it is fed. “‘Human-like’ is a very loaded word. That’s why, at least in the computer science community, we tend to not use it too much, because how do we even measure it?” Kejriwal said.

The limits of advanced AI intelligence remain undefined, but with the rise of chatbots, Kejriwal hopes to make them more accountable and more reflective about their decisions and dialogue. “We live in a world where we expect things to be very personalized and done quickly, but as new technology gets introduced, there’s always the fear of what it’s going to do,” Dorley said. “But for reassurance, you’re always going to need humans involved. Teaming humans with AI works a lot better than just letting AI work on its own.”

Published on April 29th, 2024

Last updated on May 16th, 2024
