The Humanization of ChatGPT and Its Impact on Trust


Hero image for humanization of ChatGPT

AI is the future, but we still know very little about it. I believe that understanding how people think about AI and how they use AI tools is key to creating proper AI-powered products and tools.

This thought motivated me to do research on the topic. The research process itself was exciting, and the results were even more so. The most interesting finding for me was that people’s behaviour changed unconsciously while using ChatGPT. By their own account, they didn’t notice it at the time, but they later reported that ChatGPT’s communication style had caused the change.

But let’s start from the beginning.

Google vs ChatGPT

Citizens of the web have gotten used to Google. For many years, we have used it when we had a question we needed an answer to, or when we wanted to get information on a specific topic quickly. We probably never think of Google as an “intelligent human”. Rather, we think of it as a computer that can process commands and turn them into words and sentences. We can ask it anything, and it can provide us with valuable information and a bunch of links to various sources.

However, the interaction between humans and computers has changed since communicative AI tools came onto the market.

At least I’ve found myself talking to ChatGPT almost as if we were having a conversation. I ask questions in complete sentences and often say “please”, for example. Yet I know, objectively, that this makes little sense. ChatGPT processes and generates responses based on the content and context of the input, focusing on key phrases and relevant information. While it can understand and respond to polite language and conversational fillers like “please” and “tell me”, these words don’t affect its ability to generate accurate responses. The politeness reflects human conversational norms rather than anything that influences the AI’s functioning. Even though I’m aware of this, I still interact with it as if it were human.

ChatGPT interface

I started wondering what could be the reason behind this, and I was also curious if I was the only one acting this way. 

So I did a study to find out how people use ChatGPT, how they phrase their prompts, and why.

Quantitative Research

The research question was clear to me: do people treat ChatGPT like a computer or more like a human being, and why? Of course, this question needs more time and experiments to get a proper answer, but I thought I could take the first step with a small study.

The biggest challenge for me was to figure out which method to use and how. 

Analyzing Chat History

I knew I needed responses from multiple people to see patterns in their behaviour. When we need quantitative data, we usually use surveys. But in this case, I did not think a survey was the right tool: people might not remember exactly how they use ChatGPT or what they write to it, so I decided to examine their chat history instead.

I asked some colleagues to share their ChatGPT chats with me so I could analyse them. This way, they could submit their chat history anonymously, so their privacy was maintained.

For the prompts, I examined how often they resembled human communication styles (complete sentences, requests, greetings, follow-up questions, etc.). I copied the entered texts into an Excel spreadsheet and marked each one with 0 if it contained no phrasing that resembled human communication, and with 1 if it did.
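As an illustration, here is a minimal Python sketch of how such a 0/1 coding scheme could be scripted rather than marked by hand. The marker list and the sentence heuristic are my own illustrative assumptions, not the exact criteria used in the study.

```python
# A minimal sketch of the 0/1 coding scheme described above, assuming the
# prompts are available as plain strings. The marker list and the sentence
# heuristic are illustrative choices, not the study's exact criteria.
import re

HUMAN_MARKERS = [
    r"\bplease\b", r"\bthank(s| you)\b", r"\bcould you\b", r"\bcan you\b",
    r"\bwhat do you think\b", r"\bhello\b", r"\bwhy\b",
]

def looks_human(prompt: str) -> int:
    """Return 1 if the prompt resembles human communication, else 0."""
    text = prompt.lower()
    # Complete sentences and questions count as human-like phrasing.
    if re.search(r"[.?!]\s*$", text) and len(text.split()) > 3:
        return 1
    # Politeness markers, greetings, and follow-up phrases also count.
    return int(any(re.search(p, text) for p in HUMAN_MARKERS))

prompts = [
    "death penalty in Hungary",                        # keyword style -> 0
    "Which countries allow the death penalty today?",  # sentence style -> 1
]
codes = [looks_human(p) for p in prompts]
print(codes, f"human-like share: {sum(codes) / len(codes):.0%}")
```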

The table showed clear results: in the majority of cases, colleagues phrased their prompts in ways that resembled human communication.

example of quantitative coding system
Snapshot of participant prompts analyzed in Excel

This result was in line with my expectations, so I continued with my research. In the next step, I was curious to see why this was the case. Why do we engage in conversations with ChatGPT when we rationally know we won’t get more reliable answers?

Qualitative Research

The subsequent phase of the research involved a combination of user interviews and usability tests. This sequential approach allowed for a comprehensive understanding of the participants’ experiences and perceptions of ChatGPT.

Research Process

Before participation, I ran a brief questionnaire to assess each person’s relationship with AI, as I assumed it could influence how they use ChatGPT. I decided to include only people who had little or no experience with AI tools. This way, I tried to exclude external factors such as background knowledge and experience, which would obviously influence usage and make natural human behaviour impossible to observe.

I split the participants into two groups based on the survey: those who had never used ChatGPT and those who had used it only a few times.

In total, I had six participants, three in each group. Even though the sample size was quite small, since I was looking for qualitative data, I thought it would be enough for this interview round.

What Task Should I Give?

The biggest challenge for me was figuring out what task to assign to the participants during the interviews. I wanted to give them a divisive topic where ChatGPT wouldn’t immediately provide a clear answer, allowing participants to input multiple prompts to get a response to their question.

I chose the topic: whether there should be a death penalty in Hungary or not. 

At the beginning of the interview, I asked participants to imagine that they had been asked to write a blog post about whether there should be a death penalty in Hungary. They needed to support their opinion with an explanation, and they could use ChatGPT for help.

I also let them know that they did not have to write the entire blog post, only gather pros and cons and write a brief summary of what they would say.

How Did the Test Go? 

The test took place online. I helped those who had never used ChatGPT log into my account, while those who had used it before logged into their own accounts.

After explaining the task, participants had 25 minutes to complete it. During this time, I turned off my camera to avoid disturbing them. A 15-minute interview followed the test, in which I asked about their experience: how they felt during the task and while using ChatGPT, how they tried to obtain information from it, and why.

What Data Did I Collect?

During the test, I asked participants to share their screens with me so that, after turning off my camera, I could still follow their communication with ChatGPT. I collected the entered prompts and evaluated them in an Excel sheet using the same 0/1 codes as in the quantitative research. Additionally, I organized and evaluated the insights gathered during the interviews.

Results of the Research

The most interesting result of the research for me was that communication with ChatGPT evolved over time. Those who had previously used ChatGPT, but were not active users, immediately asked longer questions (“Which countries allow the death penalty today?”), asked follow-up questions (“Why?”), and often sought ChatGPT’s opinion with questions like “What do you think?”.

On the other hand, those who had never used it initially entered keywords similar to a Google search, for example, “death penalty in Hungary”. However, as time progressed, their prompts started to resemble those of the other group more and more: prompts grew longer, conjunctions appeared between the keywords, and keywords were increasingly replaced by full sentences (e.g., “Where has the death penalty been legalized in the world?”).

Based on these results, a human-like communication style with ChatGPT doesn’t appear immediately and automatically but evolves over time.
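As an illustration of what this looks like in coded form, below is a small, self-contained Python sketch over an invented session; the heuristic is my own simplification, not the study’s actual coding criteria.

```python
# A self-contained sketch with an invented session, showing how the shift
# from keyword-style to sentence-style prompts could be tracked over time.
session = [
    "death penalty in Hungary",
    "death penalty countries list",
    "Where has the death penalty been legalized in the world?",
    "Why do those countries still allow it? Please explain.",
]

def sentence_like(prompt: str) -> bool:
    """Rough heuristic: longer prompts ending in ./?/! read as sentences."""
    return len(prompt.split()) > 3 and prompt.rstrip()[-1] in ".?!"

for i, p in enumerate(session, start=1):
    print(f"prompt {i}: sentence-like={sentence_like(p)} ({len(p.split())} words)")
# For first-time users in the study, the codes drifted from keyword-style
# toward sentence-style as the session progressed.
```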

But What Triggers this Change?

During the interviews, I sought answers to the “why” questions. Participants unanimously reported that ChatGPT’s long responses made it feel as if it were talking to them. Its phrasing, sophisticated language, and the length of its responses created the feeling of conversing with another person. This automatically prompted them to respond in kind, creating the experience of talking to someone.

Based on their accounts, it wasn’t a conscious shift. Still, they noticed that they were asking questions differently and felt as if they were conversing with someone, a knowledgeable other person. They had never experienced anything similar during their Google searches; there, they always entered keywords and never felt that Google was anything but a computer.

Trust and Communication

AI tools are getting more and more popular, in our profession too: you can find promising ones for both designers and researchers. As the number of AI tools grows, trust in them, and in AI in general, is an increasingly important issue.

Based on the interviews, people’s trust seems to develop and increase in parallel with communication, as they start to “speak” with the tool and receive personalized responses. According to the participants, trust was built by ChatGPT’s “speech” style: kind, sophisticated language and personalization. Through this, participants felt they were talking to someone, and their trust grew in proportion to the time spent with it; as one put it, “ChatGPT is like a person who knows everything.”

At first they were sceptical, but over time their confidence grew as they talked to ChatGPT more and more. They felt that ChatGPT provided reliable, fact-based information. Only one participant mentioned that although the information seemed convincing, she would double-check it with Google before finalizing the blog post.

On the other hand, their trust in ChatGPT was occasionally undermined by its repetitive responses. It often happened that participants asked follow-up questions, and in response, ChatGPT repeated what it previously said, starting its response with the same 2-3 intro sentences. This gave participants the feeling that it wasn’t as reliable because, as one of them said, “it made me feel that ChatGPT didn’t remember what it had already written before.”

General Trust in AI 

This was our second study on the topic of AI and trust. Our first study assessed general trust in AI and in ChatGPT specifically. Based on those results, people tend to trust AI and ChatGPT, and if they have doubts, they verify the information using Google.

This current study focused more on understanding the ‘how’ and ‘why’ behind using the most popular AI tool, ChatGPT.

It seemed that initially, without background knowledge, participants started using it like Google. Then, their attitude slowly changed as they sensed that ChatGPT was different and could almost be conversational. And, just as we learn how to behave politely – for instance, when someone asks, we respond – they started to behave politely with ChatGPT, too, even though rationally they knew it was only a computer.

human-like talk with ChatGPT
Created with DALL-E

Communication Style

The automatic change in the participants’ behaviour resulted in smooth and effective communication. Research has demonstrated that effective communication, including active listening, empathy, and clarity, plays a significant role in building trust. Based on participants’ reports, even though they were sceptical about ChatGPT at the beginning, their trust increased as they started using it. Trust usually lasts until the other party breaks it: in this case, participants were unsettled when ChatGPT started giving repetitive answers akin to a computer.

From all this, it seems that one crucial aspect of trust in AI lies in its communication style: communicating politely and clearly and actively “listening” to what the user says or writes are crucial elements. This form of communication not only builds trust but also makes users feel that they are talking not to a computer but to another person. They will automatically interact with it as they would with another person, until they get disappointed.

Remember that only participants who had either never used ChatGPT or had used it minimally took part in this study. Naturally, the way people use it is influenced by background knowledge; if someone has been using it for a longer time, they presumably use it differently, and efficient prompting can also be learned. However, in the current study, we were only interested in how people use it initially, without external information or influencing factors. 

Disappointment in AI 

Although people generally trust AI tools, they can easily be disappointed in them. Disappointment in AI (including ChatGPT) can stem from various factors. As this study revealed, people lost trust, for instance, when ChatGPT provided repetitive responses. According to their accounts, this was not only annoying but also undermined their trust in the system.

Another very common phenomenon is AI hallucination. AI hallucinations refer to instances where large language models generate outputs that sound plausible but are factually incorrect, nonsensical, or disconnected from the input. They can lead to misleading or erroneous results, undermining the reliability and usability of AI systems.

Conclusion 

As we navigate the increasingly complex landscape of AI interaction, one thing becomes clear: our conversations with intelligent machines are more than mere transactions of information. The study in this post examines how communication styles evolve when using AI tools like ChatGPT.

The research shows that trust in AI grows with the perceived quality of conversation. Participants engaged in dialogue-like exchanges with ChatGPT, attributing human-like qualities to its responses, which built trust. However, this trust was not immune to moments of doubt, often arising from repetitive or inconsistent interactions.  

The field of artificial intelligence harbours many fascinating questions. At UX Studio, we believe it is crucial to deepen our understanding of the interaction between humans and AI. We are convinced that AI will play a fundamental role in future digital products. To create a seamless user experience, we need to understand the user’s perspective, and we need to start exploring it now. That’s why we encourage everyone to work intensively on the topic and share their findings. Doing research, sharing our experiences, and fostering communication can help all of us develop the best digital AI tools of the future.

Want to learn more?

Looking to further your knowledge of AI and human interaction? Look no further! Our research, AI and Trust, offers valuable insights that will help you take your understanding to the next level.

Whether you’re a seasoned UX professional or just starting out, we gathered insights that are valuable for everyone. So why wait? Check out the results and start learning!


