
Your assumptions can change how an AI bot responds to you, new study suggests

Published Oct 7th, 2023 3:10PM EDT
Image: Stanislav Kogiku/SOPA Images/LightRocket via Getty Images


What you think about AI could affect how it responds to you, a new study suggests. Pat Pataranutaporn, a researcher at the MIT Media Lab who co-authored the study, says that “AI is a mirror” and that user bias can genuinely change how we perceive the answers and responses that AI provides us. Pataranutaporn calls this an “AI placebo effect.”

If you haven’t messed around with generative AI like ChatGPT or Claude 2, then a lot of this may not land. However, if you’ve spent any time toying with AI chatbots, you’ve probably formed an opinion on whether the bots will be helpful or harmful to humanity and the workforce as a whole.

According to this new study, your belief on that matter may influence how programs like ChatGPT respond to you. To test the idea that your own opinions of AI could change how it responds, the researchers split 300 participants into three groups and asked each to interact with an AI program and assess how well it delivered mental health support.

Open AI’s ChatGPT start page. Image source: Jonathan S. Geller

One group was told that their AI had no motives, that it was just a basic text completion program. The second group was told that their AI was trained to provide empathy and care, while the third was warned that their AI was manipulative and would only act nice to sell a particular service to them. This created a bias towards each AI in the minds of the users.

From there, the researchers tested the AI placebo effect by looking at how the users’ preconceived ideas about the chatbots affected how the responses were received. In all three groups, users were likelier to report a positive, neutral, or negative experience depending on the kind of chatbot they were told they were dealing with.

Customer service robot. Image source: phonlamaiphoto/Adobe

As such, people who were told their AI was caring perceived its responses as more positive. Meanwhile, those who were told it was manipulative and only there to sell them a service became more hostile toward the AI, which in turn made it more negative toward the user.

This AI placebo effect is an interesting dilemma that could help explain the drastic differences in how AI performs for different people. Users who report negative interactions with programs like ChatGPT may have gone into the experience already expecting the worst, while those who report positive experiences may have done the opposite.

The basis of the study, then, suggests that AI gives people what they want or expect. Further, the more complex the AI system is, the more likely it is to mirror the person that is using it. Whether that’s a good or bad thing remains to be seen.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, headphones, and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.