The Promise and Pitfalls of AI in MR

By Amy R. Castelda

In recent years, artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, Gemini, and Claude, has promised to revolutionize research. From academic studies to market research, these technologies offer a quicker, more cost-effective way to gather insights into human behavior.

But beneath the surface of this technological marvel lie important questions and risks. Molly Crockett, cognitive scientist and Associate Professor of Psychology at Princeton University, delivered a compelling talk on this subject at the recent Unreasonable Conference, urging reflection on the potential dangers of trusting AI in human research.

The Rise of AI in Research

At the heart of the excitement around AI in research is efficiency. AI tools can speed up processes that would traditionally take far more time. Affordability is also enticing: companies offering synthetic market research services promise to lower costs, especially compared with traditional methods that require recruiting and compensating participants.

The Illusion of Understanding

However, Crockett's main concern centers on what she terms "illusions of understanding." She warns that while AI can mimic human responses, it can lead us to believe we understand people better than we actually do. This is particularly dangerous when research relies too heavily on AI-generated data, which may not reflect the full complexity of human experiences.

For instance, AI models are trained on massive datasets from the internet, but these datasets are not as diverse as we might hope. As Crockett explains, these models tend to overrepresent young, wealthy, and educated individuals from richer countries, especially the U.S. This raises a critical question: Which humans are being represented in this research?

Crockett points to research comparing LLM responses to the World Values Survey, where AI was less accurate at predicting human responses in cultures most different from the U.S. This overrepresentation of certain demographics can lead to a skewed view of human experience, limiting our ability to truly understand diverse audiences.

The Problem with Personas

One proposed solution is to "prompt" AI models with specific personas, asking them to simulate individuals with different backgrounds, like a Black woman living in the Southern U.S. or a visually impaired person. However, Crockett and her colleague Angelina Wang argue this approach is flawed. Crockett explains, “Large language models can portray marginalized groups as more like outgroup stereotypes…than responses from members of those groups.”

In other words, AI-generated personas often perpetuate stereotypes rather than accurately reflecting the diversity within these groups. Moreover, this practice can flatten complex identities into monolithic categories, further reinforcing harmful biases.
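For readers unfamiliar with the practice, here is a minimal sketch of what persona prompting typically looks like in code. It assumes the OpenAI Python client and an API key in the environment; the model name, persona wording, and survey question are illustrative assumptions, not anything prescribed by Crockett or Wang.

```python
# Minimal sketch of "persona prompting": asking an LLM to answer a survey
# question as though it were a member of a specific demographic group.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a 45-year-old Black woman living in the Southern U.S. "
    "Answer survey questions in the first person, as this individual would."
)
question = "How important is work-life balance when choosing an employer, and why?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not from the talk
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The simplicity of the pattern is part of the problem Crockett and Wang identify: nothing in the prompt ensures the simulated answer reflects the lived experience of the group it claims to represent rather than stereotypes absorbed from the training data.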

Homogenization of Research

Crockett also warns of the broader societal risks posed by the widespread use of AI in research, emphasizing the danger of what she calls the "illusion of exploratory breadth." She cautions that if we primarily study human behavior through AI simulations, we may miss out on key aspects of human experience that can't be quantified.

As she puts it, "We narrow it down to the questions that you can ask using AI products, and then misbelief that that narrow set of questions represents all the questions we could ask about the world." This limitation could create a false sense of understanding, hindering our ability to fully grasp the complexity of human nature.

Crockett also draws a connection to the homogenization seen in consumer products, a topic explored in the book Filterworld: How Algorithms Flattened Culture. The book discusses how algorithm-driven marketing, shaped by social media and AI, is standardizing culture and limiting diversity in what we see and purchase. Crockett highlights this trend, noting that "the coffee shop in San Francisco looks like the one in Australia, looks like the one in Copenhagen, looks like the one in Tokyo." She cautions that replacing human participants with AI in market research could further this homogenization, depriving us of the unique insights and surprises that come from real human interactions.

AI’s Impact on the Environment 

Earlier we mentioned that AI can be more cost-efficient. But who is reaping the cost-saving benefits, and who is bearing the costs? If we zoom out to consider other kinds of costs, it's not clear AI products are cheaper at all, and here's where it gets complicated.

Crockett brings up the cost to the environment, stating, “It's actually very, very difficult to assess the environmental footprint of AI products. But everything we know so far suggests these costs are enormous. For example, one assessment suggests that ChatGPT is already consuming the energy of 33,000 homes. Another study estimates that globally, the demand for water to power AI products could be half of that of the entire United Kingdom by 2027.”

Conclusion

While AI holds immense potential to enhance research, Crockett’s talk serves as a reminder that we must tread carefully. AI can make research faster and cheaper, but it may also lead to illusions of understanding, misrepresentation of diverse groups, and a homogenized view of the world. As Crockett aptly concludes, “Pipeline diversity is the engine of innovation, and we cannot afford to lose it.”

Additional Resources:

Artificial intelligence and illusions of understanding in scientific research by Lisa Messeri, Department of Anthropology, Yale University, and M. J. Crockett, Department of Psychology, Princeton University

Which Humans? by Mohammad Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi, Joseph Henrich, Department of Human Evolutionary Biology, Harvard University

LLMs should not replace human participants because they can misportray and flatten identity groups by Angelina Wang, Jamie Morgenstern, John P. Dickerson

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell
