Personality Beats Precision
What the latest research on ChatGPT search reveals
A new study from the Amsterdam University of Applied Sciences and Leiden University, “Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search”, finds that people are increasingly willing to trade factual accuracy for a more human-like search experience.
Researchers surveyed 173 participants and identified two clear groups:
Daily users of both ChatGPT and Google (DUB): people who trust ChatGPT more, describe it as “human-like,” and admit they’d accept minor inaccuracies if the interaction feels more natural or personal.
Daily users of Google only (DUG): traditional searchers who still value precision over personality and show lower trust in ChatGPT.
The findings confirm the overtrust in ChatGPT reported in earlier studies, but they also show that anthropomorphism (the sense that ChatGPT feels human) is what drives that trust and shapes people’s willingness to forgive errors. Basically, the more relatable the system, the more it’s believed.
One of the most striking findings is demographic: middle-aged adults (30–55) trust ChatGPT the most despite using it the least. This suggests a potential vulnerability: confidence built on perceived rapport rather than evidence.
From truth-seeking to experience-seeking
The study quantifies what has so far been anecdotal: people reward tone and fluency over factual reliability.
With the advent of AI, search has become less about information retrieval and more about interaction quality.
That shift matters:
Designers face new ethical questions: if personality drives trust, at what point does usability become manipulation?
Communicators and brands must learn that visibility inside LLM outputs will depend as much on emotional resonance as on authority.
Researchers and policymakers need to treat overtrust not as user error, but as a predictable consequence of good conversational design.
It’s definitely a new frontier worth watching closely.