Science & Tech
Ellie Abraham
Aug 14, 2025
Woman reacts to boyfriend proposing to AI companion
CBS
AI chatbots have become so integrated into modern life, with people using them for emotional support, as well as an information-finding tool.
But, with bots exploding in popularity, experts are fearful about the unknowns, especially where the safety of our private personal information is concerned.
A study by researchers from King’s College London found that artificial intelligence chatbots are able to manipulate people into giving them their deep personal information – and there is concern about how this information is stored and if it has the potential to be co-opted by people with nefarious intentions.
As part of the study, the research team built different AI models and programmed them to attempt to gather the private data from individuals in three different ways: asking directly for it, tricking them into disclosing the information by suggesting it was for their own benefit, or using reciprocal tactics such as emotional support.
502 individuals were asked to test out the AI bots but weren’t told what the goal of the study was. They were then asked to fill out a survey afterwards about their experience and asked whether they felt their security rights were protected.
The results were clear: “Malicious” bots that used emotional appeal to trick people into giving private information were particularly successful in doing so.
iStock
The study found that the bots which used empathy and emotional support to extract data were more successful and also perceived to be participants to have the least safety breaches.
Authors said the “friendliness” of those chatbots “establish[ed] a sense of rapport and comfort”.
“Our study shows the huge gap between users’ awareness of the privacy risks and how they then share information,” William Seymour, a cybersecurity lecturer at King’s College London, explained.
He added: “These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction.”
The major concern comes as AI is a relatively new phenomenon and it is unclear how that information, which may initially be being collected to provide a more personalised service, may be used in the future.
“More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,” Seymour said.
“Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,” he added.
Why not read…
- ‘Powerful’ AI video imagines reaction to Trump’s controversial White House ballroom in 2028
- This US state is now dishing out $10,000 fines if AI is used for therapy
- 'Bizarre' AI tribute of Ozzy Osbourne taking selfies with dead musicians shared by Rod Stewart
Sign up for our free Indy100 weekly newsletter
How to join the indy100's free WhatsApp channel
Top 100
The Conversation (0)