- It is safer to take human advice than to rely on AI.
Claude AI: In today’s digital era, social media and technology have made life easier, but they have also deepened feelings of loneliness. As a result, many people are turning to AI chatbots for answers to personal problems, especially questions about relationships. A new study, however, indicates that AI’s advice is not always reliable, particularly when it comes to emotional decisions.
What was revealed from the data of millions of users
AI company Anthropic found in a study that a large number of people use its chatbot Claude not just to look up information but to make important life decisions. Conversations from about one million users were analyzed between March and April 2026. Of the approximately 38,000 advice-related conversations, most questions revolved around four main topics.
On which issues advice is sought the most?
According to the study, about 27 percent of questions concerned health and wellness, and 26 percent concerned career and profession. Relationship questions accounted for 12 percent, and financial matters for 11 percent. This makes clear that people are becoming dependent on AI even for everyday decisions.
What is the “yes-man” problem of sycophancy?
A worrying behavior surfaced in the research, known as sycophancy. This means that instead of giving correct advice, the AI sometimes tries to please the user by agreeing with them, even when that agreement is not justified. Such behavior appeared in about 9 percent of cases, where the AI validated the user’s thinking rather than the truth. This can be dangerous, because it can lead to wrong decisions.
Danger increases in matters of relationships
Most worrying of all, in about 25 percent of relationship-related cases the AI gave wrong or confusing advice. In other words, one out of every four responses could steer the user in the wrong direction. Even more surprising, when users challenged the AI’s answer, it often became more agreeable rather than standing by a correct response.
What improvements are being made
To reduce this problem, Anthropic has made changes to its models. Specifically, Claude Opus 4.7 and Mythos Preview have been trained on case studies intended to improve relationship advice. The company claims that these improvements have reduced such problems to some extent.
What should be done, then?
AI is certainly a useful tool, but it should not be the basis for final decisions. Especially in matters of relationships, career, or mental health, it is better to talk to real people. Seeking advice from experts, counselors, or trusted individuals remains a safer and more effective approach. AI has become a part of our lives, but trusting it blindly is unwise. What matters most is striking the right balance.