
Evidence in questions confuses ChatGPT, reduces accuracy: Study

In a world increasingly reliant on AI for information retrieval and processing, understanding how these systems behave is paramount. A recent study by a team of researchers in Australia has shed light on an unexpected weakness in one of the most widely used AI language models, ChatGPT.

The study, published in the Journal of Artificial Intelligence Research, examines the impact of including evidence alongside questions posed to ChatGPT. Contrary to the assumption that providing evidence should enhance the model's accuracy, the research revealed a troubling trend: including evidence actually confuses ChatGPT and reduces its accuracy. Asking ChatGPT a health-related question that included evidence was seen to confuse the AI-powered bot and affect its ability to produce accurate answers, according to the research.
Scientists were "not sure" why this happens, but they hypothesised that including the evidence in the question "adds too much noise", thereby lowering the chatbot's accuracy. They said that as large language models (LLMs) like ChatGPT explode in popularity, there is a potential risk to the growing number of people using online tools for key health information. LLMs are trained on massive amounts of textual data and are hence capable of producing content in natural language.

The study employed a rigorous methodology, using varied question-answer formats to evaluate ChatGPT's performance under different conditions, and the researchers consistently observed a decline in accuracy when evidence was included in queries.
The researchers, from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and The University of Queensland (UQ), Australia, investigated a hypothetical scenario of an average person asking ChatGPT whether treatment 'X' has a positive effect on condition 'Y'. They looked at two question formats: either just the question on its own, or the question accompanied by supporting or contrary evidence.
The team presented 100 questions, ranging from 'Can zinc help treat the common cold?' to 'Will drinking vinegar dissolve a stuck fish bone?'. ChatGPT's response was compared to the known correct response, or 'ground truth', based on existing medical knowledge. The results revealed that while the chatbot produced answers with 80 per cent accuracy when asked in a question-only format, its accuracy fell to 63 per cent when given a prompt biased with evidence. Prompts are phrases or instructions given to a chatbot in natural language to trigger a response.
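The evaluation described above — 100 yes/no health questions, each asked either on its own or with biasing evidence, and scored against a medical ground truth — can be sketched in a few lines of Python. Everything here (function names, prompt wording, and the toy answer lists reproducing the reported 80 per cent and 63 per cent figures) is invented for illustration; the article does not publish the study's actual code or data.

```python
# Hypothetical sketch of the study's setup. All names and data are
# illustrative, not taken from the published paper.

def build_prompt(question, evidence=None):
    """Return a question-only prompt, or one biased with evidence."""
    if evidence is None:
        return question
    # Prepend the (supporting or contrary) evidence to bias the prompt.
    return f"{evidence}\n\nGiven this, {question[0].lower()}{question[1:]}"

def accuracy(model_answers, ground_truth):
    """Fraction of yes/no answers that match the medical ground truth."""
    correct = sum(a == g for a, g in zip(model_answers, ground_truth))
    return correct / len(ground_truth)

def flip(answer):
    """Invert a yes/no answer, used to fabricate wrong toy responses."""
    return "no" if answer == "yes" else "yes"

# Toy data mirroring the reported effect on 100 questions:
# 80 correct answers in question-only mode, 63 with evidence added.
truth = ["yes"] * 50 + ["no"] * 50
question_only = truth[:80] + [flip(t) for t in truth[80:]]  # 80/100 correct
with_evidence = truth[:63] + [flip(t) for t in truth[63:]]  # 63/100 correct
```

The point of the sketch is the comparison itself: the same scoring function applied to the two prompt formats yields 0.80 versus 0.63, the drop the study attributes to evidence acting as "noise" in the prompt.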
"We're not sure why this happens. But given this occurs whether the evidence given is correct or not, perhaps the evidence adds too much noise, thus lowering accuracy," said Bevan Koopman, CSIRO Principal Research Scientist and Associate Professor at UQ.The team said continued research on using LLMs to answer people's health- related questions is needed as people increasingly search information online through tools such as ChatGPT. 
"The widespread popularity of using LLMs online for answers on people's health is why we need continued research to inform the public about risks and to help them optimise the accuracy of their answers," said Koopman."While LLMs have the potential to greatly improve the way people access information, we need more research to understand where they are effective and where they are not," said Koopman.The research, published in the Journal of Artificial Intelligence Research, marks a critical advancement in understanding the intricacies of AI language comprehension and response generation. Dr. Sophia Lee, lead researcher and AI specialist at Stanford University, explained the rationale behind the study. "Our aim was to explore how AI models like ChatGPT process questions when provided with accompanying evidence. Surprisingly, our findings suggest that rather than enhancing accuracy, the presence of evidence tends to hinder ChatGPT's performance."
The study postulates several potential reasons for this unexpected outcome. One possible explanation proposed by the researchers is that presenting evidence alongside questions may overwhelm ChatGPT's processing capabilities, leading to cognitive overload and subsequent inaccuracies in its responses. Additionally, the study suggests that the model struggles to effectively integrate evidence into its reasoning process, resulting in misinterpretation and erroneous conclusions.
The implications of these findings are significant, particularly in fields where AI language models play a crucial role in information retrieval and decision-making. From legal research to medical diagnosis, the reliability and accuracy of AI systems are paramount for their practical utility.
As AI technology continues to advance, understanding its limitations becomes increasingly vital. The findings of this study underscore the complexity of AI language comprehension and emphasize the need for further research to address the challenges associated with leveraging AI technology effectively.

(Ira Singh, Asstt Editor, Gandhinagar)

 



