Asking any of the popular chatbots to be more concise "dramatically impact[s] hallucination rates," according to a recent study.

French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. The researchers found that asking the models to keep their responses brief "specifically degraded factual reliability across most models tested."
