NEWS

New study reveals AI language models’ limitations in recognizing verbal nonsense
AI, Limitation, Meaningless Words 2024-03-30

A recent study from Columbia University has found that current language models, including ChatGPT and BERT, can mistake nonsensical sentences for meaningful ones. In the experiments, human participants were shown hundreds of sentence pairs and asked to choose the more natural sentence in each pair, that is, the one more likely to be heard or read in everyday life. Their judgments were then compared with those of nine different language models. While the more advanced models performed better, every model made mistakes at times, preferring sentences that struck human readers as nonsense.

This weakness presents an opportunity to improve chatbot performance and to gain insight into how humans process language. The study also raises questions about how far AI systems should be trusted with important decisions, and whether studying the computations of AI chatbots can shed light on human brain circuitry. Further analysis of chatbot strengths and weaknesses could provide valuable insights into both questions.

Source: https://zuckermaninstitute.columbia.edu/verbal-nonsense-reveals-limitations-ai-chatbots
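
The press release does not spell out how each model’s preference between two sentences was measured. One common way to elicit such a preference from a causal language model, shown here only as an illustrative sketch, is to score each sentence by the total log-probability the model assigns to its tokens and take the higher-scoring sentence as the model’s choice. The sketch assumes the Hugging Face transformers library and GPT-2; both the model and the example sentence pair are placeholders, not taken from the study.

# Sketch: which of two sentences does a causal language model rate as more probable?
# Assumptions: Hugging Face transformers and GPT-2; the study's exact scoring
# procedure is not described in the press release.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to the sentence's tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy over
        # the predicted tokens; negate and rescale to a total log-likelihood.
        outputs = model(**inputs, labels=inputs["input_ids"])
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predicted

def model_preference(sentence_a: str, sentence_b: str) -> str:
    """Return the sentence the model scores as more 'natural'."""
    if sentence_log_likelihood(sentence_a) >= sentence_log_likelihood(sentence_b):
        return sentence_a
    return sentence_b

# Placeholder sentence pair (not from the study): one ordinary sentence,
# one grammatical but semantically odd sentence.
print(model_preference(
    "The cat slept on the warm windowsill all afternoon.",
    "The windowsill slept on the warm cat all afternoon.",
))

A human reader picks the first sentence immediately; the study’s point is that models scored in this or similar ways can sometimes rank such a pair the opposite way.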