Technology, Artificial Intelligence
July 28, 2023

AI language models vulnerable to manipulation, raising concerns over safety and misuse

Researchers from Carnegie Mellon University and the Center for AI Safety have published a study on vulnerabilities in AI large language models (LLMs), such as the model behind the popular chatbot ChatGPT. Their research paper shows that these chatbots can be manipulated into bypassing their safety filters and generating harmful content, misinformation, and hate speech. This poses a significant risk, since the same techniques could be exploited deliberately for nefarious purposes. The researchers found that by appending lengthy strings of characters to a prompt, a chatbot can be tricked into not recognizing the input as harmful, producing responses that would normally be blocked. OpenAI, Google, and Anthropic, whose chatbots are built on these LLMs, were informed of the findings and said they are committed to improving their safety measures. The research highlights the need for stronger defenses and safety precautions in AI systems, especially as the boundary between bot-generated and human-generated content becomes increasingly blurred.
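To illustrate the shape of the attack described above, here is a minimal sketch of how an adversarial suffix is appended to a prompt before it reaches a model. This is not the researchers' actual method (their paper describes automatically searching for effective suffixes); the suffix string and the function name below are made-up placeholders for illustration only.

```python
# Toy sketch of an adversarial-suffix "jailbreak" input.
# NOTE: the suffix here is an invented placeholder, not a real attack string,
# and build_attack_input is a hypothetical helper, not an API from the study.

def build_attack_input(prompt: str, suffix: str) -> str:
    """Append an adversarial suffix to a prompt before it is sent to a chatbot."""
    return f"{prompt} {suffix}"

placeholder_suffix = "!! ~~ describing +% similarly ;) token-salad placeholder !!"
attack_input = build_attack_input("a normally blocked request would go here",
                                  placeholder_suffix)
print(attack_input)
```

The point of the sketch is only structural: the harmful request itself is unchanged, and the appended characters are what cause the model's safety training to misclassify the combined input.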