## AI-Generated Code Needs a Reality Check: York University Professor Develops Tool to Detect Errors

*September 20, 2024*

**Toronto, Canada:** A recent paper by Gias Uddin, assistant professor at York University, highlights the need for tools to detect and correct errors in code generated by large language models (LLMs). LLMs, known for their impressive ability to generate human-like text, can sometimes produce code with “hallucinations”: plausible-looking code that is incorrect or behaves unexpectedly.

Uddin’s research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. His paper specifically addresses the issue of incorrectness in AI-generated code, emphasizing the importance of robust verification methods.

“We need to develop tools that can effectively identify and correct these hallucinations,” Uddin explains. “Otherwise, AI-generated code could lead to serious consequences, especially in safety-critical systems.”

His team is exploring the potential for AI-powered tools to generate quality assurance (QA) tests, thereby ensuring the reliability of AI-generated code.
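The pattern described above can be sketched in a minimal way: run the AI-generated code against automatically generated checks that compare its output to a trusted oracle, so that a hallucinated bug surfaces as a test failure. The function and harness below are hypothetical illustrations, not code from Uddin's paper.

```python
import statistics


def median(values):
    """A stand-in for AI-generated code with a subtle hallucination:
    it forgets to sort the input before picking the middle element."""
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        return values[mid]  # bug: assumes the list is already sorted
    return (values[mid - 1] + values[mid]) / 2


def qa_test_median():
    """A generated QA check: compare the candidate implementation
    against a trusted oracle (here, the standard library)."""
    cases = [[3, 1, 2], [4, 1, 3, 2], [5], [2, 2, 2]]
    failures = []
    for case in cases:
        expected = statistics.median(case)
        actual = median(case)
        if actual != expected:
            failures.append((case, expected, actual))
    return failures


print(qa_test_median())  # a non-empty list means the hallucination was caught
```

In this sketch, the unsorted inputs expose the bug immediately; the broader research question is how to generate such oracle-backed checks automatically and at scale.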

This research builds on earlier work by Uddin’s team, which investigated sentiment analysis of Stack Overflow comments.

The paper is available for further reading, and interested readers can connect with Uddin via his website.
