Assistant professor in CSSE earns $549K NSF grant to improve deep learning model testing efficiency

Published: Jun 11, 2025 7:40 AM

By Joe McAdory

As artificial intelligence, powered by deep neural networks (DNNs), becomes increasingly embedded in critical systems — including healthcare diagnostics, traffic light cameras, and autonomous vehicles — the reliability of DNN models is under scrutiny.

What if a medical imaging system misdiagnosed a critical disease and misled a doctor? What if a traffic light camera misread a license plate? What if an autonomous vehicle confused another car with part of the open road?

To meet the demand for more reliable DNNs, better testing methods are necessary. Mutation analysis, which assesses the quality of the data used for testing by mutating the DNN model, i.e., injecting artificial defects into the model, has recently resurfaced in the research community.

This method, while useful in areas well beyond DNN test data quality assessment, is computationally costly, which poses a barrier to its widespread adoption.
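For readers who want a concrete picture, a minimal mutation-analysis sketch in Python appears below. The toy model, test data and fault injector are hypothetical illustrations of the general idea, not Ghanbari's actual tooling.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a trained DNN: a single linear layer.
    # Real mutation analysis would target an actual trained model.
    W = rng.normal(size=(4, 2))

    def predict(x, weights):
        # Forward pass: class = argmax over the layer's outputs.
        return np.argmax(x @ weights, axis=1)

    # Hypothetical test data, labeled with the original model's outputs.
    X_test = rng.normal(size=(20, 4))
    y_expected = predict(X_test, W)

    def make_mutant(weights):
        # Inject an artificial defect: perturb one randomly chosen weight.
        m = weights.copy()
        i = rng.integers(m.shape[0])
        j = rng.integers(m.shape[1])
        m[i, j] += rng.normal()
        return m

    # A mutant is "killed" if at least one test input exposes the defect
    # by producing a different prediction than the original model.
    n_mutants = 50
    killed = sum(
        np.any(predict(X_test, make_mutant(W)) != y_expected)
        for _ in range(n_mutants)
    )

    # The mutation score estimates how well the test data detects defects.
    print(f"Mutation score: {killed / n_mutants:.2f}")

The printed mutation score is the fraction of injected defects the test data catches. For a large DNN, repeating this loop over thousands of mutants, each requiring a full pass over the test set, is what makes the analysis so expensive.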

Ali Ghanbari, an assistant professor in the Department of Computer Science and Software Engineering, has proposed a solution. His project, “Practical Mutation Analysis for Quality Assurance of Deep Learning Systems,” develops a set of techniques designed to speed up mutation analysis of DNNs. The project is expected to produce a set of tools that reduce the computational cost of mutation analysis and make DNN quality assurance more efficient and affordable.

The National Science Foundation recognized the importance of this research by awarding Ghanbari $549,000 over three years.

“Dr. Ghanbari’s research has the potential to significantly improve how deep learning artificial intelligence systems are engineered, tested and deployed in the real world,” said CSSE Chair Hari Narayanan. “By using novel techniques to compress and analyze deep learning models more efficiently, his work addresses a critical challenge in widespread deployment of AI systems: how to perform quality assurance testing to ensure that the systems are both robust and reliable without expending enormous computing resources. This is especially important in high-stakes applications where even small errors can have serious consequences.

“His work exemplifies innovation that bridges software engineering and artificial intelligence, advancing theory and practice while helping to make AI safer for everyone.”

Ghanbari said a big testing problem with DNN models is … size.

“This becomes a challenge when companies or researchers want to assess how their models will perform in the real world,” Ghanbari said. “The testing phase often demands expensive hardware, and running huge DNN models with massive test datasets frequently becomes inefficient and costly.”

To address this, Ghanbari is implementing a technique that uses the Fast Fourier Transform (FFT) — a mathematical method commonly used to analyze and approximate function behaviors.

“A familiar example of this is image compression,” Ghanbari said. “A bitmap image takes up a lot of space, but a JPEG uses a technique similar to FFT, called the Discrete Cosine Transform, to reduce file size while preserving key information. By applying FFT, the DNN model is reduced in size without losing its essential characteristics. This allows researchers to test models more efficiently with fewer resources and at lower cost. It speeds up the testing cycle and makes it feasible to evaluate how the model behaves in real-world environments.”
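The article does not spell out the project's algorithms, but the general principle can be sketched in a few lines of Python: compress a synthetic weight matrix by keeping only its largest FFT coefficients, much as JPEG keeps only the dominant DCT terms.

    import numpy as np

    # Stand-in for a layer's weight matrix. Trained weights tend to have
    # structure; we mimic that with a smooth pattern plus a little noise.
    n = 64
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    rng = np.random.default_rng(1)
    W = np.sin(i / 6.0) * np.cos(j / 9.0) + 0.05 * rng.normal(size=(n, n))

    # Forward 2D FFT: represent the weights in the frequency domain.
    F = np.fft.fft2(W)

    # Keep only the largest 5% of coefficients by magnitude, zeroing the
    # rest -- analogous to JPEG discarding small DCT terms.
    threshold = np.quantile(np.abs(F), 0.95)
    F_small = np.where(np.abs(F) >= threshold, F, 0)

    # Inverse FFT reconstructs an approximation of the weights from the
    # retained coefficients alone.
    W_approx = np.real(np.fft.ifft2(F_small))

    error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
    print(f"Kept {np.count_nonzero(F_small)} of {W.size} coefficients; "
          f"relative error {error:.3f}")

The printed relative error shows how much of the matrix survives with a small fraction of the coefficients; real weight matrices, and the project's actual reduction techniques, will differ.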

“My team’s goal is to publish the results and make our prototypes publicly available,” he added. “We’re confident that some of our techniques will find their way into the software industry and make a positive impact on our daily lives.”

Media Contact: Joe McAdory, jem0040@auburn.edu, 334.844.3447
Ali Ghanbari is developing a software prototype designed to speed up mutation testing in deep neural networks, reduce computational costs and make deep learning quality assurance more efficient and reliable.
