Solutions For Efficient and Trustworthy Artificial Intelligence

There is high demand for AI solutions in industrial applications. These solutions need to be efficient, trustworthy, and secure in order to be used in series production or quality control. However, the new possibilities opened up by generative AI also raise questions. Can users rely on what a chatbot says? How can vulnerabilities in an AI model be detected early on during development? At the recent Hannover Messe 2024 exhibition, the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS and the institutes of the Fraunhofer Big Data and Artificial Intelligence Alliance presented two exhibits and several use cases centering on trustworthy AI solutions.

AI offers a wealth of potential for industrial manufacturing, spanning fields such as automation, quality checks, and process optimization. “We are currently receiving a lot of inquiries from companies that have developed AI prototypes and want to put them into series production. To make the scale-up a success, these AI solutions have to be tested systematically so it is also possible to identify vulnerabilities that did not become apparent in the prototype,” explains Dr. Maximilian Poretschkin, Head of AI Assurance and Certification at Fraunhofer IAIS.

AI Reliability for Production: Assessment tools for systematic testing of AI models

One such application is a testing tool for AI models used in production or in mechanical and plant engineering. The tool can be used to systematically pinpoint vulnerabilities in AI systems as a way to ensure that they are reliable and robust. “Our methodology is based on specifying the AI system’s scope of application in detail. Specifically, we parametrize the space of possible inputs that the AI system processes and give it a semantic structure. The AI testing tools that we have developed in the KI.NRW flagship project ‘ZERTIFIZIERTE KI’, among others, can then be used to detect weaknesses in AI systems,” Poretschkin explains.
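
To illustrate the general idea, the following minimal Python sketch (not the Fraunhofer tooling, and purely illustrative) parametrizes the input space of a hypothetical surface-defect classifier along two assumed semantic axes, image brightness and sensor noise, and scans that space for operating points where accuracy falls below an assumed threshold. The model, the sample generator, and the threshold are placeholders chosen for the example.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def classify(image):
        """Hypothetical stand-in for the model under test:
        flags an image as defective (1) if its mean intensity is low."""
        return int(image.mean() < 0.45)

    def render_sample(brightness, noise, defect):
        """Hypothetical generator mapping semantic parameters to an input image."""
        base = 0.3 if defect else 0.7
        image = np.full((32, 32), base * brightness)
        return np.clip(image + rng.normal(0.0, noise, image.shape), 0.0, 1.0)

    # Semantic structure of the input space: each axis is an operating condition.
    brightness_levels = np.linspace(0.5, 1.2, 8)
    noise_levels = np.linspace(0.0, 0.3, 7)

    weak_spots = []
    for b, n in itertools.product(brightness_levels, noise_levels):
        samples = [(render_sample(b, n, defect), int(defect))
                   for defect in (True, False) for _ in range(20)]
        accuracy = np.mean([classify(img) == label for img, label in samples])
        if accuracy < 0.9:  # assumed acceptance threshold
            weak_spots.append((round(b, 2), round(n, 2), round(accuracy, 2)))

    print("Operating points below the target accuracy:")
    for b, n, acc in weak_spots:
        print(f"  brightness={b}, noise={n}: accuracy={acc}")

In a real assessment, the placeholder classifier and sample generator would be replaced by the system under test and a parametrization of its actual operating conditions; the principle of structuring the input space semantically and scanning it systematically stays the same.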

Fraunhofer and the other partners involved in the research project are developing methods for assessing the quality of AI systems: the AI assessment catalog provides companies with a practice-tested guide for making their AI systems efficient and trustworthy. A recent white paper also addresses the question of how AI applications built on generative AI and foundation models can be assessed and made secure.

“Who’s Deciding Here?!”: What can artificial intelligence decide — and what decisions can it not yet make?

The collective exhibit titled “Who’s Deciding Here?!”, which the Fraunhofer BIG DATA AI Alliance presented at Hannover Messe 2024, also centers on trustworthy AI. The exhibit tied in with freedom, the theme set by the German Federal Ministry of Education and Research (BMBF) for its Science Year 2024. How do technologies like artificial intelligence influence our freedom to make decisions? How trustworthy are the AI systems that will likely be used increasingly in applications involving sensitive data, such as credit checks?

Fraunhofer researchers are making an important contribution to unlocking the potential of AI in the real world. “The current development of generative AI is a prime example of the huge potential and the many challenges surrounding this forward-looking technology in terms of security, transparency, and privacy. Many of the technologies and solutions we see today originate outside Europe. The research done by the Fraunhofer-Gesellschaft is helping to maintain and grow the technological sovereignty and independence of German companies,” says Dr. Sonja Holl-Supra, Managing Director of the Fraunhofer BIG DATA AI Alliance, which comprises over 30 Fraunhofer institutes and brings together Fraunhofer’s expertise across the field of AI.

For more information: www.fraunhofer.de