The failures of artificial intelligence systems have become a recurring theme in technology news. Credit-scoring algorithms that discriminate against women. Computer vision systems that misclassify dark-skinned people. Recommendation systems that promote violent content. Trending algorithms that amplify fake news.
Most complex software systems fail at some point and need to be updated regularly, and we have procedures and tools to help us find and fix those errors. But current AI systems, dominated by machine learning algorithms, are different from traditional software. We are still exploring the implications of applying them to different applications, and protecting them against failure requires new ideas and approaches.
This is the idea behind the AI Incident Database (AIID), a repository of documented failures of AI systems in the real world. The database aims to make it easier to see past failures and avoid repeating them.
The AIID is sponsored by the Partnership on AI (PAI), an organization that seeks to develop best practices on AI, improve public understanding of the technology, and reduce potential harm AI systems might cause. PAI was founded in 2016 by AI researchers at Apple, Amazon, Google, Facebook, IBM, and Microsoft, but has since expanded to include more than 50 member organizations, many of which are nonprofit.
In 2018, members of PAI were discussing research on an “AI failure taxonomy,” a consistent way to classify AI failures. The problem was that there was no collection of AI failures from which to develop the taxonomy. This led to the idea of creating the AI Incident Database.
“I knew about aviation incident and accident databases and committed to building AI’s version of the aviation database during a Partnership on AI meeting,” Sean McGregor, lead technical consultant for the IBM Watson AI XPRIZE, said in written comments to TechTalks. Since then, McGregor has been overseeing the AIID effort and has helped develop the database.
The structure and format of AIID were partly inspired by incident databases in the aviation and computer security industries. The commercial air travel industry has increased flight safety by systematically analyzing and archiving past accidents and incidents in a shared database. Likewise, a shared database of AI incidents can help spread knowledge and improve the safety of AI systems deployed in the real world.
Meanwhile, the Common Vulnerabilities and Exposures (CVE) database, maintained by MITRE Corp., is a good example of documenting software failures across various industries. It has helped shape the vision for AIID as a system that records failures of AI applications in different fields.
“The goal of the AIID is to prevent intelligent systems from causing harm, or at least reduce their likelihood and severity,” McGregor says.
McGregor points out that the behavior of traditional software is usually well understood, but modern machine learning systems cannot be completely described or exhaustively tested. Machine learning derives its behavior from its training data, and therefore, its behavior has the capacity to change in unintended ways as the underlying data changes over time.
“These factors, combined with deep learning systems’ capability to enter into the unstructured world we inhabit, means malfunctions are more likely, more complicated, and more dangerous,” McGregor says.
Today, we have deep learning systems that can recognize objects and people in images, process audio, and extract information from millions of text documents in ways that were impossible with traditional, rule-based software, which expects data to be neatly structured in tabular format. This has enabled applying AI in the physical world, in areas such as self-driving cars, security cameras, hospitals, and voice-enabled assistants. And all these new areas create new vectors for failure.
Since its founding, AIID has gathered information about more than 1,000 AI incidents from the media and publicly available sources. Fairness issues are the most common AI incidents submitted to AIID, particularly in cases where an intelligent system is used by governments, such as in facial recognition programs. “We are also increasingly seeing incidents involving robotics,” McGregor says.
Hundreds of other incidents are in the process of being reviewed and added to the AI Incident Database, McGregor says. “Unfortunately, I don’t believe we will have a shortage of new incidents,” he adds.
Visitors can query the database for incidents based on the source, author, submitter, incident ID, or keywords. For instance, searching for “translation” shows there are 42 reports of AI incidents involving machine translation. You can then further filter the results based on other criteria.
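To illustrate the kind of keyword-and-field filtering described above, here is a minimal sketch in Python. The record fields (`incident_id`, `source`, `author`, `title`) and the sample data are hypothetical illustrations, not the database’s actual schema or API.

```python
# Hypothetical sketch of keyword filtering over incident reports.
# The fields and sample records below are illustrative only.
from dataclasses import dataclass


@dataclass
class IncidentReport:
    incident_id: int
    source: str
    author: str
    title: str


def search_reports(reports, keyword):
    """Return reports whose title contains the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [r for r in reports if kw in r.title.lower()]


# Illustrative sample data, not real AIID entries.
reports = [
    IncidentReport(1, "example.com", "A. Writer", "Machine translation mishap"),
    IncidentReport(2, "example.org", "B. Writer", "Facial recognition error"),
]

matches = search_reports(reports, "translation")
print(len(matches))  # prints the number of matching reports: 1
```

In the real database, a search like this would be narrowed further by the other fields, such as submitter or incident ID.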
A consolidated database of incidents involving AI systems can serve various purposes in the research, development, and deployment of AI systems.
For instance, a product manager evaluating the addition of an AI-powered recommendation system to an application can check the 13 reports and 10 incidents in which such systems have caused harm to people. This will help her set the right requirements for the feature her team is developing.
Other executives can use the AI Incident Database to make better decisions. For example, risk officers can query the database for the possible damages of employing machine translation systems and develop the right risk mitigation measures.
Engineers can use the database to find out the possible harms their AI systems can cause when deployed in the real world. And researchers can use it as a source for citation in papers on the fairness and safety of AI systems.
Finally, the growing database of incidents can prove to be an important caution to companies implementing AI algorithms in their applications. “Technology companies are famous for their penchant to move quickly without evaluating all potential bad outcomes. When bad outcomes are enumerated and shared, it becomes impossible to proceed in ignorance of harms,” McGregor says.
The AI Incident Database is built on a flexible architecture that will allow the development of various applications for querying the database and obtaining other insights, such as key terminology and contributors. In a paper to be presented at the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21), McGregor discusses the full details of the architecture. AIID is also an open-source project on GitHub, where the community can help improve and expand its capabilities.
With a solid database in place, McGregor is now working with Partnership on AI to develop a flexible taxonomy for AI incident classification. In the future, the AIID team hopes to expand the system to automate the monitoring of AI incidents.
“The AI community has begun sharing incident records with each other to motivate changes to their products, control procedures, and research programs,” McGregor says. “The site was publicly released in November, so we are just starting to realize the benefits of the system.”
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
Published January 23, 2021 — 10:00 UTC