Industries are becoming dependent on AI for a growing range of tasks, and artificial intelligence now plays a prominent role in organizations' operations. Relying entirely on AI for decisions, however, is not a smart way to run operations, because AI decisions can be biased. Many organizations have already encountered AI decisions that were biased on various grounds.
What is black box AI?
A black box AI model is one that produces results without revealing its internal process to end users. For instance, if you provide pictures of flowers to an AI system, it can sort them by color family, but the user will not know how the system drew those distinctions.
In black box AI, the user cannot explain the reason for an output; the results, however, are often remarkably accurate, which is largely due to the complexity of the underlying algorithms.
The black box approach typically arises in deep learning models: the model ingests data from endpoints, runs its algorithm to read and analyze that data, and correlates data features to generate an output.
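To make this concrete, here is a minimal sketch of a "black box" in practice, using scikit-learn and synthetic data (the dataset, layer sizes, and all other settings are illustrative, not from the article): the trained network is accurate, but the only artifact it exposes is hundreds of raw weights with no direct human meaning.

```python
# A minimal "black box": accurate predictions, uninterpretable internals.
# All data and hyperparameters here are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")

# The only "explanation" the model offers is its raw weight matrices:
n_weights = sum(w.size for w in model.coefs_)
print(f"weights with no direct human meaning: {n_weights}")
```

The point of the sketch is the asymmetry: querying the model for a prediction is trivial, but nothing in `coefs_` tells a user *why* any single flower, loan, or image was classified the way it was.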
AI bias is a consequence of black box AI, where the system's output reflects the values embedded in the data it was trained on. In 2019, Apple co-founder Steve Wozniak accused the Apple Card algorithm of gender discrimination, reporting that it gave him a credit limit ten times higher than his wife's even though they share all their assets.
Many organizations have had similar encounters with artificial intelligence and are therefore working toward better approaches to AI models.
AI bias not only produces unfair decisions but can also inflict lasting damage on an organization's operations and reputation.
Explainable AI Black Box
Artificial intelligence is vital for enhancing organizations' capabilities, but removing the limitations of black box AI is equally important; hence, explaining how an AI system reaches its results has become crucial for organizations. Explainable AI is an approach in which the process of an AI/ML model can be explained to its users.
AI decisions are complicated to explain because deep learning relies on neural networks, which handle complex decisions in a way loosely modeled on the brain's own network of neurons. This is how artificial intelligence tries to mimic human behavior.
A human's neural network is also hard to explain: we cannot fully describe how a person arrives at a decision, yet people can justify their decisions after making them. A similar idea is used to add explainability to black box AI, through what are called post-hoc explainability methods.
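One widely used post-hoc technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, without ever opening the model up. The following is a dependency-free numpy sketch; the `black_box_predict` function is a hypothetical stand-in for an opaque trained model that we can only query.

```python
# Post-hoc explanation via permutation importance (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (3 * X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)  # ground-truth labels

def black_box_predict(X):
    # Hypothetical opaque model: we can query predictions, nothing more.
    return (3 * X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    base_acc = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops.append(base_acc - np.mean(predict(Xp) == y))
        importances.append(np.mean(drops))
    return importances

imp = permutation_importance(black_box_predict, X, y)
print(imp)  # feature 0 dominates, feature 1 contributes nothing
```

The explanation it produces ("feature 0 matters most, feature 1 not at all") is derived purely from the model's behavior after the fact, which is exactly the post-hoc idea described above.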
Data scientists use several methods to achieve explainability in AI models. The quality of an explainability method can be judged on the following points.
- The method accurately explains the reasons behind the model's decisions.
- Slight differences in the input do not change the explanation.
- The explanation conveys the importance of each variable.
- The explanation generalizes to many cases or decisions.
- The explanation is easy for humans to understand.
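Some of these criteria can themselves be checked empirically. For example, the stability point can be tested by recomputing an explanation on slightly perturbed inputs and comparing the two results. A minimal numpy sketch, with a hypothetical stand-in model and a single-pass permutation-style importance measure (all names illustrative):

```python
# Empirical stability check for an explanation method (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def predict(X):
    # Hypothetical black box: we only query its outputs.
    return (X[:, 0] - X[:, 1] > 0).astype(int)

def importance(X, y):
    # Accuracy drop when each feature is shuffled (one pass per feature).
    base = np.mean(predict(X) == y)
    out = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        out.append(base - np.mean(predict(Xp) == y))
    return np.array(out)

X = rng.normal(size=(2000, 2))
y = predict(X)

imp_a = importance(X, y)
imp_b = importance(X + rng.normal(scale=0.01, size=X.shape), y)  # tiny perturbation

print(np.abs(imp_a - imp_b).max())  # small gap means a stable explanation
```

If the gap were large, the explanation would fail the stability criterion: tiny, meaningless changes in the data would yield a different story about the model.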
Open-Source Approach to the AI Black Box Problem
Microsoft has open-sourced InterpretML, a software toolkit that helps developers train interpretable models and explain the behavior of black box systems. It helps developers build explainability into AI models by implementing intelligible models.
Intelligibility is a concept drawing on design, psychology, human-computer interaction, and machine learning, and in InterpretML it is achieved through the Explainable Boosting Machine (EBM) algorithm.
In addition to EBM, InterpretML also supports methods such as LIME, SHAP, linear models, partial dependence, decision trees, and rule lists. These help data scientists gauge how trustworthy a black box decision is by checking the consistency between methods.
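The consistency idea can be illustrated without InterpretML itself. The sketch below (scikit-learn only; the dataset and models are illustrative, not the toolkit's own API) compares two independent explanation methods for the same black box: permutation importance on the model, and the feature importances of a shallow surrogate tree trained to mimic the model's predictions. Agreement between the two increases confidence in the explanation.

```python
# Cross-checking two explanation methods on one black box (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=6, n_informative=2,
                           n_redundant=0, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Method 1: permutation importance computed on the black box itself.
perm = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
rank_perm = np.argsort(perm.importances_mean)[::-1]

# Method 2: a shallow surrogate tree trained to imitate the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
rank_tree = np.argsort(surrogate.feature_importances_)[::-1]

print("top features (permutation):", rank_perm[:2])
print("top features (surrogate):  ", rank_tree[:2])
```

InterpretML packages richer versions of this workflow, exposing EBM, LIME, SHAP, and the other methods listed above behind a common interface so such cross-checks are easy to run side by side.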
Handling the black box AI problem is crucial for organizations, as it can severely harm their operations or reputation; cases have already been filed against reputable tech companies over biased decisions. An explanation of a black box model's decision helps users understand the reasoning behind it. Until a robust and reliable explanation model is achieved and the black box AI issue is resolved, the outputs of these models should be treated as suggestions, not final decisions.