XAI involves designing AI systems that can explain their decision-making process through various techniques. XAI should enable external observers to better understand how the output of an AI system comes about and how reliable it is. This matters because AI may bring about direct and indirect adverse effects for individuals and societies.
Just as explaining what AI comprises can be daunting, so can explaining its results and functioning, especially where deep-learning systems come into play. For non-engineers to picture how AI learns and discovers new information, it helps to think of these systems as built around complex inner circuits loosely modeled on the neural networks of the human brain.
The neural networks that underpin much of AI’s decision-making are often called “deep learning” systems. It is debated to what extent decisions reached by deep learning systems are opaque or inscrutable, and to what extent AI and its “thinking” can and should be explainable to ordinary humans.
Scholars disagree on whether deep learning systems are truly black boxes or in fact largely transparent, but the general consensus is that most decisions should be explainable to some degree. This is significant because the deployment of AI systems by state or commercial entities can negatively affect individuals, making it crucial to ensure that these systems are accountable and transparent.
For instance, the Dutch Systeem Risico Indicatie (SyRI) case is a prominent example illustrating the need for explainable AI in government decision-making. SyRI was an automated decision-making system, developed by Dutch semi-governmental organizations, that combined personal data with other tools to identify potential fraud through opaque processes later characterized as black boxes.
The system came under scrutiny for its lack of transparency and accountability, with national courts and international bodies finding that it violated privacy and other human rights. The SyRI case illustrates how governmental AI applications can harm people by replicating and amplifying bias and discrimination: SyRI unfairly targeted vulnerable individuals and communities, such as low-income and minority populations.
SyRI aimed to find potential social welfare fraudsters by labeling certain people as high-risk. As a fraud detection system, SyRI was only deployed to analyze people in low-income neighborhoods, since such areas were considered “problem” zones. Because the state ran SyRI’s risk analysis only in communities that were already deemed high-risk, it is no surprise that more high-risk citizens were found there relative to neighborhoods not considered “high-risk”, as the toy simulation below illustrates.
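To make the selection effect concrete, here is a minimal, entirely hypothetical sketch (the numbers, neighborhood names, and logic are invented for illustration and do not reflect the actual SyRI system or any real Dutch data): even if fraud occurred at the same rate everywhere, running the analysis only in flagged areas would guarantee that every detected case comes from those areas.

```python
import random

random.seed(0)

# Hypothetical toy numbers: fraud occurs at the same base rate everywhere,
# but the risk analysis is only run in neighborhoods already labeled "high-risk".
BASE_FRAUD_RATE = 0.02              # assumed identical across all neighborhoods
RESIDENTS_PER_NEIGHBORHOOD = 10_000

neighborhoods = {
    "flagged_low_income_area": True,    # risk analysis deployed here
    "unflagged_affluent_area": False,   # never analyzed
}

detected_cases = {}
for name, analyzed in neighborhoods.items():
    # Every resident has the same chance of committing fraud ...
    fraud_cases = sum(
        random.random() < BASE_FRAUD_RATE
        for _ in range(RESIDENTS_PER_NEIGHBORHOOD)
    )
    # ... but cases are only *detected* where the system is actually deployed.
    detected_cases[name] = fraud_cases if analyzed else 0

print(detected_cases)
# All detected cases end up in the flagged area, which appears to "confirm"
# the high-risk label even though the underlying fraud rate is identical.
```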
Such a high-risk label, in turn, would encourage stereotyping and reinforce a negative image of the residents of those neighborhoods (even those who were never mentioned in a risk report or who qualified as a “no-hit”), because the data entered comprehensive cross-organizational databases and was recycled across public institutions. The case illustrates that where AI systems produce unwanted adverse outcomes such as bias, these may go unnoticed if transparency and external control are lacking.
Besides states, private companies develop or deploy many AI systems, and in that setting transparency and explainability can be outweighed by other interests. Although it can be argued that the structures enabling today’s AI would not exist in their current form without past government funding, a significant and steadily growing share of progress in AI is now privately funded. In fact, private investment in AI in 2022 was 18 times higher than in 2013.
Commercial AI “producers” are primarily accountable to their shareholders and may therefore be heavily focused on generating economic profit, protecting patent rights, and preventing regulation. Hence, when the functioning of commercial AI systems is not sufficiently transparent and enormous amounts of data are privately hoarded to train and improve them, it becomes all the more important to understand how such systems work.
Ultimately, the importance of XAI lies in its ability to provide insight into the decision-making process of AI models, enabling users, producers, and monitoring agencies to understand how and why a particular outcome was produced.
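As one concrete illustration, the sketch below applies a common post-hoc explanation technique, permutation feature importance, to a toy model trained on synthetic data. It is purely illustrative: the data, model, and feature names are invented, scikit-learn is assumed to be installed, and nothing here represents SyRI or any other system discussed above.

```python
# A minimal, hypothetical sketch of one post-hoc XAI technique:
# permutation feature importance on a toy classifier and synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1,000 "cases" with 5 numeric features.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's accuracy drops:
# a large drop means the model relies heavily on that feature for its decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

In practice, such importance scores would be computed on an institution’s own model and data, giving users and overseers at least a rough, inspectable account of which inputs drove a given decision.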
Such insight arguably helps to build trust in governmental and private AI systems, increases accountability, and helps ensure that AI models are not biased or discriminatory. It also helps to prevent low-quality or unlawfully obtained data from being recycled across the comprehensive cross-organizational databases that feed algorithmic fraud-detection systems in public institutions.