Opening the 'black box' of artificial intelligence
Artificial intelligence is growing ever more powerful and entering people's daily lives, yet we often don't know what goes on inside these systems. Their opacity can fuel practical problems, and even racism, which is why researchers increasingly want to open this 'black box' and make AI explainable.
In February 2013, Eric Loomis was driving in the small town of La Crosse, Wisconsin, US, when he was stopped by the police. The car he was driving turned out to have been involved in a shooting, and he was arrested. A court eventually sentenced him to six years in prison.
This might have been an uneventful case, had it not been for a piece of technology that aided the judge in making the decision: COMPAS, an algorithm that estimates the risk of a defendant becoming a recidivist. The court feeds a range of data, such as the defendant's demographic information, into the system, which returns a score indicating how likely they are to commit another crime.
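COMPAS itself is proprietary, so the sketch below is only an illustration of the "data in, opaque score out" pattern described above; the feature names, the stand-in model, and the 1-10 score scale are all assumptions, not the actual system.

```python
# Illustrative sketch only: hypothetical features and a stand-in classifier,
# meant to show how a caller sees a score but not the reasoning behind it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical defendant features: [age, prior_offences, employed (0/1)]
X_train = rng.integers(low=[18, 0, 0], high=[70, 15, 2], size=(500, 3))
# Synthetic labels standing in for "reoffended within two years" (0/1).
y_train = rng.integers(0, 2, size=500)

# The decision-maker only sees the fitted model's output, not how the
# hundreds of decision trees inside weigh each input feature.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def risk_score(defendant: list[int]) -> int:
    """Return a 1-10 recidivism risk score with no explanation attached."""
    prob = model.predict_proba([defendant])[0, 1]  # probability of reoffending
    return int(np.ceil(prob * 10)) or 1            # map to a decile-style score

print(risk_score([34, 2, 1]))  # the score is all the court is shown
```

In this kind of setup, even the tool's operators can struggle to say why a particular defendant received a particular score, which is exactly the opacity the article goes on to describe.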
How the algorithm arrives at its prediction, however, is not transparent. The system, in other words, is a black box, a practice Loomis challenged in a 2017 appeal to the US Supreme Court. He claimed COMPAS used gender and racial data in its decisions and ranked African Americans as higher recidivism risks. The court eventually rejected his case, holding that the sentence would have been the same even without the algorithm. Yet there have also been a number of revelations suggesting that COMPAS does not accurately predict recidivism.
Read more here: https://techxplore.com/news/2020-12-black-artificial-intelligence.html
Posted Date: December 01, 2020

