
IBM reveals open source tool to stamp out AI bias

RACIST ROBOTS are not good, which is probably why IBM has launched a tool for detecting bias in artificial intelligence (AI).

The AI Fairness 360 Kit will analyse how machine learning algorithms make decisions in real time and figure out whether they are inadvertently biased, for example, by failing to correctly identify non-white people in photos.

Big Blue’s software boffins have made the AI Fairness 360 Kit available on the cloud and as open source, so it should be fairly easy for builders of smart systems and software to put the tool to good use.
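For those who want to kick the tyres, here is a minimal sketch of what a dataset bias check might look like, assuming the open source aif360 Python package's documented API; the toy loan-approval data, column names and group encodings below are invented for illustration and are not part of IBM's announcement.

```python
# A minimal sketch of a dataset bias check, assuming the open source aif360
# Python package (pip install aif360). The toy loan-approval data, column
# names and group encodings are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# 'race' is the protected attribute (1 = privileged group, 0 = unprivileged);
# 'approved' is the favourable outcome being checked for bias.
df = pd.DataFrame({
    "race":     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "income":   [60, 75, 50, 90, 65, 55, 70, 45, 85, 60],
    "approved": [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["race"])

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)

# Statistical parity difference: approval rate of the unprivileged group minus
# that of the privileged group; zero means parity, here 0.4 - 0.8 = -0.4.
print("Statistical parity difference:", metric.statistical_parity_difference())

# Disparate impact: the ratio of the two approval rates; the common "80 per
# cent rule" treats anything below 0.8 as a red flag, here 0.4 / 0.8 = 0.5.
print("Disparate impact:", metric.disparate_impact())
```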

The way AI models change and mutate as systems and software learn can make it difficult for developers to see where bias has crept in and taken hold.

Some bias can be traced back to the unbalanced datasets that machine learning algorithms have been trained on. Other times it stems from the unconscious bias of developers, who may have written the AI's initial instructions while forgetting to address certain attributes of, say, a race that is not their own.

IBM reckons its tool will open the pseudo black box in which AI learns and develops, giving developers more transparency into the judgements their smart systems are coming up with.

“Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias,” said Kush Varshney, principal research staff member and manager at IBM Research.

The AI Fairness 360 Kit will check for bias during the initial training phase of AI development, again while the AI is undergoing testing and deployment, and once more at the final stage of the AI's lifecycle.
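As a sketch of the training-stage end of that pipeline, the aif360 package ships pre-processing algorithms such as Reweighing that try to fix the data before a model ever sees it; the snippet below repeats the invented loan-approval example from the earlier sketch and assumes the same documented API.

```python
# A minimal sketch of a training-stage fix, assuming the aif360 package's
# Reweighing pre-processing algorithm; the data and group encodings repeat
# the invented example from the earlier snippet.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "race":     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "income":   [60, 75, 50, 90, 65, 55, 70, 45, 85, 60],
    "approved": [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["race"])

groups = dict(privileged_groups=[{"race": 1}],
              unprivileged_groups=[{"race": 0}])

# Reweighing keeps features and labels intact but attaches a weight to each
# example so that, under those weights, favourable outcomes are balanced
# across the two groups before any model is trained.
rebalanced = Reweighing(**groups).fit_transform(dataset)

# The parity gap measured on the raw data (-0.4 in the earlier snippet)
# should shrink to roughly zero once the weights are applied.
print("After reweighing:",
      BinaryLabelDatasetMetric(rebalanced, **groups)
      .statistical_parity_difference())
```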

Given AIs such as Microsoft's Tay.ai have shown a propensity to get pretty racist after exposure to public data, Big Blue's tool could just be the means to stop the rise of xenophobic machines. µ

Source: Inquirer
