Microsoft Is Making a Tool to Track Down AI Algorithmic Bias

Microsoft is the latest tech company to try to tackle algorithmic bias: artificial intelligence that was fed subpar data and came to mirror society’s own prejudices or unfair perspectives.

The company wants to create a tool that will detect and alert people to AI algorithms that may be treating them unfairly based on their race or gender, according to MIT Technology Review.
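Microsoft hasn’t published how its tool works, but one common way to surface this kind of bias is to compare a model’s outcomes across demographic groups. The sketch below is purely illustrative: it computes the gap in positive-prediction rates between groups (a demographic parity check) and flags the model when the gap crosses an arbitrary threshold. Every name and number in it is an assumption, not Microsoft’s implementation.

    # Purely illustrative sketch of a demographic parity check, not Microsoft's tool.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest gap in positive-prediction rates between groups,
        plus the per-group rates. `predictions` are 0/1 model outputs and
        `groups` are the corresponding protected-attribute labels."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Toy loan-approval outputs that favor group "A" over group "B".
    preds  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    if gap > 0.2:  # the threshold is a policy choice, not a universal constant
        print(f"Possible bias flagged: rates by group {rates}, gap {gap:.2f}")

A real system would also look at error rates, not just approval rates, since a model can treat groups equally on one measure while still making far more mistakes for one of them.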

It’s great that Microsoft, which touts itself as a company that creates AI to bring people together, is joining Google and Facebook in building tools to catch improperly trained AI.

But Microsoft’s new algorithm for finding biased algorithms can only find and flag existing problems. That means programs that can lead to increased police prejudice, for example, will still be built and used, just perhaps not for as long as they would be if the bias went undetected.

To truly create AI that is fair and benefits everyone, more care needs to be taken on the front end.

One possible way companies can cover their bases is through a third-party audit, in which a tech company brings in an outside expert to review its algorithms and look for signs of bias, either in the code itself or in the data being fed into it.
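What such an audit looks for on the data side can be mundane: whether some groups barely appear in the training set, or whether the labels the model learns from already encode skewed past decisions. The sketch below, with hypothetical column names and thresholds, shows the kind of first-pass summary an auditor might run.

    # Hedged sketch of a first-pass data audit; column names and thresholds are hypothetical.
    import pandas as pd

    def audit_training_data(df, group_col, label_col):
        """Print group sizes and positive-label rates so skew is visible at a glance."""
        summary = df.groupby(group_col)[label_col].agg(count="size", positive_rate="mean")
        print(summary)
        if summary["count"].min() / summary["count"].max() < 0.5:
            print("Warning: some groups are heavily under-represented in the training data.")
        if summary["positive_rate"].max() - summary["positive_rate"].min() > 0.2:
            print("Warning: label rates differ sharply across groups; check how the labels were produced.")

    # Toy, made-up hiring dataset.
    data = pd.DataFrame({
        "gender": ["F", "F", "F", "M", "M", "M", "M", "M", "M", "M"],
        "hired":  [0,   0,   1,   1,   1,   0,   1,   1,   0,   1],
    })
    audit_training_data(data, "gender", "hired")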

The idea of an AI audit, mentioned in the MIT Technology Review article, has gained traction elsewhere, and some AI companies have begun hiring auditors to take a look at their code.

But this also requires either that the AI be simple enough for someone to walk in and spot problem areas, or that the auditor be well versed in the code. For more complicated deep learning algorithms, this may not always be possible.

Another possible answer is better training for the people who actually create the AI, so they can better recognize their own opinions and prejudices and keep the algorithm from treating them as fact.

That doesn’t mean coders are setting out to program racist machines. But because everyone carries some implicit biases, the world of technology would benefit from helping people better understand their own worldviews.

These are major solutions that would require an attitude shift in the way technology is developed, even at the companies that want to develop these filters and flags for biased algorithms.

In the meantime, it’s a good sign that researchers and companies are starting to pay attention to the problem.
