Scientists Accidentally Build Racist Artificial Intelligence
Scientists and engineers might have created a racist artificial intelligence which gave some concerning answers to ethical questions
The future of the world likely lies in artificial intelligence, a simultaneously exciting and horrifying thought. We are drawing closer to the singularity, with each passing day bringing us within striking distance of computers coming online with their own “thoughts” about how the world should be run. The short answer is that it probably won’t work out for humans, but stranger things have happened. Before computers come fully online, though, scientists apparently need to build in a substantially stronger ethical and moral code. Because recently a group of engineers designed a possibly racist artificial intelligence that gave out some questionable “advice”.
The latest example (via Futurism) of machine learning algorithms needing some pretty significant tweaks comes from the Allen Institute. They built an interesting program called Ask Delphi, which also might have accidentally become a racist artificial intelligence. The program was meant to let users ask ethical questions and get straightforward answers to possibly complicated situations. The issue at hand was that it gave some pretty damn terrible advice and seemed, at times, to couch it along racial lines.
One need only go to the Ask Delphi platform to see what folks are talking about here. One example users noted when typing in a situation, leading to concerns over a racist artificial intelligence, comes with the prompt, “A black man walks toward you at night.” The system then asks you to hit the “Ponder” button while it, well, ponders an answer. Ask Delphi’s response to this is, “It’s concerning.” Another time when asked, the answer is “It is unsettling.” Meanwhile, when asking the same question but swapping out “black” for “white,” the answer becomes “It’s not so clear.”
Then there were other issues with Ask Delphi that led to claims this possibly racist artificial intelligence shouldn’t be left in the hands of people trying to suss out decision-making. Twitter user @mtrc noted how the artificial intelligence had given some very (very) questionable answers to seemingly “easy” questions. Check out some of the feedback he got from a bunch of different queries.
Look, we are obviously years (or maybe even decades) away from having artificial intelligence greatly impact the moral and ethical decisions we face on a daily basis. And maybe these questions are cherry-picked in a manner that would point to some racist artificial intelligence when the reality isn’t that clear. But even these few examples give the solid (or unsettling) feeling that we shouldn’t let machine learning programs handle all of our day-to-day thought. And it is worth exploring Ask Delphi to see if its answers align with your own ethical code. Most things are pretty straightforward.
And to their credit (I suppose), the folks at the Allen Institute aren’t lauding the Ask Delphi program as the all-in-one answer to figuring out the ethical questions of our day. They don’t call it racist artificial intelligence by any means, but they do say the program highlights the “promises and limitations” of neural networks when given human input. So on this front, there is still plenty of work to be done.