SEARCH

Showing 1-1 of 1 results

  • TECH

    Don't call AI bigoted

    Life, James Hein, Published on 06/11/2019

    » Despite what some claim, Artificial Intelligence is not racist. Google built a system to detect hate speech or otherwise questionable content. Following the rules it was given, it flagged a range of people, and some tried to claim this showed a bias against black people. Wrong. The AI simply followed the rules, and a larger number of black people and some other minorities, as defined in the US, were found to be breaking those rules. It didn't matter to the machine that some people don't count the same words as hate speech when one particular group says them; it simply followed the rules. People can ignore or pretend not to see rules, but machines don't work that way. What the exercise actually found was that speech by some groups is ignored while the same thing said by others isn't. As the saying goes, don't ask the question if you're not prepared to hear the answer.
