» Despite what some claim, Artificial Intelligence is not racist. Google built a system to detect hate speech and other questionable content. Following the rules it was given, it flagged a range of people, with what some tried to claim was a bias against black people. Wrong. The AI simply followed the rules, and a larger number of black people and some other minorities, as defined in the US, were found to be breaking them. It didn't matter to the machine that when one group says something it isn't counted as hate speech by some; it simply followed the rules. People can ignore or pretend not to see rules, but machines don't work that way. What the exercise actually found was that speech from some groups is ignored while the same thing said by others isn't. As the saying goes, don't ask the question if you're not prepared to hear the answer.
» Computers are useful tools, and they will emotionlessly churn through thousands of operations in the blink of an eye to produce whatever results they were programmed to produce. Most of the time the results are welcome. When it comes to malware the results generate a different reaction, and then there are the spaces in between. The situation surrounding the MCAS system on the Boeing 737 Max and the recent crash is an excellent example. The latest analysis would seem to indicate that the engineers made choices that have had unintended consequences: in this case, overriding the wishes of the pilots by assuming the plane was stalling when it wasn't, and not allowing the human pilots to correct the computer's decision.