
Tuesday, April 4, 2017

Liberal cognitive dissonance making us laugh

You believe what you believe, and so sometimes you will try to force evidence that conflicts with your fondly held belief into a box and call that evidence and experience misleading or false. You were lied to! They were just plain wrong! It was just an amazing coincidence multiplied by a million. This tendency to disregard the real truth in favor of false personal ideology is called cognitive dissonance, and we all fall prey to it from time to time. I have never found a more glaring paradigm of this fallacy in action than in the following example. The author's article reports on the commonly held fear that artificial intelligence will supplant humanity at some indefinite date in the future, and it gives numerous examples and thought experiments supporting his conclusion that, yes, humanity could one day be replaced. I was reading along complacently and in agreement when suddenly one of his examples jumped out and slapped me right in the face—HAH!
Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.

The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.

It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.
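The mechanism the quoted article describes—a questionnaire that never asks about race but asks about a parent's incarceration—can be sketched in a few lines. This is a toy simulation, not ProPublica's actual data or the real COMPAS scoring logic: the group names, the base rates, and the one-feature "score" are all made up for illustration, except the 7.5x disparity in parental incarceration, which comes from the DOJ study the article cites. Reoffending here is deliberately independent of the feature, so any disparity in who gets flagged is pure proxy effect.

```python
import random

random.seed(0)

def simulate(n=10000, p_parent_jailed=0.06):
    """Simulate arrestees with one feature: was a parent ever jailed?
    Reoffending is independent of that feature here (25% for everyone),
    so the feature carries no real predictive signal."""
    people = []
    for _ in range(n):
        parent_jailed = random.random() < p_parent_jailed
        reoffends = random.random() < 0.25
        people.append((parent_jailed, reoffends))
    return people

def false_positive_rate(people):
    """Share of people who did NOT reoffend but were still flagged
    'high risk' (the toy score flags anyone with a jailed parent)."""
    flagged = sum(1 for pj, re in people if pj and not re)
    negatives = sum(1 for _, re in people if not re)
    return flagged / negatives

# Hypothetical base rates: group B is 7.5x more likely than group A
# to have a parent who was jailed (the DOJ figure from the article).
group_a = simulate(p_parent_jailed=0.06)
group_b = simulate(p_parent_jailed=0.45)

print(f"Group A false positive rate: {false_positive_rate(group_a):.2f}")
print(f"Group B false positive rate: {false_positive_rate(group_b):.2f}")
```

Even though race never appears anywhere in the code, the group with the higher base rate of the proxy feature gets falsely flagged far more often—which is exactly the "falsely flag ... as future criminals" pattern ProPublica reported.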
Don't laugh! Yes, there are numerous articles that attempt to libsplain away objective study results as biased against blacks because... well, because the results are just not possible, gosh darn it! You really have to stop...........
.
.
.

And do a little self-examination of your fondly held beliefs about human nature, racial prejudice, and its opposite, which is not reverse racism but in fact nihilism. How did America become suicidal as a nation? I know when: it started about fifty years ago. It's the how that has me baffled. The only thing I've got so far is a slippery slope and a whole lot of really shitty Supreme Court Justices.
