Saturday, November 19, 2016

Predictive methods may accurately predict yet still be incorrect

You're probably aware of the troubling behavior of insensitive police officers profiling potential law-breakers by taking into account such unrelated factors as race, sex, age, and demeanor. After all, what's really more important: treating everyone equally no matter their ethnicity, sex, and age, or preventing crime? (That was an obvious rhetorical question, which you should have answered by shouting "treating everyone equally," of course.)

All the science and statistics, the examining of trends, behaviors, attitudes, patriotism, educational backgrounds, marital status, economic achievement, and so on, is irrelevant when it makes us lose sight of the bigger picture. That "bigger" picture is everyone of all races walking hand-in-hand into a brighter future. Would you rather be proud of your open-minded inclusiveness, or safe? (Another obvious rhetorical question ... who needs safety when you can have sanctimony?)

Forget for a moment that in certain areas of town you are much more likely to be beaten, raped, murdered, mugged, or maybe just randomly shot in a drive-by. Pay no attention to the overwhelming likelihood that the criminals who harm you will be males aged thirteen to twenty-five. The natural inclination to take note of the predominant racial make-up of the more dangerous neighborhoods, the criminality exhibited by the various age groups, the criminality displayed by the respective genders, and so on, is called profiling, and while it may be accurately predictive, it's still wrong!

Perusing the various news feeds, I saw the following headline jump out: "TROUBLING STUDY SAYS ARTIFICIAL INTELLIGENCE CAN PREDICT WHO WILL BE CRIMINALS BASED ON FACIAL FEATURES." The article goes on to thoroughly debunk the "study." How, you might well ask, did Sam Biddle, the journalist writing for TheIntercept.com, debunk the quoted "TROUBLING STUDY"? Well, let's find out, shall we?
The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction. This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.

The bankrupt attempt to infer moral qualities from physiology was a popular pursuit for millennia, particularly among those who wanted to justify the supremacy of one racial group over another. But phrenology, which involved studying the cranium to determine someone’s character and intelligence, was debunked around the time of the Industrial Revolution, and few outside of the pseudo-scientific fringe would still claim that the shape of your mouth or size of your eyelids might predict whether you’ll become a rapist or thief.

Not so in the modern age of Artificial Intelligence, apparently: In a paper titled “Automated Inference on Criminality using Face Images,” two Shanghai Jiao Tong University researchers say they fed “facial images of 1,856 real persons” into computers and found “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” They conclude that “all four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.”

The study contains virtually no discussion of why there is a "historical controversy" over this kind of analysis — namely, that it was debunked hundreds of years ago. Rather, the authors trot out another discredited argument to support their main claims: that computers can't be racist, because they're computers. Absent, too, is any discussion of the incredible potential for abuse of this software by law enforcement.

Kate Crawford, an AI researcher with Microsoft Research New York, MIT, and NYU, told The Intercept, "I'd call this paper literal phrenology, it's just using modern tools of supervised machine learning instead of calipers. It's dangerous pseudoscience."

Crawford cautioned that “as we move further into an era of police body cameras and predictive policing, it’s important to critically assess the problematic and unethical uses of machine learning to make spurious correlations,” adding that it’s clear the authors “know it’s ethically and scientifically problematic, but their ‘curiosity’ was more important.”
Well, there you have it. This study was debunked because a quasi-related field, "phrenology," was debunked more than a century ago. Case closed. Nothing to see here. "Did they succeed?" you might ask. "Did the authors of the study accurately predict whether the subjects in the study exhibited criminality?" You're missing the point. It doesn't matter whether the science works; what matters is how we feel about the fairness of that science.
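For the curious, here is roughly what "using modern tools of supervised machine learning instead of calipers" looks like in practice. This is a minimal sketch in Python, not the authors' code: the choice of logistic regression is my assumption (the quoted paper says only that it used four classifiers), and the feature values and labels below are random, so the accuracy it prints is just the chance baseline any real result would have to beat.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the geometric features the paper names:
# [lip curvature, eye inner-corner distance, nose-mouth angle]
n = 1856  # the paper reports feeding in 1,856 face images
X = rng.normal(size=(n, 3))
# Random labels (0 = "non-criminal", 1 = "criminal"), so this sketch
# carries no real signal; it only shows the shape of the pipeline.
y = rng.integers(0, 2, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")  # near 0.5 on random labels, i.e., chance

And that is the empirical question buried under all the outrage: whatever one feels about the fairness of the science, whether the reported held-out accuracy beats that baseline on an honest sample is a number, not a sentiment.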
