Several people have reached out to ask for our perspective on IBM’s decision to end its facial recognition business. Did they make the right choice? Could (should) they have done things differently? Let’s take a step back and put things in perspective.
Yes, computer vision can be, and is, used in many negative ways. It can definitely become a weapon to further discriminate against others, not just based on how people look but also on how they act.
To make matters worse, as many articles reporting on the IBM decision have mentioned, computer vision depends on algorithms to work, and many of these algorithms are fraught with bias. I recently gave a presentation on how tech can help us stay healthy and covered the issue of bias in depth. Here is a link if you want to know more.
Finally, to keep piling on the problem, computer vision also relies on -you guessed it- cameras to operate. And some of these cameras do not always deliver the right image quality. For example, while temperature scanners are all the rage given Covid-19, we should keep in mind that many of the cameras on the market were never designed to handle human temperatures and have an error range of ±5 degrees, which can easily sway you into one group or another.
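To make that concrete, here is a minimal sketch of how a ±5 degree error range interacts with a fever cutoff. The numbers are illustrative assumptions (Fahrenheit, and the commonly cited 100.4°F fever threshold), not any vendor’s actual spec:

```python
# Illustrative only: 100.4 F is the commonly cited fever threshold;
# the +/-5 margin is the error range mentioned above.
FEVER_THRESHOLD_F = 100.4
ERROR_MARGIN_F = 5.0

def classification_range(true_temp_f):
    """Return the lowest and highest temperature the camera might report
    for a given true temperature, and whether each extreme would be
    flagged as a fever."""
    low = true_temp_f - ERROR_MARGIN_F
    high = true_temp_f + ERROR_MARGIN_F
    return (low, low >= FEVER_THRESHOLD_F), (high, high >= FEVER_THRESHOLD_F)

# A perfectly normal 98.6 F person can be reported anywhere between
# 93.6 F (cleared) and 103.6 F (flagged as feverish):
(low, low_flag), (high, high_flag) = classification_range(98.6)
```

The same true reading lands on either side of the threshold depending on nothing but measurement noise, which is exactly how a person ends up in one group or another.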
So, when you look at all these factors, it is easy to see why IBM decided to end its facial recognition business. To me, though, the timing is unfortunate. The problems brought up are not new, and the way this tech has been used for policing and discrimination is not new either. Making this decision now sounds quite self-serving, even hypocritical. Having said this, I do hope that IBM is not ceasing all work on computer vision, because the technology can also enhance people’s lives tremendously.
And that is the core of the problem. The same technology that can be used to protect people can be used to discriminate against them. All computer vision does is report traits about a person.
It’s not the algorithm that is biased, it’s us.
An image is an image. It’s neutral. The interpretation we make of it is never neutral.
We’re seeing a first layer of bills being introduced to control the use of computer vision. These are welcome, as they focus first on policing abuse and racism.
But that’s step 1. That’s only dealing with the tip of the iceberg. Underneath the water looms the scary fact that most of the algorithms we rely on daily have a great deal of bias baked into them, and no bill is going to solve that problem.
Computer vision is easy to spot as a problem, but don’t fool yourself: many algorithms discriminate even more (credit, housing…) and are seldom talked about. Bias is everywhere.
Rather, it’s up to all of us who work in the field to improve how we identify and correct bias in our technologies. And it’s also up to us to make sure we track, understand, and enforce how people use our technologies.
In IBM’s case, it seems they decided that ignoring the problem was better than trying to solve it.