Over the summer, Microsoft President Brad Smith called for governments to take a closer look at how facial recognition technology is being implemented across the globe. This week, he returned with a similar message — only this time the executive is calling on fellow technology purveyors to help address the myriad issues around the technology before it becomes too pervasive.
It’s easy enough to suggest that the ship has sailed. After all, facial recognition is already fairly ubiquitous on everything from Facebook to Apple Animojis. But if the past year has taught us anything, it’s that the governments of the world can’t wait to implement the tech in a broader way — and plenty of tech firms are more than happy to help.
Smith points to a trio of potential pitfalls for the tech: biased outcomes, invasion of privacy and mass surveillance. The ACLU has been raising red flags on that first point for some time, asking Congress to implement a moratorium on surveillance technologies. In one test, the group found that Amazon's Rekognition software wrongly matched headshots of members of Congress with criminal mugshots.
The new letter finds Microsoft frustrated by regulatory foot-dragging, placing the burden of tech regulation on the companies themselves instead. “We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition,” writes Smith. “And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.”
In other words, as Smith puts it, “you can’t put the genie back in the bottle.” So Microsoft is looking to set the tone here, committing to its own code, which it plans to implement by the first quarter of next year.
The piece details a number of safeguards and vetting processes that companies can adopt to help avoid some of the more troubling pitfalls. The recommendations are fairly straightforward: transparency, third-party testing, human review of the technology's decisions and clearly identifying where and when the technology is being used. All of which honestly sounds pretty doable.
Microsoft is set to follow up these suggestions with a more detailed document next week laying out its plans, while soliciting input from people and groups on how to implement them more broadly.