Big Tech juggles ethical pledges on facial recognition with corporate interests


Over the course of four days last week, three of America’s largest technology companies — IBM, Amazon and Microsoft — announced sweeping restrictions to their sale of facial recognition tools and called for federal regulation amid protests across the United States against police violence and racial profiling.

The announcements marked a symbolic shift for the industry. Researchers and civil liberties groups who have called for strict controls or outright bans on the technology for years celebrated, albeit cautiously.

They doubt, however, that much has changed. The careful wording of these public pledges leaves plenty of room for oppressive uses of the technology that exacerbate human biases and infringe on people’s constitutional freedoms, critics say.

“It shows that organizing and socially informed research works,” said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. “But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No.”

Facial recognition has emerged in recent years as a major area of investment, both in terms of developing technology and in lobbying for law enforcement and private companies to be allowed to use it. The technology began to show up in government contracts, with some companies like Clearview AI scraping billions of photos of unwitting members of the public from social media in an effort to build a near-universal facial recognition system.

At the same time, critics and skeptics of the technology — including from within the companies — have pushed for transparency and regulations around its use. Some of those efforts have been successful, with cities like San Francisco, Oakland and Berkeley in California and Somerville, Massachusetts, banning the use of the software by police and other agencies.

Now, Whittaker along with other technology researchers and civil rights groups, including the American Civil Liberties Union and Mijente, an immigrant rights group, say that the technology companies’ pledges have more to do with public relations at a time of heightened scrutiny of police powers in the United States than any serious ethical objections to the deployment of facial recognition as a whole.

They seek a total ban on government use of the technology, arguing that neither companies nor law enforcement agencies can be ethically trusted to deploy such a powerful tool.

“Facial recognition technology is so inherently destructive that the safest approach is to pull it out root and stem,” said Woody Hartzog, professor of law and computer science at the Northeastern University School of Law.

While the companies make timely public calls for regulation, they have armies of lobbyists working to shape that regulation to ensure that they can continue to bid for government surveillance contracts, said Shankar Narayan, former director of the ACLU’s technology and liberty project and now co-founder of MIRA, a community engagement agency.

Facial recognition software is demonstrated at the Intel booth at the CES 2019 consumer electronics show on Jan. 10, 2019, at the Las Vegas Convention Center in Las Vegas. (Robyn Beck / AFP - Getty Images file)

“This isn’t a shift but part of the optics pivot that big tech was doing well before this,” Narayan said. “These companies have been saying, ‘Hey, we care so much about these issues that we will write the rules and regulations ourselves that will allow the technology to be widely embraced.’”

Microsoft and Amazon have earned some of that skepticism by calling for limits on facial recognition technology while simultaneously pursuing its development and deployment.

With last week’s announcement, Microsoft said it wouldn’t sell facial recognition to police in the United States until there was federal regulation. But the company has spent months lobbying state governments to pass bills to permit the use of facial recognition by police.

In Washington state, a Microsoft employee wrote a facial recognition bill that was signed into law in April. The law requires basic transparency and accountability mechanisms surrounding government use of the technology, but beyond preventing “mass surveillance” does little to restrict the way it can be used by police.

“The worry is that this weak regulation would be the template for a federal law and that Congress will make use of federal pre-emption to undo the laws that are stronger locally,” said Liz O’Sullivan, technology director of the Surveillance Technology Oversight Project, in reference to cities that have banned the use of the software by police.

Over the last two years, Microsoft has warned about the “sobering” applications of facial recognition technology and called for government regulation and the application of ethical principles.

“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” said Microsoft President Brad Smith in a July 2018 blog post. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.”

Around the same time, however, the company was pitching its facial recognition technology to the Drug Enforcement Administration, hosting agents at its office in Reston, Virginia, according to emails dating from September 2017 to December 2018 obtained by the ACLU and shared with NBC News.

The company’s ethical principles were also tested by its 2019 investment in AnyVision, an Israeli facial recognition startup that field-tested its surveillance tools on a captive population of Palestinians, despite Microsoft’s public pledge to avoid uses of the technology that encroached on democratic freedoms. After NBC News reported on the startup’s activities in the West Bank, Microsoft commissioned an investigation by former Attorney General Eric Holder and eventually divested from AnyVision.

Microsoft did not respond to a request for comment.

In Amazon’s announcement, the company said it would not sell its Rekognition tool to police for a year to give Congress time to regulate the technology.

It’s not clear how many law enforcement customers Amazon had for Rekognition, but the Sheriff’s Office in Washington County, Oregon, has used it since late 2017 to compare mugshots to surveillance footage — a contract that attracted criticism from civil rights groups.

For years the ACLU as well as top AI researchers and some of Amazon’s own investors and employees have urged the company to stop providing its technology to law enforcement. Studies have found the system to be less accurate at identifying dark-skinned faces. Amazon repeatedly disputed that research and continued to promote the tool to law enforcement.

“New technology should not be banned or condemned because of its potential misuse,” said Michael Punke, Amazon’s vice president of global policy, in a February 2019 blog post. “Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.”

Even with regulation, facial recognition technology could still be misused by law enforcement, said Jacinta Gonzalez, field director of Mijente.

“There’s a huge crisis of accountability with policing,” she said, pointing to the protests taking place across the United States in the aftermath of George Floyd’s death in police custody. “Until we have accountability, the continued investment in these technologies will only further the criminalization and abuse of Black and immigrant communities.”

Amazon did not respond to a request for comment.

IBM seemed to go further than Amazon and Microsoft by pledging in a letter to Congress to stop researching, developing or selling “general purpose” facial recognition.

In the letter, IBM CEO Arvind Krishna said the company “firmly opposes” the use of facial recognition for “mass surveillance, racial profiling, violations of basic human rights and freedoms.”

John Honovich, founder of IPVM, an independent website that tests and reports on surveillance systems, said he found the timing of the announcement curious since the company had pulled its video analytics product that included facial recognition from the market in May 2019.

IBM had previously attempted to develop less racially biased facial recognition software through the release in January 2019 of a diverse set of 1 million photos of faces of people of different skin tones, ages and genders. However, as NBC News reported in March 2019, the company took those photos from Flickr without the subjects’ knowledge or informed consent.

Although IBM said the dataset was for research purposes only, the company has a history of developing facial recognition for law enforcement. In the aftermath of 9/11, it used NYPD surveillance camera footage to develop technology that allowed police to search video feeds for people based on attributes including their skin color.

Eliminating bias in facial recognition technology might represent scientific progress but doesn’t make the technology safer, say critics.

“It’s incredibly bad and destructive when it’s biased, but it’s even worse when it’s accurate because then it becomes more attractive to those in power that wish to use it,” Hartzog said. “We know that people of color bear the brunt of surveillance tools.”

IBM relaunched its video analytics tool in early May, but told IPVM it had removed facial recognition, race and skin tone analytics based on recommendations from its AI ethics panel.

Honovich said that IBM was a small player in the face surveillance industry so its withdrawal from the market would not make much of an impact on the company’s bottom line.

“It’s not a tough business decision especially if there’s tons of protests against it,” Honovich said.

IBM declined to comment.

Honovich and others also noted that although IBM, Microsoft and Amazon are giants in the technology industry, they aren’t market leaders within the police surveillance industry. They have plenty of competition in the form of startups selling facial recognition to law enforcement, including Briefcam and Clearview AI, which don’t have the kinds of consumer-facing brands susceptible to public pressure. This allows the big companies to take the moral high ground without limiting police surveillance capabilities.

“The small companies will quietly continue to fly under the radar and sell products exclusively to law enforcement,” O’Sullivan said. “It’s a better business model because you don’t have to worry about your brand and people buying books and toilet paper from your website if all you do is sell facial recognition to law enforcement.”
