In the early days of the Russian invasion of its neighbor, Ukraine’s defense ministry announced that it had started using facial recognition technology to identify Russian military members and Ukrainians who have been killed.
The announcement followed a letter to the Ukrainian government from co-founder and CEO of Clearview AI, Hoan Ton-That, “offering free training and usage of our product, which may be of help during this time of terrible conflict.”
“My heart goes out to the Ukrainian people, and my hope is our technology could be used to prevent harm, save innocent people, and protect lives,” Hoan said, before offering assistance to the Ukrainian government.
“We would be honored to help in any way,” Hoan concluded.
How does Clearview AI work?
“Clearview AI is a facial recognition search engine that can instantly identify someone from just a photo,” Hoan explained in his letter to the Ukrainian government.
The system works by matching a given image with one of over 10 billion photos scraped from the internet.
According to Hoan, Clearview AI is currently used by “law enforcement agencies, military bases, including the FBI, Homeland Security, and more.”
Clearview AI’s image database includes a large number of photographs taken from Russian social media platforms, such as Vkontakte.
“We have over 2 billion images from the Vkontakte social network in the Clearview AI database,” Hoan claimed.
In his letter to the Ukrainian government, Hoan listed several potential use-cases for the system:
- “Identifying Infiltrators” — The Clearview AI system could allow Ukrainian authorities to use a photograph of a person (or even their ID card) to gain access to identifying information, such as “their real name, associates, friends, family, and links to any public information online about that person.” This could allow “[p]otential infiltrators who may be posing as to help Ukraine” to be “instantly vetted in real time.”
- “Deceased People” — The Clearview AI system could help with the identification of those killed during the conflict without the use of fingerprints, which are “hard to obtain.” According to Hoan, the platform “works effectively regardless of facial damage that may have occurred to a deceased person,” which is particularly important given the physical impact of the Russian invasion.
- “Fighting Misinformation” — Noting the explosive spread of misinformation and disinformation on social media, Hoan claimed that Clearview AI could be used to “help correct misinformation in real-time as it shows up.”
- “Family Reunification” — The Russian invasion of Ukraine has also sparked a refugee crisis, with millions of Ukrainian civilians fleeing the country. “In situations where there are refugees who are separated from their family,” Hoan wrote, “facial recognition technology can help identify people who may be stuck in refugee camps, who may not have paperwork or identification, and help them reunify with their families.”
In a statement to the Daily Wire, Clearview AI explained that their “bias-free facial recognition algorithm can work on photos from multiple angles, in darkness, with and without glasses and facial hair, photos of only parts of a face, due to state of the art artificial intelligence technology.”
Noting the use-cases in the context of the Russian invasion of Ukraine, Clearview AI also stated that their software “has been shown to be successful in the field when identifying deceased bodies, even with some facial damage.”
“War zones can be dangerous when there is no way to tell apart enemy combatants from civilians. Facial recognition technology can help reduce uncertainty and increase safety in these situations,” Hoan told the Daily Wire.
He then explained that Clearview AI’s algorithm can “pick the correct face out of a lineup of over 12 million photos” at an accuracy rate of 99.85 percent, based on the “NIST 1:N Face Recognition Vendor Test.”
This is “much more accurate than the human eye,” Hoan noted, adding that it would “prevent misidentifications from happening in the field.”
How is it being used?
According to Clearview AI, over 300 accounts have been created by Ukrainian authorities, with over 10,000 searches performed. Hoan told the Daily Wire that Ukrainian officials have “expressed their enthusiasm” regarding the program, and are now being trained on safe and effective use of the facial recognition software. Hoan added that “[s]earch results from Clearview AI should not be the sole source of identification.”
In addition, Clearview AI’s platform has been translated into Ukrainian, and is reportedly being used by at least five Ukrainian government agencies.
One documented use case of Clearview AI involves the identification of dead Russian soldiers. In a Telegram post, the Ukrainian vice prime minister, Mykhailo Fedorov, outlined an ongoing campaign to counter Russian propaganda efforts to hide the human cost of the invasion of Ukraine. By identifying Russian soldiers who have been killed, and attempting to reach their families, the Ukrainian government hopes to “dispel the myth of a ‘special operation’ in which there are ‘no conscripts’ and ‘no one dies,’” he explained.
What are critics saying?
In the United States, there are significant privacy concerns regarding Clearview AI’s data collection processes, especially since Clearview provides U.S. law enforcement with its services. Clearview AI is currently facing several lawsuits in the United States.
In fact, several countries — including the United Kingdom, Canada, Italy, France and Australia — have deemed its practices illegal, referencing the alleged use of people’s photos without their consent. Meanwhile, Clearview AI has defended its image-gathering strategy, saying it’s similar to Google Search.
There is also a concern that various tech companies may be leveraging the Russian invasion of Ukraine as a way to expand their influence with little-to-no oversight in terms of privacy.
In addition, some critics of facial recognition software argue that the technology should be banned, claiming it can be abused by governments to persecute minority communities or suppress dissidents.
“War zones are often used as testing grounds not just for weapons but surveillance tools that are later deployed on civilian populations or used for law enforcement or crowd control purposes,” said Evan Greer, a deputy director of the digital rights group Fight for the Future. “Companies like Clearview are eager to exploit the humanitarian crisis in Ukraine to normalize the use of their harmful and invasive software.”
“We already know that authoritarian states like Russia use facial recognition surveillance to crack down on protests and dissent,” Greer added. “Expanding the use of facial recognition doesn’t hurt authoritarians like Putin — it helps them.”
In a statement to the Daily Wire, Clearview AI’s chief executive responded to these accusations, saying that he “created the consequential facial recognition technology known the world over with the purpose of helping to make communities safer and assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts.”
“We only collect public data from the open internet and comply with all standards of privacy and law,” Hoan continued. “I am heartbroken by the misinterpretation by some in the EU, where we do no business, of Clearview AI’s technology to society. My intentions and those of my company have always been to help communities and their people to live better, safer lives.”
Hoan then noted that “[w]ar zones can be dangerous” when there is no way to distinguish between enemy combatants and civilians, arguing that “[f]acial recognition technology can help reduce uncertainty and increase safety in these situations.”
“Unlike facial recognition technology applied in other authoritarian countries, Clearview AI’s technology is not used in a real-time way, but in a manner that is after the incident has occurred,” Hoan concluded. “Our goal is to use our technology to make our communities safer. We are honored to provide our life saving technology for free to the Ukrainian government.”
Some U.S.-based companies have even pulled back from facial recognition technology in recent years. For example, Facebook announced in November 2021 that it would be shutting down its facial recognition system, citing “societal concerns” made worse by the fact that “regulators have yet to provide clear rules.”
“We’re shutting down the Face Recognition system on Facebook,” the company announced in a blog post. “People who’ve opted in will no longer be automatically recognized in photos and videos and we will delete more than a billion people’s individual facial recognition templates.”
There are also concerns that misidentification could have disastrous consequences. For example, Privacy International, a digital rights group, cited this risk when it called on Clearview AI to pull back from Ukraine.
“The risks and perils of facial recognition and online surveillance have been extensively aired, and in a war context the potential consequences would be too atrocious to be tolerated — such as mistaking civilians for soldiers, or Ukrainians for Russian soldiers,” Privacy International argued, before noting the potential for interference by the Russian government, given “Putin’s vast record of perpetrating online manipulation.”
The views expressed in this piece are the author’s own and do not necessarily represent those of The Daily Wire.
This article has been updated to include additional comment from Clearview AI chief executive, Hoan Ton-That.