Friday, April 5, 2024

Google Silent on Israel’s Use of Photo Software for Gaza “Hit List”

The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move through the ravaged territory, attempting to flee the ongoing bombardment and to find food for their families.

Facial Recognition Tools Used by Israeli Military:
The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into Google Photos, Google's popular consumer image organization platform. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israeli military assemble a "hit list" of alleged Hamas fighters who participated in the October 7 attack.

Mass Surveillance and Human Rights Violations:
The mass surveillance of Palestinian faces resulting from Israel's efforts to identify Hamas members has swept up thousands of Gaza residents since the October 7 attack. Many of those detained or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. The use of Google Photos' machine learning-powered analysis features to place civilians under military scrutiny is at odds with the company's stated rules against using its products to cause serious harm to people.

Corporate Human Rights Commitments:
Google's terms of service prohibit using Google Photos to cause harm to people, and the company has made long-standing public commitments to human rights standards; both are now being called into question by the Israeli military's reported use of the product in Gaza. The Conflict-Sensitive Human Rights Due Diligence for ICT Companies framework recommends that tech companies work to prevent the misuse of their products and services in war zones, raising further concerns about Google's role in providing advanced tools to the Israeli military.

Project Nimbus and Contradictions:
Google's sale of sophisticated tools to the Israeli military through Project Nimbus appears to contradict its AI Principles, which forbid AI uses likely to cause harm. The company's narrow interpretation of these principles raises questions about its accountability for how third parties use its technology. Former employees and activist groups have criticized Google for its silence on issues related to human rights violations and genocide.

Conclusion:
The use of facial recognition technology by the Israeli military in Gaza raises serious ethical concerns about human rights violations and corporate responsibility. Google's role in providing tools that contribute to surveillance and potential harm to civilians calls into question its commitment to upholding international human rights standards. As the debate continues, it remains to be seen how technology companies like Google will address these issues.
