Friday, October 27, 2023

Europe Bans Racist and Deadly AI at Borders

The European Union is currently drafting legislation to regulate the harmful uses of artificial intelligence (AI). However, the proposed law, known as the EU AI Act, has a significant blind spot: it does not ban the many harmful and dangerous uses of AI systems in the context of immigration enforcement. A coalition of human rights organizations is calling on EU lawmakers to ensure that this landmark legislation protects everyone, including asylum seekers and others on the move at Europe’s borders, from dangerous and racist surveillance technologies, and to ensure that AI technologies are used to #ProtectNotSurveil.

The use of AI in border control is making Europe’s borders deadlier with each passing day. Border and policing authorities are employing predictive analytics, risk assessments run through colossal interoperable biometrics databases, and AI-augmented drones to surveil people on the move and push them away from the EU’s borders. These technologies have been shown to push people towards more precarious and deadly routes, strip them of their fundamental privacy rights, and unjustifiably prejudice their claims to immigration status. They are also known to criminalize and racially profile people on the move and to facilitate unlawful deportations in violation of humanitarian protection principles.

At a time when EU member states are racing to craft anti-migration policies in an affront to their domestic and international legal obligations, limiting and regulating the use of artificial intelligence in migration control is crucial to prevent harm. It’s also an unmissable opportunity to prevent the accumulation of deadly, inhumane powers in the hands of authoritarian governments – both in the EU and in states where the EU seeks to externalize its borders.

The EU AI Act can provide key red lines and accountability mechanisms to help protect the fundamental rights of people subjected to AI systems in the migration control context. These could include bans on the use of racist algorithms and predictive analytics to label people as “threats”, as well as on dubious AI-based “lie detectors” and other emotion recognition tools used to unlawfully push people away from borders. The EU has long been working towards protecting its citizens from biometric mass surveillance, and such protections are expected to be part of the final EU AI Act. These efforts should not discriminate based on nationality or racialized ideas of risk, and should be expanded to cover all people in Europe.

The coalition of human rights organizations also fears that leaving the use of AI in migration control up to EU member states will lead to a global race towards more intrusive technologies to prevent or deter migration – technologies that would fundamentally change or, at worst, end the lives of real people. If the EU AI Act fails to regulate and limit the use of AI technologies in migration enforcement, private actors will be quick to exploit the loophole, aggressively pushing new products to our borders without proper checks just as uses that do fall within the scope of the AI Act become subject to more stringent regulation and barriers to entry.

This is a lucrative multibillion-dollar industry. Frontex spent 434 million euros ($476m) on military-grade surveillance and IT infrastructure from 2014 to 2020. Technologies will be deployed and trained at the expense of people’s fundamental rights and later repurposed in other contexts beyond migration control, evading crucial scrutiny at the design stage.

Private actors – such as Palantir, G4S, and the lesser-known Buddi Ltd – have already taken advantage of governments’ desire for more surveillance to sell tech that facilitates inhumane practices at borders and violations of the fundamental rights of people on the move.

The coalition of human rights organizations is calling on the EU to ensure that unacceptable uses of AI in the migration context are banned and all loopholes are closed so that EU standards on privacy and other fundamental rights apply equally to all.
