Saturday, November 4, 2023

Jordanian World Bank Aid Program Algorithm Fails the Poorest

Why Algorithmic Decision-Making in Poverty Relief Programs Can Be Harmful

Jordan’s Unified Cash Transfer Program, known as Takaful and funded by the World Bank, is a poverty relief program that uses algorithmic decision-making to determine who most deserves relief. However, a recent report by Human Rights Watch found that the algorithm relies on stereotypes and faulty assumptions about poverty, ultimately failing the very people it is intended to protect.

Flattening Economic Complexity into a Crude Ranking

The algorithm used by Takaful analyzes every applicant’s household income and expenses, along with nearly 60 socioeconomic indicators such as electricity use, car ownership, business licenses, employment history, illness, and gender. A secret algorithm then ranks these responses to automatically determine who is poorest and most deserving of relief. However, the Human Rights Watch investigation found that car ownership appears to be a disqualifying factor for many Takaful applicants, even those too poor to buy gas to drive the car. Similarly, applicants are penalized for using electricity and water, on the presumption that the ability to afford utility payments is evidence that they are not as destitute as those who cannot.
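The report does not disclose the model’s internals, but the approach it describes resembles a proxy means test: a weighted sum of observable indicators standing in for income that cannot be directly verified. The sketch below is a minimal illustration, with invented indicator names, weights, and eligibility cutoff (the real ones are secret), of how a single asset like a car can push a destitute household out of eligibility:

    # Hypothetical sketch of a proxy means test. The indicator names, weights,
    # and cutoff here are invented for illustration; Takaful's real 57
    # indicators and their weights have not been made public.

    WEIGHTS = {
        "owns_car": 0.40,          # asset ownership raises predicted welfare
        "electricity_kwh": 0.002,  # higher utility use is read as higher income
        "water_m3": 0.010,
        "has_business_license": 0.30,
        "chronic_illness": -0.20,  # illness lowers predicted welfare
    }

    def welfare_score(household: dict) -> float:
        """Weighted sum of observable indicators, used as a proxy for income."""
        return sum(w * household.get(k, 0) for k, w in WEIGHTS.items())

    # Two equally destitute households; B owns a car it cannot afford to fuel.
    household_a = {"electricity_kwh": 50, "chronic_illness": 1}
    household_b = {"electricity_kwh": 50, "chronic_illness": 1, "owns_car": 1}

    CUTOFF = 0.2  # hypothetical: only households scoring below this receive aid
    for name, h in (("A", household_a), ("B", household_b)):
        score = welfare_score(h)
        print(name, round(score, 3), "eligible" if score < CUTOFF else "excluded")
    # A -0.1 eligible
    # B 0.3 excluded

Because the car contributes a fixed increment to the score regardless of whether the household can afford to run it, two equally poor families land on opposite sides of the cutoff, which is precisely the failure mode Human Rights Watch documents.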

The knock-on effects of digital means-testing are also problematic. Many people in dire need of relief money lack the internet access even to apply for it, requiring them to find, or pay for, a ride to an internet café, where they face further fees and charges to get online. And the rigidity of Takaful’s insistence that applicants’ self-reported income match their self-reported household expenses exactly forces people to simply fudge the numbers just to get their applications processed, undermining the algorithm’s illusion of objectivity.
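The validation rule HRW describes would amount to something like the following check (a hypothetical reconstruction; the actual form logic is not public):

    # Hypothetical reconstruction of the rigid balance check HRW describes;
    # the actual Takaful application logic has not been published.

    def validate_application(income: float, expenses: float) -> bool:
        """Reject unless reported income exactly matches reported expenses."""
        return income == expenses

    # An honest household running a deficit (borrowing to cover rent and food):
    print(validate_application(income=150.0, expenses=210.0))  # False: rejected

    # The only way through is to fudge one of the numbers:
    print(validate_application(income=210.0, expenses=210.0))  # True: accepted

An honest household running a deficit cannot submit truthful figures under such a rule, so misreporting becomes the price of admission to the program.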

Algorithmic Means-Tested Social Benefits vs. Universal Programs

The report emphasizes that the system is part of a broader push by the World Bank to promote algorithmically means-tested social benefits over universal programs across the developing economies of the so-called Global South. However, the increasingly common assumption that automated decision-making software is so sophisticated that its results must be trustworthy is naïve, and artificial intelligence ethicists warn that the veneer of automated intelligence surrounding welfare distribution invites exactly this kind of myopia.

Technical Obfuscation

While the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though Takaful is known to use 57 different criteria to rank applicants by poverty, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

Conclusion

While fantastical visions of “Terminator”-like artificial intelligence have come to dominate public fears around automated decision-making, many technologists argue that civil society ought to focus on the real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction. So long as the inner workings of Takaful and its ilk remain government and corporate secrets, the extent of those harms will remain unknown.