Wednesday, April 10, 2024

Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI quietly ended its prohibition against military work.

Generative AI with DoD Data

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal and has contracted with the Pentagon’s main AI office since last year. The firm did not respond to a request for comment.

Potential Military Use of DALL-E

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI’s Stance

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Ethical Concerns

The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble relevant, real-world imagery. Military software designed to detect enemy targets on the ground could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. Khlaaf also questioned whether DALL-E images are accurate enough to faithfully reflect real-world conditions in the first place.

Military Applications

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can produce, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

The Department of Defense’s Response

The Department of Defense did not answer specific questions about the Microsoft presentation but stated that its mission is to accelerate the adoption of data, analytics, and AI across the DoD. As part of that mission, it said, it leads activities to educate the workforce on data and AI literacy and on how to apply existing and emerging commercial technologies to DoD mission areas.

Conclusion

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged that it would begin working with the Department of Defense. The collaboration between tech companies and the military raises ethical concerns about the use of advanced AI technologies in warfare and their potential impact on civilian populations. As these technologies advance, companies and governments will face growing pressure to weigh the ethical implications of such partnerships and to ensure the tools are deployed responsibly.