The Department of Homeland Security has seen first-hand the opportunities and threats of artificial intelligence. It once found a human trafficking victim years later by using an AI tool that generated an image of the child a decade older. But it has also been duped in investigations by fake images created by artificial intelligence.
Now the department is set to become the first federal agency to embrace the technology, with plans to incorporate generative AI models across a wide range of divisions. In partnership with OpenAI, Anthropic and Meta, it will launch pilot programs using chatbots and other tools to help combat drug and human trafficking crimes, train immigration officials and prepare for crisis management across the country.
The rush to deploy the still unproven technology is part of a larger effort to keep pace with the changes brought on by generative artificial intelligence, which can create hyper-realistic images and videos and imitate human speech.
“You can’t ignore this,” Alejandro Mayorkas, secretary of the Department of Homeland Security, said in an interview. “And unless someone looks ahead, recognizing and preparing to address its potential for good and harm, it will already be too late, and that is why we are acting quickly.”
The plan to incorporate generative AI across the agency is the latest example of how new technology like OpenAI’s ChatGPT is forcing even the most staid industries to re-evaluate the way they do their jobs. Still, government agencies like DHS are likely to face some of the toughest scrutiny over how they use the technology, which has sparked fierce debate because it has at times proved unreliable and discriminatory.
Federal agencies have rushed to draw up plans in the wake of President Biden’s executive order, issued late last year, requiring the creation of safety standards for artificial intelligence and its adoption across the federal government.
DHS, which employs 260,000 people, was created after the September 11 terrorist attacks and is tasked with protecting Americans within the country’s borders, including policing human and drug trafficking, protecting critical infrastructure, responding to disasters and patrolling the border.
Among other measures, the agency plans to hire 50 artificial intelligence experts to work on solutions to protect the country’s critical infrastructure from AI-generated attacks and to combat the use of the technology to generate child sexual abuse material and create biological weapons.
In pilot programs totaling $5 million, the agency will use AI models such as ChatGPT to help investigate child exploitation material and human and drug trafficking. It will also work with companies to comb through troves of text data to find patterns that could help investigators. For example, a detective searching for a suspect driving a blue van would, for the first time, be able to search homeland security investigation records for the same type of vehicle.
DHS will use chatbots to train immigration officials, who until now have practiced with other employees and contractors posing as refugees or asylum seekers. The AI tools will allow officials to get more extensive training through mock interviews. Chatbots will also comb information about communities across the country to help them develop disaster relief plans.
The agency will report results from its pilot programs by the end of the year, said Eric Hysen, the department’s chief information officer and head of AI.
The agency has chosen OpenAI, Anthropic and Meta to experiment with different tools, and will use the cloud service providers Microsoft, Google and Amazon in its pilot programs. “We can’t do this alone,” he said. “We need to work with the private sector to help define the responsible use of generative AI.”