
Trump DOT Plans to Use Google Gemini AI to Write Regulations — ProPublica


The Trump administration is planning to use artificial intelligence to write federal transportation regulations, according to U.S. Department of Transportation records and interviews with six agency staffers.

The plan was presented to DOT staff last month at a demonstration of AI’s “potential to revolutionize the way we draft rulemakings,” agency attorney Daniel Cohen wrote to colleagues. The demonstration, Cohen wrote, would showcase “exciting new AI tools available to DOT rule writers to help us do our job better and faster.”

Discussion of the plan continued among agency leadership last week, according to meeting notes reviewed by ProPublica. Gregory Zerzan, the agency’s general counsel, said at that meeting that President Donald Trump is “very excited about this initiative.” Zerzan seemed to suggest that the DOT was at the vanguard of a broader federal effort, calling the department the “point of the spear” and “the first agency that is fully enabled to use AI to draft rules.”

Zerzan appeared interested mainly in the quantity of regulations that AI could produce, not their quality. “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ,” he said, according to the meeting notes. “We want good enough.” Zerzan added, “We’re flooding the zone.” 

These developments have alarmed some at DOT. The agency’s rules touch virtually every facet of transportation safety, including regulations that keep airplanes in the sky, prevent gas pipelines from exploding and stop freight trains carrying toxic chemicals from skidding off the rails. Why, some staffers wondered, would the federal government outsource the writing of such critical standards to a nascent technology notorious for making mistakes?

The answer from the plan’s boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. But, with DOT’s version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying. In any case, most of what goes into the preambles of DOT regulatory documents is just “word salad,” one staffer recalled the presenter saying. Google Gemini can do word salad.

Zerzan reiterated the ambition to accelerate rulemaking with AI at the meeting last week. The goal is to dramatically compress the timeline in which transportation regulations are produced, such that they could go from idea to complete draft ready for review by the Office of Information and Regulatory Affairs in just 30 days, he said. That should be possible, he said, because “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.”

The DOT plan, which has not previously been reported, represents a new front in the Trump administration’s campaign to incorporate artificial intelligence into the work of the federal government. This administration is not the first to use AI; federal agencies have been gradually stitching the technology into their work for years, including to translate documents, analyze data and categorize public comments, among other uses. But the current administration has been particularly enthusiastic about the technology. Trump released multiple executive orders in support of AI last year. In April, Office of Management and Budget Director Russell Vought circulated a memo calling for the acceleration of its use by the federal government. Three months later, the administration released an “AI Action Plan” that contained a similar directive. None of those documents, however, called explicitly for using AI to write regulations, as DOT is now planning to do.

Those plans are already in motion. The department has used AI to draft a still-unpublished Federal Aviation Administration rule, according to a DOT staffer briefed on the matter.

Skeptics say that so-called large language models such as Gemini and ChatGPT shouldn’t be trusted with the complicated and consequential responsibilities of governance, given that those models are prone to error and incapable of human reasoning. But proponents see AI as a way to automate mindless tasks and wring efficiencies out of a slow-moving federal bureaucracy.

Such optimism was on display in a windowless conference room in Northern Virginia earlier this month, where federal technology officials, convened at an AI summit, discussed adopting an “AI culture” in government and “upskilling” the federal workforce to use the technology. Those federal representatives included Justin Ubert, division chief for cybersecurity and operations at DOT’s Federal Transit Administration, who spoke on a panel about the Transportation Department’s plans for “fast adoption” of artificial intelligence. Many people see humans as a “choke point” that slows down AI, he noted. But eventually, Ubert predicted, humans will fall back into merely an oversight role, monitoring “AI-to-AI interactions.” Ubert declined to speak to ProPublica on the record.

A similarly sanguine attitude about the potential of AI permeated the presentation at DOT in December, which was attended by more than 100 DOT employees, including division heads, high-ranking attorneys and civil servants from rulemaking offices. Brimming with enthusiasm, the presenter told them that Gemini could handle 80% to 90% of the work of writing regulations, leaving DOT staffers to do the rest, one attendee recalled.

To illustrate this, the presenter asked the audience to suggest a topic on which DOT might have to write a Notice of Proposed Rulemaking, a public filing that lays out an agency’s plans to introduce a new regulation or change an existing one. He then plugged keywords for the topic into Gemini, which produced a document resembling a Notice of Proposed Rulemaking. It appeared, however, to be missing the actual regulatory text that goes into the Code of Federal Regulations, one staffer recalled.

The presenter expressed little concern that the regulatory documents produced by AI could contain so-called hallucinations — erroneous text that is frequently generated by large language models such as Gemini — according to three people present. In any case, that’s where DOT’s staff would come in, he said. “It seemed like his vision of the future of rulemaking at DOT is that our jobs would be to proofread this machine product,” one employee said. “He was very excited.” (Attendees could not clearly recall the name of the lead presenter, but three said they believed it was Brian Brotsos, the agency’s acting chief AI officer. Brotsos declined to comment, referring questions to the DOT press office.)

A spokesperson for the DOT did not respond to a request for comment; Cohen and Zerzan also did not respond to messages seeking comment. A Google spokesperson did not provide a comment.

The December presentation left some DOT staffers deeply skeptical. Rulemaking is intricate work, they said, requiring expertise in the subject at hand as well as in existing statutes, regulations and case law. Mistakes or oversights in DOT regulations could lead to lawsuits or even injuries and deaths in the transportation system. Some rule writers have decades of experience. But all that seemed to go ignored by the presenter, attendees said. “It seems wildly irresponsible,” said one, who, like the others, requested anonymity because they were not authorized to speak publicly about the matter. 

Mike Horton, DOT’s former acting chief artificial intelligence officer, criticized the plan to use Gemini to write regulations, comparing it to “having a high school intern that’s doing your rulemaking.” (He said the plan was not in the works when he left the agency in August.) Noting the life-or-death stakes of transportation safety regulations, Horton said the agency’s leaders “want to go fast and break things, but going fast and breaking things means people are going to get hurt.”

Academics and researchers who track the use of AI in government expressed mixed opinions about the DOT plan. If agency rule writers use the technology as a sort of research assistant with plenty of supervision and transparency, it could be useful and save time. But if they cede too much responsibility to AI, that could lead to deficiencies in critical regulations and run afoul of a requirement that federal rules be built on reasoned decision-making.

“Just because these tools can produce a lot of words doesn’t mean that those words add up to a high-quality government decision,” said Bridget Dooling, a professor at Ohio State University who studies administrative law. “It’s so tempting to try to figure out how to use these tools, and I think it would make sense to try. But I think it should be done with a lot of skepticism.”

Ben Winters, the AI and privacy director at the Consumer Federation of America, said the plan was especially problematic given the exodus of subject-matter experts from government as a result of the administration’s cuts to the federal workforce last year. DOT has had a net loss of nearly 4,000 of its 57,000 employees since Trump returned to the White House, including more than 100 attorneys, federal data shows.

Elon Musk’s Department of Government Efficiency was a major proponent of AI adoption in government. In July, The Washington Post reported on a leaked DOGE presentation that called for using AI to eliminate half of all federal regulations, and to do so in part by having AI draft regulatory documents. “Writing is automated,” the presentation read. DOGE’s AI program “automatically drafts all submission documents for attorneys to edit.” DOGE and Musk did not respond to requests for comment.

The White House did not answer a question about whether the administration is planning to use AI in rulemaking at other agencies as well. Four top technology officials in the administration said they were not aware of any such plan. As for DOT’s “point of the spear” claim, two of those officials expressed skepticism. “There’s a lot of posturing of, ‘We want to seem like a leader in federal AI adoption,’” one said. “I think it’s very much a marketing thing.”



This story originally appeared on ProPublica
