Google Secretly Signed a Classified AI Deal With the Pentagon — Then 600 of Its Own Employees Revolted

Nandini Roy Choudhury, writer

● BREAKING — April 29, 2026

TECH • ARTIFICIAL INTELLIGENCE • NATIONAL SECURITY • ETHICS

Google has signed a classified agreement with the US Department of Defense allowing the Pentagon to use its Gemini AI models for “any lawful government purpose” — including mission planning and weapons targeting. The deal was revealed by The Information on Tuesday. Hours before the revelation, more than 600 Google employees — including senior directors and vice presidents at DeepMind — had sent an open letter to CEO Sundar Pichai demanding that he refuse the deal. He went ahead anyway. The report set off immediate debate across the tech industry.

April 29, 2026 • By World Affairs Desk, techsunnews.com • 11 min read • Updated 9:00 AM IST • Sources: The Information, CBS News, Washington Post, 9to5Google, Gizmodo, AndroidHeadlines, TechBriefly

DASHBOARD — Google vs Pentagon vs Employees

Employees who signed: 600+ (including DeepMind VPs)

Deal scope: Any lawful purpose (weapons targeting included)

Anthropic’s fate: Blacklisted (for refusing the same deal)

Pentagon AI budget: $200M each (per top AI lab signed)

KEY POINTS

  • The Information reported on Tuesday April 28 that Google has signed a classified deal allowing the Pentagon to use Google’s Gemini AI models for “any lawful government purpose” — including mission planning and weapons targeting on classified networks
  • The deal requires Google to adjust its AI safety filters at the government’s request — meaning the Pentagon can effectively turn down the ethical guardrails Google applies to civilian users of Gemini
  • More than 600 Google employees — including directors and vice presidents at DeepMind — sent an open letter to CEO Sundar Pichai on Monday demanding he refuse the Pentagon deal
  • The employee letter read: “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses. The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.”
  • Anthropic tried to impose ethical restrictions on a similar Pentagon deal in February 2026 — and was immediately declared a “supply chain risk” by Defence Secretary Pete Hegseth and blacklisted by the Trump administration
  • Google joins OpenAI and Elon Musk’s xAI as the three AI companies that have signed classified Pentagon AI agreements. OpenAI renegotiated to keep some limits; xAI signed without any restrictions at all
  • Google’s statement: “We are proud to be part of a broad consortium of leading AI labs providing AI services in support of national security… we remain committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”


In the summer of 2018, thousands of Google employees signed a petition that forced the company to walk away from Project Maven — a Pentagon contract to build AI for drone surveillance. Sundar Pichai, already CEO at the time, publicly promised the company would not “design or deploy AI” for weapons systems or for applications that violated international law. It was one of the most celebrated moments of employee activism in Silicon Valley history. On Tuesday April 28, 2026, that promise came under its most serious test yet.

Google has signed a classified agreement with the US Department of Defense that allows the Pentagon to use its Gemini AI models for “any lawful government purpose” on classified networks — the same networks that handle mission planning and weapons targeting. The deal was first reported by The Information, citing a person familiar with the matter. It was revealed just hours after more than 600 Google employees sent an open letter to CEO Sundar Pichai demanding he refuse the Pentagon deal entirely. Pichai went ahead. The juxtaposition — employees writing a moral objection letter at the same moment their CEO was signing it — has shocked the technology world.

“We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses. The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.”
— 600+ Google employees, open letter to CEO Sundar Pichai, April 27, 2026

What the deal actually allows — and what Google says it prevents

The agreement is an amendment to an existing arrangement under which Google’s Gemini was already approved for use on unclassified government data. The new deal expands access to classified systems — a fundamentally different category. Classified networks sit at the heart of America’s most sensitive military operations: intelligence analysis, mission planning, target identification and weapons deployment. The Pentagon is authorised to use Gemini for “any lawful government purpose” on these systems.

Critically, the agreement requires Google to “assist in adjusting its AI safety settings and filters at the government’s request.” In plain English: the Pentagon can ask Google to remove or reduce the ethical guardrails that prevent Gemini from helping with, say, targeting calculations or surveillance analysis — and Google has agreed to comply. The deal also contains language stating that Google’s agreement “does not confer any right to control or veto lawful Government operational decision-making.” Google has no veto. The Pentagon has the final say.

Google’s official statement tried to draw limits: the company said it “remains committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.” But critics — including the 600 employee signatories — note that no external body verifies these commitments once AI models are deployed on classified networks, where the very definition of “lawful” is determined by the government itself.

The employee revolt — who signed and what they said

The letter sent to Pichai on Monday April 27 was not an anonymous grassroots petition. CBS News and the Washington Post confirmed it was signed by more than 600 Google workers, including directors and vice presidents at Google DeepMind — the company’s flagship AI research lab. These are not junior developers. They are senior AI researchers and engineers who work directly with the models that were just signed over to the Pentagon.

The letter’s language was stark. “We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond,” the employees wrote. They argued that “the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads” — because once AI enters classified environments, there is no external oversight, no transparency and no accountability. They also warned the deal could cause “irreparable harm” to Google’s global reputation among researchers and engineers who chose to work there precisely because of its ethical AI commitments.

“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond.”
— Google employees’ open letter to Sundar Pichai, April 27, 2026


The Anthropic lesson — what happened when one company said no

To figure out why Google said yes, you have to understand what happened to Anthropic in February 2026 when it said no. Anthropic, maker of the Claude AI assistant and one of the most ethics-focused AI companies in the world, initially insisted on keeping its standard guardrails in its Pentagon contract — prohibiting use of its AI for fully autonomous weapons or domestic mass surveillance without human oversight. The Trump administration’s response was rapid and forceful.

Defence Secretary Pete Hegseth declared Anthropic a “supply chain risk.” President Trump signed a directive ordering government agencies to stop working with Anthropic. The company was effectively blacklisted from the entire US defence sector overnight. Every other major AI lab watched this happen in real time. The message from Washington was unmistakable: either sign without restrictions, or lose access to the Pentagon’s $200 million per lab AI contracts and risk being labelled an adversary of national security.

HOW EACH AI COMPANY RESPONDED TO THE PENTAGON

Company | Stance taken | Outcome
xAI (Musk) | Signed without any ethical restrictions | ✅ Active Pentagon partner
OpenAI | Renegotiated — kept some limited guardrails | ✅ Active Pentagon partner
Google | Signed — agreed to adjust safety filters on request | ✅ Active Pentagon partner
Anthropic | Refused to remove autonomous weapons / surveillance limits | 🚫 Blacklisted as supply chain risk

The table above tells a clear story. In 2026, saying no to the Pentagon’s AI demands costs you your government contracts and your national security credibility. Saying yes means deploying your AI on classified networks where its use is invisible to the public, to your own employees and to any external oversight body. Every major AI lab in America — except Anthropic — has now chosen to say yes. That is the world that Google’s 600 employees were trying to prevent when they wrote their letter on Monday.

What this means for AI ethics globally — and for India

The Google-Pentagon deal is not just an American story. It marks a turning point in the global governance of artificial intelligence. For years, leading AI companies — Google most prominently — have argued that ethical AI is not just a moral position but a competitive advantage. Researchers choose Google over defence contractors because Google promised to use AI for search, maps and productivity, not for weapons targeting.

That argument is now significantly weakened. If Google, OpenAI and xAI are all building classified military AI, the distinction between Silicon Valley and the defence-industrial complex collapses. For a country like India, which has its own AI sovereignty ambitions and is simultaneously a strategic partner of the US, the question becomes: when Indian users interact with Gemini, are they using a consumer product or an extension of the US military’s AI infrastructure? The answer, after this week, is less clear than it was before.


Frequently asked questions (People Also Ask)

What did Google sign with the Pentagon?

Google signed a classified agreement with the US Department of Defense on April 28, 2026 allowing the Pentagon to use its Gemini AI models on classified networks for “any lawful government purpose.” This includes mission planning, weapons targeting and intelligence analysis. The deal also requires Google to adjust its AI safety filters at the Pentagon’s request.

Why are Google employees angry about the Pentagon AI deal?

More than 600 Google employees — including senior researchers and DeepMind executives — oppose the deal because they believe AI used in classified military settings cannot be properly overseen and could be deployed for lethal autonomous weapons or mass domestic surveillance. They argue refusing classified work entirely is the only way to prevent these harms.

What happened to Anthropic when it said no?

Anthropic was blacklisted by the Pentagon in February 2026 after it refused to remove ethical restrictions from its military AI contract. Defence Secretary Pete Hegseth declared it a “supply chain risk,” and a Trump directive ordered federal agencies to stop using Anthropic’s products. This effectively forced every other AI company to choose between signing without restrictions or being shut out of the US defence sector entirely.

Does OpenAI also have a Pentagon AI deal?

Yes. OpenAI has signed a Pentagon classified AI agreement and renegotiated its terms to keep some limited guardrails — prohibiting mass domestic surveillance and fully autonomous weapons with no human oversight. Elon Musk’s xAI signed without any such restrictions. Google’s deal includes similar language to OpenAI but also requires Google to adjust safety filters on government request.

What is Gemini and why does this deal matter?

Gemini is Google’s flagship AI model — equivalent to OpenAI’s ChatGPT or Anthropic’s Claude. It powers Google Search AI features, Google Workspace tools and is used by hundreds of millions of people globally. The Pentagon deal means the same Gemini model used for email, documents and consumer search is now also cleared for use on the US military’s most classified networks — a fact Google did not publicly announce and that only became known through The Information’s reporting.

What happens next

Sundar Pichai has not publicly responded to the employee letter or the Pentagon deal revelation. The 600 employees who signed the letter are watching to see whether there will be consequences for their dissent — or whether Google’s leadership will engage with their concerns at all. In 2018, the Project Maven protest led to Google walking away from a contract. In 2026, the dynamic has reversed: the government has made it clear that walking away means being labelled a national security risk. The employees who tried to stop this deal now confront a world in which Silicon Valley ethics and Washington defence policy can no longer be held simultaneously.

SOURCES — 9 verified global news portals

1. The Information — Google Signs Classified Pentagon AI Deal (reported April 28, 2026)

2. CBS News — Hundreds of Google workers urge CEO to refuse classified AI work with Pentagon

3. Washington Post — Google employees ask CEO Sundar Pichai to bar classified military AI work (April 27)

4. 9to5Google — Google’s updated Pentagon deal uses Gemini for ‘any lawful government purpose’ (April 28)

5. Gizmodo — Google Signs Pentagon AI Deal Despite Employee Backlash (April 28, 2026)

6. AndroidHeadlines — Google & the Pentagon Sign Military AI Deal Despite Employee Backlash

7. TechBriefly — Google signs classified Pentagon AI deal (April 28, 2026)

8. Democracy Now — Google Workers Urge CEO to Reject Classified AI Work with the Pentagon

9. AutoGPT — Google Joins OpenAI and xAI in Handing AI to the Pentagon (April 28, 2026)

DISCLAIMER: This article is based on 9 verified global news sources as of April 29, 2026, 9:00 AM IST. The original deal was reported by The Information citing a single source; Google has not officially confirmed all specifics of the classified agreement. Employee letter details sourced from CBS News and the Washington Post. This article does not constitute investment or legal advice.
