AI Tools

First Look: Testing Google's New Antigravity AI Coding Platform

Google just launched Antigravity alongside Gemini 3. I'm testing it as a daily driver to see if it's better than Claude Code for managing autonomous AI agents.
Google Antigravity IDE interface

Breaking: Google Just Launched Antigravity

Google announced Gemini 3 Pro and Google Antigravity this morning—November 18, 2025.

I'm writing this as I test it.

Andrej Karpathy's tweet sparked my interest in taking Gemini 3 Pro for a test drive as my daily driver. Since I'm exploring agentic AI coding in general, I thought I'd combine the model with Google's full IDE: autonomous agents that have direct access to your editor, terminal, and browser.

I've tried GitHub Copilot, Cursor, the Gemini CLI, and Windsurf. None were more productive than Claude Code. I'm testing Antigravity to see if it changes that—specifically for multi-agent management.

What Google Antigravity Actually Is

Antigravity is an AI IDE powered by Gemini 3, Claude Sonnet 4.5, and GPT-OSS (yes, you can choose your model).

Two modes:

  • Editor View: Inline AI assistance, like every other AI IDE
  • Manager View: Multiple autonomous agents working simultaneously

The "agent-first architecture" means agents aren't just chatbots—they control the editor, terminal, and browser directly. They create "artifacts" (task lists, implementation plans, screenshots) that are easier to verify than raw tool calls.

Free public preview. Available for Linux, macOS, Windows.

Source: https://antigravity.google

Installation on Linux

Install was easy: register the keyring, install via apt, then launch with the antigravity command.

Installation took about 3 minutes total.
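For reference, the commands looked roughly like the following. The repository URL here is a deliberate placeholder (I didn't copy the real one down), so take the exact lines from the instructions at antigravity.google:

  # Register the signing key and apt repository (URLs are placeholders; use the official ones)
  curl -fsSL https://example.invalid/antigravity/gpg-key | \
    sudo gpg --dearmor -o /usr/share/keyrings/antigravity.gpg
  echo "deb [signed-by=/usr/share/keyrings/antigravity.gpg] https://example.invalid/antigravity/apt stable main" | \
    sudo tee /etc/apt/sources.list.d/antigravity.list

  # Install and launch
  sudo apt update
  sudo apt install antigravity
  antigravity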

I chose the recommended defaults:

  • Agent mode: Agent-assisted development (recommended)
  • Terminal execution: Auto
  • Review policy: Agent Decides
  • Browser allowlist: Enabled

The default extensions installed, and then I logged in with my Google account.

First Impressions: The Interface

Starting up Antigravity, you're greeted with a clean interface, which really means it's another VSCode fork with AI baked in. Not that I don't love VSCode, and if I were tasked with making an AI agent UI, it's exactly what I'd do.

Browser

So one interesting feature is the built-in browser. You can open a browser tab right in the IDE, and agents can use it to look up documentation or search the web.

Agent Manager

The Agent Manager is where you can see and control all your autonomous agents. You can create new agents, assign them tasks, and monitor their progress.

This is a big differentiator from other AI coding tools. The ability to have multiple agents working on different parts of a project simultaneously could be a game-changer for complex tasks. Currently I'm using git worktrees to manage multiple branches, so having agents handle different branches or features could streamline my workflow.
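For context, this is roughly the worktree setup I mean; the branch names are invented for illustration and assume the branches already exist:

  # One working directory per branch, so two tasks can run side by side
  git worktree add ../blog-gcp-deploy gcp-deploy
  git worktree add ../blog-image-fix image-fix

  # Each directory has its own independent checkout
  git worktree list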

Testing Real Coding Tasks

I gave Antigravity the same task I gave Claude with Sonnet 4.5. I'm in the middle of moving my blog hosting from Cloudflare to GCP (after a quick try at AWS, which was way, way overpriced). The current issue is that the Docker container builds and hosts the site correctly, but the images aren't loading.

Prompt

Here's the awful prompt I gave each of them.

Lets finish the deployment to GCP, the docker image works but the images don't correctly load.

No more context than that. And it's completely in Claude's favor: it gets everything in CLAUDE.md, versus whatever Gemini can find on its own. Let's see how each tool handles it.

Result with Antigravity

What worked:

  • TODO

What didn't:

  • TODO

Result with Claude Code

TODO: Add your experience

Agent Autonomy in Practice

The big question: What does "autonomous agents" actually mean?

[TODO: Test scenarios where agents work independently:

  • Can they handle multi-step tasks without constant guidance?
  • Do they validate their own work effectively?
  • How do you manage multiple agents at once?
  • What happens when they conflict or make mistakes?]

Gemini 3 Coding Ability

Google claims Gemini 3 Pro hits 1501 on LMArena with PhD-level reasoning scores.

[TODO: In practice, how does the code quality compare to:

  • Claude Sonnet 4.5 (which you can also use in Antigravity)
  • Your experience with other models
  • Specific examples of good/bad suggestions]

The Claude Code Comparison

I use Claude Code extensively. Here's how Antigravity stacks up:

Installation: TODO: Which was easier to set up?

UI/UX: TODO: Which interface feels more natural for your workflow?

Agent Coordination: TODO: Does Manager View actually improve multi-agent tasks, or is it complexity without benefit?

Code Quality: TODO: Compare the actual output quality

Speed: TODO: Which is faster for real work?

Context Management: TODO: How do they handle large codebases and maintaining context?

What Surprised Me

TODO: What unexpected things did you discover—good or bad?

The Real Question: Should You Switch?

After X hours of testing, here's my honest take:

Use Antigravity if: TODO: Based on your experience, what specific scenarios make it worth using?

Stick with Claude Code if: TODO: Where does Claude Code still win?

Try both if: TODO: When might you use each tool for different tasks?

What I'm Still Testing

This is a first look, not a comprehensive review. Things I need more time with:

  • TODO: Long-term reliability
  • TODO: Complex multi-agent scenarios
  • TODO: Performance on large codebases
  • TODO: Edge cases and error handling

Final Thoughts

TODO: Your honest bottom-line assessment after initial testing

Google's timing is interesting: shipping Antigravity the same day as the Gemini 3 announcement. It's either confidence or hype. I'm testing which one it is.

I'll update this post as I learn more.

Resources


Update Log:

  • 2025-11-18: Initial testing and first impressions
  • TODO: Add updates as you test more