Google ADK for Java Just Hit 1.0: What Every Engineer Needs to Know About Building Production AI Agents

| Reading Time: 3 minutes

Authored & Published by
Nahush Gowda, senior technical content specialist with 6+ years of experience creating data and technology-focused content in the ed-tech space.


TL;DR
Google released ADK for Java 1.0.0 on March 30, 2026. The release adds an App and plugin architecture, external tools including browser and code execution, human-in-the-loop approval workflows, event compaction for long-running sessions, structured session and memory services, and native Agent2Agent protocol support. This is a production-readiness milestone, not an experimental update, and it signals that agentic AI development is becoming a mainstream Java engineering discipline.

On March 30, 2026, Google released ADK for Java 1.0.0, its open-source, code-first toolkit for building, evaluating, and deploying AI agents in Java. The release is notable for a specific reason: the features it adds are not experimental scaffolding for agent demos. They are production-oriented capabilities: structured governance through an App and plugin architecture, human-in-the-loop approval flows, context engineering for long-running sessions, external tool integrations for environment-aware agents, and native Agent2Agent interoperability.

For Java engineers and enterprise platform teams, this signals that agentic AI development is becoming a mainstream software architecture concern in the language ecosystem most of them already work in. The question is no longer whether Java will be a first-class option for agent systems. With this release, Google has answered that directly.

Why This 1.0 Release Matters Beyond the Headline

In the agent framework market, a 1.0 release carries a specific meaning. It signals that the project team considers the core APIs stable enough to build on, that the architecture has been validated through real use, and that the framework is making an intentional commitment to backward compatibility. That is a meaningful shift from a pre-1.0 project where anything can change between releases.

ADK for Java is Google’s open-source, code-first toolkit for building agents that can perceive their environment, use tools, maintain state across multi-turn conversations, and orchestrate complex workflows. It supports both LLM-based agents and deterministic workflow agents, including sequential, parallel, and loop-based orchestration, and is designed for building multi-agent systems rather than simple chatbots.

The Java edition is particularly relevant for enterprises that have standardized on Java for reliability, type safety, strong testing culture, and easier fit with governed, compliance-sensitive environments. While early agentic AI work was overwhelmingly Python-first, Java 1.0 represents Google’s explicit commitment to making Java a first-class option for production agent systems. For engineers thinking about the key challenges shaping software engineering right now, this is one of the more concrete developments to understand.

What Google ADK for Java Actually Is

Before unpacking what is new, it is worth being precise about what ADK is, because the term “AI agent framework” covers an enormous range of products, from thin wrappers around an LLM API to full orchestration runtimes.

ADK is built around a set of composable primitives: agents, tools, sessions, memory, events, and workflow components. Agents are the reasoning units; they observe context, decide which tools to use, and produce outputs. Tools are the action layer: external APIs, code executors, web fetchers, and anything else an agent can invoke to interact with the world. Sessions manage conversational state across multi-turn interactions. Agent memory provides persistence beyond individual sessions. Events are the communication mechanism between components, representing each step in agent execution.

ADK is built around a flexible, async-first, event-driven architecture. The design emphasizes modular components that can be combined and reconfigured, extensibility through new tools, models, agent types, and service backends, and a clear separation of concerns between reasoning in the agents, capabilities in the tools, execution in the runners, and state management in the sessions.

Expert Insight
ADK Is an Agent Operating Environment, Not a Prompt Library
The distinction matters practically. A prompt orchestration library chains LLM calls together. An agent operating environment manages tool execution, state persistence, context window engineering, human approval flows, and cross-agent communication. ADK is the second category. Treating it as the first will cause you to under-invest in the architecture decisions that determine whether your agents are reliable in production.

The Biggest New Features in ADK for Java 1.0

The major additions in the 1.0 release are:

  • A new App and plugin architecture for centralized configuration and cross-cutting concerns
  • Enhanced external tools including GoogleMapsTool, UrlContextTool, ContainerCodeExecutor, VertexAICodeExecutor, and ComputerUseTool
  • Event compaction for context window management in long-running sessions
  • Human-in-the-Loop (HITL) ToolConfirmation workflows for governed automation
  • Structured session and memory services with defined persistence contracts
  • Native Agent2Agent (A2A) protocol support with the official A2A Java SDK client

Each of these deserves a closer look.

1. The App and Plugin Architecture

The most structurally significant addition in 1.0 is the App container and the plugin system it hosts. Previously, ADK agents carried their configuration inline; cross-cutting concerns like logging, instruction injection, and guardrails had to be wired manually into individual agent definitions. The App abstraction changes this.

An App is the top-level container that holds the root agent, manages global configuration, and hosts a collection of plugins that apply across the entire agent hierarchy. Each plugin can intercept the agent lifecycle at defined points, injecting behavior without modifying individual agent logic.

Google ships several prebuilt plugins: LoggingPlugin for structured observability, ContextFilterPlugin for controlling what information reaches the model at any given turn, and GlobalInstructionPlugin for injecting system-level instructions that apply across all agents in the hierarchy regardless of where they sit. Teams can build custom plugins for domain-specific cross-cutting requirements such as rate limiting, audit logging, PII redaction, or policy enforcement.

💡 Bonus Tip

For Java engineers familiar with Spring Boot: the App container is architecturally analogous to an application context. The plugin system resembles a servlet filter chain or Spring interceptor stack. Cross-cutting concerns go in plugins; agent-specific logic stays in agents. That separation is what makes the system maintainable as the number of agents grows.
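To make the filter-chain analogy concrete, here is a minimal plain-Java sketch of the pattern. All names here (Plugin, runThroughPlugins) are illustrative, not ADK's actual plugin API; the point is only that cross-cutting behavior wraps every turn without touching agent logic.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Conceptual sketch only: these names are illustrative, not ADK's API.
// A "plugin" here is a transform applied around every agent turn,
// the same way a servlet filter wraps every HTTP request.
public class PluginChainSketch {

    interface Plugin extends UnaryOperator<String> {}

    static String runThroughPlugins(List<Plugin> plugins, String request) {
        String current = request;
        for (Plugin p : plugins) {
            current = p.apply(current); // each plugin may inspect or rewrite the turn
        }
        return current;
    }

    public static void main(String[] args) {
        Plugin logging = s -> { System.out.println("[log] " + s); return s; };
        Plugin redactPii = s -> s.replaceAll("\\d{3}-\\d{2}-\\d{4}", "[REDACTED]");

        // Cross-cutting concerns run in order; agent-specific logic downstream
        // would only ever see the redacted text.
        String out = runThroughPlugins(List.of(logging, redactPii),
                "User SSN is 123-45-6789");
        System.out.println(out);
    }
}
```

The same shape accommodates rate limiting, audit logging, or policy checks: each is one plugin, and adding a tenth agent costs nothing extra.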

The App container is also where event compaction is configured, covered in detail below. Having a single top-level container that owns both cross-cutting concerns and session behavior is a cleaner design than distributing those responsibilities across individual agents.

[Diagram: where plugins and callbacks can interact with the agent execution flow. Source: Google]

2. External Tools That Move Agents Beyond Text Generation

The tool ecosystem is what separates serious agent frameworks from thin LLM wrappers, and the 1.0 release expands ADK’s tool support substantially. Understanding how function calling and tool use work at the model level is useful context before diving into what each tool enables.

GoogleMapsTool provides location-based grounding, letting agents answer questions involving geography, routing, and place data by calling the Google Maps Platform rather than relying on the model’s training knowledge. For enterprise use cases in logistics, field operations, or location-aware customer workflows, this is the difference between a plausible-sounding answer and a correct one.

UrlContextTool fetches and summarizes web content at runtime, giving agents the ability to work with current information from specified URLs. This is particularly useful for agents that need to reason about documentation, news, or any content that changes faster than a model’s training cutoff.

ContainerCodeExecutor runs code in a local Docker container, providing a sandboxed execution environment for agents that generate and execute code as part of their workflow. VertexAICodeExecutor provides the same capability in a managed cloud environment via Vertex AI, removing the need to manage local Docker infrastructure.

ComputerUseTool integrates with Playwright to give agents the ability to control a browser, navigate web interfaces, and interact with applications that do not expose an API. This is the foundation for agents that automate tasks across systems that were never designed for programmatic access.

Why Tool Ecosystems Define Agent Quality

An agent equipped with UrlContextTool, ContainerCodeExecutor, and ComputerUseTool is doing genuinely different work than one that only calls a model with a prompt. Tools are the mechanism by which agents interact with the real world. The richness of the tool ecosystem determines whether an agent can solve real problems or only generate text about them.
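The tool abstraction itself is simple to picture: a named, described capability the model selects by name and the framework invokes with arguments. The sketch below is a generic illustration of that dispatch pattern, assuming nothing about ADK's actual tool interfaces (the names Tool and dispatch are hypothetical).

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch, not ADK's tool API: a tool is a named, described
// capability the model can select by name and invoke with arguments.
public class ToolRegistrySketch {

    record Tool(String name, String description,
                Function<Map<String, Object>, Object> call) {}

    static Object dispatch(Map<String, Tool> registry, String toolName,
                           Map<String, Object> args) {
        Tool tool = registry.get(toolName);
        if (tool == null) throw new IllegalArgumentException("unknown tool: " + toolName);
        return tool.call().apply(args); // the action layer: side effects happen here, not in the model
    }

    public static void main(String[] args) {
        Tool fetchUrl = new Tool("fetch_url",
                "Fetches text content from a URL",
                a -> "contents of " + a.get("url")); // stubbed; a real tool would do I/O
        Map<String, Tool> registry = Map.of(fetchUrl.name(), fetchUrl);

        // The model emits a tool name plus arguments; the framework dispatches it.
        System.out.println(dispatch(registry, "fetch_url",
                Map.of("url", "https://example.com")));
    }
}
```

The description field matters as much as the implementation: it is what the model reads when deciding which tool fits the task.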

3. Human-in-the-Loop Workflows for Safer Automation

ADK 1.0 includes built-in support for human-in-the-loop workflows. Agents can pause before taking critical actions, request human approval, and then resume execution with the confirmation reflected in context.

A registered tool accesses its ToolContext and calls requestConfirmation(), which automatically intercepts the run and pauses the LLM flow until input is received. ADK then cleans up intermediate events and explicitly injects the confirmed function call into the subsequent LLM request context, ensuring the model understands the action was approved without re-triggering the same reasoning loop.

This is important for enterprise use cases where agent autonomy needs governance guardrails. Reimbursement approvals, customer account changes, infrastructure provisioning, legal document generation: any workflow where a mistake is costly and autonomous action without human sign-off is unacceptable now has a first-class mechanism in the framework.


The Event Handling Detail That Actually Matters
Naive implementations of human-in-the-loop pause the agent but leave the context in an inconsistent state, causing the model to re-reason the same decision on resumption. ADK’s explicit event injection avoids this by making the approval visible to the model as a completed context event rather than a gap. If you are evaluating other frameworks for HITL support, this is the implementation detail worth verifying.
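The resumption detail can be sketched in a few lines of plain Java. This is a toy model, not ADK's event types: the key behavior is that after approval, the context handed back to the model contains a completed confirmation event rather than a gap.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of the resumption detail; names are illustrative,
// not ADK's actual event model.
public class HitlResumeSketch {

    enum Status { PENDING_CONFIRMATION, APPROVED, REJECTED }

    record PendingAction(String tool, Status status) {}

    static List<String> contextAfterApproval(List<String> events, PendingAction action) {
        List<String> ctx = new ArrayList<>(events);
        // Inject the confirmed call explicitly so the model sees the approval
        // as a completed event and does not re-reason the same decision.
        if (action.status() == Status.APPROVED) {
            ctx.add("function_call_confirmed: " + action.tool());
        }
        return ctx;
    }

    public static void main(String[] args) {
        List<String> events = List.of("user: reimburse $450", "agent: requesting approval");
        PendingAction action = new PendingAction("issue_reimbursement", Status.APPROVED);
        contextAfterApproval(events, action).forEach(System.out::println);
    }
}
```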

4. Event Compaction and Context Engineering for Long-Running Agents

Context window management is one of the most practical challenges in production agent systems, and ADK 1.0 addresses it directly through event compaction.

Rather than abruptly truncating conversations when a session approaches token limits, the system maintains a sliding window of recent events and summarizes older data. This prevents context windows from exceeding token limits while directly reducing latency and lowering compute costs during long-running sessions.

Event compaction is configured via the eventsCompactionConfig method on the App container. The configuration controls the compaction interval, which determines how frequently summarization triggers; the overlap size, which determines how many events from the previous window are retained in full; and the summarizer to use for generating the compressed representation of older events.

Java code example:

App app = App.builder()
  .name("my-agent")
  .rootAgent(rootAgent)
  .eventsCompactionConfig(EventsCompactionConfig.builder()
    .compactionInterval(3)  // Trigger every 3 new invocations
    .overlapSize(1)         // Retain last invocation from previous window
    .build())
  .build();

The operational benefits are concrete: lower token usage per session, more stable multi-turn behavior as sessions grow, and predictable cost scaling for workflows that span many turns. For agents handling customer support, document review, or any workflow with unbounded session length, this is not an optimization; it is a prerequisite for production viability.
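The sliding-window idea behind compaction can be illustrated with a toy model. This sketch assumes nothing about ADK's internal summarizer: events older than the retained window are collapsed into a single summary entry, while the most recent events stay verbatim.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of event compaction; in the real system the summary entry
// would be an LLM-generated synopsis, not a placeholder string.
public class CompactionSketch {

    static List<String> compact(List<String> events, int windowSize) {
        if (events.size() <= windowSize) return events;
        List<String> older = events.subList(0, events.size() - windowSize);
        List<String> recent = events.subList(events.size() - windowSize, events.size());

        List<String> out = new ArrayList<>();
        out.add("summary-of-" + older.size() + "-events"); // compressed representation
        out.addAll(recent);                                 // retained in full
        return out;
    }

    public static void main(String[] args) {
        List<String> session = List.of("e1", "e2", "e3", "e4", "e5");
        System.out.println(compact(session, 2)); // [summary-of-3-events, e4, e5]
    }
}
```

Token usage now grows with the window size rather than the session length, which is why cost scaling becomes predictable for unbounded sessions.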

5. Session, State, and Memory Services

ADK 1.0 formalizes the contracts for conversational context management through structured SessionService and MemoryService interfaces with defined persistence backends.

Sessions manage the live context of an active interaction: what has been said, what tools have been called, and what decisions have been made. State within a session can be persisted to Vertex AI or Firestore, enabling agents to recover from interruptions, resume long workflows, and provide auditable records of their decision history.

Agent memory provides persistence beyond individual sessions, letting agents accumulate knowledge over time that carries forward into future interactions. This is the layer that enables an agent serving a returning user to remember context from previous conversations rather than starting from scratch.

For enterprise environments, the structured service contracts matter as much as the capabilities themselves. A defined interface for SessionService means teams can swap persistence backends, implement their own, or mock them for testing without modifying agent logic. That testability is what makes the framework compatible with mature Java software development practices.
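A short sketch shows why a defined contract matters for testing. The interface below is illustrative, not ADK's real SessionService shape; the pattern is that an in-memory backend satisfies the same contract as a cloud one, so agent logic and tests never know the difference.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative contract, not ADK's actual SessionService interface.
public class SessionServiceSketch {

    interface SessionStore {
        void save(String sessionId, Map<String, Object> state);
        Optional<Map<String, Object>> load(String sessionId);
    }

    // In-memory backend: a drop-in for unit tests, no Firestore or Vertex AI needed.
    static class InMemoryStore implements SessionStore {
        private final Map<String, Map<String, Object>> data = new HashMap<>();
        public void save(String id, Map<String, Object> state) {
            data.put(id, new HashMap<>(state)); // defensive copy
        }
        public Optional<Map<String, Object>> load(String id) {
            return Optional.ofNullable(data.get(id));
        }
    }

    public static void main(String[] args) {
        SessionStore store = new InMemoryStore(); // swap backends without touching agent logic
        store.save("s-1", Map.of("lastTool", "fetch_url"));
        System.out.println(store.load("s-1").orElseThrow().get("lastTool"));
    }
}
```

In production the same variable would hold a Firestore-backed or Vertex AI-backed implementation; the calling code is identical.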

6. Native Agent2Agent Support Expands the Multi-Agent Story

ADK for Java now natively supports the Agent2Agent (A2A) protocol, enabling different agents to communicate and collaborate even across different languages or frameworks.

The framework uses the official A2A Java SDK client. Teams resolve an AgentCard, a published descriptor of a remote agent's identity, capabilities, and communication preferences, from a remote endpoint, construct the A2A client, and wrap it in a RemoteA2AAgent. That remote agent slots directly into the local ADK agent hierarchy and behaves exactly like a local agent, natively streaming events back to the Runner. To expose an ADK agent to other systems, teams create an A2AAgentExecutor that wraps the agent and serves it via a JSON-RPC REST endpoint.

The significance is strategic as well as technical. Most enterprise organizations will not run a single-framework agent estate. They will have Python agents from one team, Java agents from another, and potentially agents from third-party vendors. The A2A protocol is the mechanism for those agents to compose into coherent systems without requiring a shared framework or language. Native support in ADK for Java 1.0 means Java agents are full participants in that cross-framework ecosystem from day one. This is closely related to how multi-agent orchestration patterns are evolving in enterprise environments.
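The "remote agent behaves like a local agent" idea is the classic adapter pattern. The sketch below is a plain-Java illustration, not the A2A Java SDK's actual types; the endpoint URL is hypothetical.

```java
// Illustrative adapter-pattern sketch, not the A2A Java SDK's real types.
public class RemoteAgentSketch {

    interface Agent { String handle(String message); }

    static class LocalAgent implements Agent {
        public String handle(String m) { return "local: " + m; }
    }

    // Adapter: wraps a remote endpoint behind the same Agent interface,
    // so the orchestrator composes it exactly like a local agent.
    static class RemoteAgentAdapter implements Agent {
        private final String endpoint;
        RemoteAgentAdapter(String endpoint) { this.endpoint = endpoint; }
        public String handle(String m) {
            // A real adapter would POST a JSON-RPC request to `endpoint`
            // and stream events back; stubbed here for illustration.
            return "remote[" + endpoint + "]: " + m;
        }
    }

    public static void main(String[] args) {
        Agent[] hierarchy = {
            new LocalAgent(),
            new RemoteAgentAdapter("https://other-team.example/agent") // hypothetical URL
        };
        // The orchestrator iterates one hierarchy; language and framework
        // of the remote side are invisible behind the shared interface.
        for (Agent a : hierarchy) System.out.println(a.handle("summarize ticket 42"));
    }
}
```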

“The future of enterprise agent systems is heterogeneous rather than single-framework. Native A2A support is what makes Java agents full participants in that ecosystem.”

What This Means for Java Engineers Specifically

Java teams have a set of properties that make them well-suited to the kind of production agent development ADK 1.0 is designed for: strong static typing for catching integration errors at compile time, mature testing infrastructure including JUnit and Mockito, established observability practices through Micrometer and OpenTelemetry, and deep familiarity with the application server and enterprise service patterns that the App and plugin architecture mirrors.

The practical implication is that Java engineers can bring their existing patterns to agent development rather than learning a new programming model from scratch. A Spring Boot developer who understands application context, bean lifecycle, and filter chains will find the ADK App and plugin architecture conceptually familiar. An engineer who writes well-tested service code will find the structured SessionService and MemoryService interfaces straightforward to test and mock.

Building proficiency in Java’s core language features remains the foundation, but engineers who layer agent system design on top of that foundation are positioning themselves for some of the most in-demand work in enterprise AI. The technical skill set for senior engineers is expanding to include exactly this kind of production agentic systems work.

Production Readiness: Where ADK for Java Looks Strong, and Where Caution Is Warranted

A balanced read of the 1.0 release requires separating the features that indicate genuine production intent from the areas where caution is still warranted.

Where It Looks Strong
  • Plugin system: clean separation of cross-cutting concerns
  • HITL confirmation: correct event handling on resumption
  • Event compaction: addresses a real production constraint
  • Code execution tools: both local and cloud-managed options
  • Native A2A support with the official Java SDK client
  • Structured session and memory service contracts
Where Caution Is Warranted
  • GitHub README already shows 1.1.0 dependencies; project is moving fast
  • Some documentation features still flagged Pre-GA or “coming soon”
  • Known Dev UI / OpenTelemetry dependency conflict in release notes
  • The 1.0 API stability commitment does not cover every ecosystem component

For teams evaluating ADK for Java, the right framing is: the architecture and core abstractions are solid enough to build on, but plan for ongoing iteration rather than treating the current release as fully frozen. That is a reasonable expectation for a 1.0 in a fast-moving domain.

How ADK for Java Compares to the Broader Agent Framework Trend

The broader agent framework market is in the middle of a significant maturation cycle. The first generation of agent tooling was largely prompt orchestration: chain some prompts together, call some APIs, return a result. That generation is being replaced by a second that treats agent development as a software architecture discipline with proper concerns around state management, tool design, governance, context engineering, and protocol-based interoperability.

ADK for Java 1.0 is a clear example of the second generation. The features it ships are not about making LLM calls easier. They are about making multi-agent systems reliable, governable, and composable in enterprise environments. The plugin architecture addresses cross-cutting concerns. HITL addresses governance. Event compaction addresses operational cost and stability. A2A support addresses interoperability in heterogeneous environments.

This shift also connects to a broader theme in what AI engineering roles are starting to require: the ability to design and govern agent systems at the architecture level, not just use AI tools to write faster. Understanding how tools, sessions, memory, and workflow orchestration compose into production systems is becoming a core engineering competency. That is what ADK for Java 1.0 is designed to support, and it is the skill set that IK’s Agentic AI for Software Engineers course is built to develop, including the multi-agent architectural patterns that this release reflects directly.

Practical Takeaways for Teams Exploring Agentic AI

For engineering leaders and developers evaluating ADK for Java, several concrete steps are worth taking now.

Map your existing Java services to potential tool endpoints. The most practical way to start with ADK is to identify internal services that could be wrapped as agent tools. Any service with a well-defined API is a candidate. Exposing it as an ADK tool or via the A2A protocol makes it part of the composable agent ecosystem without requiring a full rewrite.

Identify one bounded internal use case. The teams that get the most signal from early agentic AI pilots are those that pick a specific, contained problem: a support triage workflow, an internal document Q&A, an approval routing process. A bounded use case makes it possible to measure whether the agent is actually working rather than producing qualitative impressions.

Think about HITL before you think about full autonomy. The enterprises that will trust agents in production fastest are the ones that design appropriate checkpoints from the start. ADK’s requestConfirmation() is a tool for building that trust incrementally rather than asking stakeholders to accept full autonomy upfront.

Build skills around context management and tool design. These are the two engineering disciplines that determine agent quality most directly. An agent with well-designed tools and a correctly tuned context window will consistently outperform a poorly tooled agent with a more capable model. For engineering managers building the internal case for investment, the Agentic AI for Engineering Managers course covers the governance and ROI framing alongside the technical depth.

Design your plugin layer before you have ten agents. The App and plugin architecture is the right place to put logging, instruction injection, and guardrails. Designing that layer before you have ten agents in production is significantly easier than retrofitting it afterward.

💡 Bonus Tip

Start your ADK pilot with a workflow that already has a human approval step in it. Replacing an email-based or Slack-based approval gate with an ADK HITL confirmation flow is a low-risk entry point that demonstrates value immediately and builds organizational confidence in the governance model before you push toward higher-autonomy workflows.

Frequently Asked Questions

What is Google ADK for Java?

Google ADK for Java is an open-source toolkit for building production AI agents in Java. It covers agents, tools, sessions, memory, workflow orchestration, and multi-agent coordination.

How is ADK different from a simple LLM wrapper?

ADK provides full agent infrastructure: tool execution, state management, context engineering, human approval workflows, and cross-agent communication. A wrapper just makes API calls.

Do I need to use Google Cloud to run ADK for Java?

No. Local execution works without Google Cloud. Cloud-specific features like VertexAICodeExecutor and Firestore persistence are optional, not required for core agent functionality.

What is event compaction and why does it matter?

Event compaction summarizes older session history to keep context windows manageable. It reduces token costs, lowers latency, and prevents long-running agents from degrading over time.

Can ADK Java agents communicate with Python or other framework agents?

Yes. Native Agent2Agent protocol support lets ADK Java agents connect to remote agents built in any language or framework via standardized JSON-RPC endpoints.

Is ADK for Java 1.0 ready for enterprise production use?

Core architecture and APIs are stable enough to build on. Some features remain Pre-GA. Plan for ongoing iteration and monitor the GitHub repository for updates.

Key Facts at a Glance

  • Release date: March 30, 2026
  • Version: 1.0.0 (GitHub README currently shows 1.1.0 dependencies)
  • Announced by: Google Developers Blog (Guillaume Laforge, Developer Advocate)
  • License: Open source (Apache 2.0)
  • Repository: github.com/google/adk-java
  • Major new additions: App/plugin architecture, external tools, event compaction, HITL, session/memory services, A2A protocol
  • External tools added: GoogleMapsTool, UrlContextTool, ContainerCodeExecutor, VertexAICodeExecutor, ComputerUseTool
  • Persistence options: Vertex AI, Firestore
  • Protocol support: Agent2Agent (A2A) via official Java SDK client
  • Known issue: Dev UI / OpenTelemetry dependency conflict noted in release notes
  • Pre-GA items: Some features still flagged “coming soon” in documentation