Strands Agents: Open-Source SDK for Building AI Agents with Model-Driven Approach

I’m excited to announce the release of Strands Agents, an open-source SDK that simplifies building and running AI agents with a model-driven approach, using just a few lines of code. Strands supports use cases from simple to complex agents and scales from local development to production deployment. It is already in production at AWS teams such as Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer, and now you can use it to create your own AI agents. Unlike frameworks that require defining complex workflows, Strands leverages state-of-the-art model capabilities, such as planning, chaining thoughts, invoking tools, and reflecting, so developers can define just a prompt and a list of tools to create an agent. Like two strands of DNA, Strands connects the model and the tools: the model plans next steps and runs tools using its advanced reasoning. Strands supports extensive customization, including tool selection, context management, session state, memory, and multi-agent applications. It works with models from Amazon Bedrock, Anthropic, Ollama, Meta, and other providers via LiteLLM, and runs anywhere. The project is an open community with contributions from Accenture, Anthropic, Langfuse, mem0.ai, Meta, PwC, Ragas.io, Tavily, and more; examples include Anthropic’s API support and Meta’s Llama API integration. Join us on GitHub to get started!

### Our Agent Journey

While working on Amazon Q Developer, a generative AI assistant for software development, my team began building AI agents in early 2023, following the ReAct (Reasoning and Acting) paper, which demonstrated that large language models (LLMs) could reason and take actions, such as making API calls by generating the inputs. Because LLMs at the time were trained for natural language conversation rather than to act as agents, we built complex frameworks of prompt instructions, response parsers, and orchestration logic, often spending months tuning agents for production.
As LLMs dramatically improved at reasoning and tool use, these complex frameworks became bottlenecks that restricted iteration speed and agility. Recognizing this shift, we created Strands Agents to remove orchestration complexity and harness the native reasoning and tool use of modern LLMs. This approach cut development time from months to days or weeks, significantly accelerating production readiness and improving the user experience.

### Core Concepts of Strands Agents

An agent comprises three components: (1) a model, (2) tools, and (3) a prompt. Agents use these components autonomously to complete tasks such as answering questions, writing code, planning, or optimizing portfolios.
The model-driven approach lets the model dynamically direct its own steps and tool usage to accomplish the goal.

- **Model:** Strands supports flexible model choices, including Amazon Bedrock models with tool use and streaming, Anthropic Claude models via the Anthropic API, Llama models via the Llama API, Ollama for local development, OpenAI through LiteLLM, and custom model providers.
- **Tools:** Thousands of Model Context Protocol (MCP) server tools are available, plus 20+ pre-built tools for file manipulation, API calls, and AWS API interaction. Python functions can be easily wrapped as tools using the @tool decorator.
- **Prompt:** Developers provide a natural language prompt defining the task, along with a system prompt giving instructions on agent behavior.

The agent runs an “agentic loop,” interacting with the model and tools until the task is complete. In each iteration, the LLM receives the prompt, agent context, and tool descriptions, and decides whether to respond directly, plan, reflect, or invoke tools. Strands executes the chosen tools and returns their results to the LLM, which ultimately produces the final output.

Tools are the key to customization and complexity: they can fetch documents from knowledge bases, make API calls, run Python code, or simply provide static instructions. Example tools include:

- **Retrieve tool:** Performs semantic search over Amazon Bedrock Knowledge Bases, retrieving relevant documents or even relevant tools. For instance, one internal AWS agent selects from more than 6,000 tools by retrieving a relevant subset to present to the model.
- **Thinking tool:** Enables multi-cycle deep analytical processing and self-reflection.
- **Multi-agent tools:** Workflow, graph, and swarm tools orchestrate multiple agents collaborating on complex tasks. Support for the Agent2Agent (A2A) protocol is forthcoming.
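To make the agentic loop concrete, here is a minimal sketch of the idea in plain Python. This is a conceptual illustration only, not the Strands API: the `tool` registry, `fake_model`, and `run_agent_loop` below are hypothetical stand-ins for what the SDK and a real LLM do internally.

```python
# Conceptual sketch of the agentic loop -- NOT the Strands API.
# `tool`, `fake_model`, and `run_agent_loop` are hypothetical stand-ins.

TOOLS = {}

def tool(fn):
    """Register a plain Python function as a tool (mimics @tool's role)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def fake_model(prompt, context):
    """Stand-in for an LLM: first plans a tool call, then answers."""
    if not context:  # first turn: decide to invoke a tool
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {context[-1]['result']}"}

def run_agent_loop(prompt):
    """Each iteration: the model sees the prompt, context, and tools,
    then either invokes a tool or responds directly."""
    context = []
    while True:
        decision = fake_model(prompt, context)
        if "answer" in decision:          # model responds directly: done
            return decision["answer"]
        fn = TOOLS[decision["tool"]]      # model chose a tool
        result = fn(**decision["args"])   # the SDK executes it...
        # ...and feeds the result back to the model on the next turn
        context.append({"tool": decision["tool"], "result": result})

print(run_agent_loop("What is 2 + 3?"))  # -> The result is 5
```

The real SDK handles the same cycle (prompt in, tool selection, tool execution, result fed back) with an actual model provider in place of `fake_model`.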
### Getting Started with Strands Agents

Here’s a simple example of a naming AI assistant built with Strands, using an Amazon Bedrock model, an MCP server for domain validation, and a pre-built tool to check GitHub organization name availability:

```python
from strands import Agent
from strands.tools.mcp import MCPClient
from strands_tools import http_request
from mcp import stdio_client, StdioServerParameters

NAMING_SYSTEM_PROMPT = """
You are an assistant that helps to name open source projects.
Provide available domain names and GitHub organizations after
validating their availability.
"""

domain_name_tools = MCPClient(lambda: stdio_client(
    StdioServerParameters(command="uvx", args=["fastdomaincheck-mcp-server"])
))
github_tools = [http_request]

with domain_name_tools:
    tools = domain_name_tools.list_tools_sync() + github_tools
    naming_agent = Agent(system_prompt=NAMING_SYSTEM_PROMPT, tools=tools)
    naming_agent("I need to name an open source project for building AI agents.")
```

To run this, set your GitHub token in the `GITHUB_TOKEN` environment variable, enable access to the Anthropic Claude 3.7 Sonnet model in us-west-2, and configure your AWS credentials. Then install the SDK and run the agent:

```
pip install strands-agents strands-agents-tools
python -u agent.py
```

You’ll receive project name suggestions with availability checks. Strands MCP servers also integrate well with AI-assisted development tools such as the Q Developer CLI. For example, add the following to your MCP configuration:

```json
{
  "mcpServers": {
    "strands": {
      "command": "uvx",
      "args": ["strands-agents-mcp-server"]
    }
  }
}
```

### Deploying Strands Agents in Production

Strands is designed with production use in mind and offers flexible deployment architectures. You can run agents locally, behind APIs (on AWS Lambda, Fargate, or EC2), or as distributed systems that separate the agentic loop from the tool execution environment.
For example, tools may run in Lambda while the agent runs in containers, or clients may handle tools locally while communicating with a backend agent. Strands also supports observability and monitoring via OpenTelemetry (OTEL), enabling detailed tracing, metrics, and telemetry for agent sessions across distributed systems.

### Join the Strands Agents Community

Strands Agents is open source under the Apache License 2.0. We invite contributions to add model and tool support, develop new features, or improve the documentation. If you find bugs or have ideas, join us on GitHub and help build the future of AI agents with Strands!
### Brief news summary
Strands Agents is an open-source SDK designed to simplify AI agent development using a model-driven, low-code approach. It supports diverse project complexities and ensures seamless transition from local development to production. Trusted by AWS teams like Amazon Q Developer and AWS Glue, it leverages modern large language models’ native reasoning and tool-use, avoiding complex orchestration. Developers build agents by defining prompts, tools, and models, integrating providers such as Amazon Bedrock, Anthropic, Meta, and Ollama. The SDK connects models with APIs, knowledge retrieval, and Python functions, enabling agents to plan, act, and collaborate in multi-agent workflows. Licensed under Apache 2.0, Strands boasts a growing community including Accenture, Anthropic, Meta, and PwC, offering reference implementations, deployment toolkits, broad architecture support, and OpenTelemetry-based observability. Its Model Context Protocol servers further enhance tooling, speeding up AI agent development. Join the GitHub community today to start building with Strands Agents.