Repository Summary
|   |   |
| --- | --- |
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[MIT License](https://opensource.org/licenses/MIT) [Python](https://www.python.org/downloads/) [ROS 2 Humble](https://docs.ros.org/en/humble/index.html) [Discord](https://discord.gg/B9ZU6qjzND) [Chinese README](docs/README.zh.md) [Japanese README](docs/README.ja.md)

**The production-grade framework for deploying Physical AI**

[**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
- **Production Ready**: Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
- **Self-Referential Logic**: Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (see: Gödel machines).
- **Spatio-Temporal Memory**: Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow, with no need for bloated "GenAI" frameworks on your robot.
- **Pure Python, Native ROS2**: Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2, fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a Pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a Python script (e.g., `agent.py`) to create a VLM-powered agent that can answer questions like "What do you see?".
```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa",
)

# 4. Set the prompt template for the trigger topic
vlm.set_topic_prompt(
    text0,
    template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}""",
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
```
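As a rough sketch of how you might exercise the recipe (not part of the original instructions): assuming a ROS2 workspace is sourced, the `automatika_embodied_agents` package is installed, and a local Ollama server is running, the standard ROS2 CLI can drive the topics defined above.

```bash
# Pull the model referenced in the recipe (assumes a local Ollama install)
ollama pull qwen2.5vl:latest

# Terminal 1: run the agent recipe
python3 agent.py

# Terminal 2: watch the component's output topic
ros2 topic echo /text1

# Terminal 3: publish a question on the trigger topic (an image stream must be
# arriving on /image_raw, e.g. from a camera driver or a simulator)
ros2 topic pub --once /text0 std_msgs/msg/String '{data: "What do you see?"}'
```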
**Note:** Check out the Quick Start Guide or dive into the Example Recipes in the documentation for more details.
Complex Component Graphs
The quick-start example above is just an amuse-bouche of what is possible with EmbodiedAgents. You can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, whether internal or external to the system. The full code for such an agent is available in the example recipes.
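For a feel of how such graphs are written, here is a minimal, hypothetical sketch that chains two components by wiring the output topic of the first into the trigger of the second, using only the classes from the quick-start above. It illustrates the wiring pattern, not the agent referenced in the recipe; dedicated component types, events, and reconfiguration actions are covered in the documentation.

```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# Topics: a question arrives on text0, the first component's answer flows over
# text1, and the second component's follow-up report goes out on text2.
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
text2 = Topic(name="text2", msg_type="String")

# One model client, shared by both components in this sketch
# (create one per component if your setup requires it)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# Component 1: answers the incoming question about the camera image
vqa = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa",
)
vqa.set_topic_prompt(text0, template="Answer the following about this image: {{ text0 }}")

# Component 2: consumes the first component's answer (text1) together with the
# same image and produces a follow-up report on text2. Wiring an output topic of
# one component into the trigger of another is what builds the graph.
reporter = VLM(
    inputs=[text1, image0],
    outputs=[text2],
    model_client=qwen_client,
    trigger=text1,
    component_name="reporter",
)
reporter.set_topic_prompt(
    text1,
    template="Given this image and the note '{{ text1 }}', point out anything a robot should be careful about.",
)

# Both components are brought up by the same launcher
launcher = Launcher()
launcher.add_pkg(components=[vqa, reporter])
launcher.bringup()
```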
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.
CONTRIBUTING
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |
README
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) [](docs/README.zh.md) [](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
Overview
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.
Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.
Core Features
-
Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.
-
Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).
-
Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.
-
Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
Quick Start
EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.
Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
Answer the following about this image: {{ text0 }}"""
)
# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
Complex Component Graphs
The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.