Repository Summary

| Field | Value |
|---|---|
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-18 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing | Help Wanted (-) |
| Good First Issues | (-) |
| Pull Requests to Review | (-) |

Packages

| Name | Version |
|---|---|
| automatika_embodied_agents | 0.5.1 |

README

EmbodiedAgents Logo
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • **Production Ready**: Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • **Self-Referential Logic**: Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (see: Gödel machines). A hypothetical sketch of this pattern follows this list.

  • **Spatio-Temporal Memory**: Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • **Pure Python, Native ROS2**: Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2, fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
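
To give the self-referential point some shape, here is a hypothetical sketch of event-driven reconfiguration. The event class `OnLess` and the `add_event_action` wiring are illustrative assumptions about the event system, not API confirmed by this README; consult the documentation for the real event primitives.

```python
# Hypothetical sketch: OnLess and add_event_action are assumed names,
# shown only to illustrate event-driven reconfiguration.
from agents.ros import Topic, Launcher

battery = Topic(name="battery_level", msg_type="String")

# Assumed event: fires when the published battery level drops below 20.
low_battery = OnLess(battery, trigger_value=20.0)  # hypothetical class

launcher = Launcher()
launcher.add_pkg(components=[vlm])  # vlm defined as in the Quick Start below
# Assumed wiring: on low battery, swap the component to a local model client.
launcher.add_event_action(low_battery, vlm.set_model_client, local_client)  # hypothetical
launcher.bringup()
```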


Quick Start

EmbodiedAgents provides a Pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a Python script (e.g., `agent.py`) to create a VLM-powered agent that can answer questions like “What do you see?”.

```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}""")

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
```

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.
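
Once the recipe is running (and a camera driver is publishing on `image_raw`), you can interact with the agent over plain ROS2 topics. Below is a minimal sketch of an `rclpy` client, assuming the `String` topics above map to `std_msgs/String` (an assumption of this sketch, not something the README states); it publishes a question on `text0` and logs whatever arrives on `text1`.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # assumed wire type for the "String" topics


class AskOnce(Node):
    """Send one question to the agent and log its answer."""

    def __init__(self):
        super().__init__("ask_once")
        self.pub = self.create_publisher(String, "text0", 10)
        self.sub = self.create_subscription(String, "text1", self.on_answer, 10)
        # Give discovery a moment before publishing the question.
        self.timer = self.create_timer(1.0, self.ask)

    def ask(self):
        self.timer.cancel()
        msg = String()
        msg.data = "What do you see?"
        self.pub.publish(msg)

    def on_answer(self, msg):
        self.get_logger().info(f"Agent says: {msg.data}")


def main():
    rclpy.init()
    try:
        rclpy.spin(AskOnce())  # Ctrl-C to exit
    except KeyboardInterrupt:
        pass


if __name__ == "__main__":
    main()
```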


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal and external to the system. A minimal composition sketch follows; check out the code for a full-fledged agent of this kind in the repository.
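
As a small, self-contained taste of composition, the sketch below chains two components of the kind shown in the quickstart: the first answers a visual question, and its output topic becomes the trigger and input of a second component that condenses the answer. It reuses only primitives that appear above (`Topic`, `VLM`, `OllamaClient`, `OllamaModel`, `Launcher`); whether two components may share one model client is an assumption of this sketch.

```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# Topics: question and image in, long answer through, short summary out.
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
answer = Topic(name="answer", msg_type="String")
summary = Topic(name="summary", msg_type="String")

client = OllamaClient(OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest"))

# First component: visual question answering, as in the quickstart.
vqa = VLM(
    inputs=[text0, image0],
    outputs=[answer],
    model_client=client,
    trigger=text0,
    component_name="vqa",
)

# Second component: triggered by the first component's output topic,
# it condenses the answer while looking at the same image.
condenser = VLM(
    inputs=[answer, image0],
    outputs=[summary],
    model_client=client,
    trigger=answer,
    component_name="condenser",
)
condenser.set_topic_prompt(answer, template="Summarize in one sentence: {{ answer }}")

launcher = Launcher()
launcher.add_pkg(components=[vqa, condenser])
launcher.bringup()
```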

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).

  • Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.


Quick Start

EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.

from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}"""
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)
No version for distro hydro showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-02-18
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.1

README

EmbodiedAgents Logo
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).

  • Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.


Quick Start

EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.

from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}"""
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)
No version for distro kinetic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-02-18
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.1

README

EmbodiedAgents Logo
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).

  • Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.


Quick Start

EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.

from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}"""
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)
No version for distro melodic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-02-18
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.1

README

EmbodiedAgents Logo
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).

  • Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.


Quick Start

EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.

from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}"""
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)
No version for distro noetic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-02-18
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.1

README

EmbodiedAgents Logo
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![ROS2](https://img.shields.io/badge/ROS2-Humble%2B-green)](https://docs.ros.org/en/humble/index.html) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.gg/B9ZU6qjzND) [![简体中文](https://img.shields.io/badge/简体中文-gray.svg)](docs/README.zh.md) [![日本語](https://img.shields.io/badge/日本語-gray.svg)](docs/README.ja.md) **The production-grade framework for deploying Physical AI** [**Installation**](#installation) | [**Quick Start**](#quick-start) | [**Documentation**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)

Overview

EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Unlike standard chatbots, this framework provides an orchestration layer for Adaptive Intelligence designed specifically for autonomous systems in dynamic environments.

Core Features

  • Production Ready Designed for real-world deployment. Provides a robust orchestration layer that makes deploying Physical AI simple, scalable, and reliable.

  • Self-Referential Logic Create agents that are self-aware. Agents can start, stop, or reconfigure their components based on internal or external events. Trivially switch planners based on location, or toggle between cloud and local ML (See: Gödel machines).

  • Spatio-Temporal Memory Utilize embodiment primitives like hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow. No need to use bloated “GenAI” frameworks on your robot.

  • Pure Python, Native ROS2 Define complex asynchronous graphs in standard Python without touching XML launch files. Under the hood, it is pure ROS2—fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.


Quick Start

EmbodiedAgents provides a pythonic way to describe node graphs using Sugarcoat.

Copy the following recipe into a python script (e.g., agent.py) to create a VLM-powered agent that can answer questions like “What do you see?”.

from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# 1. Define input and output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# 2. Define a model client (e.g., Qwen via Ollama)
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# 3. Define a VLM component
# A component represents a node with specific functionality
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa"
)

# 4. Set the prompt template
vlm.set_topic_prompt(text0, template="""You are an amazing and funny robot.
    Answer the following about this image: {{ text0 }}"""
)

# 5. Launch the agent
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()

Note: Check out the Quick Start Guide or dive into Example Recipes for more details.


Complex Component Graphs

The quickstart example above is just an amuse-bouche of what is possible with EmbodiedAgents. We can create arbitrarily sophisticated component graphs and configure the system to change or reconfigure itself based on events, both internal or external to the system. Check out the code for the following agent here.

File truncated at 100 lines [see the full file](https://github.com/automatika-robotics/ros-agents/tree/main/README.md)