Repository Summary

Checkout URI: https://github.com/automatika-robotics/ros-agents.git
VCS Type: git
VCS Version: main
Last Updated: 2026-01-19
Dev Status: DEVELOPED
Released: RELEASED
Contributing: Help Wanted (-)
Good First Issues: (-)
Pull Requests to Review: (-)

Packages

Name                        Version
automatika_embodied_agents  0.5.0

README

[EmbodiedAgents logo]


🇨🇳 简体中文 (Simplified Chinese) · 🇯🇵 日本語 (Japanese)

EmbodiedAgents is a production-grade framework, built on top of ROS2, for deploying Physical AI on real-world robots. It enables you to create interactive physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed for autonomous robot systems that operate in dynamic, real-world environments. EmbodiedAgents makes it simple to create systems that use Physical AI, and it provides an orchestration layer for Adaptive Intelligence.
  • Self-Referential and Event-Driven: An agent created with EmbodiedAgents can start, stop, or reconfigure its own components based on internal and external events. For example, an agent can change the ML model used for planning based on its location on the map or on input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing, and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to run bloated "GenAI" frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Underneath, it is pure ROS2, compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Check out the Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML, and any platform or cloud provider with an OpenAI-compatible API (e.g. vLLM, lmdeploy, etc.). For VLA models, EmbodiedAgents supports policies served on the Async Inference server from LeRobot. Please install one of these platforms by following the instructions provided by the respective project. Support for new platforms is being continuously added; if you would like a particular platform to be supported, please open an issue/PR.
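Once a serving platform is running, it can be worth sanity-checking it before wiring up any components. Below is a minimal sketch using httpx (already part of the Python dependencies listed later), assuming a default local Ollama install on port 11434; adjust the host, port, and endpoint for other platforms.

```python
# Hedged sketch: confirm a local Ollama server is reachable before building an agent.
# Assumes Ollama's default endpoint at http://localhost:11434.
import httpx

resp = httpx.get("http://localhost:11434/api/tags", timeout=5.0)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is up; available models:", models)
```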

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatika-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'
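To confirm which attrs version is actually active in your Python environment, a quick standard-library check:

```python
# Print the installed attrs version; EmbodiedAgents requires >= 23.2.0.
from importlib.metadata import version

print(version("attrs"))
```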

Install EmbodiedAgents from source

Get Dependencies

Install Python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

# Assumed layout: run this from the src/ directory of a colcon workspace.
git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

# Assumed layout: still inside the workspace src/ directory; `cd ..` returns to the workspace root.
git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py
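After sourcing the workspace, you can verify that Python resolves the package before running a recipe. The module name `agents` is taken from the Quick Start imports below:

```python
# Check that the `agents` package is importable after `source install/setup.bash`.
import importlib.util

spec = importlib.util.find_spec("agents")
print("agents resolved to:", spec.origin if spec else "NOT FOUND; re-source the workspace")
```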

Quick Start 🚀

Unlike other ROS packages, EmbodiedAgents provides a purely Pythonic way of describing the node graph, using Sugarcoat🍬. Copy the following recipe into a Python script and run it.

```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# Define input and output topics (pay attention to msg_type)
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# Define a model client (working with Ollama in this case)
# OllamaModel is a generic wrapper for all Ollama models
llava = OllamaModel(name="llava", checkpoint="llava:latest")
llava_client = OllamaClient(llava)

# Define a VLM component (a component represents a node with a particular functionality)
mllm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=llava_client,
    trigger=text0,
    component_name="vqa",
)
```

File truncated at 100 lines; see the full file in the repository.
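The recipe above is cut off before it uses the Launcher it imports. As a hedged completion sketch (inferred from the `Launcher` import and Sugarcoat's usual launch pattern; consult the full README for the exact calls), the component would be brought up roughly like this:

```python
# Hedged sketch: bring up the VLM component defined above as a ROS2 node.
# The exact Launcher API may differ; this only follows the import in the truncated recipe.
launcher = Launcher()
launcher.add_pkg(components=[mllm])
launcher.bringup()
```

Once running, the agent listens on the text0 and image_raw topics and publishes answers on text1, as configured in the recipe.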

  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file

No version for distro indigo showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-01-19
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.0

README

EmbodiedAgents Logo.


🇨🇳 简体中文 🇯🇵 日本語

EmbodiedAgents is a production-grade framework, built on top of ROS2, designed to deploy Physical AI on real world robots. It enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed to be used with autonomous robot systems that operate in real world dynamic environments. EmbodiedAgents makes it simple to create systems that make use of Physical AI. It provides an orchestration layer for Adaptive Intelligence.
  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file

No version for distro hydro showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-01-19
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.0

README

EmbodiedAgents Logo.


🇨🇳 简体中文 🇯🇵 日本語

EmbodiedAgents is a production-grade framework, built on top of ROS2, designed to deploy Physical AI on real world robots. It enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed to be used with autonomous robot systems that operate in real world dynamic environments. EmbodiedAgents makes it simple to create systems that make use of Physical AI. It provides an orchestration layer for Adaptive Intelligence.
  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file

No version for distro kinetic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-01-19
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.0

README

EmbodiedAgents Logo.


🇨🇳 简体中文 🇯🇵 日本語

EmbodiedAgents is a production-grade framework, built on top of ROS2, designed to deploy Physical AI on real world robots. It enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed to be used with autonomous robot systems that operate in real world dynamic environments. EmbodiedAgents makes it simple to create systems that make use of Physical AI. It provides an orchestration layer for Adaptive Intelligence.
  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file

No version for distro melodic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-01-19
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.0

README

EmbodiedAgents Logo.


🇨🇳 简体中文 🇯🇵 日本語

EmbodiedAgents is a production-grade framework, built on top of ROS2, designed to deploy Physical AI on real world robots. It enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed to be used with autonomous robot systems that operate in real world dynamic environments. EmbodiedAgents makes it simple to create systems that make use of Physical AI. It provides an orchestration layer for Adaptive Intelligence.
  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file

No version for distro noetic showing humble. Known supported distros are highlighted in the buttons above.

Repository Summary

Checkout URI https://github.com/automatika-robotics/ros-agents.git
VCS Type git
VCS Version main
Last Updated 2026-01-19
Dev Status DEVELOPED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
automatika_embodied_agents 0.5.0

README

EmbodiedAgents Logo.


🇨🇳 简体中文 🇯🇵 日本語

EmbodiedAgents is a production-grade framework, built on top of ROS2, designed to deploy Physical AI on real world robots. It enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

  • Production Ready Physical Agents: Designed to be used with autonomous robot systems that operate in real world dynamic environments. EmbodiedAgents makes it simple to create systems that make use of Physical AI. It provides an orchestration layer for Adaptive Intelligence.
  • Self-referential and Event Driven: An agent created with EmbodiedAgents can start, stop or reconfigure its own components based on internal and external events. For example, an agent can change the ML model for planning based on its location on the map or input from the vision model. EmbodiedAgents makes it simple to create agents that are self-referential Gödel machines.
  • Semantic Memory: Integrates vector databases, semantic routing and other supporting components to quickly build arbitrarily complex graphs for agentic information flow. No need to utilize bloated “GenAI” frameworks on your robot.
  • Pure Python, Native ROS2: Define complex asynchronous graphs in standard Python without touching XML launch files. Yet, underneath, it is pure ROS2 compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.

Join our Discord 👾

Checkout Installation Instructions 🛠️

Get started with the Quickstart Guide 🚀

Get familiar with Basic Concepts 📚

Dive right in with Example Recipes

Installation 🛠️

Install a model serving platform

The core of EmbodiedAgents is agnostic to model serving platforms. It supports Ollama, RoboML and all platforms or cloud provider with an OpenAI compatible API (e.g. vLLM, lmdeploy etc.). For VLA models EmbodiedAgents supports policies severed on the Async Inference server from LeRobot. Please install either of these by following the instructions provided by respective projects. Support for new platforms is being continuously added. If you would like to support a particular platform, please open an issue/PR.

Install EmbodiedAgents (Ubuntu)

For ROS versions >= humble, you can install EmbodiedAgents with your package manager. For example on Ubuntu:

sudo apt install ros-$ROS_DISTRO-automatika-embodied-agents

Alternatively, grab your favorite deb package from the release page and install it as follows:

sudo dpkg -i ros-$ROS_DISTRO-automatica-embodied-agents_$version$DISTRO_$ARCHITECTURE.deb

If the attrs version from your package manager is < 23.2, install it using pip as follows:

pip install 'attrs>=23.2.0'

Install EmbodiedAgents from source

Get Dependencies

Install python dependencies

pip install numpy opencv-python-headless 'attrs>=23.2.0' jinja2 httpx setproctitle msgpack msgpack-numpy platformdirs tqdm websockets

Download Sugarcoat🍬

git clone https://github.com/automatika-robotics/sugarcoat

Install EmbodiedAgents

git clone https://github.com/automatika-robotics/embodied-agents.git
cd ..
colcon build
source install/setup.bash
python your_script.py

Quick Start 🚀

Unlike other ROS package, EmbodiedAgents provides a pure pythonic way of describing the node graph using Sugarcoat🍬. Copy the following recipe in a python script and run it.

```python from agents.clients.ollama import OllamaClient from agents.components import VLM from agents.models import OllamaModel from agents.ros import Topic, Launcher

Define input and output topics (pay attention to msg_type)

text0 = Topic(name=”text0”, msg_type=”String”) image0 = Topic(name=”image_raw”, msg_type=”Image”) text1 = Topic(name=”text1”, msg_type=”String”)

Define a model client (working with Ollama in this case)

OllamaModel is a generic wrapper for all Ollama models

llava = OllamaModel(name=”llava”, checkpoint=”llava:latest”) llava_client = OllamaClient(llava)

Define a VLM component (A component represents a node with a particular functionality)

mllm = VLM( inputs=[text0, image0], outputs=[text1], model_client=llava_client, trigger=text0, component_name=”vqa” )

File truncated at 100 lines see the full file