automatika_embodied_agents package from the automatika_embodied_agents repo
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing | Help Wanted (-) Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem. [MIT License](https://opensource.org/licenses/MIT) · [Python](https://www.python.org/downloads/) · [ROS 2 Humble](https://docs.ros.org/en/humble/index.html) · [Discord](https://discord.gg/B9ZU6qjzND)

**The production-grade framework for deploying Physical AI**

[**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
- **Production Ready** – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
- **Self-Referential Logic** – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
- **Run Fully Offline** – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
- **Spatio-Temporal Memory** – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
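The semantic-routing idea mentioned above can be sketched in plain Python: score an incoming query against each route's keyword set and dispatch to the route with the best match. Everything here (names, the naive keyword scoring) is an illustrative toy, not the EmbodiedAgents API:

```python
# Toy semantic router: pick the route whose keyword set overlaps
# the query the most. Illustrative only, not the EmbodiedAgents API.

def route(query, routes):
    """Return the name of the best-matching route for a query."""
    words = set(query.lower().split())
    scores = {name: len(words & keywords) for name, keywords in routes.items()}
    return max(scores, key=scores.get)

routes = {
    "vision": {"see", "look", "image", "detect"},
    "navigation": {"go", "move", "navigate", "drive"},
}

print(route("go to the kitchen", routes))  # overlaps "go": routes to navigation
```

A production router would use embedding similarity rather than keyword overlap, but the control flow (score every route, forward to the winner) is the same.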
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
```python
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher

# Define input/output topics
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# Wrap the model checkpoint in an Ollama client
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)

# The VLM component answers text0 queries about image_raw frames
vlm = VLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=qwen_client,
    trigger=text0,
    component_name="vqa",
)

launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
```
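The wiring above follows a simple pattern: a component caches the latest message on each input topic but only runs when its trigger topic fires. A framework-agnostic sketch of that dispatch loop (hypothetical names, not the actual `agents` internals):

```python
# Framework-agnostic sketch of trigger-based dispatch.
# Hypothetical names: this is NOT the internal `agents` implementation.

class Topic:
    def __init__(self, name, msg_type):
        self.name = name
        self.msg_type = msg_type

class Component:
    """Caches the latest message per input topic; runs only when the trigger fires."""
    def __init__(self, name, inputs, trigger, handler):
        self.name = name
        self.cache = {t.name: None for t in inputs}
        self.trigger = trigger.name
        self.handler = handler

def publish(components, topic, msg):
    """Deliver msg to subscribed components; run handlers triggered by this topic."""
    results = []
    for comp in components:
        if topic.name in comp.cache:
            comp.cache[topic.name] = msg       # always update the cache
            if topic.name == comp.trigger:     # only the trigger runs the handler
                results.append(comp.handler(comp.cache))
    return results

text0 = Topic("text0", "String")
image0 = Topic("image_raw", "Image")

vqa = Component(
    "vqa",
    inputs=[text0, image0],
    trigger=text0,  # new images alone never fire inference
    handler=lambda cache: f"answering {cache['text0']!r} about {cache['image_raw']!r}",
)

publish([vqa], image0, "<frame>")            # cached only, no inference
print(publish([vqa], text0, "what is it?"))  # trigger fires the handler
```

This is why `trigger=text0` matters in the Quick Start: the camera can stream continuously without invoking the model, and inference happens only when a question arrives.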
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set `enable_local_model=True`:
```python
from agents.components import LLM
from agents.config import LLMConfig
from agents.ros import Topic, Launcher

config = LLMConfig(
    enable_local_model=True,
    device_local_model="cpu",  # or "cuda"
    ncpu_local_model=4,
)

llm = LLM(
    inputs=[Topic(name="user_query", msg_type="String")],
    outputs=[Topic(name="response", msg_type="String")],
    config=config,
    trigger=Topic(name="user_query", msg_type="String"),
    component_name="local_brain",
)
```
*File truncated at 100 lines; see the full file.*
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm component
  - Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feature) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feature) Adds information about perception layers in the memory component's inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
*File truncated at 100 lines; see the full file.*
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar | |
System Dependencies
Dependent Packages
Launch files
Messages
Services
Plugins
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Dependency |
|---|
| ament_cmake |
| ament_cmake_python |
| rosidl_default_generators |
| rosidl_default_runtime |
| builtin_interfaces |
| std_msgs |
| sensor_msgs |
| automatika_ros_sugar |
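The table above maps naturally onto a ROS 2 package manifest. The following is a hypothetical sketch of how these dependencies could be declared in the package's `package.xml` (format 3, per REP 149); it is illustrative only, and the actual manifest in the repository may differ:

```xml
<?xml version="1.0"?>
<!-- Hypothetical excerpt, not the actual manifest of automatika_embodied_agents. -->
<package format="3">
  <name>automatika_embodied_agents</name>

  <!-- Build-system tools (ament_cmake build type with Python modules) -->
  <buildtool_depend>ament_cmake</buildtool_depend>
  <buildtool_depend>ament_cmake_python</buildtool_depend>

  <!-- Interface generation: generators at build time, runtime support at exec time -->
  <build_depend>rosidl_default_generators</build_depend>
  <exec_depend>rosidl_default_runtime</exec_depend>
  <member_of_group>rosidl_interface_packages</member_of_group>

  <!-- Message packages and the Sugarcoat base library used at build and run time -->
  <depend>builtin_interfaces</depend>
  <depend>std_msgs</depend>
  <depend>sensor_msgs</depend>
  <depend>automatika_ros_sugar</depend>
</package>
```

The split between `build_depend` on `rosidl_default_generators` and `exec_depend` on `rosidl_default_runtime` is the standard pattern for packages that define their own messages or services.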
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines; see the full file for the complete changelog.
Package Dependencies
| Dependency |
|---|
| ament_cmake |
| ament_cmake_python |
| rosidl_default_generators |
| rosidl_default_runtime |
| builtin_interfaces |
| std_msgs |
| sensor_msgs |
| automatika_ros_sugar |
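
For orientation, the dependencies listed above would typically be declared in the package's `package.xml` manifest. The sketch below is a hypothetical excerpt, not the actual manifest: the package and dependency names come from the table, but the choice of dependency tags (`buildtool_depend` vs. `depend` vs. `exec_depend`) is an assumption based on the standard roles these packages usually play in an `ament_cmake` build.

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of the dependency section; tag placement is assumed,
     only the names are taken from the dependency table above. -->
<package format="3">
  <name>automatika_embodied_agents</name>

  <!-- Build tooling (CMake + Python build support, interface generation) -->
  <buildtool_depend>ament_cmake</buildtool_depend>
  <buildtool_depend>ament_cmake_python</buildtool_depend>
  <buildtool_depend>rosidl_default_generators</buildtool_depend>

  <!-- Needed at runtime to use the generated interfaces -->
  <exec_depend>rosidl_default_runtime</exec_depend>

  <!-- Message packages and the Sugarcoat base, used at build and run time -->
  <depend>builtin_interfaces</depend>
  <depend>std_msgs</depend>
  <depend>sensor_msgs</depend>
  <depend>automatika_ros_sugar</depend>
</package>
```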
System Dependencies
Dependent Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines see the full file
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged automatika_embodied_agents at Robotics Stack Exchange
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
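Two of the 0.7.0 entries above describe async execution of action clients, with monitoring and goal cancellation driven from the main action loop. As a rough illustration of that monitor-and-cancel pattern (simulated goals and hypothetical names, not the actual EmbodiedAgents API), the idea can be sketched with plain asyncio:

```python
import asyncio


async def run_goal(name: str, duration: float) -> str:
    """Simulated action goal; a stand-in for a long-running ROS 2 action call."""
    await asyncio.sleep(duration)
    return f"{name}: done"


async def monitor(goals, timeout: float):
    """Run goals concurrently and cancel any still pending at the deadline."""
    tasks = [asyncio.create_task(run_goal(n, d)) for n, d in goals]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for task in pending:
        task.cancel()  # cancel goals that exceeded the deadline
    return sorted(t.result() for t in done)
```

Here a single `await asyncio.wait(...)` plays the role of the main loop: completed goals are harvested, stragglers are cancelled rather than blocking the component.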
(Changelog truncated at 100 lines; see the full file.)
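The 0.7.0 changelog above also adds a safe-restart context manager to the base component. As a minimal sketch of the pattern (the class and method names here are hypothetical, not the actual EmbodiedAgents API), such a manager stops a component, lets the caller reconfigure it, and guarantees a restart even if reconfiguration fails:

```python
from contextlib import contextmanager


class Component:
    """Toy stand-in for a lifecycle-managed component (hypothetical)."""

    def __init__(self):
        self.running = False

    def stop(self):
        self.running = False

    def start(self):
        self.running = True


@contextmanager
def safe_restart(component):
    """Stop a component, yield it for reconfiguration, then restart it.

    The restart happens in a finally block, so a failed reconfiguration
    never leaves the component stopped.
    """
    component.stop()
    try:
        yield component
    finally:
        component.start()
```

The `finally` clause is what makes the restart "safe": an exception raised during reconfiguration still propagates to the caller, but the component comes back up regardless.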
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar | |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
|
automatika_embodied_agents package from automatika_embodied_agents repoautomatika_embodied_agents |
ROS Distro
|
Package Summary
| Version | 0.7.3 |
| License | MIT |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/automatika-robotics/ros-agents.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-08 |
| Dev Status | DEVELOPED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Automatika Robotics
Authors
Part of the [EMOS](https://github.com/automatika-robotics/emos) ecosystem [](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/) [](https://docs.ros.org/en/humble/index.html) [](https://discord.gg/B9ZU6qjzND) **The production-grade framework for deploying Physical AI** [**EMOS Documentation**](https://emos.automatikarobotics.com) | [**Developer Docs**](https://automatika-robotics.github.io/embodied-agents/) | [**Discord**](https://discord.gg/B9ZU6qjzND)
What is EmbodiedAgents?
EmbodiedAgents is the intelligence layer of the EMOS (Embodied Operating System) ecosystem. It enables you to create interactive, physical agents that don’t just chat, but understand, move, manipulate, and adapt to their environment.
For full documentation, tutorials, and recipes, visit emos.automatikarobotics.com.
Key Features
-
Production Ready – Robust orchestration layer built on native ROS 2. Deploy Physical AI that is simple, scalable, and reliable.
-
Self-Referential Logic – Agents that are self-aware. Start, stop, or reconfigure components based on internal or external events. Switch between cloud and local ML on the fly.
-
Run Fully Offline – Built-in local models for LLM, VLM, STT, and TTS. No server required. Optimized for edge devices and NVIDIA Jetson.
-
Spatio-Temporal Memory – Hierarchical spatio-temporal memory and semantic routing. Build arbitrarily complex graphs for agentic information flow.
Quick Start
Create a VLM-powered agent that can answer questions about what it sees:
from agents.clients.ollama import OllamaClient
from agents.components import VLM
from agents.models import OllamaModel
from agents.ros import Topic, Launcher
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")
qwen_vl = OllamaModel(name="qwen_vl", checkpoint="qwen2.5vl:latest")
qwen_client = OllamaClient(qwen_vl)
vlm = VLM(
inputs=[text0, image0],
outputs=[text1],
model_client=qwen_client,
trigger=text0,
component_name="vqa"
)
launcher = Launcher()
launcher.add_pkg(components=[vlm])
launcher.bringup()
Run Fully Offline
Every AI component can run with a built-in local model – no server, no cloud, no heavy frameworks. Just set enable_local_model=True:
```python from agents.components import LLM from agents.config import LLMConfig from agents.ros import Topic, Launcher
config = LLMConfig( enable_local_model=True, device_local_model=”cpu”, # or “cuda” ncpu_local_model=4, )
llm = LLM( inputs=[Topic(name=”user_query”, msg_type=”String”)], outputs=[Topic(name=”response”, msg_type=”String”)], config=config, trigger=Topic(name=”user_query”, msg_type=”String”), component_name=”local_brain”, )
File truncated at 100 lines see the full file
Changelog for package automatika_embodied_agents
0.7.3 (2026-05-08)
- (fix) Fixes the helper method typo
- (fix) Fixes validate topics check to include comparing the associated ROS2 message type, resolves #30
- (fix) Runs callback disambiguation on layers topics using Sugarcoat reparse_inputs_callbacks, resolves #31
- Contributors: ahr, mkabtoul
0.7.2 (2026-05-06)
- (chore) Updates complete agent recipes
- (chore) Updates tool calling and go to x recipes to use memory
- (feature) Enables passing component actions as tools to the llm
component
- Executed through service calls
- (refactor) Makes response handling from component action execution a utility
- (chore) Updates go to x recipe to use the memory component
- (fix) Fixes registering memory tools with llms before lifecycle configuration
- (chore) Updates tool calling recipe to use the memory component
- (chore) Updates semantic map recipes to use the memory component
- Contributors: ahr
0.7.1 (2026-05-02)
- (chore) Bumps minimum sugarcoat version
- (chore) Adds test for memory component
- (chore) Adds an example recipe for using cortex with memory
- (chore) Adds both component level and launcher level failure recovery in multiprocessing example
- (chore) Updates docstrings. Adds deprecation for MapEncoding component in favour of memory component
- (feature) Augments memory prompt for interoception tools
- (feature) Adds mem layer alias for map layers with internal state flag for interoception topics
- (feature) Adds memory specific planning prompt augmentation
- (feature) Adds better failure handling in cortex for task summary
- (feature) Adds serialization, deserialization of clients for memory component
- (fix) Fixes init/deinit of ollama based embedding models
- (feature) Adds classification in memory component actions for planning and execution actions
- (feature) Updates cortex to utilize executed component action outputs and replan if the plan has not fully executed
- (feature) Overrides component action decorator to differentiate between planning and execution actions in agents
- (feautre) Changes component actions and fallbacks to raise errors and return meaningful responses on happy path
- (feature) Adds a store specific memory method for cortex to write arbitrary runtime memories
- (feautre) Adds information about perception layers in the memory components inspect method
- (feature) Adds memory component using emem that exposes basic memory management tools as component actions
- (chore) Removes safe_restart from base component as it has been upstreamed in sugarcoat
- Contributors: ahr
0.7.0 (2026-04-11)
- (feature) Adds an optional output topic in cortex for capturing outputs when an action is not needed
- (docs) Updates docs for component actions setup
- (chore) Bumps up required sugarcoat version
- (fix) Fixes model class type check in roboml client
- (feature) Adds Action goal final status to Cortex feedback lines
- (fix) Fixes converter for detections message to handle empty detections
- (feature) Adds tracking action to vision component
- (fix) Fixes publishing to multiple output topics in model component
- (feature) Adds a safe restart context manager to base component
- (fix) Fixes sender for tracking based on new roboml api
- (chore) Adds CI workflow to run tests
- (feature) Adds a describe action to the vlm component
- (fix) Standardizes param name in POI publishing
- (fix) Increases default max token limits
- (fix) Fixes monitoring ongoing actions and feedback status update for step decision
- (fix) Fixes return value when sending action goal to component
- (feature) Adds async execution of action clients and monitoring using the main action loop
- (feature) Adds helper methods to monitor ongoing action clients and cancel their goals
- (fix) Adds fixes for python3.8 compatibility
- (refactor) Updates examples for new roboml api
- (feature) Enables registering component additional ROS entrypoints as system tools
- (feature) Updates model definitions and default checkpoints based on new version of RoboML
- (fix) Removes unnecessary config validator for stt
- (fix) Fixes empty buffer inference call in stt
- (refactor) Simplifies text to speech playback pipeline for less jitter
- (feature) Adds internal events setup to cortex launch
File truncated at 100 lines; see the full file.
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| ament_cmake_python | |
| rosidl_default_generators | |
| rosidl_default_runtime | |
| builtin_interfaces | |
| std_msgs | |
| sensor_msgs | |
| automatika_ros_sugar | |
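The dependencies listed above, together with the AMENT_CMAKE build type from the package summary, suggest roughly the following `package.xml` manifest. This is a hedged sketch, not the package's actual manifest: the split between build-tool, build, and exec dependencies (and the `rosidl_interface_packages` group membership) is assumed from common ROS 2 interface-package conventions.

```xml
<?xml version="1.0"?>
<package format="3">
  <name>automatika_embodied_agents</name>
  <version>0.7.3</version>
  <license>MIT</license>

  <!-- Build tools (assumed buildtool_depend, per AMENT_CMAKE build type) -->
  <buildtool_depend>ament_cmake</buildtool_depend>
  <buildtool_depend>ament_cmake_python</buildtool_depend>

  <!-- Message generation: generators at build time, runtime support at exec time
       (assumed split, following the usual rosidl pattern) -->
  <build_depend>rosidl_default_generators</build_depend>
  <exec_depend>rosidl_default_runtime</exec_depend>

  <!-- Interface and runtime dependencies from the table above -->
  <depend>builtin_interfaces</depend>
  <depend>std_msgs</depend>
  <depend>sensor_msgs</depend>
  <depend>automatika_ros_sugar</depend>

  <!-- Assumed: packages that generate interfaces declare this group -->
  <member_of_group>rosidl_interface_packages</member_of_group>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>
```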
System Dependencies
Dependent Packages
Launch files
Messages
Services
Plugins