Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
- OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
- Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
- Lightweight: The node core requires only a small set of Python libraries (`requests`, `rich`, `prompt_toolkit`).
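The dynamic tool system is described as loading plain Python functions from user-provided files. A minimal sketch of what such a tool file could look like; the function name, signature, and return shape here are hypothetical, so check the package documentation for the actual loading convention:

```python
# Hypothetical tool file for the bob_llm dynamic tool system.
# The convention assumed here (module-level functions with docstrings
# and type hints describing the tool) is an illustration only.

def get_battery_level(robot_id: str = "base") -> dict:
    """Return the current battery level for the given robot."""
    # A real tool would query hardware or a ROS topic; this is a stub.
    return {"robot_id": robot_id, "level_percent": 87}
```

The LLM would then be able to request a call such as `get_battery_level({"robot_id": "base"})` and receive the returned dictionary as a tool result.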
Installation
- Clone the Repository
  Navigate to your ROS 2 workspace's `src` directory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies
  The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
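A hedged sketch of what `node_params.yaml` might contain is shown below. Only `api_url` is mentioned in this document; the node name and the other keys are assumptions, so consult the shipped `config/node_params.yaml` for the real parameter names:

```yaml
llm:
  ros__parameters:
    # Mentioned above: endpoint of the OpenAI-compatible backend.
    api_url: "http://localhost:11434/v1/chat/completions"
    # Assumed keys; verify against the shipped config file.
    model: "llama3"
    stream: true
```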
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: What can you tell me about this system?
[*] SKILL: list_nodes({})
LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
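Per the changelog, JSON dictionaries without a role are wrapped as user messages. A sketch of that parsing logic (the helper name is hypothetical; it only mirrors the behavior described, not the node's actual code):

```python
import json


def to_message(raw: str) -> dict:
    """Parse an incoming prompt string into a chat message dict.

    JSON dicts that already carry a 'role' pass through unchanged;
    role-less dicts and plain text are wrapped as user messages.
    """
    try:
        obj = json.loads(raw)
    except (json.JSONDecodeError, ValueError):
        obj = None
    if isinstance(obj, dict):
        if "role" in obj:
            return obj
        # Role-less dicts are wrapped as user messages.
        return {"role": "user", **obj}
    return {"role": "user", "content": raw}
```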
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash
ros2 topic pub /llm_prompt std_msgs/msg/String \
  "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```
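With `process_image_urls` enabled, a local image ends up base64-encoded in the request. A sketch of that transformation; the exact payload shape bob_llm emits is an assumption, this simply follows the common OpenAI-style `image_url` content part:

```python
import base64
from pathlib import Path


def inline_image(path: str) -> dict:
    """Base64-encode a local image into an OpenAI-style content part.

    Illustrates what process_image_urls is described to do for file://
    URLs; the concrete structure used by bob_llm may differ.
    """
    data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{data}"},
    }
```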
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional `eof` parameter to signal the end of a stream on `llm_stream`
- Added `tool_choice` parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS 2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Name |
|---|
| ament_cmake |
| rclpy |
| ament_lint_auto |
| ament_lint_common |
| std_msgs |
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: What can you tell me about this system?
[*] SKILL: list_nodes({})
LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
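The input handling described above can be sketched roughly as follows. This is a simplified illustration, not the node's actual code; the function name and structure are assumptions based on the behavior documented here and in the 1.0.3 changelog (JSON without a role is wrapped as a user message):

```python
import json


def parse_prompt(data: str) -> dict:
    """Sketch of the documented prompt handling: valid JSON with a
    'role' field is used as a message object as-is; JSON without a
    'role' is wrapped as a user message; anything else is treated
    as plain text from the user."""
    try:
        obj = json.loads(data)
    except (json.JSONDecodeError, TypeError):
        # Not JSON: treat the raw string as a plain user prompt
        return {"role": "user", "content": data}
    if isinstance(obj, dict) and "role" in obj:
        return obj  # already a standard message object
    return {"role": "user", "content": obj}  # wrap role-less JSON
```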
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash
ros2 topic pub /llm_prompt std_msgs/msg/String \
  "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```
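The helper's behavior can be approximated as follows. This is an illustrative sketch, not the node's implementation; the function name is an assumption, and the data-URL output format shown is the common OpenAI convention for embedded images:

```python
import base64
import mimetypes
from urllib.parse import urlparse
from urllib.request import urlopen


def encode_image_url(url: str) -> str:
    """Fetch an image from a file:// or http(s):// URL and return it
    as a base64 data URL (sketch of the process_image_urls behavior)."""
    mime = mimetypes.guess_type(url)[0] or "image/jpeg"
    parsed = urlparse(url)
    if parsed.scheme == "file":
        with open(parsed.path, "rb") as f:
            raw = f.read()
    else:
        with urlopen(url) as resp:  # http:// or https://
            raw = resp.read()
    return f"data:{mime};base64,{base64.b64encode(raw).decode()}"
```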
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional `eof` parameter to signal the end of a stream on `llm_stream`
- Added `tool_choice` parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed race condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps |
|---|
| ament_cmake |
| rclpy |
| ament_lint_auto |
| ament_lint_common |
| std_msgs |
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Name |
|---|
| ament_cmake |
| rclpy |
| ament_lint_auto |
| ament_lint_common |
| std_msgs |
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Example
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1
File truncated at 100 lines see the full file
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
- Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Deps | Name |
|---|---|
| ament_cmake | |
| rclpy | |
| ament_lint_auto | |
| ament_lint_common | |
| std_msgs |
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged bob_llm at Robotics Stack Exchange
Package Summary
| Version | 1.0.3 |
| License | Apache-2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-30 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Bob Ros
Authors
- Bob Ros
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
```bash
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
```
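
A minimal params file might look like the sketch below. Only api_url is described on this page; the other keys (system_prompt_file, tool_choice, process_image_urls) are parameter names taken from elsewhere in this document, and the endpoint URL is a placeholder, so verify names, types, and defaults against the shipped config/node_params.yaml.

```yaml
# Sketch of config/node_params.yaml (values are placeholders).
llm:
  ros__parameters:
    api_url: "http://localhost:8000/chat/completions"  # your OpenAI-compatible endpoint
    system_prompt_file: "/path/to/system_prompt.txt"
    tool_choice: "auto"
    process_image_urls: true
```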
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
```bash
# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
```
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |
Chat Example
```text
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: What can you tell me about this system?

[*] SKILL: list_nodes({})

LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)
```
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.
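
Based on the rules above (and the 1.0.3 changelog note about wrapping role-less JSON as user messages), the input handling can be sketched as follows. This is an assumed reconstruction for illustration, not the node's actual code:

```python
import json


def parse_prompt(data: str) -> dict:
    """Assumed input handling: a JSON object carrying a 'role' field is
    used as-is; anything else (plain text or role-less JSON) is wrapped
    as a user message."""
    try:
        obj = json.loads(data)
    except json.JSONDecodeError:
        return {"role": "user", "content": data}
    if isinstance(obj, dict) and "role" in obj:
        return obj
    return {"role": "user", "content": data}
```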
Image Helper:
If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
```bash
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```
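
What a helper like process_image_urls might do internally can be sketched with the standard library alone. This is an assumed behavior sketch (the function name and data-URL output format are illustrative), not the node's actual implementation:

```python
import base64
import mimetypes
from urllib.request import urlopen


def encode_image_url(url: str) -> str:
    """Assumed sketch of an image helper: fetch a file:// or http://
    URL and return a base64 data URL suitable for a multimodal
    chat-completions API."""
    # Guess the MIME type from the URL suffix, defaulting to JPEG.
    mime = mimetypes.guess_type(url)[0] or "image/jpeg"
    with urlopen(url) as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```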
Changelog for package bob_llm
1.0.3 (2026-02-10)
- Fixed prompt input to handle JSON dictionaries without a role by wrapping them as user messages and improved log extraction
- Added support for loading system_prompt from files and new system_prompt_file parameter
- Implemented dynamic parameter reconfiguration for LLM client and system prompt
- Added optional `eof` parameter to signal the end of a stream on `llm_stream`
- Added `tool_choice` parameter to dynamically control tool calling behavior
- Enhanced tool execution logging with result previews for better debugging
- Removed prefix v1 from chat API path
- Added support for [Agentskills](https://agentskills.io/) specification
- Added native Qdrant vector database tools with environment variable configuration
- Refactored Agent Skills implementation to strictly follow progressive disclosure
- Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
- Implemented soft limit for tool calls with system hint for final response
- Added premium interactive terminal chat client with Markdown and optional boxed UI
- Cleaned up legacy scripts and modernized README documentation
1.0.2 (2026-02-01)
- Full ROS 2 Rolling and Humble compliance (fixed linter issues)
- Standardized import ordering and quote usage
1.0.1 (2026-01-26)
- Fix 270+ linter and style issues for ROS2 compliance
- Fix package.xml schema validation
- Standardize docstrings and copyright headers
- Contributors: Bob Ros
1.0.0 (2025-11-25)
- Initial release of bob_llm
- Contributors: Bob Ros
Package Dependencies
| Name |
|---|
| ament_cmake |
| rclpy |
| ament_lint_auto |
| ament_lint_common |
| std_msgs |