# Repository Summary

| Field | Value |
|---|---|
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |

# Packages

| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
# README

## ROS Package bob_llm

The bob_llm package provides a ROS 2 node (`llm`) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

### Features

- **OpenAI-Compatible**: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
- **Stateful Conversation**: Maintains chat history to provide conversational context to the LLM.
- **Dynamic Tool System**: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- **Anthropic Agent Skills**: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- **High-Performance Streaming**: Optimized byte-stream parsing delivers reasoning tokens and response chunks directly from the socket with minimal latency (no internal buffering).
- **Reasoning/Thinking Support**: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- **Interactive Chat CLI**: Includes a premium terminal interface with Markdown rendering and multi-line input support.
- **Multi-modality**: Supports multimodal input (e.g., images) via JSON prompts.
- **Lightweight**: The node core requires only a few common Python libraries (`requests`, `rich`, `prompt_toolkit`).
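To illustrate the dynamic tool system described above, here is a minimal, hedged sketch of how user-provided Python functions *could* be loaded from a file and turned into OpenAI-style tool schemas. This is not bob_llm's actual loader; the file layout, the schema-building helper, and the naive "every parameter is a string" typing are all assumptions for illustration.

```python
# Hedged sketch: dynamically load public functions from a user-provided
# Python file and derive minimal OpenAI-style tool schemas from them.
# Illustrative only; bob_llm's real loading logic may differ.
import importlib.util
import inspect
import json
import tempfile

def load_tools(path: str) -> dict:
    """Load every public top-level function from a Python file."""
    spec = importlib.util.spec_from_file_location("user_tools", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return {
        name: fn
        for name, fn in vars(module).items()
        if inspect.isfunction(fn) and not name.startswith("_")
    }

def to_openai_schema(fn) -> dict:
    """Derive a minimal tool schema from a function's signature and docstring."""
    params = {
        name: {"type": "string"}  # naive assumption: every parameter is a string
        for name in inspect.signature(fn).parameters
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": params,
                "required": list(params),
            },
        },
    }

# Demo: write a tiny hypothetical tool file, load it, and print its schema.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('def list_nodes(pattern):\n    "List active ROS nodes."\n    return ["/llm"]\n')
    tool_file = f.name

tools = load_tools(tool_file)
schema = to_openai_schema(tools["list_nodes"])
print(json.dumps(schema, indent=2))
```

The generated schema is what an OpenAI-compatible backend expects in the `tools` field of a chat request, which is how the LLM learns it may call `list_nodes`.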
### Installation

1. **Clone the Repository**

   Navigate to your ROS 2 workspace's `src` directory and clone the repository:

   ```bash
   cd ~/ros2_ws/src
   git clone https://github.com/bob-ros2/bob_llm.git
   ```

2. **Install Dependencies**

   The node requires a few Python packages. It is recommended to install these within a virtual environment.

   ```bash
   pip install requests PyYAML rich prompt_toolkit
   ```

3. **Build and Source**

   ```bash
   cd ~/ros2_ws
   colcon build --packages-select bob_llm
   source install/setup.bash
   ```
### Usage

#### 1. Start the Brain (LLM Node)

Ensure your LLM server is active and the `api_url` in your params file is correct.

```bash
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
```
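Once running, the node consumes the backend's OpenAI-compatible SSE stream and forwards content and reasoning tokens as they arrive (see the streaming feature above). As a rough, hedged sketch of what such parsing involves (not bob_llm's actual implementation, and the `reasoning_content` field name varies between backends):

```python
# Hedged sketch: splitting an OpenAI-compatible SSE stream into reasoning
# and content tokens. Illustrative only; bob_llm's parser and the exact
# delta field names are assumptions here.
import json

def parse_sse(lines):
    """Yield ('reasoning'|'content', token) pairs from raw SSE byte lines."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith(b"data:"):
            continue                      # skip comments and blank keep-alives
        payload = line[len(b"data:"):].strip()
        if payload == b"[DONE]":          # end-of-stream sentinel
            return
        delta = json.loads(payload)["choices"][0]["delta"]
        if delta.get("reasoning_content"):          # backend-specific key
            yield ("reasoning", delta["reasoning_content"])
        if delta.get("content"):
            yield ("content", delta["content"])

# Demo with canned chunks, shaped like what
# requests.get(..., stream=True).iter_lines() would produce:
stream = [
    b'data: {"choices":[{"delta":{"reasoning_content":"thinking..."}}]}',
    b'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    b'data: {"choices":[{"delta":{"content":" world"}}]}',
    b"data: [DONE]",
]
tokens = list(parse_sse(stream))
print(tokens)
```

In the real node, the reasoning tokens would be published to the reasoning topic and the content tokens to the stream topic as they are yielded.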
#### 2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

```bash
# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
```
#### CLI Arguments for `chat`

| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_reasoning` | `llm_reasoning` | ROS topic to receive model reasoning content. |
| `--topic_response` | `llm_response` | ROS topic to receive final complete responses. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |
#### Chat Configuration

The chat client supports the following ROS parameter and environment variable:

- `queue_size` (integer): ROS parameter controlling the subscription queue depth.
- `CHAT_QUEUE_SIZE` (environment variable): Default value for the `queue_size` parameter (default: `1000`).

Example usage:

```bash
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
```
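The precedence implied here (an explicitly set ROS parameter wins, otherwise `CHAT_QUEUE_SIZE`, otherwise `1000`) can be sketched in plain Python. The resolution order is an assumption read off the description above, not bob_llm's code:

```python
# Hedged sketch of queue-depth resolution: explicit ROS parameter first,
# then the CHAT_QUEUE_SIZE environment variable, then the built-in 1000.
# The exact precedence in bob_llm is an assumption.
import os

def resolve_queue_size(ros_param=None):
    """Return the effective subscription queue depth."""
    if ros_param is not None:            # explicitly set ROS parameter
        return int(ros_param)
    env = os.environ.get("CHAT_QUEUE_SIZE")
    if env is not None:                  # environment-variable default
        return int(env)
    return 1000                          # built-in fallback

os.environ["CHAT_QUEUE_SIZE"] = "2000"
print(resolve_queue_size())        # env var applies
print(resolve_queue_size(50))      # explicit parameter wins
del os.environ["CHAT_QUEUE_SIZE"]
print(resolve_queue_size())        # fallback
```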
#### Chat Example

```text
Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: What can you tell me about this system?

[*] SKILL: list_nodes({})

LLM: I can see the following active components in the system:
- /llm (the brain)
```
## CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2.0 License, as dictated by that license:

> 5. Submission of Contributions. Unless You explicitly state otherwise,
> any Contribution intentionally submitted for inclusion in the Work
> by You to the Licensor shall be under the terms and conditions of
> this License, without any additional terms or conditions.
> Notwithstanding the above, nothing herein shall supersede or modify
> the terms of any separate license agreement you may have executed
> with Licensor regarding such Contributions.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-04-13 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.3 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
- High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
- Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
- Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node core requires only standard Python libraries (
requests,rich,prompt_toolkit).
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML rich prompt_toolkit
- Build and Source
cd ~/ros2_ws
colcon build --packages-select bob_llm
source install/setup.bash
Usage
1. Start the Brain (LLM Node)
Ensure your LLM server is active and the api_url in your params file is correct.
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Enter Interactive Chat
Interact with Bob through a dedicated, interactive terminal client.
# Start standard chat
ros2 run bob_llm chat
# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
CLI Arguments for chat
| Option | Default | Description |
|---|---|---|
--topic_in |
llm_prompt |
ROS Topic to send prompts to. |
--topic_out |
llm_stream |
ROS Topic to receive streamed chunks. |
--topic_reasoning |
llm_reasoning |
ROS Topic to receive model reasoning content. |
--topic_response |
llm_response |
ROS Topic to receive final complete responses. |
--topic_tools |
llm_tool_calls |
Topic for skill execution feedback. |
--panels |
False |
Enable decorative boxes around messages. |
Chat Configuration
The chat client supports the following ROS parameters and environment variables:
-
queue_size(Integer): ROS parameter to control the subscription queue depth. -
CHAT_QUEUE_SIZE(Environment Variable): Default value for thequeue_sizeparameter (default:1000).
Example usage:
export CHAT_QUEUE_SIZE=2000
ros2 run bob_llm chat --topic_in /user_query --topic_out /llm_stream --panels
Chat Example
```text Chat for https://github.com/bob-ros2/bob_llm Usage: Press Enter to send, or Alt+Enter for a new line.
YOU: Was kannst du über dieses System sagen?
[*] SKILL: list_nodes({})
LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.