bob_llm repository

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README


ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core depends only on a few common Python packages (requests, rich, prompt_toolkit).
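To illustrate the Dynamic Tool System, here is a sketch of the kind of user-provided tool file the node could load. The file name, function name, and return value are hypothetical; the README only states that plain Python functions are loaded and exposed to the LLM, which would typically derive the tool schema from the signature and docstring.

```python
# Hypothetical tool file (e.g. my_tools.py); the function name and the
# stub value are illustrative, not part of bob_llm itself.

def get_robot_battery(unit: str = "percent") -> str:
    """Return the current battery level of the robot."""
    # Stub value for illustration; a real tool would query the robot here.
    level = 87
    if unit == "percent":
        return f"{level}%"
    return str(level / 100.0)
```

A function like this would then be callable by the LLM whenever it decides the tool is relevant to the conversation.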

Installation

  1. Clone the Repository: Navigate to your ROS 2 workspace's src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  2. Install Dependencies: The node requires a few Python packages. Installing them in a virtual environment is recommended.
    pip install requests PyYAML rich prompt_toolkit
    
  3. Build and Source:
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
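As a sketch of what config/node_params.yaml might contain: only api_url and process_image_urls are named in this README; the node name and file layout below are assumptions.

```yaml
# Hypothetical params file sketch, not the shipped config.
llm:
  ros__parameters:
    api_url: "http://localhost:11434/v1/chat/completions"  # your OpenAI-compatible endpoint
    process_image_urls: true  # enable the image helper described below
```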

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: What can you tell me about this system?

[*] SKILL: list_nodes({})

LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
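The file:// branch of such an image helper can be sketched as follows. The function name and the data-URI payload shape are assumptions for illustration, not bob_llm's actual code.

```python
# Minimal sketch: turn a file:// image URL into a base64 data URI.
import base64
from urllib.parse import urlparse

def image_url_to_data_uri(url: str, mime: str = "image/jpeg") -> str:
    """Read a file:// image URL and return it as a base64 data URI."""
    parsed = urlparse(url)
    if parsed.scheme != "file":
        raise ValueError("this sketch only handles file:// URLs")
    with open(parsed.path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```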

```bash
ros2 topic pub /llm_prompt std_msgs/msg/String \
  "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```


CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License (http://www.apache.org/licenses/LICENSE-2.0.html), as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro jazzy showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro kilted showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
Repo symbol

bob_llm repository

bob_llm

ROS Distro
rolling

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro ardent showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro bouncy showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro crystal showing humble. Known supported distros are highlighted in the buttons above.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
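
A minimal `node_params.yaml` sketch for orientation. Only `api_url` is confirmed by this README; the remaining keys are illustrative placeholders, so check `config/node_params.yaml` in the repository for the actual parameter names:

```yaml
llm:
  ros__parameters:
    # URL of the OpenAI-compatible chat completions endpoint (parameter named in this README)
    api_url: "http://localhost:11434/v1/chat/completions"
    # Hypothetical examples of further settings; verify against the shipped config file
    model: "llama3"
    stream: true
```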

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: What can you tell me about this system?

[*] SKILL: list_nodes({})

LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.
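
As a sketch of what such a multimodal message can look like, the snippet below builds the JSON payload by hand and base64-encodes a local image, mirroring what the README describes for `process_image_urls`. The data-URL layout used here is an assumption for illustration; only the `role`, `content`, and `image_url` fields appear in this README.

```python
import base64
import json


def build_image_message(content: str, image_path: str) -> str:
    """Build a JSON prompt message with an inline base64 image.

    The node is described as replacing file:// URLs with base64-encoded
    image data; the exact encoded field layout below is an assumption,
    not the node's documented wire format.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    message = {
        "role": "user",
        "content": content,
        "image_url": f"data:image/jpeg;base64,{encoded}",
    }
    return json.dumps(message)


# Example: write a tiny fake "image" and build the message from it
with open("/tmp/cam.jpg", "wb") as f:
    f.write(b"\xff\xd8\xff\xe0fake-jpeg-bytes")

payload = build_image_message("Describe this", "/tmp/cam.jpg")
print(json.loads(payload)["role"])  # -> user
```

The resulting string could then be published as the `data` field of a `std_msgs/msg/String` on `/llm_prompt`.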

```bash
ros2 topic pub /llm_prompt std_msgs/msg/String \
  "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
No version for distro eloquent showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro dashing showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro galactic showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro foxy showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro iron showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

Option Default Description
--topic_in llm_prompt ROS Topic to send prompts to.
--topic_out llm_stream ROS Topic to receive streamed chunks.
--topic_tools llm_tool_calls Topic for skill execution feedback.
--panels False Enable decorative boxes around messages.

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: Was kannst du über dieses System sagen?

[*] SKILL: list_nodes({})

LLM: Ich sehe folgende aktive Komponenten im System:
- /llm (Das Gehirn)
- /bob_chat_client (Dieser Chat)
- /eva/logic (Zustandssteuerung)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash ros2 topic pub /llm_prompt std_msgs/msg/String “data: ‘{"role": "user", "content": "Describe this", "image_url": "file:///tmp/cam.jpg"}’” -1

File truncated at 100 lines see the full file

CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that [license](http://www.apache.org/licenses/LICENSE-2.0.html): ~~~ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. ~~~
No version for distro noetic showing humble. Known supported distros are highlighted in the buttons above.
Repo symbol

bob_llm repository

bob_llm

ROS Distro
humble

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-03-30
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Packages

Name Version
bob_llm 1.0.3

README

ROS 2 CI License

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  3. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
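This README references `api_url` and `process_image_urls` but does not reproduce the params file itself, so the following is only a minimal, hypothetical sketch of what `config/node_params.yaml` might contain. The endpoint URL and the overall layout are assumptions; check the file shipped in the repository for the authoritative parameter names:

```yaml
# Hypothetical sketch of config/node_params.yaml -- verify against the
# file shipped with the package before use.
llm:                        # node name, per "ros2 run bob_llm llm"
  ros__parameters:
    # OpenAI-compatible endpoint, e.g. a local Ollama server (assumed URL)
    api_url: "http://localhost:11434/v1/chat/completions"
    # auto base64-encode images referenced via file:// or http:// URLs
    process_image_urls: true
```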

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

| Option | Default | Description |
|---|---|---|
| `--topic_in` | `llm_prompt` | ROS topic to send prompts to. |
| `--topic_out` | `llm_stream` | ROS topic to receive streamed chunks. |
| `--topic_tools` | `llm_tool_calls` | Topic for skill execution feedback. |
| `--panels` | `False` | Enable decorative boxes around messages. |

Chat Example

Chat for https://github.com/bob-ros2/bob_llm
Usage: Press Enter to send, or Alt+Enter for a new line.

YOU: What can you tell me about this system?

[*] SKILL: list_nodes({})

LLM: I can see the following active components in the system:
- /llm (the brain)
- /bob_chat_client (this chat)
- /eva/logic (state control)

3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text. If the input message on /llm_prompt is valid JSON, it is parsed as a message object.

Generic JSON Input: You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history.

Image Helper: If process_image_urls is enabled, the node automatically base64-encodes images from file:// or http:// URLs.

```bash
ros2 topic pub /llm_prompt std_msgs/msg/String \
  "data: '{\"role\": \"user\", \"content\": \"Describe this\", \"image_url\": \"file:///tmp/cam.jpg\"}'" -1
```
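To publish the same prompt programmatically, the JSON payload can be assembled in plain Python before handing it to a ROS publisher. This sketch only builds the JSON string; the field names follow the example above, while `make_image_prompt` is a hypothetical helper for illustration, not part of bob_llm:

```python
import json

def make_image_prompt(text: str, image_url: str) -> str:
    """Build the JSON payload for /llm_prompt.

    With process_image_urls enabled, the node itself fetches and
    base64-encodes the image behind a file:// or http:// URL, so the
    URL can be passed through as-is.
    """
    message = {
        "role": "user",        # treated as a standard message object
        "content": text,
        "image_url": image_url,
    }
    return json.dumps(message)

# The resulting string goes into the data field of a std_msgs/msg/String.
print(make_image_prompt("Describe this", "file:///tmp/cam.jpg"))
```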


CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.