# Repository Summary

| | |
|---|---|
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing | Help Wanted (-) Good First Issues (-) Pull Requests to Review (-) |
# Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
# README

## ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
## Features

- **OpenAI-Compatible**: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
- **Stateful Conversation**: Maintains chat history to provide conversational context to the LLM.
- **Dynamic Tool System**: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- **Streaming Support**: Can stream the LLM’s final response token-by-token for real-time feedback.
- **Fully Parameterized**: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- **Multi-modality**: Supports multimodal input (e.g., images) via JSON prompts.
- **Lightweight**: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (`requests`, `PyYAML`) on top of ROS 2.
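To illustrate the dynamic tool system, a user-provided tool file might contain plain Python functions like those below. The discovery convention assumed here (module-level functions with type hints and docstrings describing each tool to the LLM) is an assumption; consult the package's tool documentation for the actual contract.

```python
# Hypothetical user-provided tool file for bob_llm (the exact loading
# convention is an assumption; check the package docs). Each tool is a
# plain Python function whose signature and docstring describe it.


def get_battery_level(robot_name: str = "default") -> str:
    """Return the current battery level of the given robot."""
    # A real tool would query robot state; this one is stubbed.
    return f"{robot_name}: battery at 85%"


def move_to(x: float, y: float) -> str:
    """Request the robot to navigate to the given (x, y) position."""
    # A real tool would publish a navigation goal; this one is stubbed.
    return f"navigating to ({x:.1f}, {y:.1f})"
```

The node would expose these functions to the LLM, which can then request calls such as `get_battery_level()` during a conversation.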
## Installation

1. **Clone the Repository**

   Navigate to your ROS 2 workspace’s `src` directory and clone the repository:

   ```bash
   cd ~/ros2_ws/src
   git clone https://github.com/bob-ros2/bob_llm.git
   ```

2. **Install Dependencies**

   The node requires a few Python packages. It is recommended to install these within a virtual environment:

   ```bash
   pip install requests PyYAML
   ```

   The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.

3. **Build the Workspace**

   Navigate to the root of your workspace and build the package:

   ```bash
   cd ~/ros2_ws
   colcon build --packages-select bob_llm
   ```

4. **Source the Workspace**

   Before running the node, source your workspace’s setup file:

   ```bash
   source install/setup.bash
   ```
## Usage

### 1. Run the Node

Before running, ensure your LLM server is active and that the `api_url` in your params file is correct.

```bash
# Make sure your workspace is sourced
# source install/setup.bash

# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
```
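For orientation, a parameters file for this node might look like the sketch below. Only `api_url` and `process_image_urls` are named in this README; the node name and value shown are illustrative assumptions, so check the shipped `config/node_params.yaml` for the actual parameter set.

```yaml
# Hypothetical node_params.yaml sketch -- not the package's shipped file.
# Only api_url and process_image_urls are documented in this README;
# everything else here is an assumption.
llm:                      # node name assumed to match the executable
  ros__parameters:
    api_url: "http://localhost:11434/v1/chat/completions"  # example local endpoint
    process_image_urls: true   # enable the image_url handling helper
```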
### 2. Interact with the Node

The package includes a helper script, `scripts/query.sh`, for interacting with the node directly from the command line.

Once the `llm` node is running, open a new terminal (with the workspace sourced) and run the script:

```bash
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
```
### 3. Advanced Input & Multi-modality

The node supports advanced input formats beyond simple text strings. If the input message on `/llm_prompt` is a valid JSON string, it is parsed and treated as a message object.

**Generic JSON Input:**
You can pass any valid JSON dictionary. If it contains a `role` field (e.g., `user`), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).

**Image Handling Helper:**
For convenience, the node includes a helper for handling images. If `process_image_urls` is set to `true`, the node looks for an `image_url` field in your JSON input. It automatically fetches the image (from `file://` or `http://` URLs), base64 encodes it, and formats the message according to the OpenAI Vision API specification.

Example (Image Helper):

```bash
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
```
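The image helper's transformation can be approximated in plain Python. The sketch below (a conceptual illustration, not the node's actual code) base64-encodes raw image bytes and builds an OpenAI-Vision-style message:

```python
import base64


def to_vision_message(text: str, image_bytes: bytes,
                      mime: str = "image/jpeg") -> dict:
    """Build an OpenAI-Vision-style message from text plus raw image bytes.

    Conceptual sketch of what the node's image helper does; the package's
    real implementation may differ in detail.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            # Embed the image as a data URL, per the OpenAI Vision format.
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }
```

The resulting dictionary is what the backend receives in place of the original `image_url` field.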
## Conversation Flow

1. A user publishes a prompt to the `/llm_prompt` topic.
2. The `llm` node adds the prompt to its internal chat history.
3. The node sends the history and a list of available tools to the LLM backend.
4. The LLM decides whether to respond directly or use a tool.
   - **If Tool:** The LLM returns a request to call a specific function. The `llm` node executes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times.
   - **If Text:** The LLM generates a final, natural language response.
5. The `llm` node publishes the final response. If streaming is enabled, it is sent token-by-token to `/llm_stream` and the full message is sent to `/llm_response` upon completion. Otherwise, the full response is sent directly to `/llm_response`.
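The tool loop in steps 4–5 above can be sketched in plain Python. This is illustrative only: the node's real messages follow the OpenAI tool-calling schema, and the field names used here (`tool_calls`, `name`, `arguments`) are assumptions modeled on that schema.

```python
import json
from typing import Optional

# Minimal sketch of the tool-execution loop described above.
# `tools` maps tool names to Python callables; `llm_reply` stands in
# for one parsed response from an OpenAI-compatible backend.


def handle_reply(llm_reply: dict, tools: dict, history: list) -> Optional[str]:
    """Execute any requested tool calls, appending results to history.

    Returns the final text if the LLM answered directly, else None,
    meaning the updated history should be sent back to the LLM.
    """
    calls = llm_reply.get("tool_calls")
    if not calls:
        # If Text: the LLM produced a final answer.
        history.append({"role": "assistant", "content": llm_reply["content"]})
        return llm_reply["content"]
    for call in calls:
        # If Tool: run each requested function and record its result.
        fn = tools[call["name"]]
        result = fn(**json.loads(call["arguments"]))
        history.append({"role": "tool", "name": call["name"],
                        "content": str(result)})
    return None
```

A caller would invoke `handle_reply` after each backend response, resending the history whenever it returns `None`.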
## ROS 2 API

*(README truncated at 100 lines; see the full file in the repository.)*
# CONTRIBUTING

Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:

> 5. Submission of Contributions. Unless You explicitly state otherwise,
>    any Contribution intentionally submitted for inclusion in the Work
>    by You to the Licensor shall be under the terms and conditions of
>    this License, without any additional terms or conditions.
>    Notwithstanding the above, nothing herein shall supersede or modify
>    the terms of any separate license agreement you may have executed
>    with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):

```bash
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
```
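The helper's behavior can be sketched roughly as follows. This is a simplified reconstruction, not the node's actual code; the function name is illustrative, and the `image/jpeg` media type is assumed for brevity:

```python
import base64
from urllib.request import urlopen

def to_vision_message(text: str, image_url: str) -> dict:
    """Fetch an image from a file:// or http:// URL, base64 encode it,
    and wrap it in an OpenAI Vision style content list."""
    # urlopen handles both file:// and http:// schemes.
    with urlopen(image_url) as resp:
        encoded = base64.b64encode(resp.read()).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             # Media type assumed jpeg here; a real implementation
             # would detect it from the file or response headers.
             "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
        ],
    }
```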
Conversation Flow
1. A user publishes a prompt to the `/llm_prompt` topic.
2. The `llm` node adds the prompt to its internal chat history.
3. The node sends the history and a list of available tools to the LLM backend.
4. The LLM decides whether to respond directly or use a tool.
   - If Tool: the LLM returns a request to call a specific function. The `llm` node executes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times.
   - If Text: the LLM generates a final, natural language response.
5. The `llm` node publishes the final response. If streaming is enabled, it is sent token-by-token to `/llm_stream` and the full message is sent to `/llm_response` upon completion. Otherwise, the full response is sent directly to `/llm_response`.
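The tool loop in the steps above can be sketched as follows. This is a simplified illustration, not the node's actual implementation: `call_backend` stands in for the HTTP request to the OpenAI-compatible endpoint, and `tools` maps function names to loaded Python callables:

```python
import json

def run_tool_loop(messages, tools, call_backend, max_rounds=5):
    """Keep calling the backend until it returns plain text instead of
    a tool call. call_backend(messages) must return an OpenAI-style
    assistant message dict; tools maps function names to callables."""
    for _ in range(max_rounds):
        reply = call_backend(messages)
        messages.append(reply)
        calls = reply.get("tool_calls")
        if not calls:                      # plain text: we are done
            return reply.get("content")
        for call in calls:                 # execute each requested tool
            fn = tools[call["function"]["name"]]
            args = json.loads(call["function"]["arguments"])
            result = fn(**args)
            # Tool results go back into the history for the next round.
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": str(result)})
    raise RuntimeError("tool loop did not converge")
```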
ROS 2 API
See the full README in the repository for the complete ROS 2 API reference.
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache License 2.0, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
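A minimal params file might look like the following. This is only a sketch: `api_url` and `process_image_urls` are the parameters mentioned in this README, the node name and endpoint value are placeholders; consult `config/node_params.yaml` in the package for the real schema.

```yaml
llm:
  ros__parameters:
    # OpenAI-compatible chat completions endpoint (placeholder value)
    api_url: "http://localhost:11434/v1/chat/completions"
    # Enable the image_url helper described in section 3
    process_image_urls: true
```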
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
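Under these rules, the prompt handling can be sketched in a few lines of Python. This is a simplified illustration, not the node's actual code; the function name is hypothetical.

```python
import json

def parse_prompt(data: str) -> dict:
    """Sketch of the prompt handling: a valid JSON object with a
    'role' field is used as-is; anything else becomes a plain
    user message."""
    try:
        obj = json.loads(data)
        if isinstance(obj, dict) and "role" in obj:
            return obj
    except json.JSONDecodeError:
        pass
    return {"role": "user", "content": data}
```

For example, `parse_prompt("Hello")` yields a standard user message, while a JSON string carrying its own `role` passes through untouched.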
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
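The formatting step behind this helper can be sketched as follows, assuming a data-URL encoding per the OpenAI Vision message shape (the function name and fixed MIME type are illustrative, not the node's API):

```python
import base64

def make_image_message(text: str, image_bytes: bytes,
                       mime: str = "image/jpeg") -> dict:
    # Encode raw image bytes as a data URL, the inline-image form
    # used by the OpenAI Vision API.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }
```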
Conversation Flow
- A user publishes a prompt to the /llm_prompt topic.
- The llm node adds the prompt to its internal chat history.
- The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
  - If Tool: The LLM returns a request to call a specific function. The llm node executes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times.
  - If Text: The LLM generates a final, natural language response.
- The llm node publishes the final response. If streaming is enabled, it is sent token-by-token to /llm_stream and the full message is sent to /llm_response upon completion. Otherwise, the full response is sent directly to /llm_response.
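For the tool step in this flow, a user-provided tool file is plain Python. A hypothetical example follows; the function name and the exact signature/docstring conventions the node expects are assumptions, so check the package's tool parameters for the real loading convention.

```python
# Hypothetical tool file for the llm node's dynamic tool loader.
# Names and conventions here are illustrative, not the package's API.

def get_robot_status() -> str:
    """Report the robot's battery level and activity state."""
    # A real tool would query the robot; this sketch returns fixed values.
    battery = 85
    state = "idle"
    return f"Battery is at {battery}%. Currently {state}."
```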
ROS 2 API
(README truncated at 100 lines; see the repository for the full file.)
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2.0 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
| Checkout URI | https://github.com/bob-ros2/bob_llm.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-02-01 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| bob_llm | 1.0.2 |
README
ROS Package bob_llm
The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.
Features
-
OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g.,
Ollama,vLLM,llama-cpp-python, commercial APIs). - Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
- Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
- Streaming Support: Can stream the LLM’s final response token-by-token for real-time feedback.
- Fully Parameterized: All configuration, from API endpoints to LLM generation parameters, is handled through a single ROS parameters file.
- Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
-
Lightweight: The node is simple and has minimal dependencies, requiring only a few standard Python libraries (
requests,PyYAML) on top of ROS 2.
Installation
-
Clone the Repository
Navigate to your ROS 2 workspace’s
srcdirectory and clone the repository:
cd ~/ros2_ws/src
git clone https://github.com/bob-ros2/bob_llm.git
- Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
pip install requests PyYAML
The required ROS 2 dependencies (`rclpy`, `std_msgs`) will be resolved by the build system.
- Build the Workspace Navigate to the root of your workspace and build the package:
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
Conversation Flow
- A user publishes a prompt to the
/llm_prompttopic. - The
llm nodeadds the prompt to its internal chat history. - The node sends the history and a list of available tools to the LLM backend.
- The LLM decides whether to respond directly or use a tool.
-
If Tool: The LLM returns a request to call a specific function. The
llm nodeexecutes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times. - If Text: The LLM generates a final, natural language response.
-
If Tool: The LLM returns a request to call a specific function. The
- The
llm nodepublishes the final response. If streaming is enabled, it’s sent token-by-token to/llm_streamand the full message is sent to/llm_responseupon completion. Otherwise, the full response is sent directly to/llm_response.
ROS 2 API
File truncated at 100 lines see the full file
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
cd ~/ros2_ws
colcon build --packages-select bob_llm
- Source the Workspace Before running the node, remember to source your workspace’s setup file:
source install/setup.bash
Usage
1. Run the Node
Before running, ensure your LLM server is active and the api_url in your params file is correct.
# Make sure your workspace is sourced
# source install/setup.bash
# Run the node with your parameters file
ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
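A minimal parameters file for the command above might look like the following sketch. Only api_url is named in this README; the node name key and any other parameters (model name, generation settings) are assumptions to check against the package's config/ directory:

```yaml
# Hypothetical node_params.yaml sketch. The top-level node name ("llm")
# and any key other than api_url are assumptions; see config/ in the
# package for the authoritative file.
llm:
  ros__parameters:
    api_url: "http://localhost:11434/v1/chat/completions"
```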
2. Interact with the Node
The package includes a convenient helper script, scripts/query.sh, for interacting with the node directly from the command line.
Once the llm node is running, open a new terminal (with the workspace sourced) and run the script:
$ ros2 run bob_llm query.sh
--- Listening for results on llm_response ---
--- Enter your prompt below (Press Ctrl+C to exit) ---
> What is the status of the robot?
Robot status: Battery is at 85%. All systems are nominal. Currently idle.
>
3. Advanced Input & Multi-modality
The node supports advanced input formats beyond simple text strings. If the input message on /llm_prompt is a valid JSON string, it is parsed and treated as a message object.
Generic JSON Input:
You can pass any valid JSON dictionary. If it contains a role field (e.g., user), it is treated as a standard message object and appended to the history. This allows you to send custom content structures supported by your specific LLM backend (e.g., complex multimodal inputs, custom fields).
Image Handling Helper:
For convenience, the node includes a helper for handling images. If process_image_urls is set to true, the node looks for an image_url field in your JSON input. It will automatically fetch the image (from file:// or http:// URLs), base64 encode it, and format the message according to the OpenAI Vision API specification.
Example (Image Helper):
ros2 topic pub /llm_prompt std_msgs/msg/String "data: '{\"role\": \"user\", \"content\": \"Describe this image\", \"image_url\": \"file:///path/to/image.jpg\"}'" -1
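Quoting the JSON payload inside the ros2 CLI string can be error-prone; a small Python helper (standard library only, field names taken from this README) can build the same message:

```python
import json

def make_image_prompt(text: str, image_url: str) -> str:
    """Build the JSON string expected on /llm_prompt when
    process_image_urls is enabled (fields per the README above)."""
    return json.dumps({
        "role": "user",
        "content": text,
        "image_url": image_url,
    })

# Produces the same payload as the ros2 topic pub example above.
msg = make_image_prompt("Describe this image", "file:///path/to/image.jpg")
print(msg)
```

The resulting string can then be published with ros2 topic pub or from an rclpy publisher.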
Conversation Flow
1. A user publishes a prompt to the /llm_prompt topic.
2. The llm node adds the prompt to its internal chat history.
3. The node sends the history and a list of available tools to the LLM backend.
4. The LLM decides whether to respond directly or use a tool.
   - If Tool: The LLM returns a request to call a specific function. The llm node executes the function, appends the result to the history, and sends the updated history back to the LLM. This loop can repeat multiple times.
   - If Text: The LLM generates a final, natural language response.
5. The llm node publishes the final response. If streaming is enabled, it’s sent token-by-token to /llm_stream and the full message is sent to /llm_response upon completion. Otherwise, the full response is sent directly to /llm_response.
ROS 2 API
File truncated at 100 lines; see the full file in the repository.
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2.0 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.