bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLMs and VLMs using an OpenAI-compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI: amd64, arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core depends only on a few small Python packages (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.
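To give a feel for the dynamic tool system, here is an illustrative sketch of what a user-provided tool file could contain. The function, its behavior, and the schema-building helper are hypothetical; the actual loading convention is defined by the package, but the OpenAI-style tool schema shown is the standard format such tools are exposed in.

```python
import inspect


def get_battery_level(robot_name: str) -> str:
    """Return the battery level of the given robot as a percentage string."""
    # Hypothetical implementation; a real tool would query the robot.
    return f'{robot_name}: 87%'


def to_openai_tool(func):
    """Build an OpenAI-style tool schema from a plain Python function.

    Simplified sketch: every parameter is assumed to be a string.
    """
    params = {
        name: {'type': 'string'}
        for name in inspect.signature(func).parameters
    }
    return {
        'type': 'function',
        'function': {
            'name': func.__name__,
            'description': inspect.getdoc(func),
            'parameters': {
                'type': 'object',
                'properties': params,
                'required': list(params),
            },
        },
    }
```

The node can then advertise such a schema to the LLM, which requests a call by name and arguments; the node executes the Python function and feeds the result back into the conversation.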

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).
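The environment-variable mapping appears to be a plain naming convention: strip the LLM_ prefix and lowercase the rest to obtain the ROS parameter name. A minimal sketch of that assumed convention (not the package's actual code):

```python
def env_to_param(var: str) -> str:
    """Map an LLM_-prefixed environment variable to a ROS parameter name."""
    prefix = 'LLM_'
    if not var.startswith(prefix):
        raise ValueError(f'not an LLM_ variable: {var}')
    # LLM_API_URL -> api_url, LLM_TEMPERATURE -> temperature, ...
    return var[len(prefix):].lower()
```

For example, LLM_API_URL configures the api_url parameter and LLM_SYSTEM_PROMPT configures system_prompt.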

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository: Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  2. Install Dependencies: The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  3. Build and Source:
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
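For reference, a params file for this command might look like the sketch below. The parameter names are inferred from the LLM_* environment variables documented above and are hypothetical; verify them against config/node_params.yaml in the repository.

```yaml
# Hypothetical node_params.yaml sketch; check config/node_params.yaml
# in the repository for the authoritative parameter names.
llm:
  ros__parameters:
    api_url: "http://192.168.1.100:8000/v1"
    api_key: "your_secret_token"
    api_model: "llama3"
    temperature: 0.5
    system_prompt: "You are a helpful robot assistant named Bob."
```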

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
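The feature list mentions multimodal input via JSON prompts. The exact JSON schema bob_llm expects is not shown on this page; the sketch below assumes the OpenAI content-parts message format, which is the usual shape for image input against OpenAI-compatible APIs. The image data URL is a hypothetical placeholder.

```python
import json

# Hypothetical multimodal prompt in the OpenAI content-parts format;
# the exact schema bob_llm expects may differ, check the full README.
prompt = {
    'role': 'user',
    'content': [
        {'type': 'text', 'text': 'What do you see in this image?'},
        {'type': 'image_url',
         'image_url': {'url': 'data:image/png;base64,<BASE64_DATA>'}},
    ],
}
payload = json.dumps(prompt)  # serialized JSON sent as the prompt
```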

CLI Arguments for chat

File truncated at 100 lines; see the full file in the repository.

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional `eof` parameter to signal the end of a stream on `llm_stream`
  • Added `tool_choice` parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed race condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added `llm_reasoning` topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated `OpenAICompatibleClient` to extract `reasoning_content` from both stream chunks and blocking responses
  • Enhanced tool safety in `ros_cli_tools.py` by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in `backend_clients` with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro jazzy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro kilted showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
rolling

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro ardent showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines; see the full file in the repository.

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading the system prompt from files via a new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for the LLM client and system prompt
  • Added optional eof parameter to signal the end of a stream on llm_stream
  • Added tool_choice parameter to dynamically control tool-calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed the v1 prefix from the chat API path
  • Added support for the Agent Skills (https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored the Agent Skills implementation to strictly follow progressive disclosure
  • Fixed race condition in LLMNode initialization by pre-initializing llm_client
  • Implemented a soft limit for tool calls with a system hint for the final response
  • Added llm_reasoning topic to publish live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated OpenAICompatibleClient to extract reasoning_content from both stream chunks and blocking responses
  • Enhanced tool safety in ros_cli_tools.py by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in backend_clients with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro foxy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro iron showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro lunar showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro jade showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams
  • Optimized chat UI refresh rate to improve human perception during streaming
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement)
  • Refactored main interaction loop for robust synchronous execution
  • Fixed JSON prompt handling and enhanced system prompt file support
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional eof parameter to signal the end of a stream on llm_stream
  • Added tool_choice parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for the Agentskills (https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed race condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added llm_reasoning topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated OpenAICompatibleClient to extract reasoning_content from both stream chunks and blocking responses
  • Enhanced tool safety in ros_cli_tools.py by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in backend_clients with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros
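
The streaming-related entries above (iter_lines parsing, reasoning_content extraction) can be illustrated with a small sketch. The chunk schema is the standard OpenAI streaming format and the reasoning_content field name matches the changelog; the function is illustrative, not the package's actual parser. In real use the lines would come from requests, e.g. `for raw in resp.iter_lines(decode_unicode=True): ...`.

```python
import json


def parse_sse_line(raw):
    """Parse one SSE line ('data: {...}') from an OpenAI-compatible
    stream and return (reasoning_delta, content_delta)."""
    if not raw.startswith('data: ') or raw == 'data: [DONE]':
        return None, None
    delta = json.loads(raw[len('data: '):])['choices'][0]['delta']
    return delta.get('reasoning_content'), delta.get('content')


# Simulated stream chunks as a server would send them
lines = [
    'data: {"choices":[{"delta":{"reasoning_content":"thinking..."}}]}',
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: [DONE]',
]
for line in lines:
    print(parse_sse_line(line))
```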

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS 2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro indigo showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro hydro showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro kinetic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro melodic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional [eof]{.title-ref} parameter to signal the end of a stream on [llm_stream]{.title-ref}
  • Added [tool_choice]{.title-ref} parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for [Agentskills](https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added [llm_reasoning]{.title-ref} topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated [OpenAICompatibleClient]{.title-ref} to extract [reasoning_content]{.title-ref} from both stream chunks and blocking responses
  • Enhanced tool safety in [ros_cli_tools.py]{.title-ref} by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in [backend_clients]{.title-ref} with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro noetic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-04-25
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only a handful of common Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.
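The Dynamic Tool System loads plain Python functions from user-provided files and exposes them to the LLM. As a rough illustration of what such a tool file could contain (the function name, signature, and docstring layout below are assumptions for this sketch, not the package's documented conventions):

```python
# hypothetical_tools.py -- illustrative tool file. The function name and
# docstring style are assumptions; bob_llm defines the actual conventions.

def get_battery_level(robot_id: str) -> dict:
    """Return the current battery level for the given robot.

    Args:
        robot_id: Identifier of the robot to query.
    """
    # A real tool would query hardware or a ROS topic; this sketch
    # returns a fixed value so only the structure matters.
    return {'robot_id': robot_id, 'battery_percent': 87}
```

The type hints and docstring carry the information an LLM needs to decide when and how to call the tool, which is why tool loaders typically read them.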

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always
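The LLM_ prefix convention used above maps each ROS parameter to an environment variable. A minimal sketch of that naming rule (illustrative only; the node's actual parameter resolution may differ):

```python
import os

PREFIX = 'LLM_'


def env_to_param(env_name: str) -> str:
    """Convert an LLM_-prefixed environment variable name to the
    corresponding lower-case parameter name (naming sketch only)."""
    assert env_name.startswith(PREFIX)
    return env_name[len(PREFIX):].lower()


def collect_llm_params(environ=os.environ) -> dict:
    """Gather all LLM_-prefixed variables as parameter overrides."""
    return {env_to_param(k): v for k, v in environ.items()
            if k.startswith(PREFIX)}
```

Under this rule, LLM_API_URL becomes the api_url parameter and LLM_TEMPERATURE becomes temperature, matching the examples above.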

Installation

  1. Clone the repository: navigate to your ROS 2 workspace’s src directory and clone it:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  2. Install dependencies: the node requires a few Python packages; it is recommended to install them within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  3. Build and source:
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
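A params file for the command above might look like the following sketch. Only api_url is named in this README; the other parameter names and the node name are inferred from the LLM_* environment variables and may not match the node's actual names:

```yaml
# Hypothetical node_params.yaml sketch -- verify names against the
# package's config/node_params.yaml before use.
/llm:
  ros__parameters:
    api_url: "http://192.168.1.100:8000/v1"
    api_key: "your_secret_token"
    api_model: "llama3"
    temperature: 0.5
```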

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with premium boxed UI (visual panels)
ros2 run bob_llm chat --panels
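Multimodal input (see Features) is supplied as a JSON prompt. A sketch of building an OpenAI-style image message follows; the exact schema bob_llm accepts is not shown in this README, so treat the field names as assumptions:

```python
import base64
import json


def build_image_prompt(text: str, image_bytes: bytes) -> str:
    """Build an OpenAI-style multimodal user message as a JSON string.

    The content layout (type/text/image_url fields) follows the common
    OpenAI chat format; bob_llm's accepted schema may differ.
    """
    b64 = base64.b64encode(image_bytes).decode('ascii')
    message = {
        'role': 'user',
        'content': [
            {'type': 'text', 'text': text},
            {'type': 'image_url',
             'image_url': {'url': f'data:image/png;base64,{b64}'}},
        ],
    }
    return json.dumps(message)
```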

CLI Arguments for chat

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-13)

  • Definitive zero-latency SSE streaming parser using iter_lines for immediate delivery.
  • Fixed UTF-8 character encoding for special characters (ä, ö, ü) in raw byte streams.
  • Optimized chat UI refresh rate to improve human perception during streaming.
  • Integrated tool call detection directly into reasoning stream to eliminate pre-check delay.
  • Restored 100% flake8/PEP8 compliance (single quotes enforcement).
  • Refactored main interaction loop for robust synchronous execution.
  • Fixed JSON prompt handling and enhanced system prompt file support.
  • Added support for loading system_prompt from files and new system_prompt_file parameter
  • Implemented dynamic parameter reconfiguration for LLM client and system prompt
  • Added optional eof parameter to signal the end of a stream on llm_stream
  • Added tool_choice parameter to dynamically control tool calling behavior
  • Enhanced tool execution logging with result previews for better debugging
  • Removed prefix v1 from chat API path
  • Added support for the Agent Skills (https://agentskills.io/) specification
  • Added native Qdrant vector database tools with environment variable configuration
  • Refactored Agent Skills implementation to strictly follow progressive disclosure
  • Fixed Race Condition in LLMNode initialization by pre-initializing llm_client
  • Implemented soft limit for tool calls with system hint for final response
  • Added llm_reasoning topic to support live reasoning/thinking content from models (e.g., Gemma 2, DeepSeek)
  • Updated OpenAICompatibleClient to extract reasoning_content from both stream chunks and blocking responses
  • Enhanced tool safety in ros_cli_tools.py by enforcing mandatory discovery of topics, services, and parameters in docstrings
  • Improved type safety in backend_clients with proper Tuple annotations and fixed linter issues
  • Added premium interactive terminal chat client with Markdown and optional boxed UI
  • Cleaned up legacy scripts and modernized README documentation
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros
