
bob_llm package from bob_llm repo


ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED

Package Description

ROS package for interfacing with LLMs and VLMs using an OpenAI-compatible API.


Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI: amd64, arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information (a minimal tool-file sketch follows this list).
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only a few common Python packages (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.
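
A minimal sketch of such a tool file, assuming the loading contract suggested by the changelog (plain Python functions with a mandatory discovery docstring); the file name, function, and return value are purely illustrative:

# my_tools.py - hypothetical tool file; name and signature are illustrative.
# bob_llm loads plain Python functions like this and offers them to the LLM;
# per the changelog, the docstring is mandatory for tool discovery.

def get_battery_level() -> str:
    """Return the robot's current battery level as a percentage string."""
    # A real tool would query the robot; a fixed value keeps the sketch runnable.
    return '87%'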

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always
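
Assuming the snippet above is saved as compose.yaml, docker compose up -d starts the node in the background and docker logs -f bob-llm follows its output.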

Installation

  1. Clone the Repository. Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git

  2. Install Dependencies. The node requires a few Python packages; installing them within a virtual environment is recommended.
    pip install requests PyYAML rich prompt_toolkit

  3. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml
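
If you do not have a params file yet, the following sketch may serve as a starting point. The parameter names are inferred from the LLM_-prefixed environment variables in the Docker section (LLM_API_URL → api_url, and so on) and are not confirmed against the shipped config/node_params.yaml, so treat them as assumptions:

# node_params.yaml - illustrative sketch; parameter names are inferred from the
# LLM_* environment variables above and may differ from the shipped file.
llm:
  ros__parameters:
    api_url: http://192.168.1.100:8000/v1
    api_key: your_secret_token
    api_model: llama3
    temperature: 0.5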

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels
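
You can also watch the node's output directly. The topic names below (llm_stream, llm_reasoning) come from the changelog entries; the exact names and namespacing in your setup may differ, so confirm them with ros2 topic list first.

# Inspect available topics, then follow the streamed response and live reasoning
ros2 topic list
ros2 topic echo /llm_stream
ros2 topic echo /llm_reasoning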

CLI Arguments for chat

File truncated at 100 lines; see the full file in the repository.

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add tool_timeout parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for the Agent Skills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro jazzy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro kilted showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
rolling

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released UNRELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro ardent showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro bouncy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro crystal showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro eloquent showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for the Agent Skills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix race condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro kinetic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro melodic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange

No version for distro noetic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

bob_llm package from bob_llm repo

bob_llm

ROS Distro
humble

Package Summary

Version 1.0.3
License Apache-2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/bob-ros2/bob_llm.git
VCS Type git
VCS Version main
Last Updated 2026-05-15
Dev Status MAINTAINED
Released RELEASED
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

ROS package for interfacing with LLM's and VLM's using OpenAI compatible API.

Additional Links

Maintainers

  • Bob Ros

Authors

  • Bob Ros

ROS 2 CI amd64 arm64

ROS Package bob_llm

The bob_llm package provides a ROS 2 node (llm node) that acts as a powerful interface to an external Large Language Model (LLM). It operates as a stateful service that maintains a conversation, connects to any OpenAI-compatible API, and features a robust tool execution system.

Features

  • OpenAI-Compatible: Connects to any LLM backend that exposes an OpenAI-compatible API endpoint (e.g., Ollama, vLLM, llama-cpp-python, commercial APIs).
  • Stateful Conversation: Maintains chat history to provide conversational context to the LLM.
  • Dynamic Tool System: Dynamically loads Python functions from user-provided files and makes them available to the LLM. The LLM can request to call these functions to perform actions or gather information.
  • Anthropic Agent Skills: Full support for the Anthropic Agent Skills specification, enabling modular, self-contained capabilities with documentation and execution logic.
  • High Performance Streaming: Optimized byte-stream parsing ensures zero-latency delivery of reasoning tokens and response chunks directly from the socket (no internal buffering).
  • Reasoning/Thinking Support: Real-time extraction and publishing of model reasoning (e.g., from Gemma 2 or DeepSeek) to a dedicated topic.
  • Interactive Chat CLI: Includes a premium terminal interface with Markdown rendering and multi-line support.
  • Multi-modality: Supports multimodal input (e.g., images) via JSON prompts.
  • Lightweight: The node core requires only standard Python libraries (requests, rich, prompt_toolkit).
  • Multi-arch Docker Support: Ready-to-use Docker images for amd64 and arm64, fully configurable via environment variables for easy deployment.

Docker Usage

The bob_llm node is available as a multi-arch Docker image. All ROS parameters can be configured via environment variables (prefixed with LLM_).

Running with Docker

docker run -it --rm \
  --name bob-llm \
  -e LLM_API_URL="http://192.168.1.100:8000/v1" \
  -e LLM_API_KEY="your_secret_token" \
  -e LLM_API_MODEL="llama3" \
  -e LLM_TEMPERATURE="0.5" \
  ghcr.io/bob-ros2/bob-llm:latest

Running with Docker Compose

services:
  llm:
    image: ghcr.io/bob-ros2/bob-llm:latest
    container_name: bob-llm
    environment:
      - LLM_API_URL=http://llm-backend:8000/v1
      - LLM_API_KEY=sk-12345
      - LLM_API_MODEL=gpt-4
      - LLM_SYSTEM_PROMPT="You are a helpful robot assistant named Bob."
      - LLM_TEMPERATURE=0.8
    restart: always

Installation

  1. Clone the Repository Navigate to your ROS 2 workspace’s src directory and clone the repository:
    cd ~/ros2_ws/src
    git clone https://github.com/bob-ros2/bob_llm.git
    
  1. Install Dependencies The node requires a few Python packages. It is recommended to install these within a virtual environment.
    pip install requests PyYAML rich prompt_toolkit
    
  1. Build and Source
    cd ~/ros2_ws
    colcon build --packages-select bob_llm
    source install/setup.bash
    

Usage

1. Start the Brain (LLM Node)

Ensure your LLM server is active and the api_url in your params file is correct.

ros2 run bob_llm llm --ros-args --params-file /path/to/your/ros2_ws/src/bob_llm/config/node_params.yaml

2. Enter Interactive Chat

Interact with Bob through a dedicated, interactive terminal client.

# Start standard chat
ros2 run bob_llm chat

# Start with boxed UI (visual panels)
ros2 run bob_llm chat --panels

CLI Arguments for chat

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package bob_llm

1.0.3 (2026-04-30)

  • Refactor monolithic prompt processing into event-driven architecture (removed polling timer).
  • Implement O(n) chat history trimming for improved performance during long conversations.
  • Add robust stream recovery with automatic retry for failed API connections.
  • Implement token buffering for smoother streaming and reduced ROS message overhead.
  • Integrate ThreadPoolExecutor for isolated tool execution with configurable timeouts.
  • Add [tool_timeout]{.title-ref} parameter (default 60s) for robust skill execution.
  • Improve multimodal content preservation (images) and smart text extraction for logging.
  • Refine tool budgeting to count only successful calls against the limit.
  • Add clean node shutdown logic with executor and thread cleanup.
  • Add comprehensive integration test suite for LLM flow verification.
  • Fix chat history reasoning issues and improve turns management.
  • Implement definitive zero-latency SSE streaming via iter_lines.
  • Fix UTF-8 encoding for special characters in raw byte streams.
  • Optimize chat UI refresh rate for better human perception.
  • Integrate tool call detection in reasoning stream for faster response.
  • Restore 100% flake8/PEP8 compliance and single quote enforcement.
  • Refactor main interaction loop for robust synchronous execution.
  • Fix JSON prompt handling and enhance system prompt file support.
  • Add support for dynamic system_prompt_file parameter loading.
  • Implement dynamic parameter reconfiguration for LLM client.
  • Add optional eof parameter to signal end of stream on llm_stream.
  • Add tool_choice parameter for dynamic tool call control.
  • Enhance tool execution logging with result previews.
  • Remove prefix v1 from chat API path for standard compatibility.
  • Add support for Agentskills specification and modular tools.
  • Add native Qdrant vector database tools with env configuration.
  • Refactor Agent Skills to follow progressive disclosure patterns.
  • Fix Race Condition in LLMNode node-to-client initialization.
  • Implement soft limit for tool calls with final response hint.
  • Add llm_reasoning topic for live thinking content support.
  • Update OpenAI client to extract reasoning_content from chunks.
  • Enforce mandatory discovery in tool docstrings for safety.
  • Improve type safety in backend_clients with proper annotations.
  • Add premium interactive terminal chat client with boxed UI.
  • Clean up legacy scripts and modernize README documentation.
  • Contributors: Bob Ros

1.0.2 (2026-02-01)

  • Full ROS 2 Rolling and Humble compliance (fixed linter issues)
  • Standardized import ordering and quote usage
  • Contributors: Bob Ros

1.0.1 (2026-01-26)

  • Fix 270+ linter and style issues for ROS2 compliance
  • Fix package.xml schema validation
  • Standardize docstrings and copyright headers
  • Contributors: Bob Ros

1.0.0 (2025-11-25)

  • Initial release of bob_llm
  • Contributors: Bob Ros

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged bob_llm at Robotics Stack Exchange