# Repository Summary

| | |
|---|---|
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |

## Packages

| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
# README

## robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: success rate improved by 25 percentage points (42% → 67%), CPU-only, reproducible in 5 minutes.
### Quick Start

```bash
pip install robotmem
```

```python
from robotmem import learn, recall, save_perception, start_session, end_session

# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# Record experience
learn(
    insight="grip_force=12.5N yields highest grasp success rate",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)

# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
    query="grip force parameters",
    context_filter='{"task.success": true}',
    spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)

# Store perception data
save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)

# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
```
### 7 APIs

| API | Purpose |
|---|---|
| `learn` | Record physical experiences (parameters / strategies / lessons) |
| `recall` | Retrieve experiences — BM25 + vector hybrid search with `context_filter` and `spatial_sort` |
| `save_perception` | Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
| `forget` | Delete incorrect memories |
| `update` | Correct memory content |
| `start_session` | Begin an episode |
| `end_session` | End an episode (auto-consolidation + proactive recall) |
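The "BM25 + vector hybrid search" behind `recall` can be sketched as a weighted combination of two score lists. This is an illustrative sketch only, not robotmem's actual ranking code: the `normalize` and `hybrid_rank` helpers are hypothetical, and the real library may weight or fuse the signals differently.

```python
def normalize(scores):
    """Min-max normalize a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_rank(bm25_scores, vector_scores, alpha=0.5):
    """Return document indices sorted by combined score, best first.

    alpha weights keyword (BM25) relevance against embedding similarity.
    """
    b, v = normalize(bm25_scores), normalize(vector_scores)
    combined = [alpha * bs + (1 - alpha) * vs for bs, vs in zip(b, v)]
    return sorted(range(len(combined)), key=lambda i: combined[i], reverse=True)

# Document 1 scores well on both signals, so it ranks first:
print(hybrid_rank([2.0, 8.0, 4.0], [0.1, 0.9, 0.5]))  # [1, 2, 0]
```

Combining a lexical and a semantic signal this way lets exact parameter names (e.g. `grip_force`) match even when embeddings alone would miss them.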
### Key Features

#### Structured Experience Retrieval

Not just vector search — robotmem understands the structure of robot experiences:

```python
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')

# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')

# Combine: success + final distance < 0.05 m
recall(
    query="push",
    context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
```
#### Context JSON — 4 Sections

```json
{
  "params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
  "spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
  "robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
  "task": {"name": "push_to_target", "success": true, "steps": 38}
}
```

Each recalled memory automatically extracts `params` / `spatial` / `robot` / `task` as top-level fields.
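Assuming recalled memories arrive as dicts with those four sections promoted to top-level keys (a hypothetical shape inferred from the Context JSON example above, not a documented return type), a caller can read parameters without re-parsing the context string:

```python
# Hypothetical shape of one recalled memory (field names follow the
# Context JSON example; the real return type may differ):
memory = {
    "insight": "grip_force=12.5N yields highest grasp success rate",
    "params": {"grip_force": {"value": 12.5, "unit": "N"}},
    "spatial": {"object_position": [1.3, 0.7, 0.42]},
    "robot": {"id": "fetch-001"},
    "task": {"name": "push_to_target", "success": True},
}

# No JSON re-parsing needed to read a parameter:
force = memory["params"]["grip_force"]["value"]
print(force, memory["params"]["grip_force"]["unit"])  # 12.5 N
```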
#### Memory Consolidation + Proactive Recall

`end_session` automatically triggers:

- **Consolidation:** merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- **Proactive Recall:** returns historically relevant memories for the next episode
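The Jaccard criterion above can be sketched over word sets. This is a minimal illustration of the similarity measure, assuming token-level comparison; `jaccard` and `should_merge` are hypothetical helpers, not robotmem internals, and the library may tokenize differently.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def should_merge(m1: str, m2: str, threshold: float = 0.50) -> bool:
    """Two memories are merge candidates when similarity exceeds the threshold."""
    return jaccard(m1, m2) > threshold

# Near-duplicate insights (7 shared words out of 9 total) clear the 0.50 bar:
print(should_merge(
    "grip force 12.5 N yields highest grasp success",
    "grip force 12.5 N gives highest grasp success",
))  # True
```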
### FetchPush Demo

*(File truncated at 100 lines; see the full file.)*
# CONTRIBUTING

## Contributing to robotmem

Thanks for your interest in contributing to robotmem!
### Getting Started

```bash
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
```
### Running Tests

```bash
# If installed with pip install -e ".[dev]":
pytest tests/ -v

# If running directly from source:
PYTHONPATH=src pytest tests/ -v
```
### Code Style

- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
### Submitting Changes

- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Write tests for your changes
- Run `pytest tests/ -v` and ensure all tests pass
- Commit with a descriptive message
- Open a Pull Request
### Reporting Issues

Open an issue at https://github.com/robotmem/robotmem/issues with:

- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
### License

By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
FetchPush Demo
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# 如果使用 pip install -e ".[dev]" 安装:
pytest tests/ -v
# 如果从源码直接运行:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (
git checkout -b feat/my-feature) - Write tests for your changes
- Run
pytest tests/ -vand ensure all tests pass - Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.
Repository Summary
| Checkout URI | https://github.com/robotmem/robotmem.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-03-24 |
| Dev Status | DEVELOPED |
| Released | UNRELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| robotmem_msgs | 0.1.0 |
| robotmem_ros | 0.1.0 |
README
robotmem — Let Robots Learn from Experience
Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.
Quick Start
pip install robotmem
from robotmem import learn, recall, save_perception, start_session, end_session
# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')
# Record experience
learn(
insight="grip_force=12.5N yields highest grasp success rate",
context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)
# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
query="grip force parameters",
context_filter='{"task.success": true}',
spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)
# Store perception data
save_perception(
description="Grasp trajectory: 30 steps, success",
perception_type="procedural",
data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)
# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
7 APIs
| API | Purpose |
|---|---|
learn |
Record physical experiences (parameters / strategies / lessons) |
recall |
Retrieve experiences — BM25 + vector hybrid search with context_filter and spatial_sort
|
save_perception |
Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
forget |
Delete incorrect memories |
update |
Correct memory content |
start_session |
Begin an episode |
end_session |
End an episode (auto-consolidation + proactive recall) |
Key Features
Structured Experience Retrieval
Not just vector search — robotmem understands the structure of robot experiences:
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')
# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')
# Combine: success + distance < 0.05m
recall(
query="push",
context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
Context JSON — 4 Sections
{
"params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
"spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
"robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
"task": {"name": "push_to_target", "success": true, "steps": 38}
}
For each recalled memory, the params / spatial / robot / task sections of its context are automatically promoted to top-level fields.
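The promotion step can be pictured as follows. This is a hypothetical sketch, not robotmem's actual code: it parses the stored context JSON string and lifts the four sections onto the returned record; the helper name `promote_context` and the record layout are assumptions.

```python
import json

SECTIONS = ("params", "spatial", "robot", "task")

def promote_context(memory: dict) -> dict:
    """Copy the four context sections onto the memory record as top-level fields."""
    ctx = json.loads(memory.get("context", "{}"))
    promoted = dict(memory)
    for section in SECTIONS:
        if section in ctx:
            promoted[section] = ctx[section]
    return promoted

mem = promote_context({
    "insight": "grip_force=12.5N yields highest grasp success rate",
    "context": '{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
})
# mem["task"]["success"] and mem["params"]["grip_force"]["value"] are now
# directly addressable, which is what makes dotted filters like
# "task.success" cheap to evaluate.
```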
Memory Consolidation + Proactive Recall
end_session automatically triggers:
- Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
- Proactive Recall: Returns historically relevant memories for the next episode
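The consolidation rule above can be sketched as a token-level Jaccard check: two insights merge when their word-set overlap exceeds 0.50, while protected entry types always survive. Everything here is illustrative and assumed, not robotmem's real implementation: the actual tokenization, merge strategy, confidence threshold, and the field names `type`, `confidence`, and `insight` are guesses for the sake of the example.

```python
PROTECTED = {"constraint", "postmortem"}  # entry types never merged away
THRESHOLD = 0.50                          # Jaccard similarity cutoff

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two insight strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def consolidate(memories: list[dict]) -> list[dict]:
    """Drop later memories whose insight overlaps an earlier kept one past THRESHOLD."""
    kept: list[dict] = []
    for mem in memories:
        if mem.get("type") in PROTECTED or mem.get("confidence", 0.0) > 0.9:
            kept.append(mem)  # constraint / postmortem / high-confidence: always kept
            continue
        if any(jaccard(mem["insight"], k["insight"]) > THRESHOLD for k in kept):
            continue          # similar enough to an earlier entry: treated as merged
        kept.append(mem)
    return kept
```

Note the design choice this models: duplicates collapse toward the earlier entry, so an episode's first statement of a lesson wins, and safety-critical entries are exempt from merging entirely.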
FetchPush Demo
CONTRIBUTING
Contributing to robotmem
Thanks for your interest in contributing to robotmem!
Getting Started
git clone https://github.com/robotmem/robotmem.git
cd robotmem
pip install -e ".[dev]"
Running Tests
# If installed with pip install -e ".[dev]":
pytest tests/ -v
# If running directly from the source tree:
PYTHONPATH=src pytest tests/ -v
Code Style
- Python 3.10+
- Type hints on all public functions
- Docstrings on public modules and classes
Submitting Changes
- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Write tests for your changes
- Run `pytest tests/ -v` and ensure all tests pass
- Commit with a descriptive message
- Open a Pull Request
Reporting Issues
Open an issue at https://github.com/robotmem/robotmem/issues with:
- Steps to reproduce
- Expected vs actual behavior
- Python version and OS
License
By contributing, you agree that your contributions will be licensed under the Apache-2.0 License.