
asr_psm repository

asr_psm

ROS Distro: kinetic, melodic

Repository Summary

Checkout URI: https://github.com/asr-ros/asr_psm.git
VCS Type: git
VCS Version: master
Last Updated: 2019-11-14
Dev Status: UNMAINTAINED
Released: UNRELEASED
Tags: No category tags.

Packages

Name Version
asr_psm 1.0.0
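
To obtain the source, the repository can be checked out at the VCS version listed above, for example (the clone destination is up to you, typically the src directory of a catkin workspace):

	# clone the master branch of the asr_psm repository
	git clone -b master https://github.com/asr-ros/asr_psm.git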

README

General / Acquisition:
----------------------
	1. Call "start_cameras.launch" and "start_recognition_system.launch" to start all required nodes, such as visualization, cameras, coordinate transformations and object detectors.
	2. If you want to record data instead of processing it online (recording is necessary when learning scene models), call the script "start_recording_localizer_input_and_results.sh SEQUENCE_ID" or "start_recording_objects_only.sh SEQUENCE_ID". The resulting file is called "OBJ_SEQUENCE_ID.bag". A sample session is shown below.
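
A minimal acquisition and recording session, assuming the recording script is run from the package's scripts directory and using 42 as a stand-in SEQUENCE_ID, could look like this (each roslaunch in its own terminal):

	# start visualization, cameras, coordinate transformations and object detectors
	roslaunch asr_psm start_cameras.launch
	roslaunch asr_psm start_recognition_system.launch
	# record the detected objects for later offline processing
	./start_recording_objects_only.sh 42    # writes OBJ_42.bag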
	
Scene Model Learning:
----------------------------------
	3. To capture a single scene, call "roslaunch asr_psm start_scene_graph_generation.launch scene_id:=SCENE_NAME" in the launch directory. The scene graph generator started by the launch file collects all published object messages and processes them, so either online data or a rosbag file can be used as input. Press the key 'p' to publish the results.
	   => Please note that only one object per type in each scene is currently supported. This holds in particular if objects of the same type appear across multiple trajectory recordings for one scene.
	4. Use the script "start_recording_scene_graph.sh SEQUENCE_ID" in the scripts directory to capture the output. The resulting file is called "SCENEGRAPH_SEQUENCE_ID.bag".
	5. To learn the model, specify the name(s) of the bag file(s) in the launch file "learner.launch" and launch it with "roslaunch asr_psm learner.launch". See the example below.
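
For example, learning a model for a hypothetical scene called "breakfast" from sequence 42 (both names are placeholders) might proceed as follows:

	# collect scene graphs from live data or a replayed OBJ_*.bag; press 'p' to publish the results
	roslaunch asr_psm start_scene_graph_generation.launch scene_id:=breakfast
	# record the published scene graphs (run from the scripts directory)
	./start_recording_scene_graph.sh 42    # writes SCENEGRAPH_42.bag
	# after entering the bag file name(s) in learner.launch, start the learner
	roslaunch asr_psm learner.launch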
	
Inference:
----------------------------------
	6. Start the image acquisition with "roslaunch asr_psm start_cameras.launch".
	7. Start the object detectors with "roslaunch asr_psm start_recognition_system.launch".
	8. Edit the "inference.launch" file to fit your needs and launch it with "roslaunch asr_psm inference.launch". The results are visualized in RViz; on demand, a gnuplot graph showing the scene probabilities can be displayed.
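
Put together, a typical inference run, assuming scene models have already been learned and inference.launch has been adapted, might be (each roslaunch in its own terminal):

	roslaunch asr_psm start_cameras.launch               # image acquisition
	roslaunch asr_psm start_recognition_system.launch    # object detectors
	roslaunch asr_psm inference.launch                   # evaluate the learned scene models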
