vision_msgs package from vision_msgs repo

vision_msgs

Package Summary

Tags: No category tags.
Version: 2.0.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/Kukanani/vision_msgs.git
VCS Type: git
VCS Version: ros2
Last Updated: 2020-08-11
Dev Status: MAINTAINED
CI status: No Continuous Integration
Released: RELEASED
Package Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

Messages for interfacing with various computer vision pipelines, such as object detectors.

Additional Links

No additional links.

Maintainers

  • Adam Allevato

Authors

  • Adam Allevato

ROS Vision Messages

Introduction

This package defines a set of messages to unify computer vision and object detection efforts in ROS.

Overview

The messages in this package define a common outward-facing interface for vision-based pipelines. They are meant to enable two primary types of pipelines:

  1. "Pure" Classifiers, which identify class probabilities given a single sensor input
  2. Detectors, which identify class probabilities as well as the poses of those classes given a sensor input

The class probabilities are stored in an array of ObjectHypothesis messages, which is essentially a map from integer IDs to float scores and poses.
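
As a concrete illustration of consuming these hypotheses, the following minimal Python sketch picks the highest-scoring hypothesis out of a single detection. It assumes the field layout of this 2.0.0 release (Detection2D.results holds ObjectHypothesisWithPose entries, each with id and score); check the installed .msg definitions if your version differs.

```python
# Minimal sketch: select the best class hypothesis from one detection.
# Assumes the 2.0.0 layout, where Detection2D.results is a list of
# ObjectHypothesisWithPose messages carrying `id` and `score`.
from vision_msgs.msg import Detection2D


def best_hypothesis(detection: Detection2D):
    """Return the highest-scoring hypothesis, or None if there are none."""
    if not detection.results:
        return None
    return max(detection.results, key=lambda hyp: hyp.score)
```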

Message types exist separately for 2D (using sensor_msgs/Image) and 3D (using sensor_msgs/PointCloud2). The metadata that is stored for each object is application-specific, and so this package places very few constraints on the metadata. Each possible detection result must have a unique numerical ID so that it can be unambiguously and efficiently identified in the results messages. Object metadata such as name, mesh, etc. can then be looked up from a database.

The only other requirement is that the metadata database information can be stored in a ROS parameter. We expect a classifier to load the database (or detailed database connection information) to the parameter server in a manner similar to how URDFs are loaded and stored there (see [6]), most likely defined in an XML format. This expectation may be further refined in the future using a ROS Enhancement Proposal, or REP [7].

We also would like classifiers to have a way to signal when the database has been updated, so that listeners can respond accordingly. The database might be updated in the case of online learning. To solve this problem, each classifier can publish messages to a topic signaling that the database has been updated, as well as incrementing a database version that's continually published with the classifier information.
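
The sketch below illustrates this pattern under the assumptions just described: a classifier node exposes its metadata database location as a ROS parameter and periodically publishes a VisionInfo message whose database_version it bumps whenever the database changes. The node, topic, and parameter names are illustrative only; the VisionInfo fields (method, database_location, database_version) are taken from this package's 2.0.0 definition.

```python
import rclpy
from rclpy.node import Node
from vision_msgs.msg import VisionInfo


class ClassifierInfoPublisher(Node):
    def __init__(self):
        super().__init__('example_classifier')  # hypothetical node name
        # Hypothetical parameter holding the metadata database location.
        self.declare_parameter('class_metadata_db', '/tmp/classes.xml')
        self.pub = self.create_publisher(VisionInfo, 'vision_info', 1)
        self.db_version = 0  # bump this whenever the database is updated
        self.timer = self.create_timer(1.0, self.publish_info)

    def publish_info(self):
        msg = VisionInfo()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.method = 'example_classifier'
        msg.database_location = self.get_parameter('class_metadata_db').value
        msg.database_version = self.db_version
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ClassifierInfoPublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```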

Messages

  • Classification2D and Classification3D: pure classification without pose
  • Detection2D and Detection3D: classification + pose
  • XArray messages, where X is one of the four message types listed above. A pipeline should emit XArray messages as its forward-facing ROS interface (a minimal publishing sketch follows this list).
  • VisionInfo: Information about a classifier, such as its name and where to find its metadata database.
  • ObjectHypothesis: An id/score pair.
  • ObjectHypothesisWithPose: An id/(score, pose) pair. This accounts for the fact that a single input, say, a point cloud, could have different poses depending on its class. For example, a flat rectangular prism could either be a smartphone lying on its back, or a book lying on its side.
  • BoundingBox2D, BoundingBox3D: orientable rectangular bounding boxes, specified by the pose of their center and their size.
  • BoundingRect2D: A simplified bounding box that uses the OpenCV format: definition of the upper-left corner, as well as width and height of the box. The BoundingRect2D cannot be rotated.
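
As referenced above, here is a sketch of the forward-facing output a 2D detector might build: a Detection2DArray containing one detection with one class hypothesis and a center-size bounding box. Field names follow the 2.0.0 definitions (detections, results, bbox, and BoundingBox2D's center/size_x/size_y); per the changelog below, the object id is a string in this release, and the ID value and score here are placeholders.

```python
# Sketch of a detector's forward-facing output, assuming the 2.0.0 layout.
from vision_msgs.msg import Detection2D, Detection2DArray, ObjectHypothesisWithPose


def make_detections(frame_id, stamp) -> Detection2DArray:
    out = Detection2DArray()
    out.header.frame_id = frame_id
    out.header.stamp = stamp

    det = Detection2D()
    det.header = out.header

    hyp = ObjectHypothesisWithPose()
    hyp.id = '42'      # class ID only; name, mesh, etc. come from the metadata database
    hyp.score = 0.9    # placeholder confidence
    det.results.append(hyp)

    # Center-size bounding box in pixel coordinates.
    det.bbox.center.x = 320.0
    det.bbox.center.y = 240.0
    det.bbox.size_x = 100.0
    det.bbox.size_y = 60.0

    out.detections.append(det)
    return out


# Typical use inside a node:
#   pub = node.create_publisher(Detection2DArray, 'detections', 10)
#   pub.publish(make_detections('camera', node.get_clock().now().to_msg()))
```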

By using a very general message definition, we hope to cover as many of the various computer vision use cases as possible. Some examples of use cases that can be fully represented are:

  • Bounding box multi-object detectors with tight bounding box predictions, such as YOLO [1]
  • Class-predicting full-image detectors, such as TensorFlow examples trained on the MNIST dataset [2]
  • Full 6D-pose recognition pipelines, such as LINEMOD [3] and those included in the Object Recognition Kitchen [4]
  • Custom detectors that use various point-cloud based features to predict object attributes (one example is [5])

Please see the vision_msgs_examples repository for some sample vision pipelines that emit results using the vision_msgs format.

References

CHANGELOG

Changelog for package vision_msgs

2.0.0 (2020-08-11)

  • Fix lint error for draconian header guard rule
  • Rename create_aabb to use C++ extension. This fixes linting errors which assume that .h means a file is C (rather than C++).
  • Add CONTRIBUTING.md
  • Fix various linting issues
  • Add gitignore; sync ros2 with master
  • Update test for ros2
  • add BoundingBox3DArray message (#30)
    • add BoundingBoxArray message
  • Make msg gen package deps more specific (#24): make message_generation and message_runtime use more specific depend tags
  • Merge branch 'kinetic-devel'
  • Removed "proposal" from readme (#23)
  • add tracking ID to the Detection Message (#19)
    • add tracking ID to the Detection
    • modify comments
    • Change UUID messages to strings
    • Improve comment for tracking_id and fix whitespace
  • Convert id to string (#22)
  • Specify that id is explicitly for object class
  • Fix dependency of unit test. (#14)
  • 0.0.1
  • Pre-release commit - setting up versioning and changelog
  • Rolled BoundingRect into BoundingBox2D. Added helper functions to make it easier to go from corner-size representation to center-size representation, plus associated tests.
  • Added license
  • Small fixes in message comments (#10)
  • Contributors: Adam Allevato, Leroy R

vision_msgs package from vision_msgs repo

vision_msgs

Package Summary

Tags: No category tags.
Version: 0.0.1
License: Apache License 2.0
Build type: CATKIN
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/Kukanani/vision_msgs.git
VCS Type: git
VCS Version: noetic-devel
Last Updated: 2020-07-18
Dev Status: MAINTAINED
CI status:
Released: RELEASED
Package Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

Messages for interfacing with various computer vision pipelines, such as object detectors.

Additional Links

No additional links.

Maintainers

  • Adam Allevato

Authors

  • Adam Allevato

CHANGELOG

Changelog for package vision_msgs

0.0.1 (2017-11-14)

  • Initial commit
  • Contributors: Adam Allevato, Martin Gunther, procopiostein

Package Dependencies

System Dependencies

No direct system dependencies.

Dependant Packages

No known dependants.

vision_msgs package from vision_msgs repo

vision_msgs

Package Summary

Tags: No category tags.
Version: 0.0.1
License: Apache License 2.0
Build type: CATKIN
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/Kukanani/vision_msgs.git
VCS Type: git
VCS Version: melodic-devel
Last Updated: 2017-11-14
Dev Status: MAINTAINED
CI status:
Released: RELEASED
Package Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

Messages for interfacing with various computer vision pipelines, such as object detectors.

Additional Links

No additional links.

Maintainers

  • Adam Allevato

Authors

  • Adam Allevato

CHANGELOG

Changelog for package vision_msgs

0.0.1 (2017-11-14)

  • Initial commit
  • Contributors: Adam Allevato, Martin Gunther, procopiostein

Package Dependencies

System Dependencies

No direct system dependencies.

Dependant Packages

vision_msgs package from vision_msgs repo

vision_msgs

Package Summary

Tags: No category tags.
Version: 0.0.1
License: Apache License 2.0
Build type: CATKIN
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/Kukanani/vision_msgs.git
VCS Type: git
VCS Version: kinetic-devel
Last Updated: 2019-10-04
Dev Status: MAINTAINED
CI status: Continuous Integration
Released: RELEASED
Package Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

Messages for interfacing with various computer vision pipelines, such as object detectors.

Additional Links

No additional links.

Maintainers

  • Adam Allevato

Authors

  • Adam Allevato

CHANGELOG

Changelog for package vision_msgs

0.0.1 (2017-11-14)

  • Initial commit
  • Contributors: Adam Allevato, Martin Gunther, procopiostein

Package Dependencies

System Dependencies

No direct system dependencies.

Dependant Packages

vision_msgs package from vision_msgs repo

vision_msgs

Package Summary

Tags: No category tags.
Version: 0.0.1
License: Apache License 2.0
Build type: CATKIN
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/Kukanani/vision_msgs.git
VCS Type: git
VCS Version: lunar-devel
Last Updated: 2017-11-14
Dev Status: MAINTAINED
CI status: Continuous Integration
Released: RELEASED
Package Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

Messages for interfacing with various computer vision pipelines, such as object detectors.

Additional Links

No additional links.

Maintainers

  • Adam Allevato

Authors

  • Adam Allevato

CHANGELOG

Changelog for package vision_msgs

0.0.1 (2017-11-14)

  • Initial commit
  • Contributors: Adam Allevato, Martin Gunther, procopiostein

Package Dependencies

System Dependencies

No direct system dependencies.

Dependant Packages

No known dependants.
