
Package Summary

Tags No category tags.
Version 3.0.2
License MIT
Build type CATKIN
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/GT-RAIL/rail_object_detection.git
VCS Type git
VCS Version indigo-devel
Last Updated 2018-11-01
Dev Status DEVELOPED
CI status Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

The rail_object_detector package

Additional Links

No additional links.

Maintainers

  • Siddhartha Banerjee
  • Andrew Silva

Authors

No additional authors.

RAIL Object Detector

OpenCV 3.0+ Warning:

Many files within Darknet will no longer compile on OpenCV 3+. We have a branch titled ‘opencv3_compilation’ which fixes these errors for CPU, but there are many more to fix before Darknet will compile for GPU use. As such, if you are on OpenCV 3+, we suggest cloning the opencv3_compilation branch and using the DRFCN detector instead of Darknet.

Two Minute Intro

This package now includes two object detectors you may choose between: YOLOv2 and Deformable R-FCN (DRFCN). YOLOv2 detections are faster (>10 fps vs. ~4 fps on a Titan X) but less accurate than DRFCN detections.

The YOLOv2 detector uses darknet to perform object detection. It provides the ability to query for objects in an image through services as well as from a topic.

The Deformable R-FCN detector is built on MXNet, and provides the ability to query for objects from a topic.

The response to all queries contains a list of objects, each of which has the following properties:

  1. label
  2. probability - confidence value in recognition
  3. centroid_x - X pixel value of the centroid of the bounding box
  4. centroid_y - Y pixel value of the centroid of the bounding box
  5. left_bot_x - X pixel value of bottom-left corner of bounding box
  6. left_bot_y - Y pixel value of bottom-left corner of bounding box
  7. right_top_x - X pixel value of top-right corner of bounding box
  8. right_top_y - Y pixel value of top-right corner of bounding box
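
For instance, a small Python callback could summarize each detected object using these fields. This is a sketch only; the objects list field name on the rail_object_detection_msgs/Detections message is an assumption here.

def summarize(detections):
    # Iterate over the detected objects and print the fields listed above.
    for obj in detections.objects:  # 'objects' field name is assumed
        width = abs(obj.right_top_x - obj.left_bot_x)
        height = abs(obj.left_bot_y - obj.right_top_y)
        print('%s (%.2f): centroid=(%d, %d), bbox %dx%d'
              % (obj.label, obj.probability,
                 obj.centroid_x, obj.centroid_y, width, height))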

Querying through services (Darknet Only)

There are two modes of querying:

  • Scene Queries
  • Image Queries

Scene Queries are served by first subscribing to an existing camera sensor topic. At the moment of the query, object recognition is run on the latest frame from that camera, and the objects detected in the scene are returned once darknet finishes processing.

Image Queries require an image to be sent along with the query. Object recognition is performed on this input image, and the detected objects as well as the original image are sent back.
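
As an illustration, a minimal Python client for the Scene Query service could look like the sketch below. The service name assumes the default node name darknet_node, and the objects field on the response is an assumption; check the service definitions in rail_object_detection_msgs for the exact field names.

#!/usr/bin/env python
import rospy
from rail_object_detection_msgs.srv import SceneQuery

rospy.init_node('scene_query_example')
rospy.wait_for_service('darknet_node/objects_in_scene')
query = rospy.ServiceProxy('darknet_node/objects_in_scene', SceneQuery)
response = query()  # a Scene Query takes no input
for obj in response.objects:  # 'objects' field name is assumed
    rospy.loginfo('%s (%.2f)', obj.label, obj.probability)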

Query through topic (Darknet and DRFCN)

If enabled (default with DRFCN), the detector subscribes to an existing camera sensor topic and grabs images from this camera as they are published. After performing object detection on the grabbed image, the detector publishes the list of objects that were found to the query topic along with the timestamp of the image which was grabbed for detection.

With the Darknet detector, the interval for grabbing images is specified as a frequency. If the desired frequency exceeds the detector's maximum operating frequency (~1 Hz on CPU), it is capped at that maximum.
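
A minimal listener for this topic might look like the following sketch. The topic name assumes the default node name drfcn_node (use darknet_node/detections for Darknet), and the header and objects field names on the Detections message are assumptions.

#!/usr/bin/env python
import rospy
from rail_object_detection_msgs.msg import Detections

def on_detections(msg):
    # Print the grabbed image's timestamp and the labels of everything detected in it.
    labels = [obj.label for obj in msg.objects]
    rospy.loginfo('%d objects at %s: %s', len(labels), msg.header.stamp, ', '.join(labels))

rospy.init_node('detections_listener')
rospy.Subscriber('drfcn_node/detections', Detections, on_detections, queue_size=1)
rospy.spin()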

Installation

Darknet Installation:

  1. Put this package into your workspace
  2. Assuming ${WS} is the top-level directory of this package (where this README is located), navigate to ${WS}/libs/darknet
  3. Download the weights into this directory from this Google Drive link
  4. Run catkin_make and enjoy the ability to use object detection! (If you need to update the file paths, use absolute paths!)

DRFCN Installation

Below is a list of python libraries that are required for the Deformable RFCN network in MXNet to work:

  • Cython
  • EasyDict
  • mxnet-cu80==0.12.0b20171027
  • Pillow
  • pyyaml

Note that the version of MXNet is the same as in the Deformable ConvNets repo. You may change the version if your setup requires something different, but it may not work properly.

There is a requirements.txt file included in the repo, which lists the above libraries. You can install all of them at once by running:

pip install -r requirements.txt

Once you have all of the requirements, installation proceeds as follows:

  1. Put this package into your workspace
  2. Navigate to the top level of this package (where this README is located)
  3. run sh bin/build_drfcn.sh
  4. Download the model parameters from this Dropbox link and move them into the libs/drfcn/model subdirectory of this package.
  5. Run catkin_make and enjoy the ability to use object detection! (If you need to update the file paths, use absolute paths!)

Testing your Installation

Testing either detector is possible by starting a camera node or by adding some testing .jpg images to the libs/darknet/data directory. Then follow the procedures below according to the detector you wish to test.

Darknet Testing

Three optional test scripts are included in the scripts directory (test_image_query.py, test_scene_query.py, and test_detections_topic.py). To test your installation, do the following:

  • Run a camera with your favorite ROS camera node.
  • Launch the detector_node node with the image topic of your camera and image queries enabled:
roslaunch rail_object_detector darknet.launch use_image_service:=true image_sub_topic_name:=[camera topic here]

  • Run the scene query test script; this should periodically detect and recognize objects in images from your camera:
rosrun rail_object_detector test_scene_query.py

  • Run the image query test script; this should run object recognition on the images you copied into libs/darknet/data:
rosrun rail_object_detector test_image_query.py

  • Shutdown the previous launch and restart with services disabled but the detections topic enabled:
roslaunch rail_object_detector darknet.launch publish_detections_topic:=true image_sub_topic_name:=[camera topic here]

  • Run the topic test script; this should run object recognition in the background and print to the console the list of objects that were detected along with the timestamp:
rosrun rail_object_detector test_detections_topic.py

DRFCN Testing

Launch the demo node like so:

roslaunch rail_object_detector drfcn_demo.launch

In a separate terminal, run:

rosrun image_view image_view image:=/drfcn_node/debug/object_image

and you will see the image you pointed to with detected objects highlighted and labeled. It should look something like this:

Visualization of the object detector

Colors change with each new detection of an object, and note that there is no tracking or propagation of labels (as seen on the couch in the gif above).

You can also launch the demo node with a publish rate parameter, which slows or speeds up the cycling of images in the libs/darknet/data directory. For example, to publish at 0.5 Hz run:

roslaunch rail_object_detector drfcn_demo.launch rate:=0.5

Otherwise, you can test against a live camera:

  1. Run a camera with your favorite ROS camera node.
  2. Launch the detector_node node with the image topic of your camera and debug mode enabled:
roslaunch rail_object_detector drfcn.launch image_sub_topic_name:=[camera topic here] debug:=true

Once again, in a separate terminal, run:

rosrun image_view image_view image:=/drfcn_node/debug/object_image

and you will see the image you pointed to with detected objects highlighted and labeled.
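
If you would rather consume the debug images programmatically than through image_view, a small cv_bridge subscriber along the following lines should also work. This is a sketch only; it assumes the default node name and the debug topic shown above.

#!/usr/bin/env python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def save_frame(msg):
    # Convert the annotated ROS image to OpenCV format and write it to disk.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imwrite('detections_%s.png' % msg.header.stamp, frame)

rospy.init_node('debug_image_saver')
rospy.Subscriber('/drfcn_node/debug/object_image', Image, save_frame, queue_size=1)
rospy.spin()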

ROS Nodes

detector_node

Named darknet_node and drfcn_node for the Darknet and DRFCN detectors, respectively. It wraps object detection behind ROS services and topics. Relevant services, topics, and parameters are as follows:

  • Services (Darknet Only)
    • <detector_node>/objects_in_scene (rail_object_detection_msgs/SceneQuery)
      Scene Query service: recognize objects in the latest image from the camera stream image_sub_topic_name. Takes no input, and outputs a list of detected, labeled objects and a corresponding image. Only advertised if use_scene_service is true.
    • <detector_node>/objects_in_image (rail_object_detection_msgs/ImageQuery)
      Image Query service: recognize objects in an image passed to the service. Takes an image as input, and outputs a list of detected, labeled objects and a corresponding image. Only advertised if use_image_service is true.
  • Topics (Darknet and DRFCN)
    • <detector_node>/detections (rail_object_detection_msgs/Detections)
      Topic with object detections performed in the background by grabbing images at a specified interval. For Darknet, advertised if publish_detections_topic is true. For DRFCN, this is always published.
    • <detector_node>/debug/object_image (sensor_msgs/Image)
      Topic with object detections visualized on incoming images as they come in from the subscriber. Only published if debug:=true. Currently unavailable for the Darknet detector.
  • Parameters (Darknet and DRFCN)
    • image_sub_topic_name (string, default: “/kinect/qhd/image_color_rect”)
      Image topic name from camera to subscribe to for object detection
  • Parameters (DRFCN Only)
    • debug (bool, default: false)
      Enable or disable debug mode, which publishes input images with object bounding boxes and labels overlaid
    • use_compressed_image (bool, default: false)
      Flag to use the compressed version of your input image stream. It will append /compressed to the name of your image topic
    • model_filename (string, default: “${WS}/libs/drfcn/model/rfcn_dcn_coco”)
      Model parameters for the DRFCN detector. Make sure to use an absolute path. See the documentation on DRFCNs for details on the parameters themselves.
  • Parameters (Darknet Only)
    • num_service_threads (int, default: 0)
      Number of asynchronous threads that can be used to service each of the services. 0 implies the use of one thread per processor
    • use_scene_service (bool, default: true)
      Enable or disable Scene Query service
    • use_image_service (bool, default: false)
      Enable or disable Image Query service
    • publish_detections_topic (bool, default: false)
      Enable or disable Detections topic
    • max_desired_publish_freq (float, default: 1.0)
      Desired frequency of object detection. If frequency exceeds maximum detector frequency, the desired value will not be achieved
    • probability_threshold (float, default: 0.25)
      Confidence value in recognition below which a detected object is treated as unrecognized
    • classnames_filename (string, default: “${WS}/libs/darknet/data/coco.names”)
      Class names file for darknet. Make sure to use an absolute path. See darknet for details on the file itself.
    • cfg_filename (string, default: “${WS}/libs/darknet/cfg/yolo.cfg”)
      Network configuration file for darknet. Make sure to use an absolute path. See darknet for details on the configuration file itself.
    • weight_filename (string, default: “${WS}/libs/darknet/yolo.weights”)
      Weights file for darknet. Make sure to use an absolute path. See darknet for details on the weights file itself.

Startup

Simply run the launch file to bring up all of the package’s functionality:

roslaunch rail_object_detector <detector>.launch

Where ‘detector’ is either ‘darknet’ or ‘drfcn’. Note that the default Darknet uses scene queries only, and the default DRFCN uses image topics only.

GPU Mode

DRFCN requires an NVIDIA GPU with at least 4GB memory, and requires no additional setup to run in GPU mode (as it is the only mode). Building Darknet for GPU mode will greatly increase detection speed, but requires some additional setup.

Building with CUDA

Darknet can be built with CUDA support to provide >10x speedup in object detection. The compilation flags DARKNET_GPU and DARKNET_GPU_ARCH can be used to enable this.

Make sure you have cleaned out any previous darknet builds by running make clean in the directory rail_object_detector/libs/darknet before attempting to build with CUDA support.

catkin_make -DDARKNET_GPU=1 -DDARKNET_GPU_ARCH=compute_52

Explanation of flags:

  1. DARKNET_GPU: Set to 1 in order to enable GPU. Any other value disables it
  2. DARKNET_GPU_ARCH: Set to the compute capability of your CUDA enabled GPU. You can look this up on Wikipedia

Scope for Improvement

  • Move libs/darknet/data to libs/data for clarity / ubiquity
  • Add a service call for the deformable convnets
  • Automatically run sh bin/build_drfcn.sh
  • Travis builds only test the CPU build of darknet. Include tests for the other configurations too.
  • Installing the mxnet libraries through setup.py is failing. (See TODO comments in setup.py and drfcn_detector.py)
  • Scene Query and publishing the detections topic don't seem to work well together on darknet, for reasons that are not yet understood. Fix it.
  • Running catkin_make clean does not clean up the darknet build artifacts.
  • Include the ability to download the weights files automatically from the CMakeLists.txt file
  • There is a distinct lack of defensive programming against malicious (NULL) messages and the like. Beware.
  • There are no checks for memory leaks that might accumulate over time in the darknet detector.
CHANGELOG

Changelog for package rail_object_detector

3.0.2 (2018-11-01)

  • Update the default branch

3.0.1 (2018-09-07)

  • Create a separate package for the messages
  • Contributors: Siddhartha Banerjee, banerjs

2.0.1 (2017-12-11)

  • Added Deformable RFCN detector.
  • Change darknet node name.
  • Change directory structure and file names for darknet.
  • Update README with valid weights file, and fix broken links
  • Contributors: Andrew Silva, Ryan Petschek, Siddhartha Banerjee, Weiyu Liu

1.0.4 (2017-02-09)

  • Adding in an automatic build for 32 bit
  • Contributors: Siddhartha Banerjee, banerjs

1.0.3 (2017-02-09)

  • Adding in base travis build
  • Fixed the 32-bit compile error, hopefully
  • Contributors: Siddhartha Banerjee

1.0.2 (2017-02-03)

  • Completed the build of GPU with flags
  • Pushing fixes in master back to gpu_devel. Merge branch 'master' into gpu_devel
  • Updated the README. Unfortunately, the mangling of data between publisher and service still exists and I cannot get rid of it
  • Fixed timing bugs with the object detector and possibly even the bug between service and topic contention. Need to test
  • Merging updates on master into the GPU branch
  • Created and Tested install of the package
  • Removed the need to update internal config file values
  • Returning defaults
  • Changed naming for a public release.
  • Testing out GPU functionality
  • Adding in a gitignore
  • Updated the detector to also use a topic for publishing object detections
  • removed unused parameters
  • removed service name parameter
  • Updated path in test script to use package-based absolute path
  • package-location-based absolute paths for ros params, minor cleanup and debugging, documentation
  • Adding in the cfg and data folders
  • Completed final details on object detection. Delivering project for now
  • Cursory stress test of the node is complete
  • Finally done with a functioning prototype of the object detector
  • Successful linkage of C++ with C
  • Completed ROS Skeleton for the detector
  • Initial commit of the object detector
  • Contributors: David Kent, Siddhartha Banerjee

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Dependent Packages

Launch files

  • launch/drfcn_demo.launch
      • image_sub_topic_name [default: /kinect/qhd/image_color_rect]
      • model_filename [default: $(find rail_object_detector)/libs/drfcn/model/rfcn_dcn_coco]
      • debug [default: true]
      • rate [default: 1]
  • launch/darknet.launch
      • num_service_threads [default: 0]
      • use_scene_service [default: true]
      • use_image_service [default: false]
      • publish_detections_topic [default: false]
      • image_sub_topic_name [default: /kinect/hd/image_color_rect]
      • max_desired_publish_freq [default: 1.0]
      • probability_threshold [default: 0.25]
      • classnames_filename [default: $(find rail_object_detector)/libs/darknet/data/coco.names]
      • cfg_filename [default: $(find rail_object_detector)/libs/darknet/cfg/yolo.cfg]
      • weight_filename [default: $(find rail_object_detector)/libs/darknet/yolo.weights]
  • launch/drfcn.launch
      • image_sub_topic_name [default: /kinect/qhd/image_color_rect]
      • model_filename [default: $(find rail_object_detector)/libs/drfcn/model/rfcn_dcn_coco]
      • debug [default: false]
      • use_compressed_image [default: false]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged rail_object_detector at Robotics Stack Exchange