Oct 7, 2011

3D point cloud blob tracking & targeting with Kinect and a robotic pan-tilt turret


After exploring the possibilities of using OpenCV 2D algorithms on depth buffer images for 3D tracking, we decided to work directly with the 3D point cloud data to achieve better and more versatile 3D tracking.




One of the most interesting open source resources available for 3D point cloud handling is the Point Cloud Library (PCL) project. The aim of this initiative is to provide a robust and easy way of analysing 3D world data, with goals including 3D object recognition, surface feature estimation, etc.

Before getting deeply into the use of PCL (which allows not only efficient handling of the 3D data, but also interfacing directly with 3D sensors such as the Kinect), we decided to implement from scratch one of the blob parsing algorithms already available in PCL and explained in the PCL documentation (http://pointclouds.org/documentation/tutorials/cluster_extraction.php).


The algorithm is Euclidean Cluster Extraction: basically, identifying sets of points that lie within a given 3D distance of their neighbours, the 3D equivalent of adjacent pixels of the same colour in 2D.
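The core idea can be sketched in a few lines of C++. This is not the PCL implementation (which accelerates the neighbour search with a kd-tree) nor the code from our addons; it is a minimal, naive version with hypothetical names, grown by breadth-first search with an O(n²) distance check:

```cpp
#include <cmath>
#include <cstddef>
#include <queue>
#include <vector>

struct Point3 { float x, y, z; };

static float dist2(const Point3 &a, const Point3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Naive Euclidean cluster extraction: grow each cluster by repeatedly
// pulling in any unvisited point within `tolerance` of a cluster member.
// Returns clusters as lists of indices into `pts`.
std::vector<std::vector<std::size_t>> extractClusters(
        const std::vector<Point3> &pts, float tolerance) {
    const float tol2 = tolerance * tolerance;
    std::vector<bool> visited(pts.size(), false);
    std::vector<std::vector<std::size_t>> clusters;

    for (std::size_t seed = 0; seed < pts.size(); ++seed) {
        if (visited[seed]) continue;
        std::vector<std::size_t> cluster;
        std::queue<std::size_t> frontier;
        frontier.push(seed);
        visited[seed] = true;
        while (!frontier.empty()) {
            std::size_t i = frontier.front();
            frontier.pop();
            cluster.push_back(i);
            // O(n) neighbour scan per point; PCL replaces this with a kd-tree query.
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (!visited[j] && dist2(pts[i], pts[j]) <= tol2) {
                    visited[j] = true;
                    frontier.push(j);
                }
            }
        }
        clusters.push_back(cluster);
    }
    return clusters;
}
```

For example, five points on the x axis at 0 m, 0.05 m, 0.10 m, 2.0 m and 2.05 m, with a 0.2 m tolerance, come out as two clusters of three and two points.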

The advantage of making a custom implementation is that we know exactly where to look for adjacent points: they already come next to each other as 2D pixels in the depth buffer image. This also allows us to keep using, for now, all the image-based noise filtering and background subtraction, instead of moving everything to the PCL way. Another benefit of working directly with the 3D point cloud data is that we can develop a more user-friendly interface, now in an OpenGL 3D virtual world, to see what is happening with the 3D blob detection and localisation. A further step will be to allow the setup to be done via this OpenGL interface, as in most 3D modelling software.
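Exploiting the depth image layout can be sketched as a flood fill: candidate neighbours are only the adjacent pixels, and two pixels join the same blob only if their depths are close enough. This is a simplified illustration with hypothetical names, not the addon code; it compares raw depth values where a full version would compare reprojected 3D points:

```cpp
#include <cmath>
#include <queue>
#include <vector>

// Label blobs in a w*h depth image (row-major, metres; depth <= 0 means
// background/invalid). Pixels merge only if they are 4-adjacent in the image
// AND their depths differ by less than `tolerance`. Writes one label per
// pixel (-1 = background) and returns the number of blobs found.
int labelBlobs(const std::vector<float> &depth, int w, int h,
               float tolerance, std::vector<int> &labels) {
    labels.assign(depth.size(), -1);
    int nextLabel = 0;
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int idx = y * w + x;
            if (depth[idx] <= 0.0f || labels[idx] != -1) continue;
            // New blob: flood fill from this pixel.
            std::queue<int> frontier;
            frontier.push(idx);
            labels[idx] = nextLabel;
            while (!frontier.empty()) {
                int i = frontier.front(); frontier.pop();
                int cx = i % w, cy = i / w;
                for (int k = 0; k < 4; ++k) {
                    int nx = cx + dx[k], ny = cy + dy[k];
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    int ni = ny * w + nx;
                    if (depth[ni] <= 0.0f || labels[ni] != -1) continue;
                    if (std::fabs(depth[ni] - depth[i]) < tolerance) {
                        labels[ni] = nextLabel;
                        frontier.push(ni);
                    }
                }
            }
            ++nextLabel;
        }
    }
    return nextLabel;
}
```

Because each pixel only ever checks its four image neighbours, the whole pass is linear in the number of pixels, instead of the quadratic (or kd-tree-assisted) neighbour search needed for an unordered cloud.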

Here is a clip of the system up and running as we have it now, with the pan-tilt turret pointing at the highest point of the nearest blob. The turret and the Kinect sensor are both represented as 3D objects in the virtual world, according to the positions and rotations measured in the real scene.
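Turning a target point into turret angles is straightforward trigonometry. The sketch below is illustrative only (the names and the coordinate convention are assumptions, not our actual control code); it assumes x right, y up, z forward from the turret's home orientation:

```cpp
#include <cmath>

// Compute the pan/tilt angles (radians) that aim a turret located at
// (tx, ty, tz) toward a target point (px, py, pz).
// Assumed convention: x right, y up, z forward in the turret's home pose.
void aimAt(float tx, float ty, float tz,
           float px, float py, float pz,
           float &pan, float &tilt) {
    float dx = px - tx, dy = py - ty, dz = pz - tz;
    pan  = std::atan2(dx, dz);                            // rotation about the vertical axis
    tilt = std::atan2(dy, std::sqrt(dx * dx + dz * dz));  // elevation toward the target
}
```

With the target straight ahead both angles are zero; a target one unit forward and one unit to the right gives a pan of 45 degrees. In practice the target fed to this would be the highest point of the nearest blob, expressed in the turret's coordinate frame.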

4 comments:

  1. this is really great!
    well done!!

    :)

  2. this looks great.
    is this code that you would be willing to share?

    thanks,
    stephan.

    1. The code is partially available in the form of openFrameworks addons:

      https://github.com/dasaki/ofxKinectBlobFinder
      https://github.com/dasaki/ofxKinectBlobTracker

      What are you interested in, more specifically?

      Regards,

      David

  3. oh. that was you.
    i tried that addon but did not see any thing happening.
    i will check again.
    thx.
