Point cloud sensors


#1

Hello,
I want to start a discussion about capture technologies!

I am interested in knowing the best solution for capturing a point cloud in real time, for building interactive screen walls for example.

The Intel RealSense looks pretty good to me:

http://www.pattenstudio.com/works/intel-nyfw2015/
http://www.pattenstudio.com/works/intel/
http://www.pattenstudio.com/works/intel-2016/

My question is: how do you properly and easily merge the point clouds (depth maps) from multiple sensors like the RealSense?

Interesting tool:

Another track:
3D LIDAR like SICK, Velodyne…

Thanks for sharing your experience!


#2

I am using an Intel RealSense D435 for a project at the moment. The SDK is solid, but is missing quite a few features. For example, there is no support for human tracking, let alone skeleton tracking. It only covers connecting/disconnecting, obtaining the color and depth streams, correcting the shift between the color and depth cameras, and creating a point cloud from the depth stream.
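
For reference, this is roughly what that path looks like with the Python wrapper (pyrealsense2). Treat it as a minimal sketch: the resolutions and frame rates are just example values, not what we actually use.

```python
# Minimal pyrealsense2 sketch: start the streams, align depth to color,
# and turn each depth frame into a point cloud.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)   # corrects the depth/color camera shift
pc = rs.pointcloud()

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth = frames.get_depth_frame()
        if not depth:
            continue
        points = pc.calculate(depth)  # point cloud from the depth stream
        # Nx3 float32 array of vertices in meters, in camera coordinates
        verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        # ... hand verts to your renderer / interaction layer here
finally:
    pipeline.stop()
```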

The SDK supports multithreading through the use of concurrent buffers and is pretty stable.

To align multiple cameras, I wrote a tool to adjust each camera’s virtual position, height, and pan/tilt angles. The results are more than adequate for our use case. I also tried Perspective-n-Point calibration, but could not get OpenCV’s solvePnP to work at all; I honestly believe it’s broken.
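
In case it helps, the general idea boils down to giving each camera a rigid transform (rotation from pan/tilt plus a translation) and applying it before merging the clouds. A rough numpy sketch below; the axis conventions and angle order are assumptions for illustration, our actual tool differs in the details.

```python
# Sketch: merge per-camera point clouds by applying a rigid transform per camera.
# Assumption: pan = rotation about the vertical (Y) axis, tilt = rotation about X.
import numpy as np

def camera_to_world(pan_deg, tilt_deg, position):
    """Build a 4x4 camera-to-world transform from pan/tilt angles and a position."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    rot_pan = np.array([[ np.cos(pan), 0, np.sin(pan)],
                        [ 0,           1, 0          ],
                        [-np.sin(pan), 0, np.cos(pan)]])
    rot_tilt = np.array([[1, 0,             0            ],
                         [0, np.cos(tilt), -np.sin(tilt)],
                         [0, np.sin(tilt),  np.cos(tilt)]])
    T = np.eye(4)
    T[:3, :3] = rot_pan @ rot_tilt
    T[:3, 3] = position
    return T

def merge_clouds(clouds, transforms):
    """Transform each Nx3 cloud into world space and concatenate them."""
    merged = []
    for pts, T in zip(clouds, transforms):
        homo = np.c_[pts, np.ones(len(pts))]   # Nx4 homogeneous points
        merged.append((homo @ T.T)[:, :3])     # apply camera-to-world
    return np.vstack(merged)

# Example: two cameras 1.5 m apart, angled inward by 20 degrees.
T_left  = camera_to_world(pan_deg= 20, tilt_deg=0, position=[-0.75, 1.0, 0.0])
T_right = camera_to_world(pan_deg=-20, tilt_deg=0, position=[ 0.75, 1.0, 0.0])
# world_cloud = merge_clouds([verts_left, verts_right], [T_left, T_right])
```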

-Paul


#3

Realtime is a tough thing to crack. The best approach is definitely something like Paul suggested: know the real-world positions of your cameras and position them the same way in code. Otherwise, use some kind of real-world calibration like OpenCV’s checkerboard.
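
The checkerboard route, very roughly: show the same board to every camera, solve for the board pose per camera, and that gives you each camera’s extrinsics in a shared frame. A sketch with OpenCV below; the board dimensions and square size are placeholder values, and the camera matrix / distortion coefficients have to come from a prior intrinsic calibration (e.g. calibrateCamera).

```python
# Sketch: pose of one camera relative to a checkerboard visible to all cameras.
# BOARD_* and SQUARE_SIZE are example values; intrinsics must be calibrated beforehand.
import cv2
import numpy as np

BOARD_COLS, BOARD_ROWS = 9, 6   # inner corners of the checkerboard
SQUARE_SIZE = 0.025             # meters per square (example value)

# 3D corner positions in the board's own frame (Z = 0 plane)
obj_points = np.zeros((BOARD_COLS * BOARD_ROWS, 3), np.float32)
obj_points[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE

def board_pose(gray_image, camera_matrix, dist_coeffs):
    """Return the camera-from-board rotation matrix and translation, or None."""
    found, corners = cv2.findChessboardCorners(gray_image, (BOARD_COLS, BOARD_ROWS))
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray_image, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners, camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return R, tvec

# With the same board visible to two cameras A and B, points from B can be
# mapped into A's frame via A_from_board @ inverse(B_from_board).
```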

It’s something the guys at http://www.depthkit.tv/ have also done a lot of work on with pretty cool results. They are more focused on volumetric filmmaking, but still super cool.