Hello and good day.
I’ve been struggling for the last few days to get skeleton tracking working in Cinder.
I even found the repository for depth cameras https://github.com/jing-interactive/Cinder-DepthSensor which looks really professional, but it isn’t bundled with an application.
I also happened to find an example https://github.com/jing-interactive/AirTouch that builds with Visual Studio (I think), but I am not able to build it and I have no experience with that IDE.
Can anyone help me with this? I’ve been going in circles on this problem for quite some time, and perhaps someone more advanced in Cinder, and perhaps C++, might be able to help.
All the examples I see are several years old and target other versions of Cinder as well; there is nothing from the last year or so.
Thanks in advance, Luis.
Hello and good day.
If you’re using a single depth camera (e.g. an Intel RealSense), I recommend using NuiTrack. If you need more than one camera connected to a single PC, you’re out of luck. In that case, your only option is OpenCV and weeks of work. Unless you still have a Kinect somewhere.
Hello and thank you for your answer Paul!
I have a Kinect, which is the one I am trying to use for skeleton tracking.
The XBOX 360 -> https://www.amazon.in/Microsoft-Kinect-Sensor-Xbox-360/dp/B01F7MXXEU
It connects fine and I can use it with CinderFreenect, and I can actually get skeleton tracking working in Processing, but not in Cinder.
All the examples I have are for old versions and I am not able to build them.
Are you able to do skeleton tracking using Cinder on macOS?
I only need one Kinect at a time.
Thank you in advance.
I am currently using macOS, and I always run into windows.h dependency problems.
Are you able to build it yourself?
I don’t understand why CinderFreenect, the main Kinect library for Cinder, doesn’t support skeleton tracking.
Kinect support is much stronger on Windows than on macOS, as Microsoft’s drivers are way better. (It’s pretty simple to set up a dual-boot system if you plan to get serious with it down the line.)
Freenect is a library that grabs the 3D data but it doesn’t include skeleton tracking. If you’re getting skeleton tracking to work in Processing then it’s probably using the NITE/OpenNI/Sensorkinect libraries. OpenNI is the framework; NITE is the proprietary engine from Primesense that’s not officially supported on the Kinect; Sensorkinect is a hacked version of the Kinect driver that makes NITE work. Since Apple bought Primesense a few years ago, there hasn’t been a legitimate way to download NITE, though there are still quite a few binaries knocking around. Also, there are two versions of NITE/OpenNI (version 1 and version 2). I don’t think a hack was ever made for version 2 so you have to stick with version 1.
There are Cinder addons for OpenNI. This one (https://github.com/wieden-kennedy/Cinder-OpenNI) won’t work for you with an Xbox 360 Kinect.
This one does work: https://github.com/pixelnerve/BlockOpenNI
but it doesn’t include the binaries. (You’ll see the links in the readme.txt for the drivers to install now take you straight to the Apple homepage.) If you check the openFrameworks addons for OpenNI, you may find the binary installers bundled with one of them. There’s also a program called Synapse which uses the NITE drivers to send OSC, and I think it might have the drivers bundled. I can’t remember.
There’s another issue, which is that newer versions of macOS enforce driver signing, and these drivers aren’t signed. So you’ll need to disable driver signing, which if I recall correctly involves rebooting into the recovery terminal and typing a few commands.
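If it helps, the procedure I remember goes roughly like this. Treat it as a sketch rather than a recipe: `csrutil` is the real tool, but the exact flags vary by macOS version, so check `csrutil status` on your own machine first.

```shell
# Reboot holding Cmd+R to enter Recovery Mode, then open
# Utilities > Terminal and check System Integrity Protection:
csrutil status

# Disable SIP entirely (the blunt option)...
csrutil disable
# ...or, on some macOS versions, relax only kext signing:
# csrutil enable --without kext

# Reboot back into macOS; the unsigned Kinect/NITE drivers
# should then be allowed to load.
reboot
```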
All in all - it’s probably not worth your while going down this path if you’re just starting. Skeleton tracking on an old Kinect on macOS was always flaky at best, and now it’s harder than ever (unless there are recent developments I don’t know about).
- Choose openFrameworks over Cinder for this project, as it has a bigger community, so there are more likely to be people still working with your setup.
- Alternatively, use Processing or Synapse and send the skeleton data to Cinder via OSC. This is probably easiest if you want to use Cinder.
- Look into using Windows with the Kinect SDK and the newer Kinect v2 from the Xbox One.
- In the not too distant future it should be possible to get reasonable skeleton tracking from an RGB camera in real time. There are libraries that do this now, such as OpenPose, but I’ve not managed to get one to work in real time without a massive graphics card to run all the deep-learning models.
One other point, if you do persevere: the original process for installing the NITE drivers put them into a fixed place on the hard drive (in /local/usr/bin or something), so if you get Synapse to work after installing some drivers, you may find that BlockOpenNI works as well.
Thank you for the response, I appreciate it.
I managed to get OpenPose up and running and it is very good at detection, probably the best in the industry. Waving your arm with the rest of your body out of view is still properly detected as “an arm belonging to an invisible body”, so pretty impressive. But yeah, performance is a thing… on my laptop it was at most ~2 fps.
But if you have little time or experience, @timmb’s suggestions are what you should go for (with the addition of a paid solution like NuiTrack, of course).
Thank you once again.
Yes, I read it, and I have the papers open here to read when I have time.
It seems pretty impressive; I can imagine the possibilities for this on mobile devices. It would also be a great replacement for the Kinect, since most of the ideas I want to build rely on skeleton tracking.
Thank you again!
Was going through the forum to see what people are using these days for skeleton tracking.
I have an upcoming project where we need to track people with skeleton + depth data.
I was thinking of just buying myself some Kinect v2 hardware and using the MS SDK; I remember it was pretty fast and easy a couple of years ago. Or maybe going for an Intel RealSense with NuiTrack.
Does anybody have experience with these at a real event, running roughly 12 hours a day?