These cameras have some nice features, including the ability to do object and body tracking across multiple cameras. They use stereo visible-light imaging, and they require an NVIDIA GPU for any depth processing or tracking. This CinderBlock works with Cinder on both Linux and Windows, and it's been tested specifically with the Jetson Orin Nano.
The CinderBlock also supports code paths that keep the stereo image and depth cloud on the GPU, so there's no round-tripping to the CPU and back.
Have you guys used these in production for anything? For my last large-scale tracking job I ended up using (20) Intel RealSenses, but as far as I know they've been discontinued, so I'm always curious how other people have fared with different depth cameras.
We've got a project in the works I can't say too much about yet that is using these. I can say that the ability to do object/body tracking across multiple cameras is great. They do require NVIDIA GPUs, and they eat a fair amount of GPU time, which matters if you need that GPU for other things. IMO their software stack is the most sophisticated and polished, at least among the depth cameras I've used.
Definitely worth considering, especially if visible light (no IR or ToF) meets your requirements.
RealSense is back (or still around); they spun out from Intel earlier this year, but I don't think the product line has changed too much since the last refresh. The software stack has been pared down a bit, though: not a ton of features anymore, and they're deprecating a lot of their wrappers/frameworks. Still a solid option if you just need to pull some depth.