Machine learning with libtorch

I’m working on another machine learning block for Cinder, this time using libtorch, the C++ API for PyTorch. I have it running successfully in Cinder, but I’d like to add more before I upload it.
Apparently, one can convert most PyTorch models for use from C++ libtorch via TorchScript: PyTorch (Python) -> TorchScript -> libtorch (C++).
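From the PyTorch docs, the Python side of the TorchScript route seems to boil down to tracing the model with a dummy input and saving the result. A minimal sketch, untested on my end; resnet18 and the 224x224 input are just stand-ins for whatever model you actually want to convert:

```python
import torch
import torchvision

# Stand-in model; any nn.Module should work the same way.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Trace with a dummy input of the shape the network expects.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Save the TorchScript archive; libtorch loads this file on the C++ side
# with torch::jit::load("resnet18_traced.pt").
traced.save("resnet18_traced.pt")
```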
Furthermore, NVIDIA has a C++ library, TensorRT, which takes advantage of the new Turing architecture for inference speed-ups.
In that case, the process would be PyTorch -> ONNX -> TensorRT.
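The ONNX leg of that route looks like it is a single export call on the Python side. Again just a sketch I haven’t verified, with placeholder input shape and tensor names:

```python
import torch
import torchvision

# Stand-in model, same as above.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

dummy = torch.rand(1, 3, 224, 224)

# Write an .onnx file that TensorRT's ONNX parser can consume.
torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

From there, the trtexec tool that ships with TensorRT can apparently build (and benchmark) an engine directly from the .onnx file, before touching the C++ API at all.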
Also, Microsoft Research announced the open-source DeepSpeed library, which accelerates training in PyTorch.
So it looks as though the industry is settling on and supporting PyTorch, and NVIDIA’s research labs also seem to use it.
I’ll post my findings as soon as I have a fully functional workflow, but if anyone else has resources or examples on the matter, I figured this forum topic would be a good place to collect them.
Mostly, since I’m not a Python coder, I’m having a hard time figuring out how to convert the models with TorchScript or ONNX (something along the lines of the sketches above), but it should be possible. Has anyone tried this out yet?
This approach seems like the best C++ machine learning solution, and PyTorch/TensorRT also looks like it will be the most popular machine learning workflow moving forward. TensorRT supports TensorFlow as well, but I really like that there is a libtorch C++ API as-is, and NVIDIA is sharing a ton of amazing PyTorch research.
Cheers.
