Hey all,

wanted to share some recent work I did to bring NewTek’s NDI protocol into a first version of a Cinder block. You can read all about NDI, but in short it’s a highly efficient video-over-IP protocol aimed at LANs. The code is pretty minimal to get it up and running, and the samples illustrate the sender-receiver concept. I made a short video where some of Cinder’s examples have been pumped with NDI capabilities and are streaming their pixels to showcase the idea ( the sender is running on OS X and the receiver on a Linux VMware VM locally on the same machine ).


The video used is Big Buck Bunny at 1080p/30fps from Blender.

I think it’s cool and has quite a few potential applications in both commercial and non-commercial creative projects. Programs like Spout, Resolume and Syphon have also started to provide support, and in general any NDI-enabled hardware/software can act as a source for a Cinder-NDI receiver and vice versa, which opens up quite a few possibilities.

You can find it here.

Hope you enjoy!


NDI has been invaluable to my new projects! Thanks Petros for a clean, sleek block. I went ahead and added the capability to send audio along with video. Maybe you’d like to add it to your repo?


Hey @Malfunkn,

happy to hear that it’s of use to you - I did some updates a few months ago to move to the latest NDI SDK version ( v3 at that point ) and also to make the block a bit more generic under the hood, so it’s easier to integrate new features moving forward. The NDI SDK is also included as part of the block now, since NewTek lifted their restrictions on redistribution.

As part of this process I also integrated basic audio send/receive functionality, albeit in a different way than yours, as I’m not using an audio node for it. I like the idea though, and maybe I’ll give it a try and integrate it in a similar way.

Also included with the samples now is a basic AsyncSurfaceReader class that you can use for reading back pixels in a non-blocking fashion, which should help keep things tidy.

The samples have been updated to illustrate these new additions and API changes.

Have fun!

That’s great! I downloaded the new NDI just before posting, actually… I should have checked your repo first. Awesome work!
Also, I found out that NDI natively sends a YUV format, so I was checking out the possibility of converting the color space from RGB to YUV in a GLSL shader to ease the conversion in NDI. Then we can use NDIlib_FourCC_type_UYVY and bypass the conversion step in NDI. Right?

I’m running most of my stuff in FBOs/shaders anyway, and I don’t always need an alpha channel. Not sure if YUV works the way I think it does, though…

You shouldn’t convert from RGB to YUV in a shader. NDI specifically mentions in its documentation that it has highly optimized algorithms to do the conversion for you. Just set the appropriate flags on the transmitter:

// Describe the frame we are about to send; NDI accepts RGBA directly.
NDIlib_video_frame_v2_t ndiVideoFrame;
ndiVideoFrame.xres = width;
ndiVideoFrame.yres = height;
ndiVideoFrame.FourCC = NDIlib_FourCC_type_RGBA;
ndiVideoFrame.p_data = mSurface->getData();
ndiVideoFrame.line_stride_in_bytes = int( mSurface->getRowBytes() );
// Frame rate as a rational: N / D = 60000 / 1000 = 60 fps.
ndiVideoFrame.frame_rate_D = 1000;
ndiVideoFrame.frame_rate_N = 60000;

NDIlib_send_send_video_v2( mSender, &ndiVideoFrame );

NDI will then automatically convert from RGB to YUV in the transmitter, and from YUV back to RGB in the receiver. Due to their implementation, this is extremely fast.


Rad… I won’t mess with it then.

Another thought though…
I noticed Petros’s async audio has to use a buffer large enough to fill a whole video frame.

int samplesPerFrame = ci::audio::Context::master()->getSampleRate() / mCinderNDISender->getFps();

Using realtime audio, however, what’s a good way of building a cinder::audio::Buffer that appends the between-frame audio::Buffers?
For example, I’m losing a lot of in-between-frame visual data when I use a low FramesPerBlock for reduced audio latency. I like my input buffer around 128 frames.
44100 sample rate / 30 fps = 1470 audio::Buffer frames for a full frame of video. Fair enough. But 1470 / 128 = 11.484375, so I’m only visualizing 1/11.48 blocks of audio for that frame of video, right? If I only care about the visual aesthetic, I’m okay losing the 0.484375 and rounding down to 11 × 128-frame cycles to fill the buffer.
But for NDI async it needs a full frame, or there will be clicks in the audio. Any thoughts on how one could go about building an exact “accumulative” audio buffer node with indivisible buffer sizes? I’ve been needing it anyway, because 128-frame audio buffers look pretty pathetic…
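To make the idea concrete, here is a minimal sketch of such an accumulator — not from the block, just a hypothetical illustration: it collects fixed-size audio blocks and only hands out a full video frame’s worth of samples once enough have arrived, carrying the remainder over so nothing is dropped between video frames.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical accumulator: gathers small audio blocks (e.g. 128 frames each)
// and emits exactly one video frame's worth of samples (e.g. 1470 at
// 44100 Hz / 30 fps) once available. Leftover samples stay queued, so the
// next video frame starts where this one left off.
class FrameAccumulator {
  public:
    explicit FrameAccumulator( size_t samplesPerVideoFrame )
        : mSamplesPerVideoFrame( samplesPerVideoFrame ) {}

    // Append one audio block (mono samples, for simplicity).
    void push( const float *samples, size_t count ) {
        mPending.insert( mPending.end(), samples, samples + count );
    }

    // True once a full video frame of audio has been accumulated.
    bool hasFullFrame() const { return mPending.size() >= mSamplesPerVideoFrame; }

    // Pop exactly one video frame of audio; the remainder carries over.
    std::vector<float> popFrame() {
        std::vector<float> frame( mPending.begin(),
                                  mPending.begin() + mSamplesPerVideoFrame );
        mPending.erase( mPending.begin(),
                        mPending.begin() + mSamplesPerVideoFrame );
        return frame;
    }

  private:
    size_t             mSamplesPerVideoFrame;
    std::vector<float> mPending;
};
```

With the numbers above, twelve 128-frame blocks (1536 samples) cover one 1470-sample video frame, and the remaining 66 samples seed the next one.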

I have a StreamingBuffer I’ve been working on to solve a similar problem which might be useful here. We’re using it to extract audio via FFmpeg, potentially at a faster rate than realtime. The interface looks like this:

class CI_API StreamingBuffer {
  public:
	StreamingBuffer( size_t blockSizeBytes = 65536 );
	StreamingBuffer( const StreamingBuffer &rhs ) = delete;
	StreamingBuffer( StreamingBuffer &&rhs ) = delete;

	void	pushFront( const void *data, size_t sizeBytes );
	size_t	popBack( void *output, size_t maxSize );

	size_t	getSize() const;

	bool 	empty() const { return getSize() == 0; }
	void	clear();
	void	shrinkToFit();

	//! Performs a non-destructive copy to \a output, up to \a maxSize bytes. Does not pop any data. Returns the number of bytes written.
	size_t	copyTo( void *output, size_t maxSize ) const;

	StreamingBuffer&	operator=( const StreamingBuffer &rhs ) = delete;
	StreamingBuffer&	operator=( StreamingBuffer &&rhs ) = delete;
};


Internally it uses blocks of memory and tries to recycle them to minimize allocations. It’s thread-safe and meant for a single-producer / single-consumer model. In the medium term I’d like to include this in Cinder, but in the meantime I’d be happy to send you / Petros the implementation if that’s helpful.



I’d love to check it out! I may start a new forum thread for this and some other audio tweaks, because I think there’s actually quite a bit we can gather for Cinder’s audio. I know I have a few things to contribute. Thanks andrewfb!

void shrinkToFit();

very clever…

OK - so as I mentioned, we’re using this in production and I’m reasonably confident of its correctness, though certainly not perfectly so. It has unit tests which I haven’t included here. I’m not sure yet how we should handle move/copy semantics, so I’ve disabled them for now. My instinct is to disallow copying but later implement moving - that would be a welcome addition if someone wants to take it on.

In our usage, and in the pattern this is designed for, one thread is adding new bytes as FFmpeg decodes them, and the audio thread is consuming them at whatever the audio rate is. By design, the decompression thread can get far ahead of the playback thread, and the StreamingBuffer will grow automatically.

Here is the full interface:

and here is the full implementation: