Do you already have the videos in equirectangular format?
If so, literally just bind the texture and draw a sphere and you're done. If not, you'll need to either distort your textures in realtime, which, considering you're on a mobile device, is probably ill-advised, or generate an equirectangular video.
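To make "bind the texture and draw a sphere" concrete, the sphere's texture coordinates just encode the standard direction-to-equirectangular mapping: longitude drives u, latitude drives v. A minimal sketch (axis conventions vary between engines; this assumes -Z forward, +Y up):

```python
import math

def equirect_uv(x, y, z):
    """Map a unit direction vector to equirectangular (u, v) in [0, 1].

    u follows longitude (atan2 around the vertical axis), v follows
    latitude (asin of the vertical component). This is the same lookup
    a sphere's texture coordinates bake in per vertex.
    """
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude, wraps at the seam
    v = 0.5 - math.asin(y) / math.pi               # latitude, 0 at the north pole
    return u, v

# Looking straight ahead (-Z) lands in the center of the texture.
print(equirect_uv(0.0, 0.0, -1.0))
```

If you evaluate the same mapping per pixel in a fragment shader instead of per vertex, you avoid the seam/pole interpolation artifacts discussed further down the thread.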
To do this, I've used cmft with pretty good results, though only ever with stills. I suppose you could dump your video to individual frames, pass each one to cmft (exporting in latlng format, to use their parlance), and then recombine the processed frames into a video. You'll want your results to come out looking something like this:
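What that per-frame cubemap-to-latlong conversion is doing, conceptually, is walking every equirectangular pixel, turning it into a direction, and picking the cube face plus face-local (u, v) to sample. A sketch of the standard face-selection logic (face naming and orientation conventions here may differ from cmft's internals):

```python
def cube_face_uv(x, y, z):
    """Select a cubemap face and face-local (u, v) in [0, 1] for a direction.

    Uses the usual rule: the axis with the largest absolute component
    picks the face, and the other two components (divided by it) give
    the in-face coordinates.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        ma, sc, tc = ax, (-z if x > 0 else z), -y
    elif ay >= ax and ay >= az:
        face = '+y' if y > 0 else '-y'
        ma, sc, tc = ay, x, (z if y > 0 else -z)
    else:
        face = '+z' if z > 0 else '-z'
        ma, sc, tc = az, (x if z > 0 else -x), -y
    # Remap sc, tc from [-1, 1] to [0, 1].
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)

print(cube_face_uv(1.0, 0.0, 0.0))  # dead center of the +x face
```

For the video pipeline itself, ffmpeg can dump and recombine the frames around the per-frame cmft calls; check `cmft --help` for the exact output-format flags.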
Here's a quick gist using a rip of the above video to help you get started.
No, I don't have a 360 video yet: I was going to generate one using c4d for simple scenes (like falling rain with an alpha channel, to be mixed with a camera video feed), and later get a physical 360 camera.
Edit: And this is the result of using MotionManager to drive the _camera.
Aside from cubemaps, another really popular 360 video option is dual fisheye. If you're generating the images yourself, cubemap is probably the way to go, but if you're using something like a Ricoh Theta or an Insta360 you can just write fragment shaders to convert it live. As is almost always the case, Paul Bourke has written about it: http://paulbourke.net/dome/dualfish2sphere/
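The core of that live conversion shader is the direction-to-fisheye mapping from Bourke's writeup: for each output pixel you compute a direction, then project it into one of the two fisheye circles. A sketch assuming an equidistant lens model (real lenses on these cameras cover slightly more than 180 degrees, so the aperture is a parameter):

```python
import math

def fisheye_uv(x, y, z, aperture=math.pi):
    """Map a unit direction to (u, v) in one fisheye image, equidistant model.

    Directions with z > 0 face this lens; the second lens covers the
    back hemisphere with the direction mirrored. `aperture` is the lens
    field of view in radians (dual-fisheye cameras are typically ~190
    degrees, which gives the two images a blendable overlap band).
    """
    theta = math.acos(max(-1.0, min(1.0, z)))  # angle off the optical axis
    r = theta / (aperture / 2.0)               # normalized radius, 1.0 at the FOV edge
    phi = math.atan2(y, x)                     # angle around the axis
    return 0.5 + 0.5 * r * math.cos(phi), 0.5 + 0.5 * r * math.sin(phi)

# On the optical axis you hit the center of the fisheye circle.
print(fisheye_uv(0.0, 0.0, 1.0))
```

In a fragment shader you'd run the inverse of the equirect mapping first (pixel to direction), then this function, and blend the two lenses where their apertures overlap.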
@sharkbox Thanks, that is a great approach too. Now if only Apple allowed us to use both front and back cameras simultaneously: add two fisheye lenses, and voila, a realtime 360 camera on iPhone!
The advantage is that texture coordinates are calculated per pixel, instead of per vertex, which may solve issues with texture seams (e.g. when using an IcoSphere instead of a normal Sphere). If you don't have these issues, @lithium's approach is probably the better option (slightly better performance), but @num3ric's shader is a great solution if you do.