In its simplest form, this can take the form of projecting a video onto an angled surface, giving the appearance of casting it on a three-dimensional plane. Such feats are not directly supported by FFmpeg, but pair it with ImageMagick and you can accomplish them.
Let's jump ahead to see what we're trying to accomplish, then walk through the steps to do so.
Let's say we want to project a video like this:
Why would you want to do such a thing? Well, perhaps you have a sales video, and while the presenter is pitching your product you wish to show the product in a sub-window; rather than doing so in a flat 2D manner, you want it to look like it has some depth. Or perhaps you want to cast it onto a computer/phone/tablet screen (real or virtual). You can do so by overlaying the perspective-mapped video atop your full video.
So, how do you accomplish it? Let's call it a three-step process:
- extract all the frames (and audio) from the input video using FFmpeg
- feed each frame through ImageMagick's distort perspective operator
- rejoin the resulting frames and audio into the output video
Step 1 -- Extract the Frames/Audio
$ ffmpeg -y -i ~/Downloads/BigBuckBunny.mp4 frame%05d.png audio.mp3
Since this process disassembles the video into raw frames and later reassembles them together with the audio, it's important to retain the same frame rate as the input video; otherwise your audio will be out of sync in the output. For example, our input video is 24fps; if you reassemble the frames at 25fps, the video will run ahead of the audio.
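To see how far off the sync gets, here's a quick back-of-the-envelope check in Python (the 60-second clip length is just an illustrative assumption, not the actual duration of the sample video):

```python
fps_in, fps_out = 24, 25       # extraction rate vs. (mistaken) reassembly rate
duration = 60.0                # hypothetical clip length in seconds

frames = fps_in * duration     # 1440 frames extracted at 24fps
video_len = frames / fps_out   # those same frames played back at 25fps

print(video_len)               # 57.6 -- the video finishes 2.4s before the audio
```

A 2.4-second drift over a single minute is very noticeable, which is why matching the input frame rate matters.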
Step 2 -- Run through Distort Perspective
Let's jump into the command first, then spend a bit of time describing how we came up with the specifics. Our input video is 1280x720; we wish to maintain those dimensions and project the video onto a diagonal plane within the frame.
$ find . -name "frame*png" -exec convert {} -matte -virtual-pixel transparent -distort Perspective '0,0,170,0 0,720,170,720 1280,720,854,500 1280,0,854,100' -background none {} \;
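That long quoted argument is simply whitespace-separated `sx,sy,dx,dy` control-point pairs. If you're generating the command from a script, a tiny helper (hypothetical, not part of ImageMagick) keeps the coordinates readable:

```python
def perspective_args(pairs):
    """Format (source, destination) corner pairs as an ImageMagick
    -distort Perspective argument string: 'sx,sy,dx,dy ...'."""
    return " ".join(f"{sx},{sy},{dx},{dy}" for (sx, sy), (dx, dy) in pairs)

# the four corner mappings used in the command above
corners = [((0, 0),      (170, 0)),
           ((0, 720),    (170, 720)),
           ((1280, 720), (854, 500)),
           ((1280, 0),   (854, 100))]

print(perspective_args(corners))
# 0,0,170,0 0,720,170,720 1280,720,854,500 1280,0,854,100
```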
The distort perspective operator essentially requires that you provide a mapping from the source frame coordinates to the destination coordinates. In our example:
- orig frame (0,0) => mapFrame (170,0)
- orig frame (0,720) => mapFrame (170,720)
- orig frame (1280,720) => mapFrame (854,500)
- orig frame (1280,0) => mapFrame (854,100)
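Under the hood, a perspective distortion fits a 3x3 homography matrix to these four corner pairs, and every other pixel is mapped through that matrix. Here's a sketch of that fit in Python with NumPy; it mirrors the math, not ImageMagick's actual implementation:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H (with H[2,2] fixed to 1) that
    maps each src corner onto its dst corner."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, x, y):
    """Map a single point through the homography (divide by w)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

src = [(0, 0), (0, 720), (1280, 720), (1280, 0)]   # original frame corners
dst = [(170, 0), (170, 720), (854, 500), (854, 100)]  # mapped corners
H = perspective_matrix(src, dst)

print(warp(H, 0, 0))       # recovers the first mapped corner
print(warp(H, 640, 360))   # the frame's center, somewhere inside the quad
```

Any interior point, like the frame's center above, lands inside the mapped quadrilateral, which is exactly what produces the tilted-plane illusion.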
Details for this distortion effect can be found in ImageMagick's distortion documentation.
Step 3 -- Reassemble Frames to Output Video
$ ffmpeg -r 24/1 -i frame%05d.png -i audio.mp3 -c:a copy output.mp4
By specifying the same frame rate as our input video (24fps), we can reassemble the frames and stay synchronized with the audio.
The result will look something like this:
Now go ahead and make something cool.
Cheers.