Friday, January 18, 2019

FFMpeg Time Machine


We installed a security camera at our house and it, like most, has the ability to capture video based on motion.  Unfortunately, robust motion detection tends to introduce latency as it accrues enough motion to determine that the event is significant and not noise, like a leaf blowing across the lawn.  The trouble with this is that the time leading up to the motion is often lost from the video capture and you're left with only part of the event.  For example, it's not uncommon for a video of the mail-person delivering a package to start when the person is already well within the scene, rather than being a more complete video of the person as they enter the scene.

Ideally, what you'd want from a security system is a robust motion detection algorithm that, once a motion event has been detected, provides video leading up to the motion as well, say 10 seconds back and forward.  This can be accomplished by continuously buffering video and bundling that buffer into the captured video.

This is surprisingly easy with FFMpeg and is the focus of this post.  Read on, ye seeker of FFMpeg sexiness.

Let's break down a simple implementation:

  1. capture video from a camera into 10-second segments, with a common naming convention that includes an incrementing number (making each file name unique)
  2. simulate a trigger event which responds by grabbing the last X segments and concatenating them into a final video file

Capture Video Segments (i.e. Buffers)

Our video source will be our USB camera.  In the interest of posterity, and to verify that our concatenation of the video segments is seamless and in order, we'll overlay the current time upon the video.  The segment muxer (-f segment) automagically creates video segments of the specified length.  You can specify a segment file naming convention as well.
The following example captures the camera video, applies a time-stamp overlay and generates files of the form /tmp/capture-000.mp4, /tmp/capture-001.mp4, ..., /tmp/capture-999.mp4




$ ffmpeg -i /dev/video0 -vf "drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSans.ttf:text='%{localtime\:%T}'" -f segment -segment_time 10 -segment_format mp4 "/tmp/capture-%03d.mp4"
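
If disk usage is a concern, the segment muxer also supports a segment_wrap option that reuses file names once the index reaches a limit, effectively turning the capture into a ring buffer.  The variant below is only a sketch: the explicit v4l2 input format and the wrap limit of 12 (roughly two minutes of 10-second segments) are assumptions, not part of the original setup.

$ ffmpeg -f v4l2 -i /dev/video0 \
    -vf "drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSans.ttf:text='%{localtime\:%T}'" \
    -f segment -segment_time 10 -segment_format mp4 \
    -segment_wrap 12 "/tmp/capture-%03d.mp4"

Because the trigger script below orders segments by modification time rather than by file name, the wrapping file names don't affect the concatenation order.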


Concatenate Video Segments Into Final Video

Three things need to be done in order to concatenate the video segments into the final video:

  1. determine which video files to concatenate
  2. order the video files by capture time
  3. concatenate them into the final video


The following script does precisely that.  The find command looks for files that are less than 60 seconds old and sorts them by epoch time.  Each file name is appended to a temp file, so the temp file holds the list of video segments in capture order.  FFMpeg's concat demuxer takes this list of files and concatenates them, in order, into the final video file.


$ ./grabCamEvent /tmp/foo.mp4

The above command results in a 60-70 second video file starting approximately 60 seconds ago.  Approximately, because the video segment length comes into play: with 10-second segments, the 60-second window can catch 6 or 7 segments, and the oldest one may have started up to 10 seconds before the cutoff.



$ cat grabCamEvent

#!/bin/bash
# Concatenate the camera segments captured in the last 60 seconds into one file.
outFile=$1
tmpFile=/tmp/temp-$(date +%s)

# Find segments modified within the last 60 seconds and list them oldest-first.
for f in $(find /tmp/ -name "capture*mp4" -newermt '60 seconds ago' -printf "%T@ %p\n" | sort -n | cut -f 2 -d ' '); do
  echo "file '$f'" >> "$tmpFile"
done

# Stitch the segments together without re-encoding.
# -safe 0 is needed because the list contains absolute paths.
ffmpeg -y -f concat -safe 0 -i "$tmpFile" -vcodec copy "$outFile"
rm "$tmpFile"
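
For reference, the temp file handed to the concat demuxer is just a plain-text list of 'file' directives, one per segment, in capture order.  It ends up looking something like this (the indices here are purely illustrative):

file '/tmp/capture-004.mp4'
file '/tmp/capture-005.mp4'
file '/tmp/capture-006.mp4'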

This is primarily the foundation for a proof of concept.  A proper solution would also periodically delete old video files and write the video segments in such a manner as to not burn out your hard drive, perhaps by replacing the destination with a RAM disk.
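
As a rough sketch of both ideas (the /tmp/cambuf path and the sizes here are assumptions, not part of my actual setup):

# delete segments older than 10 minutes; run this periodically, e.g. from cron
$ find /tmp/cambuf -name "capture-*.mp4" -mmin +10 -delete

# mount a 256 MB RAM disk to hold the segment buffer
$ sudo mkdir -p /tmp/cambuf
$ sudo mount -t tmpfs -o size=256m tmpfs /tmp/cambuf

Point the capture command and the trigger script at /tmp/cambuf instead of /tmp and the buffer never touches the physical disk.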

Given the ease of this solution, I'm genuinely puzzled why more security systems don't employ such a feature.

¯\_(ツ)_/¯





