I have to give credit for this topic to my loving wife, who is also a software engineer. This post is based on observations she shared during a recent car ride.
There are a number of risks when choosing to use old technology. Make no mistake though, the use of old technologies is almost always a choice. While there are cases when migrating to new tech doesn't make sense or is cost-prohibitive, more often the 'cost' accounts only for the monetary cost of executing the transition rather than the cost of continuing to use the old tech.
While there are many risks in continuing to use old tech (downtime impacts, increased licensing/maintenance costs, decreased productivity, security exposure and such), I'm going to focus specifically on the indirect costs with respect to your engineering team(s).
Take a minute and close your eyes. Now imagine your dream job. Imagine what you're working on and what you're working with. If you are an aspiring journalist, did you imagine penning your masterpiece using a feather quill? If you're an experienced welder, did you imagine using an oxy-acetylene welding unit? If you're a software engineer, did you imagine firing up Windows 95 and Visual Studio 97? No? Really? Why not? Whelp, you're likely in good company. Given the choice, no one chooses to work with old shit. For now, stick a pin in this and we'll come back to it in a bit.
Take another minute and close your eyes. Now imagine your dream team. Are they well-versed in the latest technologies? Are they highly sought after and in demand, or would they have a difficult time finding another job if it ever came to that? Concerning experience, are they primarily junior, primarily nearing the end of their careers, or distributed across various levels of experience?
Let me attempt to knit these two fundamental thoughts together now.
Great products are created by great teams. Great teams consist of a variety of experience levels, from junior to expert-level contributors. Great team members prefer keeping current with the latest technologies, for personal as well as professional reasons. So what happens if your company chooses not to make use of current technologies? Best case, your team keeps current individually in hopes that one day they can make use of that knowledge. Perhaps one day they get to apply it on a future product; perhaps that day comes before a recruiter offers them alternative employment that uses the tech today. Perhaps instead they simply stagnate and are ill-equipped to apply new tech when that someday comes.
New tech or old tech, job seekers will find job providers (and vice versa); it's a matter of compromises. The seeker may compromise on the use of old tech, the provider may compromise on the ideal candidate. Junior-level seekers are likely more willing to compromise early in their careers. Late-stage seekers (those nearing retirement from the profession) may also be more willing to compromise. What's less likely, however, is highly experienced seekers in the prime of their careers compromising on a position that may place their competitive advantages at risk. Simply put, this industry moves so rapidly that great candidates can't afford to work for companies that are stuck in the past.
It's all about balance. This doesn't mean you should chase every shiny new technological button, but it also doesn't mean you should continuously reject introducing new techniques. Listen to your team; are they whispering of new technologies and techniques that could apply to your products? Are you listening? If people are leaving, are their new positions utilizing newer technologies?
Obviously, tech isn't the only factor in a choice of employment, but most of the colleagues I've worked with over these past decades hold it in pretty high regard. Please consider it as a factor in establishing your corporate talent pipeline.
Sunday, February 17, 2019
FFMpeg Zoom
I won't embarrass myself by claiming to be even an amateur videographer, but I have set up a video camera, pointed it at something worthwhile and hit record. With a high-def camera and a wide-angle lens you can capture life in the making. While cameras offer zoom capabilities, I'm far more likely to lose the subject, so I've made a habit of setting the camera up on a tripod, zooming out to capture the entire scene and adding digital zoom effects in post-processing. In the age of high-def cameras...why not? I'm less likely to miss the shot and have numerous tries at adding effects afterwards.
Let's grab a video, apply a text target overlay (to make sure we're zooming where we think we are) and then zoom to that location.
$ youtube-dl https://www.youtube.com/watch?v=PJ5xXXcfuTc -o input
Let's slap an 'X' at 560,400 so we can confirm we're zooming to where we expect;
$ ffmpeg -y -i input.mkv -ss 30 -t 15 -vf drawtext="fontfile=/usr/share/fonts/truetype/droid/DroidSans.ttf:text='X':fontcolor=black:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=5:x=560:y=400" -codec:a copy target.mp4
Finally, let's zoom to 560,400;
$ ffmpeg -y -i target.mp4 -vf "scale=iw*2.0:ih*2.0,zoompan=z='min(max(zoom,pzoom)+0.05,5.0)':d=1:x='560*2.0-(560*2.0/zoom)':y='400*2.0-(400*2.0/zoom)'" -an output.mp4
In the above example, we're using the following values for the scalars;
S=2.0 (the pre-zoom scale factor applied to the input)
Z=5.0 (the maximum zoom)
K=0.050 (the zoom increment applied each frame)
Experiment with the scalars to get your desired effect; a parameterized version is sketched below.
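As a minimal sketch, here's the same zoom command with the scalars and target coordinates pulled out as shell variables; the values simply mirror the example above, adjust to taste:
#!/bin/bash
# parameterized version of the zoom command; S/Z/K are the scalars discussed above
X=560     # zoom target x
Y=400     # zoom target y
S=2.0     # pre-zoom scale factor
Z=5.0     # maximum zoom
K=0.05    # zoom increment per frame
ffmpeg -y -i target.mp4 -vf "scale=iw*${S}:ih*${S},zoompan=z='min(max(zoom,pzoom)+${K},${Z})':d=1:x='${X}*${S}-(${X}*${S}/zoom)':y='${Y}*${S}-(${Y}*${S}/zoom)'" -an output.mp4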
Safe intellectual travels, my fair reader.
Sunday, February 10, 2019
Applying Image Overlay to Video
Overlaying an image atop a video is a good way to add content to an informative video, or a means to apply a watermark.
In its simplest form, the command requires:
- two input files, a video file and an image file
- an image scaling size
- an image overlay location
$ cat go
#!/bin/bash
VidFile=/tmp/foo.mp4
ImgFile=/tmp/image.png
OutVidFile=/tmp/output.mp4
ffmpeg -y -i ${VidFile} -i ${ImgFile} -filter_complex "[1] scale=w=100:h=100 [tmp]; [0][tmp] overlay=x=10:y=10" -an ${OutVidFile}
mplayer ${OutVidFile}
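For the watermark case you'd typically anchor the image to a corner rather than a fixed offset. The overlay filter exposes main_w/main_h (video dimensions) and overlay_w/overlay_h (image dimensions) for this; as a sketch, the following variant of the command above pins the image 10 pixels off the bottom-right corner:
ffmpeg -y -i ${VidFile} -i ${ImgFile} -filter_complex "[1] scale=w=100:h=100 [tmp]; [0][tmp] overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10" -an ${OutVidFile}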
If you want the overlay to fade in and out, the command is slightly more complex; the filter requires a fade-in timestamp and a fade-out timestamp. The following command has the image fade in at the 5 second mark and begin fading out at the 10 second mark:
$ cat go
#!/bin/bash
VidFile=/tmp/foo.mp4
ImgFile=/tmp/image.png
OutVidFile=/tmp/output.mp4
ffmpeg -y -i ${VidFile} -loop 1 -i ${ImgFile} -filter_complex "[1:v]fade=t=in:st=5:d=1,fade=t=out:st=10:d=1[over];[0:v][over]overlay=x=10:y=10" -t 20 -an ${OutVidFile}
mplayer ${OutVidFile}
The end result: the image fades in at 5 seconds and fades back out at 10.
Sunday, February 3, 2019
FFMpeg Dynamic Adjustment of Filters
FFMpeg has a full array of video and audio filters; specify the right parameters and it produces pure magic. Filter scalars can readily be specified as static parameters or, in some cases, as functions of time. But what if you wish to modify filter parameters dynamically, in real-time? When compiled with ZeroMQ (0MQ) support, some filters can be adjusted in real-time by sending filter commands via 0MQ.
The 0MQ support is optional and not enabled in the default configuration, so it likely requires building FFMpeg from source with ZeroMQ support. The build procedure takes the form of a typical autoconf project; configure, make, make install. Refer to my previous post, Building FFMpeg, for building on Ubuntu; it includes instructions for adding package dependencies and building with my common feature set. The --enable-libzmq configure flag enables ZeroMQ-based filter commands. It also requires installing the ZeroMQ development libraries before compiling (also found in the instructions).
Not all FFMpeg filters accept ZeroMQ commands; the ones that do are noted in the filter documentation (FFMpeg Filters), look for 'This filter supports the following commands'.
It's best to start by setting up your command line sequence, then update it to account for ZeroMQ command inputs. The FFMpeg documentation indicates the hue filter supports ZeroMQ commands; http://ffmpeg.org/ffmpeg-filters.html#Commands-14
10.90.2 Commands
This filter supports the following commands:
- b
- s
- h
- H
Modify the hue and/or the saturation and/or the brightness of the input video. The command accepts the same syntax of the corresponding option. If the specified expression is not valid, it is kept at its current value.
The following command applies the hue filter with h=90, s=1 and plays the video after the filter has been applied;
$ ffmpeg -loglevel debug -i /tmp/foo.mp4 -filter_complex "hue=h=90:s=1" -vcodec libx264 -f mpegts - | ffplay -
To apply filter commands via ZeroMQ you need to:
1) know the internal filter name of the pipeline
2) add ZeroMQ input to the filter
3) send the command via zmqsend command
We specifically added debug logging to our FFMpeg command so we could learn the name of the internal filter; Parsed_hue_1
[Parsed_hue_1 @ 0x3a1e600] H:0.5*PI h:90.0 s:1.0 b:0 t:11.9 n:357
Let's add ZeroMQ input to our filter, note the slight modification to our previous command;
$ ffmpeg -loglevel debug -i /tmp/foo.mp4 -filter_complex "zmq,hue=h=90:s=1" -vcodec libx264 -f mpegts - | ffplay -
Lastly, re-run the above command and within a new terminal send a hue filter parameter update;
$ echo Parsed_hue_1 h 50 | zmqsend
$ echo Parsed_hue_1 s 3 | zmqsend
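To see the effect continuously, here's a minimal sketch that sweeps the hue a full 360 degrees while the ffmpeg pipeline above is running; it assumes the zmqsend binary (built in FFMpeg's tools/ directory) is on your PATH:
#!/bin/bash
# sweep the hue from 0 to 360 in 10 degree steps, one update every half second
for h in $(seq 0 10 360); do
    echo Parsed_hue_1 h ${h} | zmqsend
    sleep 0.5
done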
Whelp, that's about all I've got. While I've looked on and off at ZeroMQ integration with FFMpeg over the past few years, I've never found any solid documentation. Hopefully this will help set you on your way. I'll likely post more as I go.
Cheers.
Monday, January 28, 2019
Superset Visualization - Get Your Data On
Big Data can quickly become a big problem, and one of the challenges is simply making sense of massive volumes of information. A good visualization tool is key, and that's the topic of this lil' post.
From the creators at Airbnb (yes, the same Airbnb that brings you the concept of couch surfing on a stranger's chaise lounge) comes an open-source tool that's worth a serious look. It's gone through a few names: originally Panoramix, renamed to Caravel, and renamed again to Superset a few months later, so be on the lookout for alternative name references when doing your own reading. Here is a good place to start; https://www.youtube.com/watch?v=3Txm_nj_R7M
Let's follow through the installation process (performed on Ubuntu 16.04 LTS);
$ sudo apt-get install -y build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev
$ pip install --upgrade setuptools pip
$ sudo pip install superset
$ fabmanager create-admin --app superset
$ superset db upgrade
$ superset load_examples
$ superset init
At this stage, Superset should be fully installed with a number of examples. Superset provides a web interface that is accessible after running the server (as follows);
$ superset runserver
Point your browser to localhost:8088 and bask in your tech glory. The interface provides a number of dashboards that demonstrate the visualization capabilities.
That's all for now; follow-on posts will explore Superset capabilities. In the meantime, feel free to reference Superset's main documentation here; http://airbnb.io/projects/superset/
Friday, January 18, 2019
FFMpeg Time Machine
We installed a security camera at our house and it, like most, has the ability to capture video based on motion. Unfortunately, robust motion detection tends to introduce latency as it accrues sufficient motion to determine that the event is significant rather than noise, like a leaf blowing across the lawn. The trouble with this is that the time leading up to the motion is often lost from the video capture and you're left with only part of the event. For example, it's not uncommon for a video of the mail-person delivering a package to start when the person is already well within the scene, rather than a more complete video capturing them as they enter it.
Ideally, what you'd want from a security system is a robust motion detection algorithm that, once a motion event has been detected, provides video leading up to the motion, say 10 seconds back and forward. This can be accomplished by buffering video and bundling the buffered video into the captured video.
This is surprisingly easy with FFMpeg and is the focus of this post. Read on ye seeker of FFMpeg sexiness.
Let's break down a simple implementation:
- capture video from a camera into 10 second segments, with a common naming convention that includes an incrementing numeric (making each file name unique)
- a simulated trigger event which responds by grabbing the last X segments and concatenating them into a final video file
Capture Video Segments (i.e. Buffers)
Our video source will be our USB camera. In the interest of posterity, and to verify our concatenation of the video segments is seamless and in-order, we'll overlay the current time on the video. The segment muxer automagically creates video segments of the specified length, and you can specify a segment file naming convention as well. The following example captures the camera video, applies a time-stamp overlay and generates files of the form {/tmp/capture-000.mp4, /tmp/capture-001.mp4, ..., /tmp/capture-999.mp4}
$ ffmpeg -i /dev/video0 -vf "drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSans.ttf:text='%{localtime\:%T}'" -f segment -segment_time 10 -segment_format mp4 "/tmp/capture-%03d.mp4"
Concatenate Video Segments Into Final Video
Three key things need to be done in order to concatenate the video segments into the final video:
- determine which video files to concatenate
- order the video files by capture time
- concatenate them into the final video
The following script does precisely that. The find command looks for files that are less than 60 seconds old and sorts them by epoch time. Each file is appended to a temp file, so the temp file holds the list of video segment files in order. FFMpeg takes this list of files and concatenates them, in order, into the final video file.
$ ./grabCamEvent /tmp/foo.mp4
The above command would result in a 60-70 second video file starting approximately 60 seconds ago. Approximately, because the segment length comes into play here; the oldest matching segment's timestamp falls within the last 60 seconds, but its content may have begun up to 10 seconds earlier.
$ cat grabCamEvent
#!/bin/bash
outFile=$1
tmpFile=/tmp/temp-$(date +%s)

# list segments modified within the last 60 seconds, oldest first
for f in $(find /tmp/ -name "capture*mp4" -newermt '60 seconds ago' -printf "%T@ %p\n" | sort -n | cut -f 2 -d ' '); do
  echo "file '$f'" >> $tmpFile
done

# concatenate the segments, in order, into the final video
ffmpeg -y -f concat -i $tmpFile -vcodec copy $outFile
rm $tmpFile
This is primarily the foundation for a proof-of-concept. A proper solution would include periodically deleting old video files and writing the video segments in such a manner as to not burn out your hard drive, perhaps replacing the destination with a ram-disk (see the sketch at the end of this post).
I'm genuinely puzzled, given the ease of this solution, why more security systems don't employ such a feature.
¯\_(ツ)_/¯
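As a minimal sketch of those hardening steps, assuming a hypothetical tmpfs mount point of /mnt/cambuf, the segments can be written to RAM and pruned periodically:
# mount a small tmpfs ram-disk and point the capture there instead of /tmp
sudo mkdir -p /mnt/cambuf
sudo mount -t tmpfs -o size=256m tmpfs /mnt/cambuf
# prune segments older than 5 minutes (run periodically, e.g. from cron)
find /mnt/cambuf -name "capture*mp4" -mmin +5 -delete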
Sunday, January 13, 2019
Bash Printf -- Pretty, Pretty Numbers
A need I've repeatedly encountered is formatting a numeric with leading zeros, similar to the common form used in C. Typically, I'd take an over-complicated approach of comparing the number to >100 or >10 and pre-pending leading zeros.
After investigating alternative approaches with good 'ole Google, the better approach is shown below;
$ cat /tmp/go
#!/bin/bash
for i in `seq 100`; do
N=$(printf %03d $i)
echo $N
done
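The %03d format zero-pads each value to three digits, so the first few lines of output are:
001
002
003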
Cheers.