Monday, November 2, 2020

FFmpeg Commands -- Blog Content Summary

Over the past couple of years this blog has accumulated a variety of content that interests me.  While not intended, a considerable amount of it revolves around using FFmpeg.  I feel drawn to this particular utility for a number of reasons:

  • it is primarily a command-line interface; offering a great deal of flexibility
  • while powerful and popular, I find it lacks sufficient examples and documentation demonstrating its use, presenting a significant learning curve for the new user
  • I have a history of interest in image processing and computer vision which aligns with, or complements, these interests
  • I have an interest in polishing and refining personal media content and this tool hits all the buttons to do so
I thought I'd touch on some of the areas of content, specifically to see what areas I've covered and to plan areas that may be worth focusing on.  I have a starting point for a more comprehensive tutorial on FFmpeg that I am preparing for a future presentation, perhaps YouTube or a Meetup.  Stay tuned.

Setup/Configuration/Installation

My daily drivers tend to be Linux-based workstations and laptops, so my setup/configuration instructions tend to follow:

Convert Images to Video

Nearly everyone on planet Earth now carries around a high-definition camera in their pocket.  We snap hundreds, if not thousands, of photos over the course of the year.  It's pretty common to want to take a series of images and transform them into a slide-show to amaze and impress your friends and family.  Here are but a few resources to do just that.
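For instance, a minimal sketch (untested here, file names hypothetical): a numbered series of JPGs rendered at 2 seconds per image into an H.264 slideshow.

$ ffmpeg -framerate 1/2 -i img%04d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p slideshow.mp4

The -framerate 1/2 reads one image every 2 seconds, -r 30 produces a smooth 30 fps output, and -pix_fmt yuv420p keeps the result playable on most players.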

Clipping Time Segments from Video

First-order video processing is trimming out uninteresting time segments.  Taking a raw video, clipping out the interesting bits, and concatenating them together will set you on course for a better video presentation.
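A rough sketch of that flow, with made-up timestamps and file names; stream copy (-c copy) avoids re-encoding, and the concat demuxer stitches the clips back together:

$ ffmpeg -ss 00:01:00 -t 30 -i raw.mp4 -c copy clip1.mp4
$ ffmpeg -ss 00:05:10 -t 50 -i raw.mp4 -c copy clip2.mp4
$ printf "file 'clip1.mp4'\nfile 'clip2.mp4'\n" > list.txt
$ ffmpeg -f concat -safe 0 -i list.txt -c copy highlights.mp4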

Concatenating Videos

Cropping Video

Whether it's cutting out a noisy background or simply directing the viewer's attention, often you want to crop the point-of-interest out of a wide-angle shot.
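Something like this sketch (geometry made up); the crop filter takes crop=w:h:x:y, and when x:y are omitted the crop is centered:

$ ffmpeg -i input.mp4 -vf "crop=640:480" -c:a copy cropped.mp4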

Padding/Extending Video

Sometimes, just the opposite is needed: taking a video of arbitrary size and making it bigger to accommodate an additional visual (e.g. overlay, video, graph, ...).
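A hedged sketch, padding a 1280x720 clip onto a 1920x720 black canvas so an overlay can live in the new space on the right (pad=width:height:x:y:color):

$ ffmpeg -i input.mp4 -vf "pad=1920:720:0:0:color=black" -c:a copy padded.mp4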

Zooming In/Out of Video

A static shot can still hold the viewer's attention by zooming in or out to draw focus to a narrower or broader view.
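One way to do it is the zoompan filter; this untested sketch creeps slowly toward the center of the frame (the step and limit values are arbitrary):

$ ffmpeg -i input.mp4 -vf "zoompan=z='min(zoom+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720" zoomed.mp4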

Applying Overlay to Video

Like frosting on a cake, adding overlays (image, text or sub-video) on top of a video makes a good product even better.
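For example, a sketch stamping a logo in the top-right corner and burning in a caption; logo.png and the caption text are assumptions, and drawtext requires an FFmpeg build with libfreetype (and possibly an explicit fontfile= option):

$ ffmpeg -i input.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=W-w-10:10,drawtext=text='Summer 2020':x=20:y=H-th-20:fontsize=36:fontcolor=white" -c:a copy overlaid.mp4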

Fading In/Out of Video

We've all seen it: a scene transition from one location to another, typically accomplished by fading out of the first scene (e.g. fade-to-black) and fading into the next (e.g. fade-from-black).
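A quick sketch for a 30-second clip; the fade-out start time has to be computed from the clip length:

$ ffmpeg -i input.mp4 -vf "fade=t=in:st=0:d=1,fade=t=out:st=29:d=1" -c:a copy faded.mp4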

Video Blending

Sometimes we wish the resultant video to be composed of multiple semi-transparent sources, sometimes referred to as cross-fading.  Dynamically applying a weighting factor to multiple sources can give you some pretty dramatic effects.
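On newer FFmpeg builds (4.3+) the xfade filter does the weighting for you; this sketch cross-fades from a.mp4 into b.mp4 over 1 second starting at the 4-second mark (both inputs must share resolution and frame rate).  Older builds can accomplish similar effects with the blend filter and a time-based weighting expression.

$ ffmpeg -i a.mp4 -i b.mp4 -filter_complex "xfade=transition=fade:duration=1:offset=4" crossfade.mp4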

Video Scene Transitions

I've authored a series of posts which provide a variety of scene transitions, introducing a new cut scene by moving in the destination scene in some manner (e.g. wipes, curtain call, ...).
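The xfade filter sketched above also ships with a pile of wipe-style transitions; for instance, a hypothetical left-to-right wipe between two clips:

$ ffmpeg -i a.mp4 -i b.mp4 -filter_complex "xfade=transition=wipeleft:duration=1:offset=4" wiped.mp4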

Video Blurring

Whether it be a license plate, a disruptive background, or subjects who would rather not be in your video, there is often a need to apply a blur to a video or to specific areas of it.
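A common recipe is crop-blur-overlay: carve out the region, blur it heavily, then paste it back at the same coordinates (the 200x80 region at 600,400 is made up):

$ ffmpeg -i input.mp4 -filter_complex "[0:v]crop=200:80:600:400,boxblur=10:3[blur];[0:v][blur]overlay=600:400" -c:a copy blurred.mp4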

Video Stabilization

Shaky hand?  No tripod?  This video filter can remove the side-effects of a shaky videographer.
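The usual two-pass vid.stab flow looks roughly like this; it assumes an FFmpeg build with --enable-libvidstab, and the first pass writes a transforms.trf file that the second pass consumes:

$ ffmpeg -i shaky.mp4 -vf vidstabdetect=shakiness=5 -f null -
$ ffmpeg -i shaky.mp4 -vf vidstabtransform=smoothing=30,unsharp=5:5:0.8:3:3:0.4 stable.mp4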

Creating Cartoon from Video

Can you take a real video and convert it into something more cartoon-like?  We spent some time doing just that.

ZeroMq (e.g. 0mq, zmq) is integrated into FFmpeg and can be used to dynamically modify video filters on the fly.

I particularly find that make and video processing are a good pairing, especially when the video processing utility is command-line based.  These two utilities play very well together.

Tuesday, October 13, 2020

Google ProtoBuff + ZeroMq -- C++


In the last series of posts we demonstrated ZeroMq as a technology that provides 'sockets on steroids', supporting multiple platforms as well as multiple languages.  The examples to date have been transmitting strings between senders and receivers.  While interesting, to effectively create a distributed heterogeneous system we need to be capable of transmitting meaningful messages, preferably complex data structures rather than just strings.  That's where Google's Protobuf comes into play: http://code.google.com/p/protobuf/


Building off our previously created Ubuntu 12.04 32-bit VM, let's start by installing the additional necessary packages:



$ sudo apt-get install libprotoc-dev


With the developer libraries installed, we can now extend our previous C++ example to transmit a Protobuf message.


We'll extend our Makefile to add the necessary libraries and a target (e.g. msgs) to generate the C++ files for the message.



$ cat Makefile 
CC=g++
SRCS=main.cpp Messages.pb.cc
OBJS=$(subst .cpp,.o,$(SRCS))
INCLUDES += -I.
LIBS += -lpthread -lrt -lzmq -lprotobuf

.cpp.o:
        $(CC) -c $<

main: msgs ${OBJS} 
        ${CC} ${CFLAGS} -o $@ ${OBJS} ${LIBS}

msgs:
        ${SH} protoc -I. --cpp_out=. Messages.proto

clean:
        ${RM} ${OBJS} main *.pb.*


Oh, we should take a look at our simple Protobuf message file:


$ cat Messages.proto 
message Person {
  required int32 id=1;
  required string name=2;
}


Finally, our extended main file:


$ cat main.cpp 
#include <zmq.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <assert.h>
#include <string>
#include "Messages.pb.h"

void* ctx=zmq_init(1);
const char* EndPoint="tcp://127.0.0.1:8000";
static const int N=100;
static const int BufferSize=128;

void* sender(void*)
{
  printf("(%s:%d) running\n",__FILE__,__LINE__);
  void* pub=zmq_socket(ctx, ZMQ_PUB);
  assert(pub);
  int rc=zmq_bind(pub,EndPoint);
  assert(rc==0);

  Person p;
  p.set_name("fatslowkid");
  p.set_id(01);

  for(int i=0; i<N; ++i)
  {
    zmq_msg_t msg;
    std::string S=p.SerializeAsString();
    // copy the serialized bytes into the message; a serialized protobuf may
    // contain embedded nulls, so use the string size rather than strlen()
    int rc=zmq_msg_init_size(&msg, S.size());
    assert(rc==0);
    memcpy(zmq_msg_data(&msg), S.data(), S.size());
    rc=zmq_send(pub, &msg, 0);   // ZeroMQ 2.x API: send/recv take a zmq_msg_t*
    assert(rc==0);
    ::usleep(100000);
  }
  return 0;
}

void* receiver(void*)
{
  printf("(%s:%d) running\n",__FILE__,__LINE__);
  void* sub=zmq_socket(ctx, ZMQ_SUB);
  assert(sub);
  int rc=zmq_connect(sub,EndPoint);
  assert(rc==0);
  const char* filter="";
  rc=zmq_setsockopt(sub, ZMQ_SUBSCRIBE, filter, strlen(filter));
  assert(rc==0);

  for(int i=0; i<N; ++i)
  {
    zmq_msg_t msg;
    zmq_msg_init_size(&msg, BufferSize);
    const int rc=zmq_recv(sub, &msg, 0);
    assert(rc==0);
    Person p;
    // parse directly from the message buffer using its actual size
    p.ParseFromArray(zmq_msg_data(&msg), zmq_msg_size(&msg));
    printf("(%s:%d) received: '%s'\n",__FILE__,__LINE__,p.name().c_str());
    zmq_msg_close(&msg);
  }
  return 0;
}

int main(int argc, char* argv[])
{
  printf("(%s:%d) main process initializing\n",__FILE__,__LINE__);
  int major, minor, patch;
  zmq_version(&major, &minor, &patch);
  printf("(%s:%d) zmq version: %d.%d.%d\n",__FILE__,__LINE__,major,minor,patch);

  pthread_t rId;
  pthread_create(&rId, 0, receiver, 0);
  pthread_t sId;
  pthread_create(&sId, 0, sender, 0);

  pthread_join(rId,0);
  pthread_join(sId,0);
  printf("(%s:%d) main process terminating\n",__FILE__,__LINE__);
}


Notice that we now transmit and receive Protobuf messages, serialized as strings.  The value of this is that the serialization mechanism offers multi-platform and multi-language support.
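A minimal build-and-run sketch, assuming the three files above sit in the same directory:

$ make      # runs protoc, compiles, links against zmq/protobuf/pthread
$ ./main    # receiver and sender threads start; received names are printed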


Cheers.

Tuesday, October 6, 2020

Intro to Docker



Containers are extraordinarily popular and open the doors to alternative service-oriented architectures.  This week I spent a few minutes with Docker, getting a quick intro to the technology.  I started with a quick intro from YouTube.

Following along, we first need to install Docker on our Ubuntu machine.

$ sudo apt-get install docker.io

After Docker is installed on our workstation we configure the container, build it, then run it. 

Our example will be trivial: an HTTP server with a static welcome message.  The Dockerfile specifies the configuration of the container; src/index.php serves up the welcome page.

~/docker$ tree .
.
├── Dockerfile
└── src
    └── index.php


~/docker$ cat -n Dockerfile
     1    FROM php:7.0-apache
     2    COPY src/ /var/www/html/
     3    EXPOSE 80

The Dockerfile specifies the container recipe: an Apache+PHP base image and a couple of configuration steps: 1) copying the index file into the container's web root, and 2) exposing port 80 for incoming traffic.

~/docker$ cat -n src/index.php
     1    <?php
     2   
     3    echo "Hello, World";
     4    ?>


With the configuration information available, we build the container by issuing the following command:

~/docker$ sudo docker build -t hello-world .

Afterwards, we can launch the container:

~/docker$ sudo docker run -p 8080:80 hello-world

The port mapping redirects incoming traffic on host port 8080 to container port 80.

Then, we can connect to the container by pointing a browser at our host: http://localhost:8080
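From a second terminal, a quick sanity check (assuming the run command above is still active):

$ curl http://localhost:8080     # should return: Hello, World
$ sudo docker ps                 # shows the running container and the 8080->80 port mapping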

Enjoy!

Sunday, September 27, 2020

Will The WFH Trend Result In An Increase In Outsourcing?

Photo by Oleg Magni from Pexels

In general, I desperately try to see the good in things; every-cloud-has-a-silver-lining style stuff. Typically, if you look hard enough you can find good in most things, but you tend to have to dedicate real effort to find it.
This dumpster fire of a year has required a herculean effort from the world to find the good in a historically crappy year. Remember when 46 million acres of Australia burned to ash and we were all thinking that would be the disaster of the year? Devastating, yes, but not nearly the 'disaster of the year' nominee we may have thought at the time. Fast forward to a worldwide pandemic, a worldwide unemployment rate of 8.3%, businesses encountering challenges of a lifetime, young and old affected, and the mental health of the world teetering on the razor's edge between partial insanity and full-blown mental breakdown. Finding good, any modest level of good, is like an epic game of Where's Waldo.

In search of good, some, I'd even argue many, have found a glimmer of a positive in the form of a worldwide 'working from home' (WFH) policy change. Personally, I have found solace in the fact that I enjoy the WFH environment much more than I had expected. I was always a 'go into the office' guy, preferring the separation, and as a life-long participant in classical conditioning, leaving home to go into the office was a means of removing distractions and turning on 'work mode'. Work would certainly follow me home, but the majority of work was conducted at the office. Enter economic and health survival mode; those that can perform work from home are encouraged to. A nation, a world, thrust a massive population into the WFH deep end; they began with a doggy paddle, progressed to treading water, and are on their way to proficiency in the 'new norm'. Most of my colleagues and friends feel the WFH policy, full or part-time, is a big win for many and anticipate the policy continuing post-pandemic. Trading the frustration, cost and time wasted in traffic for added personal time can be a big win for the current and future workforce. 

So, if you're like me, you set your eye on this glimmer of good on the horizon, anticipating the benefits of working from home once the world has reclaimed some form of normal. You look to a better world where your commute remains 12 feet from your bed, society has returned to some form of normal, and you continue to have the ability to use your lunch break to take your dog for a walk or have lunch with your spouse/partner.

Desperate to find the negative in this sliver of a perk, the dark shadows of my unconscious push a thought into my head: "if the workforce transitions to primarily remote, will outsourcing become the new norm?" A well-delivered shin-kick to my fragile emotional state, delivered with precision and purpose; your future job security is now on the table....well done, dark forces....well done.

Personally and professionally I have some very strong opinions on outsourcing, and as a courtesy to any dedicated readers that have made it this far I'll refrain from detailing them, but I will suggest that collaborating teams require timely communications with one another, and conflicting timezones are the supervillain in such matters. Perhaps in time we will find a way to work effectively in highly remote settings, or corporate culture will change to accommodate. My current team has folks in Florida, the midwest, and California, a 3-hour difference at the extremes. If our culture continues to be flexible in remote worker hours of 9-5 (locally), that means folks lose 3 hours of shared communication time, or are forced to adjust. As a software engineer, it's not uncommon for managers to be even more accommodating, allowing team members to start later or earlier, which can potentially compound the issue. 

Perhaps as a nationwide workforce we fail to work effectively and the WFH policy becomes a failed experiment, only to return to localized teams in cube farms. Alternatively, maybe we evolve as a workforce and find ways to address such issues; timezones become irrelevant and our workforce comes from a worldwide pool of talent.

As an industry we should recognize the possibilities that solving remote and WFH policies present, and strive to become as effective as (or more effective than) colocated teams. As a workforce we should recognize the complications we can add, the benefits it comes with, and the risks to our profession that may be on our horizon. 

Only time will tell.


Tuesday, September 22, 2020

Generating Multi-Plot Real-Time Plots with Python


In an earlier post the real-time plotting capabilities were demonstrated; we're extending on this by showing how to generate multiple plots simultaneously.  A couple of noteworthy observations: in the past post the X and Y scaling was automatically adjusted after each element addition.  While you can still do this, typically for multi-plots we would prefer maintaining a shared X range.  While somewhat unnecessary, I've elected to maintain a uniform Y range as well.




#!/usr/bin/python
from pylab import *;
import time;

def log(M):
  print "__(log) " + M;

def test02():
  plt.ion();
  fig=plt.figure(1);
  ax1=fig.add_subplot(311);
  ax2=fig.add_subplot(312);
  ax3=fig.add_subplot(313);
  l1,=ax1.plot(100,100,'r-');
  l2,=ax2.plot(100,100,'r-');
  l3,=ax3.plot(100,100,'r-');
  time.sleep(3);

  D=[];
  i=0.0;
  while (i < 50.0):
    D.append((i,sin(i),cos(i),cos(i*2)));
    T1=[x[0] for x in D];
    L1=[x[1] for x in D];
    L2=[x[2] for x in D];
    L3=[x[3] for x in D];

    l1.set_xdata(T1);
    l1.set_ydata(L1);

    l2.set_xdata(T1);
    l2.set_ydata(L2);

    l3.set_xdata(T1);
    l3.set_ydata(L3);

    ax1.set_xlim([0,50]);
    ax2.set_xlim([0,50]);
    ax3.set_xlim([0,50]);
    ax1.set_ylim([-1.5,1.5]);
    ax2.set_ylim([-1.5,1.5]);
    ax3.set_ylim([-1.5,1.5]);

    plt.draw();
    i+=0.10;
  show(block=True);

#---main---
log("main process initializing");
test02();
log("main process terminating");

Easy Peasy;



Monday, September 14, 2020

Atypical Uses for Makefiles


Makefiles traditionally center themselves in a build process.  While they have taken a backseat to more modern build utilities, oftentimes they are simply that....in the backseat.  Eclipse, for example, autogenerates makefiles as part of a project build.  Why?  Because when properly done, makefiles are extraordinarily powerful.  The dependency engine can parse large projects and selectively execute only what needs to be rebuilt.  Done poorly, you're at the mercy of 'make clean; make all' sequences to make sure everything is built up-to-date.

Aside from its power as a build utility, make can be useful for other purposes as well.  Over the past couple of years I've extended my use of make into a few additional areas where I find the dependency engine proves to be very useful.  This post will touch on a few.


Quick Introduction to Makefiles

Whether you are familiar with makefiles or not, consider taking a peek at this past post.  While it touches on the general syntax and utility of make, in the later sections it shows some atypical uses.  As a means to demonstrate the dependency engine, it shows how ImageMagick can be used to transform images of varying file types.  A daisy-chained dependency chain can be created by converting an image through a series of steps: JPG => PNG => GIF => JP2 => XWD.  Perhaps impractical, but it shows an atypical use of make and provides a simple and visible demonstration of the dependency engine.

Poor Man's Parallelism

I'm particularly proud of this usage; to my knowledge no one else has ever recommended this use of make.  Most folks familiar with make know that it can be multi-threaded: the dependency engine fires off the specified number of jobs to speed up the process.  Given a large text file, make can provide an easy means to parallelize its processing.  The trick: split the file into segments, process each segment, then join the results.  In ~30 lines of code, you can greatly improve the execution time for simple parallel processing, all enabled by the sophisticated dependency engine at the core of make.  Take a peek at the details here.
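The general shape of the trick, assuming the Makefile from the linked post supplies a pattern rule that turns each part.XX segment into a processed part.XX.out:

$ split -d -n l/8 big.txt part.     # carve big.txt into 8 line-aligned segments (GNU split)
$ make -j8                          # process the segments in parallel via make's job engine
$ cat part.*.out > result.txt       # join the per-segment results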

FFmpeg

While I've toyed to a good degree with bash scripts, Python scripts and a variety of other shell utilities, I've come home to make as the best tool for FFmpeg-based tasks.  Video conversions and/or modifications lend themselves nicely to makefiles.  Suppose you have a large directory full of images and you wish to create a slideshow....this post is a good starting point for showing you how makefiles can make that an easy process.  Want to take 6 hours of raw camera footage and transform it into something worth watching....this post shows how that can be done.  Want to download a YouTube video and apply a series of transformations to create some completely new content.....this post can give you a head start.

Some of the beauties that were created by these posts;


Shell Script Replacements

Makefiles have become my ad hoc replacements for simple shell scripts.  Certainly, they are limited to executing simplistic series of commands and are no substitute for sophisticated needs, but chances are if you have to execute a series of shell commands in a pre-defined order, a makefile will scratch that itch.  Sophisticated recipes can become clumsy, and I've found myself authoring some ugly, ugly, ugly recipes, but recently I found a workaround.  Suppose you have a series of 5 tightly-bound commands and find it difficult to define them as a recipe; a trick I've found useful is creating a shell script by means of a recipe, then using it in another target.

all: somethingCool.sh
        ${SH} ./$<

somethingCool.sh:
        ${SH} echo "echo 'doing something cool'" > $@
        ${SH} echo "sleep 1" >> $@
        ${SH} chmod a+x $@

clean:
        ${RM} somethingCool.sh

Notice the helper shell script is created by make as a dependency of the all target, used as needed, and can be safely removed and recreated when later needed.  Certainly, this falls apart if the creation of the shell script by make is unnecessarily complex, but in many of my projects the only file under version control is the makefile; the rest are created as part of the project and removed by the clean target.  Nice and clean.  Rapid prototyping of an alternative script can be done via a new target and, once you get the hang of it, can prove to be quite powerful.  The make, make run, make clean cycle becomes the holy trinity of rapid prototyping.

I've found ad hoc usage reports, grabbing debug logs, grepping for significant events, and sorting and tallying event counts really align with raw, events, and report targets and recipes.

Wednesday, September 9, 2020

Cartoonize Video with FFmpeg -- Take 2

Some time back, I posted a rough attempt to convert a video into a cartoon-style equivalent; here.  That technique converted the video into a series of still images and ran each through ImageMagick's sketch filter.  The result looked something like this:




Since then, I've played with this time and time again, revising it to something like this:



Again, recently I thought of another technique.  Often, cartoon coloring schemes look 'flattened', meaning red'ish elements are colored red, blue'ish colored blue, and so on.  The 'geq' filter can perform such a conversion pixel-by-pixel and seems promising.  It would require applying a series of filters, one for each color, so I thought I'd give it a try on simple grayscale first...if it showed promise, then expand to a colorized equivalent.  Here is what I got; seems promising:

The technique, in a nutshell, is to convert the video to grayscale, then perform a selective filter on each pixel.  Pixels within the [0,50] luminosity range get assigned 0, pixels within the (50,100] range get assigned 50, and so on:
[0,50] = 0
(50,100] = 50
(100,150] = 100
(150,200] = 150
(200,255] = 200

The filter can be a mouthful, taking the form:
$ ffmpeg -i input.mp4 -filter_complex "format=gray,geq=lum_expr='if(lte(lum(X,Y),50),0,if(lte(lum(X,Y),100),50,if(lte(lum(X,Y),150),100,if(lte(lum(X,Y),200),150,if(lte(lum(X,Y),255),200,0)))))'" -acodec copy cartoonX.mp4

Notice, it's a chain of less-than-or-equal-to conditions of the form if(condition,X,Y), which means if the condition holds, set the pixel to X, otherwise Y.  To execute the series of else conditions, a nesting of alternative conditions replaces Y.  Ugly, but effective, and since it doesn't require converting the video to frames and back again, a ton faster; still slow'ish, but way faster than the alternative.

Seems to me that this shows a good deal of promise.  The result is dark'ish, but that could be addressed by migrating the pixel assignments to the ceiling of each range, or perhaps applying a lightening filter beforehand.  Seems plausible for converting a video to a grayscale cartoon'ish equivalent; using the same technique for color seems like a good next step.
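An untested variation on the same filter: map each band to the top of its range and apply a mild brightness bump with the eq filter ahead of the banding; the exact values here are guesses.

$ ffmpeg -i input.mp4 -filter_complex "eq=brightness=0.05,format=gray,geq=lum_expr='if(lte(lum(X,Y),50),50,if(lte(lum(X,Y),100),100,if(lte(lum(X,Y),150),150,if(lte(lum(X,Y),200),200,255))))'" -acodec copy cartoon-light.mp4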




Sunday, September 6, 2020

Deep Thinking / How To Be A Better Software Engineer


Over the years you get to understand what 'works' for you, and oftentimes you'll only later find that what works for 'you' often works for 'others' and is also known by 'others'.  And sometimes, decades after you learn these tricks and techniques, you discover that they are readily known and even recorded in print.  Here are a few techniques that can improve your thought process, translating into better software products and contributions.


Thinking In Motion

My first job in software engineering was at a 40-acre engineering facility, a series of buildings covered by a massive superstructure.  Rain, sleet, or snow, you had 40 acres of walking available, uncompromised by environmental conditions, 24/7.  A lap around the facility after lunch was the typical ritual, and it seemed to evolve into a routine whenever you hit a mental block.  Struggling with a design or implementation often meant figuratively, or sometimes literally, banging your head against the wall.

So there you are, trying to work out a troubling problem, frustrated, and in time the frustration turns to a bit of anger.  Surrounded by your office mates, your workspace is no place to lose your cool, so you decide by happenstance to take a walk, mostly just to relocate to a place where you can grit your teeth and maybe cuss a bit under your breath.  Your focus is still on the problem at hand, but now you're out and about, thinking and venting while in motion.  Like the sun pushing through dark and stormy clouds, a solution begins to emerge, first as a glimmer of possibility, evolving into a full-blown solution.  Now, armed with a solution, you double-time it back to your desk, reinvigorated and supercharged to prove it out.
Weeks pass and you find yourself in a similar situation: puzzled and frustrated, you take a walk, arrive back at your desk with a solution, and implement it on your return.  Again and again, this proves to be an effective process for you, and as far as you're concerned it works exclusively for you.  Later, you find your co-workers have arrived at the same process.  Later in life, you learn this exercise-based thought process is well documented by the likes of Scientific American and such.
In general, even mild exercise increases blood flow, which energizes the old cerebrum and hippocampus and pulls you out of your mental funk.

Thinking in a Free Space

You spend a good portion of the day trying to solve something, but it continues to evade you. Tattered and defeated, you pack up for the day, make your way to your car, and begin your commute home.
You can get stuck on a small number of associations that aren't working when consciously trying to remember something. When you relax, your mind associates more freely and randomly, which gives it a chance to find a different association to the information than the fruitless ones you were consciously pushing on. A guilty pleasure comes on the radio and you proceed to car-dance your way home to a Hungry-Man dinner and a cold beer. Mid-song you're hit square between the eyes with a solution to your day's problem......BAM!! Like your 'take a walk' process, this happens again and again, and you're left with "why the heck does this work?".


Some say that the reason this works is that your brain is an association-solving machine. Sometimes you get stuck on a small number of associations at the conscious level that won't solve your problem, but you continue to try to make them work and continue to fail. Then, with your mind set free by a ridiculous radio song, it can more freely and randomly find associations, possibly landing on a completely new thought that will solve your problem. Sometimes, the best way to solve a problem is to set it aside for a bit and 'background process' it.

Thinking in Speech

You find yourself stumbling over a problem, and under the prodding of your manager you reluctantly take a walk over to your coworker for an assist.  You're not sure how to solve it and begin to explain the problem to your teammate; you describe the problem, you describe what you've tried, and mid-sentence you stop, a new solution comes to mind, and you return to your desk.  The act of verbalizing the problem and your thought process opened new doors and didn't even require a contribution from your teammate.  Sometimes simply talking the problem out loud allows you to arrive at new solutions.  A newly found thought trick goes right into your personal tool belt.  In time, you find others have arrived at the same tactic; it is often known as the 'rubber ducking' principle as described in 'The Pragmatic Programmer'.  The audience of your verbal mind-dumping can be a teammate, your bartender, a reluctant spouse/partner/date, your pet, or your choice of any given inanimate object.

Thinking in Print

I've never known a software engineer who likes authoring documentation; perhaps they exist and are simply elusive, or perhaps they don't exist in this universe.  Despite a general distaste for writing documentation, we tend to recognize its value and necessity to quality software and will do so reluctantly.  I find, despite not liking doing it, that the sheer act of writing stuff down betters your design.  You are forced to describe your architecture, your classes, their responsibilities, and their interactions, and you find yourself internally challenging your design: 'why did I do this?', 'what if I had done that?', 'what are this particular class's responsibilities?'.  In doing so, you leave with a list of refactoring opportunities and/or changes to the design and make the time to incorporate them, ending with a more robust and sophisticated product.  I find this is likely a simple extension of verbalization, as it gives you the same benefits.


These are likely but a few tips and tricks for supercharging your thought process.  To my knowledge these tricks appear to be pretty well known, some well established and documented, some passed along via developer palaver over drinks.  My gift to you.

Sunday, August 30, 2020

How To Quit



Some time back a colleague, who had recently joined our profession, found himself unchallenged in his current role and threw his resume into the ring.  Shortly thereafter he was offered a new job more fitting of his interests and asked me for advice on how to proceed with 'giving notice'.  This post will revolve around that advice, expanding on it a bit.

Before we get into it, I find myself struggling to recall how I ever came to these opinions.  School certainly didn't touch on such topics, and I don't recall ever engaging colleagues or mentors about the matter.  Google wasn't nearly as popular when I was a junior engineer, nor were Reddit, Twitter,...so I can only surmise that I came to these conclusions by means of sidebar conversations or personal reflection.  Being long-in-the-tooth, I often take such topics for granted, but have no recollection as to how/when I came to such opinions.  There is an entire world full of junior folks entering this, and every, profession, faced with such questions, trying their best to do the right thing, and there is a general lack of good advice IMHO.  Reddit subreddits on such matters are plagued with near-toxic advice, partly because much of the community is under a lot of stress.  So I'd encourage you, as professionals, to be approachable for the younger folks; offer them your support, encouragement and knowledge readily, patiently and promptly.  Now, let's get into the matter at hand.

Assumptions

While every situation has unique qualities, I'm going to gear my advice toward a graceful self-initiated departure with a goal of leaving on good terms.  Sometimes deciding to leave a company can be easy, specifically when you hate your tasks, your boss or your teammates.  Far more often I find it can be a difficult decision; I spent a little time on that topic in a previous post, Intellectual Wrestling When Contemplating Leaving a Company, if you have some time to burn.

General Guidance

What follows is some general advice; not necessarily hard-n-fast rules that can't ever be broken, but stuff I try to apply myself.

Keep Your Communication Positive, Professional and Honest

    People change, organizations change, and while you may never imagine yourself working for this particular company again, you may quickly find that the tech community is smaller than you initially think. I routinely find my old teammates working for different companies, and while I don't particularly believe in the "don't burn any bridges or you'll regret it" advice, I have observed that people can be a product of their work environment. I've worked with folks that were terrible to work with at one organization but were completely different at another; people are capable of change. I've also known many a folk who have worked for the same organization on multiple occasions where the work environment was drastically different between the instances; companies can change. Toxic elements can be replaced with better ones; bad managers can retire, quit, or be terminated in favor of better ones. A new manager who is ill-equipped for the role may grow into a great leader.  Organizations can be organic, changing for the better, or sadly for the worse.
    I personally feel that you should be positive and professional because you want to be, not because you're expected to or forced to.  Additionally, it's important to stay true to yourself.  If you've been anxiously waiting 3+ months to tell your boss or teammate to 'shove it' and you'll regret not saying it, say it rather than live with the regret.  
    When leaving, you get to set the stage as to how much info you are willing to share.  You can be as vague or as detailed as you wish, but I always encourage being honest.  You can be honest and vague -- "I'm leaving for an opportunity that is better suited to my personal interests" -- or as detailed as you wish -- "I'm leaving because I don't have confidence in the financial outlook of the company."  One big factor, in my opinion, in the appropriate level of detail is whether you believe the organization/company/team will act on your feedback.  I truly believe that companies want to be better and recognize that they need to be better.  When companies lose customers and team members they should want to know why, so that they can take those factors into consideration and determine if they should or need to change.  Imagine being a company with a revolving door of talented folks coming and leaving and not knowing why.  That, my friend, is a recipe for disaster.  With your upcoming resignation, you have a new-found sense of freedom that others may not share.  You may know that the majority of your team is miserable for the same reasons that drove you to look elsewhere.  You have an ability to speak on behalf of the team, and a good organization/team wants to know where they can improve (or what's driving folks away).
     

Deliver Your Resignation In Writing

Personally, I prefer providing a copy of my resignation either via e-mail or physical print.  Often, a copy of your resignation will be placed in your HR folder, kinda book-ending your employment record.  Additionally, I feel that exclusively verbal exchanges tend to lose the details, like when your last day will be.  

I tend to follow something of the recipe: 

    • announcement of resignation, 
    • thank them for opportunities, 
    • acknowledge talent of team, 
    • identify final working day  
    • optionally provide contact information
For example;

It is with regret that I hereby tender my resignation from [companyX].  I appreciate the opportunities this position has offered me over the past years and have thoroughly enjoyed working with such a talented team.

My resignation is effective today with my final working day of [last day date] unless it is felt an earlier separation date is more appropriate.

I wish you, my team and everyone at [companyX] all the very best for your continued success.

Short, direct and to the point.  The purpose of this exchange is pretty limited: a graceful announcement of your resignation and your final working date.  The audience: typically your manager and HR.  Team announcements and follow-up conversations tend to take place independently and later. 

Preserve the Chain-of-Command

The term 'chain of command' has some pretty dated and perhaps negative connotations, but it's worth preserving for a number of reasons.  In a world of transparency and open communication, the responsibility goes both ways.  I've known folks that openly shared their job search details and upcoming resignation with seemingly everyone but their manager, and personally I feel that's a bit unprofessional for a few reasons.
Your current and upcoming tasks need to find a new home, specifically someone to do them.  Your manager will be responsible for doing just that: perhaps hiring your replacement, offloading your tasks to existing team members, or re-prioritizing tasks to account for the change in the team.  It's common professional courtesy to give them some time to get their ducks in a row before letting everyone else know.  Give them time to prepare for the question "With Bob leaving at the end of the month, how will his tasks be handled?"  Remember, our goal should be a 'graceful transition', and giving leadership some additional time will assist in precisely that.  Unless you truly hate the company, your team and everyone else for that matter, this additional time will reduce the stress on all those affected; if you like your team, give your leadership some time to come up with a transition plan, it's more for your team than for preserving appearances.

Process

I feel that the typical flow of events takes the following form:

Establish a Final Work Date

A pretty typical convention is to provide your current employer with 2 weeks' notice.  Often, this is a professional courtesy rather than a legal obligation.  That said, it's my opinion that it's best to preserve the 2-week notice if possible, for a few reasons:

    • some company policies require it
    • some contracts (e.g. contractor agreement) legally require it
    • give time for a graceful transition, knowledge transfer, task hand-off

Defining this final work date is typically done by getting a formal job offer, planning a time to notify your manager, and tagging on 2 weeks from that day.  The job offer arrives Friday afternoon, the plan is to notify your manager on Monday morning, and the last day is the second Friday to follow.  

Author a Resignation Letter/Email

Armed with your final day, you can author your resignation letter calling out your final working date to avoid any confusion.

Deliver Verbal Resignation

I've changed jobs a number of times in my career; to this day this step continues to come hand-in-hand with anxiety and discomfort, but you press through it.

Ideally, I prefer this be done in person as I feel it shows more respect.  Managers can be overly busy, so I try to arrange it within pre-established private 1-on-1 meetings, or try to catch them when they have a free moment, requesting a private conversation and delivering the message.

Despite all the best intentions, sometimes this plan falls flat.  Your new employer is expecting you on date X, you need to deliver your resignation 14 days before, and something can always go wrong.  Your manager scheduled work travel and is across the country when you didn't expect it, a sick child resulted in him/her going home, or their schedule is packed with end-to-end meetings throughout the day.  The best-laid plans can often go off the tracks; this is where you may find a need to deliver the message off-plan: via e-mail, to another party, later than planned...

Hopefully, if you are considered a valued member of the team, you'll likely be asked a couple of things: 1) why are you leaving, and 2) is there anything that would make you stay?  It's worth putting in some time thinking about how you would respond to such questions beforehand.

One final topic I always raise in this chat is "how would you prefer to communicate this to the team?"  Two things you're looking for: 1) when should the team be notified, and 2) by whom.  Depending on your manager and organization, your manager may prefer to make the announcement, especially if you have customer and/or intra-departmental relationships.  Otherwise, they may prefer you make the announcement directly to your team.  That covers the 'by whom'; it's also important to get a 'when'.  Your manager may want a day or so before you tell your team; they may want to begin the process of hiring someone, they may want to re-prioritize activities, or they may want to just spend some time on how to address your leaving.  Be prepared to give them a bit of time to get their plan in play.

Deliver Formal Letter/Email

Immediately, or shortly after the verbal exchange, deliver the printout/e-mail to your manager.  Your manager will likely provide a copy to HR for your employee file and will use the last working date for notifying the affected parties (e.g. HR, leadership,...).  Additionally, they may initiate a hiring process by authoring a job posting and coordinating it with HR.

Announce to Team

This is initiated by an official announcement from your manager, or a side note you offer via a Slack channel, e-mail, or during a daily standup.  Typically, this is a short exchange, positive and professional, with the direct goal of making everyone aware.  Most often, it's a short announcement to the group, and throughout the day the team will reach out to you individually to share their opinions and feelings about your departure.

Handoff/Knowledge Transfer

With the clock running, you've acquired a great deal of knowledge and responsibilities that now need to find a new home.  Typically, the team will begin a desperate flurry of handing off your existing tasks and performing knowledge transfers.  Having been responsible for a number of tasks, ones which you are counted on and accomplish well.....now your team is faced with how they will get done when you're gone.  This tends to take the form of you training someone directly, or documenting how you do it.

Exit Interview

Optionally, well-established companies will conduct an exit interview with departing employees.  Good companies understand the value in retaining talent, so they want to understand why folks elect to leave.  A big ol' pile of money goes into hiring someone new and training them to become an effective contributor, so people leaving is the equivalent of cash walking out the door.

Often, Human Resources will conduct an exit interview with you, with the purpose of determining why you chose to leave.  Those details are tallied and perhaps one day applied to reduce employee turnover.  For example, if 80% of folks are leaving due to compensation, then the company can gather the information, establish the trend, and make changes to address the situation.  Equally, they can choose not to act on the findings as well.

You are in full control of how much detail you wish to share, be as vague as you wish, or as detailed as you wish.  

Final Day

The big day; you've cleaned out your desk, preserved all your work, exchanged personal goodbyes with your teammates, and as a last act you will author your departing e-mail.

Normally, this last act is an e-mail calling out your final day, a note of gratitude for having had the opportunity to work with the team, an acknowledgement that you learned a lot, and a desire to keep in touch, often with your personal contact information (e.g. phone, e-mail).  This is sent shortly before handing over your equipment, badge, and parking pass, and someone accompanying you to the door.  A firm handshake, a thank you, and you're off to another adventure.


 

Friday, August 28, 2020

Software Consulting -- What The Heck Does '...as an additional insured...' Contract Clause Mean?




While software contracting comes with a great deal of perks, it also comes with some exhaustingly dull tasks.  Since our company prefers corp-to-corp contracts, every new contract comes with the arduous task of reviewing the contract specifics.  Contracts, authored by lawyers speaking lawyer-speak, can be mentally taxing for anyone, and some folks will simply sign whatever is placed in front of them just to avoid reading pages of mumbo-jumbo.

The whole idea of just signing whatever gives me the hives, but I can understand the reluctance to read these agreements with diligence.  Contracts are binding legal documents; they need to be taken seriously, but unfortunately that sometimes requires more offline research than any sane person wants to perform.  Lately, there is a trend in agreements that absolutely scared the crap out of me:

YourCompany shall add OurCompany as an additional insured to the Comprehensive General Liability policy. OurCompany’s insurance shall be primary, and any insurance maintained by OurCompany shall be excess to and not contribute to YourCompany's insurance.

Please excuse the phrasing; OurCompany == them, YourCompany == me, as typically the agreement is provided by them.

My layman's interpretation of this statement implied that I'd add this new company onto our CGL policy, and in the event that they were sued it would be covered by our policy!  The whole idea seemed completely absurd!  I could only compare it to inviting some stranger off the street, giving them my car, and legally signing a document saying I was responsible for them driving through the Mall of America.  They act unprofessionally or carelessly, we foot the bill.  The whole thing seemed ridiculous at its core, and the first time I had seen this clause it seemed unique to that one contract, which was authored by a massive media company inclined to bully subcontractors into whatever the hell they wanted.  Fearing I was over-reacting, I asked for clarification on the statement and their response was "*shrug* it was asked for by our legal department".  Unconvinced, I spent the next couple of nights researching via Google and it continued to appear that my concern with the clause was justified.  In that particular instance, I chose not to sign that contract, partly because this clause would not be removed from the agreement.  That was 2'ish years ago.

Fast-forward to a couple of weeks ago: a new contract, a similar clause, and an appearance of it trending in newer contracts.  I sought out wisdom from insurance professionals on Reddit, but was met with crickets.  While I probably could have negotiated its removal, this time I skipped Google and went directly to an authority on the subject, namely our CGL insurance provider.  The policy agent didn't shed much light on the topic, but put me in touch with one of their actuaries (Quentin) who provided insight into its meaning.  Here's what I learned.

Quentin stated that I was applying a 'broader definition to it than it means'; the clause is limited to the services that you perform for them.  The purpose of me holding a CGL policy is to cover any claims against work that I perform.  OurCompany has a similar CGL policy to cover claims against work that they perform.  This clause essentially provides that separation as a provision for the court system.  Without the clause, a claim can be brought against any company for any work; this statement essentially says 'if there is a claim against their work, the claim needs to be filed against them': work that they perform is covered by their policy, work that you perform is covered by your policy.  Claims are limited to work that you perform for them, not the universal catch-all of coverage that my layman's interpretation of the clause implied.

The phrasing to this day still gives me the willies, but the clincher that set my mind at ease was when Quentin pointed out that insurance companies by nature are risk-averse; they aren't going to do something that puts a lot of additional liability on them.  "This additional insured, we literally give it out for free any time your client requires it"; "if we thought it could generate claims against us, we would charge for it".  So, if someone who performs statistical analysis of risk daily isn't concerned by this, I guess it shouldn't concern us; it's part of the blanket policy.

Please perform your own research on the topic and consult your own insurance provider.  Given that there is a great deal of confusion on the topic, and it caused me a great deal of anxiety and time, I thought I'd share it with those who may find it useful.

Cheers.





  

Tuesday, August 25, 2020

Software System Forensics -- Auto Generated Message Trace Diagrams


Understanding an existing software system can be a daunting task.  Diving head-first into a source code repository with the objective of gaining a system understanding can be particularly challenging.  Taking the high-dive into source code often results in crawling down a variety of rabbit holes that may or may not be of particular relevance.  It's not uncommon for software to have edge cases and/or 'dead code' that, while compiled into the release, are rarely (or never) executed due to run-time constraints.  But, really, what are the alternatives?

Whelp friends, what if you could execute a software system, gather method calls with their callers and callees, and create a visual representation of the process flow?  That will be the topic of this particular blog post.

Let's introduce our team:

Our power forward; the hustle with the muscle, the beta with aaaalllllll the data.....GDB.


At point guard; the mate that will translate, the teammate that will update....your buddy and mine...Python.

And rounding out the crew, a battering ram of a diagram....WebSequenceDiagram.  


That's our roster: GDB to collect caller/callee information, Python to convert GDB output into something that can be used to generate a visual diagram, and WebSequenceDiagram to create the diagram.  This particular team has proven to be quite beneficial when I've been tossed into the deep end of the pool without my water wings.  Let's work through a simple example.

Behold, an overly simple software system source file:
$ cat -n main.cpp 
     1 #include <stdio.h>
     2
     3 class C
     4 {
     5   public:
     6     C() { }
     7     void beak();
     8     void flap();
     9     void shake();
    10     void clap();
    11 };
    12
    13 void C::beak() {}
    14 void C::flap() {}
    15 void C::shake() {}
    16 void C::clap() {}
    17
    18 class B
    19 {
    20   public:
    21     B():c_() { }
    22     void stepOnce();
    23   private:
    24     C c_;
    25 };
    26
    27 void B::stepOnce() { c_.beak(); c_.flap(); c_.shake(); c_.clap(); }
    28
    29 class A
    30 {
    31   private:
    32     B b_;
    33   public:
    34     A():b_() { }
    35     void run();
    36 };
    37 void A::run() { for(int i=0; i<10; ++i) b_.stepOnce(); }
    38
    39 int main()
    40 {
    41   printf("(%s:%d) main process initializing\n",__FILE__,__LINE__);
    42   A obj;
    43   obj.run();
    44   printf("(%s:%d) main process terminating\n",__FILE__,__LINE__);
    45 }

Even the most modest of software engineers can peek at this code and understand it without the need for any advanced tools, but this process of capturing debug info and transforming it into a sequence diagram works for far more complicated systems; frankly, it's saved me hours and hours of tracing through source code.  Fred R. Barnard may not have been a software engineer, but he just as well could have been when he coined the phrase "a picture is worth a thousand words".  

So, that's our system; let's turn our attention to GDB.  We'll author a GDB command script which will perform all the heavy lifting: it enables logging, writes gdb info to a gdb.log file, and sets up breakpoints in the methods we are particularly interested in (e.g. classes A, B, C); each breakpoint prints a backtrace and releases the process to continue.  The backtraces saved in the gdb log file will be used to extract the caller/callee methods for our diagram.
$ cat -n gdb.cmd 
     1 set pagination off
     2 set logging file ./gdb.log
     3 set logging overwrite on
     4 set logging on
     5
     6 define MyTrace
     7   bt 2
     8   cont
     9 end
    10
    11 break main
    12 commands
    13   rbreak ^A::
    14     commands
    15       MyTrace
    16   end
    17   
    18   rbreak ^B::
    19     commands
    20       MyTrace
    21   end
    22   
    23   rbreak ^C::
    24     commands
    25       MyTrace
    26   end
    27   
    28   cont
    29 end
    30
    31 run
    32 quit

Armed with the gdb command script, we simply run our main process under gdb as follows:
$ gdb --batch -x ./gdb.cmd ./main 2> /dev/null

When the process terminates, we have a gdb.log file that takes the form:
$ more gdb.log 
Breakpoint 1 at 0x40063c: file main.cpp, line 40.

Breakpoint 1, main () at main.cpp:40
40 {
Breakpoint 2 at 0x4006e4: file main.cpp, line 34.
void A::A();
...
Breakpoint 2, A::A (this=0x7fffffffdc77) at main.cpp:34
34     A():b_() { }
#0  A::A (this=0x7fffffffdc77) at main.cpp:34
#1  0x0000000000400670 in main () at main.cpp:42

Breakpoint 4, B::B (this=0x7fffffffdc77) at main.cpp:21
21     B():c_() { }
#0  B::B (this=0x7fffffffdc77) at main.cpp:21
#1  0x00000000004006f0 in A::A (this=0x7fffffffdc77) at main.cpp:34

Breakpoint 6, C::C (this=0x7fffffffdc77) at main.cpp:6
6     C() { }
#0  C::C (this=0x7fffffffdc77) at main.cpp:6
#1  0x00000000004006d4 in B::B (this=0x7fffffffdc77) at main.cpp:21

Breakpoint 3, A::run (this=0x7fffffffdc77) at main.cpp:37
37 void A::run() { for(int i=0; i<10; ++i) b_.stepOnce(); }
#0  A::run (this=0x7fffffffdc77) at main.cpp:37
#1  0x000000000040067c in main () at main.cpp:43

Since we created breakpoints for all our class A, B, and C methods, hitting one will produce a backtrace of depth 2: the caller (#1) and the callee (#0).  Since the stack trace has the class name and method, we have sufficient info to create a sequence diagram; we just have to parse the gdb log file and extract it.

Python is an amazing tool for file processing/parsing and the one we'll be using.  We will use some regex magic and string commands to transform the gdb raw output into a text file similar to this: 
$ cat -n mtd.txt
     1 main -> A:A()
     2 A -> B:B()
     3 B -> C:C()
     4 main -> A:run()
     5 A -> B:stepOnce()
     6 B -> C:beak()
     7 B -> C:flap()
     8 B -> C:shake()
     9 B -> C:clap()
This string format, <object> -> <class>:<method>(), is compliant with WebSequenceDiagram; simply copy-n-pasting the contents into the web app will produce magic.  More on that later; let's turn our head toward the necessary Python script.
$ cat -n mkMtd 
     1 #!/usr/bin/python
     2 import re;
     3 import sys;
     4
     5 # https://www.websequencediagrams.com/
     6
     7 def methodName(S):
     8   retVal="";
     9   m1=re.search(".+ (.+)::(.+)\((.+)\)",S);
    10   if m1:
    11     retVal="%s:%s()"%(str(m1.group(1).strip()),str(m1.group(2).strip()));
    12   else:
    13     m2=re.search(".+ in (.+)\(.*\) (.+)",S);
    14     if m2:
    15       cName=' '.join(m2.group(2).split(' ')[1:]).split('.')[0];
    16       retVal="%s:%s"%(cName, str(m2.group(1)));
    17     else:
    18       m2=re.search(".+ (.+)\(.*\) at (.+)",S);
    19       cName=m2.group(2).split(".")[0];
    20       retVal="%s:%s"%(cName, str(m2.group(1)));
    21   return retVal;
    22
    23 def parseDebugOutput(fileName):
    24   with open(fileName, 'r') as fp:
    25     C=fp.read();
    26   lastLine=(None,None);
    27   noDupCallMap=dict();
    28   for line in C.split('\n'):
    29     callerX=re.search("#0 .*",line);
    30     if callerX:
    31       m1=methodName(line);
    32     calledX=re.search("#1 .*",line);
    33     if calledX:
    34       m2=methodName(line);
    35       mtdLine="%s -> %s"%(m2.split(':')[0],m1);
    36       print mtdLine;
    37
    38 inFile=sys.argv[1];
    39 parseDebugOutput(inFile);

You run this delicious little bastard as follows:

$ ./mkMtd ./gdb.log

And it spits out Web Sequence Diagram compliant input commands;

Export the results into a PNG and you can include it in your design documentation;

With a bit of additional work, the diagram creation could be also automated by using the code from a previous post: https://dragonquest64.blogspot.com/2020/05/python-generated-sequence-diagrams.html

It's worth noting that while this method has time-and-time-again proven useful to me, it presents a specific challenge: you're likely to use this on a sophisticated system, one with dozens of classes and hundreds of methods, and while setting a breakpoint in each of them is technically possible, your diagram will quickly become an eyesore.  The challenge is carving out the uninteresting methods from the breakpoints or the gdb log file, and that process can be time-consuming.  I'd argue it's not as time-consuming as spending dozens of hours browsing source code, but it will take a time investment of trial-n-error.  So, be prepared to spend some time on that.
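One low-tech way to thin the herd is to post-filter the generated trace before pasting it into WebSequenceDiagram; the class names here are purely hypothetical:

$ ./mkMtd ./gdb.log | grep -vE 'Logger|Mutex|Timer' > mtd.txt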

I've used this technique in multi-process systems (capturing and displaying message entry/exit points), investigated in-memory DB accesses (during system initialization) and executed this capture/analysis on specific user scenarios.  It's an incredibly useful technique, produces valuable information, but takes some fine-tuning to find the right balance in breakpoint/method captures.

Cheers.