Nitrogen Tank Monitoring

Lab Monitoring

The Story

For a few months last year, I lived around the block from work. I would sometimes stop in on the weekends to pick up stuff I had forgotten at my desk. One day the power went out at my apartment, so I figured I would check in at work and see if there were any problems. I messaged our Lab Safety Manager on Slack to say, "Hey, the power went out and I am at the office. Is there anything you'd like me to check?" He said he hadn't even gotten the alarm emails/pages yet, and asked if I would check the lab and send him a picture of the CO2 tanks to make sure the power outage hadn't compromised them. Once I had procured access to the BL2 lab on my building badge, I made my way out back and took a lovely picture of the tanks; everything was fine.

The following week, in my one-on-one meeting with my manager, I mentioned what happened and we discussed the event. It clearly isn't sustainable to send someone in every time the power goes out if we don't need to, but the lab equipment doesn't have any monitoring ports.

Operation Lab Cam was born. I decided to put together a prototype with a Raspberry Pi 3 and a camera module, and play around with ways to monitor the display on the tanks. After a few months of not touching the project, I dug into it again on a downtime day. The result is that we now have an automated camera box that takes a picture once a minute and displays it on an auto-refreshing web page. There are many professional products out there that do exactly this, but I wanted something that can be upgraded in the future.

Summary of the Technical Details

Currently the entire process is managed by one bash script, which is a little clunky, but livable. The script goes a little like this:

  1. Take a picture to a temporary location.
  2. Add a graphical time stamp.
  3. Copy that image to both the currently served image, and a timestamped filename backup.

The page that serves the image is just a simple web page that shows the image and refreshes every thirty seconds.

The Gritty Technical Details

The program I'm using to take pictures is raspistill. If I had written my script to call raspistill fresh every time I wanted a picture, saving the images could have taken a lot longer, because raspistill has to meter the scene on every startup, and that adds up. The solution is signal mode, which turns raspistill into a daemon: with signal mode enabled, the backgrounded process takes a picture every time you send it a SIGUSR1.

Instead of setting up a service with systemd, I have a small bash script. At the beginning, it runs a ps aux and checks whether raspistill is running. If it's not, the script starts a new instance of raspistill with the appropriate options and backgrounds it. The next time the script runs, it will detect that raspistill is already running and be almost a no-op.
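My script isn't anything fancy; a minimal sketch of the check-and-start logic (the path and options are assumptions, and I've shown pgrep in place of the ps aux pipeline) looks like:

```shell
#!/bin/sh
# Hypothetical watchdog, run periodically from cron: make sure a
# signal-mode raspistill daemon is up. -s waits for a SIGUSR1 before
# each capture, -t 0 keeps it running forever, -o is the output file.
if ! pgrep -x raspistill > /dev/null; then
    raspistill -s -t 0 -o /tmp/latest.jpg &
fi
```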

After this, I send a SIGUSR1 (kill -10) to take the picture, which is saved un-timestamped. Next I call ImageMagick's convert on the image and crop out the center, because all I care about is a 500x700 pixel region (which is also why I couldn't use raspistill's "-a 12" annotation option).
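Put together, the trigger-and-crop step might look like the following (the filenames, the crop offset, and the timestamp placement are assumptions):

```shell
# Hypothetical capture step: SIGUSR1 triggers the raspistill daemon,
# then convert crops the 500x700 center region and stamps the time on it.
kill -10 "$(pgrep -x raspistill)"
sleep 2   # give raspistill a moment to finish writing the file
convert /tmp/latest.jpg -gravity Center -crop 500x700+0+0 +repage \
        -gravity SouthEast -annotate +10+10 "$(date '+%F %T')" /tmp/stamped.jpg
```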

This is then copied to the image that is served by the web page, and also backed up, under a timestamped filename, in a directory that nginx serves.

The Image

My Setup

It seems every developer has to talk about their setup; I guess I am no different.


I program in Python. When I introduce myself to people I say "I am a Python Programmer". I can do some JavaScript, but I don't tend to do it much in my professional life.

Operating System(s) and Shell

I program in Python, so for ease of use I use either Ubuntu or OS X. I used to use Arch and Gentoo, but the one time you don't read the release notes before upgrading makes for a bad time getting running again. Sometimes it's just easier to use an easy operating system. I use zsh for my shell, because I also love oh-my-zsh.

Editor (here be wars!)

My main editor as of this writing is Sublime Text 3. When I first started programming professionally out of college, I used vim. I used vim for a year or so until I found Sublime, and I bought it within an hour and a half of using it for the first time. I love multiline editing, I love the plethora of plugins available, and I just love everything about the way it looks. After a while, I went back to vim, decided to dig deep into how to use it, and remained happy with my vim use. I have extensive vim dotfiles, the :wq muscle memory, and I still love it.

For the past two years I actually used PyCharm: the Community edition for a bit, and then I finally paid for a year. In early March that license expired, and now that JetBrains is on a subscription model, I didn't feel like renewing (even if work would have paid for it). So I decided to start using Sublime Text 3 again.

The main reason I love Sublime right now has to be easy multiline editing. There are ways to do it in PyCharm and vim (visual mode is awesome), but to me Sublime makes it the easiest, and that's why I stick with it.


I mentioned dotfiles in the Editor section, and I might as well link to them in case you have any desire to see my setup. My first dotfiles repo exists somewhere on my github archive account, in which I just had a vimrc file, some plugins, and bundle code. A year or so later my dev team learned about thoughtbot's dotfiles, a nice collection that helps a developer set up a new machine the same way every time. Since discovering them, I have put together my own version. I have dotfiles for zsh, vim, git, and more. When I want to set up a new computer I just clone the repo and run the install bash script. Super easy, and I don't even really think about it anymore.


Recently, GitHub started showing signed commit metadata, and I now sign all my git commits by default. The key I use can be found on my PGP Keys page. I keep this key on a Yubikey Neo, so any time I want to use the key, I must have the Yubikey plugged into my computer. It's basically a SmartCard in a nice, small USB format. The Yubikey Neo has another awesome trick: it can generate One-Time Passwords (OTPs) at the press of a button. For authenticating to LastPass, which I use along with KeePass for passwords, my Yubikey generates an OTP to verify my login. The last trick the Yubikey has up its sleeve is NFC. I store my 2FA authenticator code for DigitalOcean on it; I just swipe it on any Android device with "Yubico Authenticator" installed, and I have my 2FA codes.
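Flipping on signing by default is just a couple of git config flags (the key ID below is hypothetical):

```shell
$ git config --global user.signingkey ABCD1234   # hypothetical key ID
$ git config --global commit.gpgsign true        # sign every commit by default
```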

I also recently started using Authy for codes. I installed their ssh login binaries, and am required to type a token every time I log in. This may or may not remain, as it needs me to forward an environment variable any time I use scp.

Tox and Pyenv - A Quickstart

I recently decided that I would try to help out with the Hyper project. So far I have just changed one of the READMEs so the Travis CI build status image actually shows, now that Cory moved it from his personal account to an organization. Simple and easy.

I am developing on my MacBook, and for work at Addgene I use Python 2.7.11 because we're not ready for 3.x yet due to some dependencies. I wanted to run the tox tests on hyper-h2 and hpack, but I had no py33, py34, py35, or pypy installed, so everything failed except py27.

The solution to this was to use Pyenv and install everything as shims.

To do this I had to install pyenv, which I did with Homebrew:

$ brew install pyenv
$ echo 'eval "$(pyenv init -)"' >> ~/.zshrc # (This initializes pyenv when I start zsh.)

Once this was installed, I was able to use pyenv to install the other versions:

$ pyenv install 3.3.6
$ pyenv install 3.4.4
$ pyenv install 3.5.1
$ pyenv install pypy-5.0.0

Pyenv can also pin Python versions to the directory you're in, with pyenv local. This writes a .python-version file in the current directory listing the versions I specify. I'm not a big fan of this on first pass, as I don't want to have to add .python-version to my global gitignore.
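For reference, a hypothetical pyenv local session (the project path is made up):

```shell
$ cd ~/src/hyper-h2            # hypothetical checkout
$ pyenv local 2.7.11 3.5.1     # pin interpreters for this directory
$ cat .python-version          # pyenv records the pinned versions here
```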

The other option is pyenv global, which makes the versions available everywhere. I find this a lot nicer, because it puts the version configuration in my user space rather than my project space. To do this, I then ran:

$ pyenv global 2.7.11 3.3.6 3.4.4 3.5.1 pypy-5.0.0

Into my system Python, outside of any virtual environment, I installed tox (pip install tox). I navigated back to the hpack directory and ran tox. All the tests passed.


You may run into "The python zlib extension was not compiled" errors. To fix this, set the following CFLAGS variable in your shell before your pyenv install ... lines: CFLAGS="-I$(xcrun --show-sdk-path)/usr/include".

Getting started in Python

Resurrecting this blog post as I had another friend ask about Python. I need to rewrite this for Python 3, but like many developers... I'm still on Python 2.7 at work.

~~~ Originally written in 2013 ~~~

I have a friend who is interested in becoming a Python developer. He has some Python experience from Codecademy, but he of course wants to take this a step further and develop on his own computer. I figured I'd give him a few pointers, and I know this has been rehashed a million times, but what the hell, why not blog on it again.

There are a few important things to learn besides the actual language itself. The first has to do with installing packages, followed closely by Python's path trickery. Finally, I'll wrap up by discussing some development software that could be used for any language, but that I use daily in my work as a Python Software Engineer. Let's get started.


Python is a wonderful language, but how useful would it be if you had to rewrite everything by hand? Not useful at all. That's why the lovely pip developers were born. pip is a package manager written for Python. It's very simple to use, and in my opinion way better than easy_install. To use pip you need to know three commands at a minimum.

pip install
This command does exactly what it says on the box. It queries PyPI (Python Package Index) and downloads the latest version of the package on the server. It then installs it to your site-packages.

pip uninstall
This deletes all files associated with the package supplied. 100% simple.

pip freeze
This shows which packages are installed on your system, and at what versions. If you supply --local inside a virtual environment, it will show only the packages installed in that environment.
These three commands will get you started with package management; there are more commands you can find by looking through the help documents.
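A quick session tying the three together (requests is just an example package):

```shell
$ pip install requests     # fetch and install the latest release from PyPI
$ pip freeze               # the list now includes a requests== line
$ pip uninstall requests   # confirms, then deletes the package's files
```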


If you noticed that I mentioned a "current environment" in my pip freeze explanation, here is why. Python has a default place it looks when you reference a package, generally something like /usr/lib/python2.7/site-packages/ or C:\Python27\lib. There is a set of scripts called virtualenv that creates an isolated environment containing a complete copy of your Python executable and a blank (unless you copy them over) site-packages directory. You can then activate the virtual environment and install packages into it. When it is activated, you use those specific versions, no matter what is installed on your system.
Let’s show an example of the first time use of virtualenv:

$ sudo pip install virtualenv # Only time you might need sudo, try without first.
$ virtualenv myenv # Create the virtual environment
$ source myenv/bin/activate # Activate the virtual environment
(myenv)$ python -c "import MYPACKAGE; print MYPACKAGE"

Notice how the printed path points into myenv rather than /usr/lib/python2.7/site-packages/? That's because the activated environment's copied python resolves packages from its own site-packages first. There are many reasons to use a virtual environment. The most frequent is to keep the version numbers of installed packages in sync between a production and a development environment. Another is that if you do not have the power to install packages on your system, you can create a virtual environment and install them there.


After you create a virtual environment, you just run source bin/activate and it will activate that environment. It can get tedious remembering exactly where your virtual environments are all the time, so some developers wrote some awesome scripts to fix that problem: virtualenvwrapper. Once you use it once, you will always want to use it more. It has you create a hidden directory in your home directory, set it in an environment variable, and reference that directory as the base for all your virtual environments.

Installation is pretty easy: you can pip install virtualenvwrapper, or download the package and install it by hand. Once it's set up correctly, you can run mkvirtualenv envname to create a virtual environment, and then run workon envname from anywhere to activate it.

For example, you could be at /var/www/vhosts/, run workon envname, and it would activate the environment from there. It isn't a required package (none of them are, really...), and I went a couple of years without using virtualenvwrapper, but it is very useful and now I use it every day. One tip from my setup: I use the postactivate hook to automatically change into the proper project directory for my environment.

This also means I usually name a virtualenv after its project for easy memory. It makes no sense to have a project called "cash_register" but a virtualenv called "fez". This is how I change to the right project after activating my virtualenv; it goes in $WORKON_HOME/postactivate:

# This hook is run after every virtualenv is activated.
# cd into the project, whether it's a work or a personal project (Example)
proj_name=$(echo "$VIRTUAL_ENV" | awk -F'/' '{print $NF}')
if [[ -e "/Users/tsouza/PersonalProjects/$proj_name" ]]; then
    cd ~/PersonalProjects/$proj_name
else
    cd ~/WorkProjects/$proj_name
fi

This about wraps up part one of this two part blog series. Next time I will discuss how to use Git and how to configure SublimeText2 and Aptana Studio for use with Python. Stay tuned!