Building a RackHD home lab


The RackHD driver for Docker Machine allows a user to provision a Docker instance on bare-metal hardware managed by RackHD. This brings many of the advantages and much of the flexibility of cloud-based infrastructure into your own data center. As major proponents of Infrastructure as Code, we think any tool that makes deploying and scaling infrastructure repeatable and automated is a major win.

One technology that is typically found on the servers in your data center is Out-of-Band Management (OBM), sometimes called "lights-out management." This is a network connection to the nodes that is independent of their running Operating System (OS) and allows for remote management, whether that is power management, OS installation, or even remote access to the display. OBM is a foundational tool for RackHD, since tasks like remote OS installation and power management are core competencies.

This brings us back to the RackHD driver. While most Docker Machine drivers target virtual, cloud-hosted infrastructure, the RackHD driver is unique in targeting bare metal. While developing the driver, I unsurprisingly found that there was only so much progress I could make using virtualized resources, and some code simply couldn't be exercised without bare metal or OBM. Of course I wanted to use real hardware for testing, but not just any old PC in the closet would do. So I built a home lab for RackHD with these capabilities, and even managed to avoid breaking the bank.
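To give a feel for the end goal, provisioning through the driver looks something like the sketch below. The endpoint, node ID, and flag names here are illustrative assumptions, not confirmed options; check the docker-machine-rackhd README for the driver's actual flags:

```shell
# Create a Docker host on a bare-metal node managed by RackHD.
# All values below are illustrative placeholders.
docker-machine create --driver rackhd \
    --rackhd-endpoint 192.168.31.128:9090 \
    --rackhd-node-id 56c61189f21f01b60ba9954f \
    rackhd-node-01

# Then use it like any other Docker Machine host:
eval "$(docker-machine env rackhd-node-01)"
docker run -d nginx
```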

Here’s how.

My hardware requirements roughly in order of priority were:

  • OBM capable
  • Efficient and quiet (after all, it will sit in my home office)
  • Economical
  • Portable

Performance was not a concern, but getting my manager to okay the expense report was. I ended up getting two nodes, one using IPMI and one using Intel AMT.

Finding a low-cost node with a Baseboard Management Controller (BMC) for IPMI support is surprisingly non-trivial. This is generally an enterprise feature, and therefore exists mostly in servers at enterprise price-points. I ended up finding the SuperMicro MBD-X7SPA-HF-O, a mini-ITX server motherboard. Mini-ITX is a fairly small form factor, and this board included a BMC plus an integrated Intel Atom processor. Add power, RAM, and storage and I was in business! Total cost for chassis, board, hard drive and memory was about $300 from NewEgg.

RackHD also has native support for Intel AMT so I knew I also wanted to look at the Intel NUC nodes. There seems to be only one model of NUC that supports AMT, as it requires vPro support in the processor. That model is the older DC53427HYE, and requires you to supply your own RAM, mSATA drive, and (oddly) a power cord. This includes a Core-i5 processor with vPro and AMT support, and is only about the size of a couple decks of cards. This makes it extremely portable, and potentially great for demos. Hello MeetUps! Total cost for the NUC kit, mSATA drive, memory, and cable was about $475.


The NUC cost is higher, but you do get much more powerful hardware and a smaller form factor. I also fitted both nodes with only 4GB RAM and just over 100GB of primary storage to keep costs down.

Both the NUC and the SuperMicro node share a single Ethernet port between OBM and their primary network traffic. This meant I had only one network connection to each node to worry about. I already had all of the cables I needed, plus a spare 5-port Gigabit switch on hand to use. I assembled all my hardware, plugged in power and network, and connected the network to a separate wired Ethernet port on my laptop. I fired up RackHD inside of Docker running in VirtualBox using docker-compose, bridging the dedicated wired network port on my laptop to the VirtualBox VM. This allowed RackHD to have full access to the network and serve as the default gateway for DHCP, TFTP, PXE, etc.
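For reference, the docker-compose setup looked roughly like the fragment below. This is an illustrative sketch rather than the exact file: RackHD publishes its own compose files, and the service names and images shown here should be checked against the RackHD repository. Host networking is what lets RackHD answer DHCP/TFTP/PXE requests on the bridged interface.

```yaml
# Illustrative fragment of a RackHD docker-compose file (not the official one).
version: "2"
services:
  mongo:
    image: mongo:latest
    network_mode: host
  rabbitmq:
    image: rabbitmq:management
    network_mode: host
  on-http:
    image: rackhd/on-http
    network_mode: host
    depends_on: [mongo, rabbitmq]
  on-dhcp-proxy:
    image: rackhd/on-dhcp-proxy
    network_mode: host
    depends_on: [mongo, rabbitmq]
  on-tftp:
    image: rackhd/on-tftp
    network_mode: host
    depends_on: [mongo, rabbitmq]
```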


I powered on my two nodes and saw… nothing. Hmm. Well, I suppose it would be a lot to ask for the BIOS of two completely different types of nodes to both be configured exactly the way I needed.

So I then connected a monitor and USB keyboard, went into the nodes’ BIOS to configure them for PXE booting, and tried again. This time, success! The nodes were discovered and catalogued by RackHD, and I was able to launch OS install workflows against them. Let the docker-machine driver development continue!


Linux Turns 25 in Style

It’s hard to believe that Linux has already been around for 25 years. What an incredible feat!

Let’s take a moment to go back in time to see the humble beginnings of Linux. Here is the first email from Linus Torvalds on the Minix distribution list, describing what later became Linux as “just a hobby, won’t be big and professional like gnu.”

With such a deep history and humble beginnings, we couldn’t help but want to celebrate with thousands of passionate people at LinuxCon and ContainerCon North America 2016! EMC {code} is honored to be a sponsor at this year’s events. Will you be there?

For those of you planning to attend, here are a few things we are looking forward to at the event:

You can find the EMC {code} team at booth #44 in the Expo Hall. We invite you to stop by and learn more about the {code} community and our available open source projects. We look forward to seeing you all and meeting some new faces! Let the countdown begin!


PS: Don’t forget to wish Tux a Happy Birthday! Here’s an idea: why don’t we all break out into song during Linus’ keynote?

Buzz from the Drone Nationals


One of the obstacles on the race course

It’s difficult to deny the steady rise in popularity of the small, remote-controlled flying devices collectively referred to as “drones”. From hot gift items to the future of home delivery, drones are everywhere! As the industry grows, most of today’s drones flown recreationally rely on open source software. Some of these open source projects have actually caused disruption to the drone industry, since open source allows savvy consumers to customize and extend the capabilities of their drones.

During the first weekend in August, EMC {code} was a major partner at the 2016 National Drone Racing Championships. The races were held on Governors Island in New York City, a beautiful, historic site within view of the Statue of Liberty. Competitors from all over the United States – and some from beyond – raced for the title.

The competition featured events for both multirotor and fixed-wing vehicles. The race course was a challenging series of large fabric-covered hoops erected throughout the outdoor venue. The sky buzzed with the whine of tiny electric motors being pushed to their limits as the speedy drones zipped through the obstacles and past the bleachers full of spectators. The days were hot and humid, with a brief but intense rain shower on Saturday afternoon which halted the races temporarily.

The race course, viewed from atop a nearby hill

The racing drones were controlled using wireless first-person video (FPV) technology; pilots viewed the racecourse through head-mounted visor displays showing a live feed from tiny cameras mounted on the drones. Many fans in the bleachers brought along their own FPV visors, so they could tune into the video feed for their favorite racer and watch along in real time. The tiny vehicles – and their pilots – were pushed to their limits, and several of the drones met sudden and violent ends in spectacular collisions. One competitor, Zoe “ZoeFPV” Stumbaugh, even had her custom multirotor drone burst into flames in mid-air when her battery overheated!

Drones are swiftly emerging as a valuable new market, with some predicting a $12B valuation by the year 2021. Data collection from drones offers a wealth of new and fascinating knowledge, but the sheer volume of that data also presents a challenge for drone operators. Ultimately, data storage will be a challenge that companies using drone technology will have to address.

Throughout the weekend the {code} team hosted the Sky Lab, a fun and interactive exhibit with stations where visitors could try their hands at piloting drones in virtual reality. Kids and adults alike jumped in to experience the hardware flight controls that flew simulated drones on a virtual copy of the official drone race track. The Sky Lab was so popular that some visitors returned all three days to play the games.

We were very happy to be a part of this exciting event, and to help bring our message of open source community and collaboration to an emerging market. We congratulate all the competitors on their performance, and look forward to the 2016 World Drone Racing Championships coming up in Hawaii this October!


Drones Dart Across Governors Island for the Title


We’re stoked to announce that EMC {code} was selected, thanks to our expertise in open source, as a major partner for the 2016 US National Drone Racing Championship on Governors Island, New York! We join the ranks of GoPro, EY, Vizio, Lowepro, and several others in partnership with the Drone Sports Association (DSA). This weekend, August 5th through 7th, the EMC {code} team and members of the {code} Community will take over Governors Island along with thousands of drone fans and the nation’s top talent in the racing community.

The pilots’ path to Nationals was not an easy one. According to the Drone Nationals website, this journey began with “19 qualifiers, 20 teams, 500 pilots and over 101 Freestyle submissions.” The winners of this year’s championship will head to Hawaii for the 2016 World Drone Racing Championships in October. It won’t be easy, but it certainly looks like a lot of fun.

Good luck to all the pilots, and congratulations on making it this far!

So, bottom line: if you are in New York this weekend, you have no excuse not to hop on a ferry to Governors Island and join EMC {code} in the Sky Lab, where we are proud to host a virtual reality and gaming wonderland. The Sky Lab will include three virtual reality headsets (HTC Vive) for real-time experiences and a multi-computer station for attendees to test their flying skills against the pilots’ times on the actual course! If you can’t make it in person, you can follow the action via live stream on ESPN3, starting at 1PM ET on Saturday, August 6th.

General admission is free, but be sure to register – we look forward to seeing you on the island!

On a final note, here’s a quick video from the International Drone Racing Association (IDRA) to get you just as pumped as we are…

Connecting EMC, Open Source and Drones! Oh my!

The EMC {code} team is heading to NYC this weekend (Aug 5-7) for the US National Drone Racing Championship. We are thrilled to showcase how open source software meets hardware in a real-world environment. In that spirit, we asked one of our community members, drone enthusiast Matt Cowger of the EMC dotnext team, if he would share his perspective. Here is what Cowger had to say:

I’m really excited – unreasonably so.

When you get a call from Joshua Bernstein (VP of Technology of ETD for EMC) that you can spend a weekend at the Drone Nationals (the top drone/quadcopter/multirotor racing event in the US) and help represent EMC {code}, is there any other answer but yes?

My inner geek, which really isn’t hidden from the world, is beyond pumped. So needless to say, I’m really looking forward to working the event and helping judge the competition. As an avid multirotor racer and builder myself, this will be a pretty awesome work trip.

For some, the question has been, “What does EMC {code} have to do with drone racing?” Well, the fascinating thing about modern drones is that many of the best ones are based on entirely open source software (OSS), so it’s a perfect area for the EMC {code} team to get involved with.

Let me give you some examples:

  • At the heart of every drone is a flight controller. Drones stay in the air by the sheer brute force of propellers pushing air downwards. This means it’s not a naturally stable system, and constant feedback is needed to keep the system level and in control. The most common system used in racing quadcopters these days is an ARM STM32F303 processor running at about 72MHz, with a number of onboard sensors (gyroscopes, accelerometers, barometers, etc.). These need software to control them, and the most popular software is a family of related open source projects, including Baseflight, Cleanflight, Betaflight and Raceflight. In the spirit of open source, they each focus on specific problems in drone flight, and regularly share solutions and optimizations across the various projects. Each of these projects is hosted on GitHub, which gives both users and developers access so that we can continue to see innovation and advancements in the technology.
    • There are other projects for other kinds of drones too – software for GPS navigation (iNav) and full mission planning with flight optimization (dRonin). Some of these projects have incredibly rapid release cycles (monthly or faster).
  • Almost as fascinating are the electronic speed controllers, or ESCs. Each motor’s spin rate needs to be precisely controlled between 0 and ~40,000 RPM (yes, really!), and it is the job of the speed controller to receive requests from the flight controller, read the position of the motor, and control its spin rate. This needs to happen about 8,000 times per second. As a result, we use systems centered around high-performance control units like the Silicon Labs F390. Each of these speed controllers runs its own software, the most popular being a piece of open source software called BLHeli. There are very frequent updates to this software, adding new features from contributors around the world.
  • Lastly, there’s the example of OpenTX. As if having your flight controller and speed controllers open sourced wasn’t enough, the most common drone transmitter software, OpenTX, is also open source. OpenTX is incredibly robust: it allows you to configure your controller from a laptop interface and run Lua scripts (Lua itself being an open source scripting language) right on the radio while you are flying. OpenTX is so powerful and easy to use that it has become the standard in the multicopter industry, with a number of manufacturers forgoing their own firmware and simply using OpenTX.
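The feedback loop the flight controller runs can be sketched in a few lines. This is a toy, single-axis illustration of the kind of PID control loop firmware like Cleanflight runs per axis, not actual flight-controller code; the gains and the idealized plant model are invented for the example:

```python
class PID:
    """Minimal PID controller; a real flight controller runs one per axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        # Error between what the pilot commanded and what the gyro reports
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Correction that would be fed to the motor mixer
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy simulation: command a roll rate of 100 deg/s on an idealized axis
# whose angular acceleration simply equals the controller output.
pid = PID(kp=5.0, ki=1.0, kd=0.0)
setpoint, rate = 100.0, 0.0
dt = 1.0 / 8000          # the ~8,000 updates per second mentioned above
for _ in range(80000):   # 10 seconds of simulated flight
    rate += pid.update(setpoint, rate, dt) * dt
```

Running the loop drives `rate` to within a couple of degrees per second of the commanded setpoint, which is the whole job of the controller: constant small corrections, thousands of times a second.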

There are other great examples as well from the integrated ‘toy’-class quadcopters, like the Eachine H8 and Hubsan X4. Both are great small quadcopters to start with.

In short, modern multicopter racing would not be possible without the power of open collaboration tools and open source methods. That’s why the EMC {code} team and I are stoked to support the upcoming Drone Nationals – it’s a glorious combination of fun, geekery and open source.

Will I (we) see you there? – Matt Cowger

Apache Mesos 1.0 Integrates {code} Projects for Experimental Storage Support

Hard work and persistence pay off. Apache Mesos has released v1.0, and along with it comes experimental storage support. This is {code}’s first major contribution to an Apache Foundation project. We’re very excited to contribute a new solution to the Mesos ecosystem that solves real problems.


In September 2015, {code} began working on two projects that would provide volume mounting capabilities. The first project, dvdcli, uses native Docker packages and familiar command-line functionality to mount volumes to the host rather than the container. This package allows other tools to integrate those mounts into any piece of software, which leads into the next project: mesos-module-dvdi relies on dvdcli to mount volumes to ANY container engine running inside of Mesos.


The response from the community was immediate: within days of the initial release there were signs of usage around the world. In March, Marathon, a popular container orchestration framework for Mesos, added built-in support for stateful applications in its 1.0 release. And now, 10 months after the initial release, both of these {code} projects are part of Apache Mesos 1.0 Experimental Storage. Here is an excerpt from the official announcement:


Starting from Mesos 1.0, we added experimental support for external storage to Mesos. Before this feature, while users could use persistent volumes for running stateful services, there were some limitations. First, the users were not able to easily use non-local storage volumes. Second, data migrations for local persistent volumes had to be manually handled by operators. The newly added docker/volume isolator addresses these limitations. Currently, the isolator interacts with the Docker volume plugins (e.g., REX-Ray, Flocker, Convoy) using a tool called dvdcli. By speaking the Docker volume plugin API, Mesos containers can connect with external volumes from numerous storage providers (e.g., Amazon EBS, Ceph, EMC ScaleIO).

{code} introduced mesos-module-dvdi as of Apache Mesos 0.23, where it interacted with dvdcli. Now, with mesos-module-dvdi baked directly into Apache Mesos 1.0, users can natively interact with dvdcli. We are working together with the core Mesos contributors to bring external storage to a stable state, and will continue to update this feature based on dvdcli.
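To make this concrete, here is a sketch of how a stateful application can request an external volume through the dvdi provider in Marathon. The app ID, volume name, and resource sizes are illustrative, and the `dvdi/driver` option assumes REX-Ray is installed on the agents; consult the Marathon external volumes documentation for the exact schema:

```json
{
  "id": "postgres",
  "cpus": 1,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "MESOS",
    "volumes": [
      {
        "containerPath": "pgdata",
        "mode": "RW",
        "external": {
          "name": "postgres-data",
          "provider": "dvdi",
          "options": { "dvdi/driver": "rexray" }
        }
      }
    ]
  }
}
```

Because the volume lives outside the agent, the task can be rescheduled to another host and reattach the same data, which is exactly the data-migration limitation the announcement describes.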

The {code} team has laid out a roadmap to contribute more code, which will remove the requirement that dvdcli and REX-Ray be installed on each host. libStorage is the next step in this journey, moving storage functionality further into a common package that can be used by every storage vendor and container runtime. Of course, our commitment wouldn’t be complete if we didn’t already have plans for phase two with mesos-module-libstorage.

Monitoring EMC ScaleIO with Grafana using Intel SDI’s Snap

This post is a semi-continuation of a previous blog post, where we looked into Grafana as a method for monitoring EMC’s Elastic Cloud Storage platform. This week we’re back at it again with Grafana, but exploring a different method of collecting and transporting the telemetry, and trying to integrate with EMC’s flagship block storage product. In this post we’ll show you how far we got toward monitoring ScaleIO with Grafana using Intel SDI’s “snap”.

A previous project had us creating a ScaleIO cluster deployment using an Amazon AWS CloudFormation template, for internal use by the {code} team as a repeatable, infrastructure-as-code method for testing our automated REX-Ray builds. This seemed like the ideal test bed for getting hands-on exposure to an intriguing new open source telemetry framework called “snap” from Intel’s Software Defined Infrastructure (SDI) team, so we set out to put some pieces together: could we use snap to monitor ScaleIO?

Intel SDI’s new snap framework has a few features that make it particularly interesting for metrics and monitoring nerds like us. The ability to manipulate plugins and to create and process telemetry as ‘tasks’ through a RESTful API makes snap fit smoothly into a modern operations workflow, and the ability to cluster snap nodes together using a feature called ‘tribe’ enables snap to scale to fit any environment. When we saw that Raintank – the new company founded by the author of Grafana – had recently released a plugin for snap, enabling Grafana to pull metrics directly from a running snap daemon, that cinched it: we knew we had to give snap a try.

Our first impulse was to use Swisscom’s collectd plugin to pull metrics from a ScaleIO cluster and then feed those metrics into snap, but this was based on what turned out to be the faulty assumption that snap had a collectd collector-type plugin available. There’s a mention of collectd on snap’s plugin catalog GitHub page, but that mention is under the “wish list” heading. Snap has – well, had – no native ScaleIO plugin available!

Using `snapctl` to watch live ScaleIO metrics being collected by a snap task

Fortunately, the Intel SDI team works in a similar fashion to EMC {code}, with a public Slack team used for development and support. After we joined the Slack team and explained our goals, Intel SDI team member Taylor Thomas offered to quickly write a new plugin! We spun up a ScaleIO cluster in AWS for him to test with, and less than twenty-four hours later snap had a working (albeit alpha-level) collector plugin for ScaleIO!

To put all the pieces together, we created a pair of Docker containers: one running a daemonized snap server with the new ScaleIO plugin preinstalled, and the second running the most recent version of Grafana with the Raintank plugin. Running those two containers in a Docker environment allowed us to connect a browser to Grafana and easily create monitoring dashboards with live-updating stats about our ScaleIO cluster.

A live-updating graph of ScaleIO metrics in Grafana

To try this project out on your own ScaleIO cluster, follow these steps:

  1. Run the Docker containers on your own Docker host, ensuring that the host has network connectivity to the ScaleIO Gateway service. Start with the first container, containing the Snap daemon and the ScaleIO plugin:
     $ docker run -d -p 8181:8181 mux23/snapd-scaleio
  2. Download and install the snap control utility snapctl, using one of the releases posted here.
  3. Ensure that snapd is running and the ScaleIO plugin is loaded, by running snapctl plugin list.
  4. Download this snap task definition file, and edit the password and gateway values to reflect your own environment.
  5. Pass the snap task definition to the snap server using snapctl task create.
  6. Run the second container with Grafana and the Raintank Snap plugin installed:
    $ docker run -d -p 3000:3000 mux23/snapd-grafana
  7. Point a browser to your container host on port 3000, and log in to Grafana as the user ‘admin’ with the password ‘admin’. From the menu on the top left, select ‘Data Sources’ and then ‘Add Data Source’, and configure the new data source to point at your snap container.
  8. Navigate to the ‘Dashboards’ menu, and add a new Grafana dashboard. Click on the tiny green tab that appears near the top left of the interface, and select “Add Panel -> Graph”. In the window that appears, click on ‘select task’ beside the ‘Task Name’ heading, and select the task that you created in step 5.
  9. The interface should now update, and you should now be able to see metrics from your ScaleIO cluster appearing in the available metrics list!
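For reference, the task definition from step 4 looks roughly like the sketch below. The metric namespace, interval, and config keys shown here are illustrative guesses; the actual names come from the ScaleIO collector plugin and the linked task file:

```json
{
  "version": 1,
  "schedule": {
    "type": "simple",
    "interval": "1s"
  },
  "workflow": {
    "collect": {
      "metrics": {
        "/intel/scaleio/*": {}
      },
      "config": {
        "/intel/scaleio": {
          "gateway": "https://192.168.1.100",
          "username": "admin",
          "password": "changeme"
        }
      }
    }
  }
}
```

The schedule tells snapd how often to fire the collection, and the config block is where the gateway address and password you edited in step 4 end up.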

It’s important to note that this technology is on the bleeding edge, and should absolutely not be thought of as finished work or considered for use in any critical application!

We’re convinced that open source tools like Grafana have very serious value for gaining perspective on your operational environment, and new industrial-strength frameworks like snap cement this opinion.

We hope you found this post useful, and as always, feel free to join us on our public Slack team where we discuss emerging technologies like this every day.