Monitoring EMC ScaleIO with Grafana using Intel SDI’s Snap

This post is a semi-continuation of a previous blog post, where we looked into Grafana as a method for monitoring EMC’s Elastic Cloud Storage platform. This week we’re back at it again with Grafana, but exploring a different method of collecting and transporting the telemetry and trying to integrate with EMC’s flagship block storage product. In this post we’ll show you how far we got towards monitoring ScaleIO with Grafana using Intel-SDI “snap”.

A previous project had us creating a ScaleIO cluster deployment using an Amazon AWS CloudFormation template for internal use by the {code} team as a repeatable, infrastructure-as-code method for testing our automated REX-Ray builds. This seemed like the ideal test-bed to get some hands-on exposure to an intriguing new open source telemetry framework called “snap”  from Intel’s Software Defined Infrastructure team, and so we set out to put some pieces together – could we use snap to monitor ScaleIO?

Intel-SDI’s new snap framework has a few features that make it particularly interesting for metrics and monitoring nerds like us. The ability to manage plugins and to create and process telemetry as ‘tasks’ through a RESTful API makes snap fit smoothly into a modern operations workflow, and the ability to cluster snap nodes together using a feature called ‘tribe’ lets snap scale to fit any environment. When we saw that Raintank – the new company founded by the author of Grafana – had recently released a plugin for snap, enabling Grafana to pull metrics directly from a running snap daemon, that cinched it: we knew we had to give snap a try.

Our first impulse was to use Swisscom’s collectd plugin to pull metrics from a ScaleIO cluster and then feed those metrics into snap, but this was based on what turned out to be the faulty assumption that snap had a collectd collector-type plugin available. There’s a mention of collectd on snap’s plugin catalog github page, but that mention is under the “wish list” heading. Snap has – well, had – no native ScaleIO plugin available!

using `snapctl` to watch live ScaleIO metrics collected by a snap task

Fortunately, the Intel-SDI team works in a similar fashion to EMC {code}, with a public Slack team used for development and support. After joining the Slack team and explaining our goals, Intel-SDI team member Taylor Thomas offered to quickly write a new plugin! We spun up a ScaleIO cluster in AWS for him to test with, and less than twenty-four hours later snap had a working (albeit alpha-level) collector plugin for ScaleIO!

To put all the pieces together, we created a pair of Docker containers with one container running a daemonized Snap server with the new ScaleIO plugin preinstalled, and the second container running the most recent version of Grafana with the Raintank plugin. Running those two containers in a Docker environment allowed us to connect a browser to Grafana and easily create monitoring dashboards with live-updating stats about our ScaleIO cluster.

a live-updating graph of ScaleIO metrics in Grafana

To try this project out on your own ScaleIO cluster, follow these steps:

  1. Run the Docker containers on your own Docker host, ensuring that the host has network connectivity to the ScaleIO Gateway service. Start with the first container, containing the Snap daemon and the ScaleIO plugin:
     $ docker run -d -p 8181:8181 mux23/snapd-scaleio
  2. Download and install the Snap control utility snapctl from one of the releases posted here: https://github.com/intelsdi-x/snap/releases
  3. Ensure that snapd is running and the ScaleIO plugin is loaded by running snapctl plugin list.
  4. Download this snap task definition file, and edit the password and gateway values to reflect your own environment (a rough sketch of what such a task manifest looks like appears after these steps).
  5. Pass the Snap task definition to the Snap server using snapctl task create -t <your-task-file.json>.
  6. Run the second container with Grafana and the Raintank Snap plugin installed:
    $ docker run -d -p 3000:3000 mux23/snapd-grafana
  7. Point a browser to your container host on port 3000, and log in to Grafana as the user ‘admin’ with the password ‘admin’. From the menu on the top left, select ‘Data Sources’ and then ‘Add Data Source’ – configure it like so:
    grafana_datasources
  8. Navigate to the ‘Dashboards’ menu, and add a new Grafana dashboard. Click on the tiny green tab that appears near the top left of the interface, and select “Add Panel -> Graph”. In the window that appears, click on ‘select task’ beside the ‘Task Name’ heading, and select the task that you created in step 5.
  9. The interface should now update, and you should now be able to see metrics from your ScaleIO cluster appearing in the available metrics list!
    grafana metrics list
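
For reference, here is a rough sketch of what a snap task manifest for this setup might look like. The overall layout (version, schedule, workflow) follows snap’s documented task format, but the ScaleIO metric namespace and config key names below are our own illustrative assumptions – check the plugin’s README for the exact values. Since the Raintank plugin lets Grafana query the snap daemon directly, no publish section is needed:

{
  "version": 1,
  "schedule": {
    "type": "simple",
    "interval": "10s"
  },
  "workflow": {
    "collect": {
      "metrics": {
        "/intel/scaleio/*": {}
      },
      "config": {
        "/intel/scaleio": {
          "gateway": "https://your-scaleio-gateway",
          "username": "admin",
          "password": "changeme"
        }
      }
    }
  }
}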

It’s important to note that this technology is on the bleeding edge, and should absolutely not be thought of as finished work or considered for use in any critical application!

We’re convinced that open source tools like Grafana have very serious value for gaining perspective on your operational environment, and new industrial-strength frameworks like snap cement this opinion.

We hope you found this post useful, and as always, feel free to join us on our public Slack team where we discuss emerging technologies like this every day.

LinuxCon Japan Wrap Up

We just got back from LinuxCon 2016 in Tokyo, Japan. After taking a few power naps to get over jet lag, it’s time to reflect and write about the conference. This was a jam-packed week incorporating three conferences rolled into one: LinuxCon, ContainerCon, and Automotive Linux Summit, with an added last-day special event, the ‘Open Source Storage Summit’, where EMC {code} presented. The trip was both rewarding and eye opening – we learned about technology challenges and goals from the perspective of a different part of the world, and we learned that a lot of the interest in a given technology was dictated by the particular industry one worked in.


The calm before the storm…

Conversations Halfway Around the World

The level of container-related curiosity and intrigue revealed through conversations in Japan was beyond fascinating. The attendees’ experience with containers and industry backgrounds varied greatly, which brought about conversations of new container use cases we hadn’t even thought of yet. Some attendees were deploying typical container environments like Docker and Apache Mesos, while others were complete container “newbies” looking to explore this technology. For example, the people who came for the Automotive Linux Summit, which was a significant portion of the attendees, mainly dealt with embedded systems and had very little intersection with container technologies in their daily work. That said, the ideas and conversations about how containers could provide efficiency were flowing, and not in short supply! In fact… I even learned a lot of new things about the automotive industry.


John Mark talking containers with attendees

Open Source Storage Summit

The Open Source Storage Summit took place on the last day and there was a lot of buzz and interest in the air. John Mark kicked off the Summit with a powerful message of storage being the last frontier for open source and that thought leaders like EMC {code} are helping to shape the vision of the future. Thanks for the compliment JM!

Steve Wong from EMC {code} spoke next and offered the audience a choice of either seeing a database or a Minecraft server in a container. Obviously, Minecraft was the unanimous choice. Whether you invest your time in populating a database or building a Minecraft world, you want the scalability, availability, and durability benefits offered with access to persistent storage. Ultimately, Wong brought down the house with his demo of a Minecraft server in a Docker container in AWS Tokyo using REX-Ray to provision external persistent storage.

His presentation also covered the libStorage and Polly projects, which address storage governance and integration with container schedulers such as Mesos, Kubernetes, Docker Swarm, and Cloud Foundry.


Talking REX-Ray at the Open Source Storage Summit

Odds and Ends

MacBook visits the Kanda Myojin for some en-charm-ment

Aside from the real work aspect of the trip, it had been a while since I had visited Japan (2009), and this was an excellent opportunity to reconnect with the people, culture, and country again. We had some free time to go and visit the Kanda Myojin Shrine. It’s a shrine dedicated to “secure charms and blessings to protect your electronic devices.” Yes… it is a geek temple. Because Steve’s MacBook had recently experienced some lock-ups, we had to take it to be blessed. We gave our donation to the nerd gods, they returned an IT charm sticker, and we now await the results.

The shrine is in the Akihabara district, which is loaded with electronics stores. While wandering around, we ran into a Dell retail outlet. We met some of the people associated with Dell open source projects like Project Sputnik at OSCON in Austin earlier this year. The {code} team is enthusiastic about the prospect of working more closely with the people at Dell.

{code} discovers a Dell retail outlet in the Akihabara tech district

Overall, the time at LinuxCon Japan was great and I was sad to see it go so quickly. You can find the EMC {code} team at our next event LinuxCon North America. Hope to see you there!

Monitoring EMC Elastic Cloud Storage with Grafana

Every business is different, with unique technology requirements and different software stacks needed to support their operations. Often the answer to monitoring these large, complex environments lies in expensive solutions and dedicated platforms. At {code} we know there are many open source tools available that allow organizations the freedom and flexibility to craft solutions to perfectly fit their unique needs. We also understand EMC’s Elastic Cloud Storage is the driving component of many organizations’ object storage solution.

When {code} team member Jonas Rosland was asked whether it would be possible to integrate the open source monitoring tool Grafana with a production ECS deployment, he said “of course!” – but in practice he found that the puzzle was actually missing a piece or two.

Our team charter at {code} allows and encourages us to contribute to open source projects and communities, so Jonas developed a small stack of open source software hosted in Docker containers to address the question, and published his work on his public GitHub repository. The three Docker containers in his example solution implement the time-series database InfluxDB; the metrics-collection tool collectd, alongside a short, custom Python app he wrote to poll performance metrics from an ECS deployment; and the popular time-series metrics graphing tool Grafana.

Grafana

To collect the metrics, a container running collectd runs the custom python app every few seconds, which pulls metrics from an ECS deployment and translates the output into the JSON format that collectd can understand. The collectd container then passes the metrics data to a second container running InfluxDB for long-term storage, and then the third container runs the Grafana tool to query, parse and display the metrics data in a way that gives a comprehensive view of ECS usage.

Docker Compose

The stack was created with Docker Compose, a tool that helps to define and run multi-container Docker applications. Using Docker Compose makes it simple to try this stack against your own ECS deployment – all you need is a working Docker installation and network access to your ECS deployment. Once this is set up, follow these steps:

  1. Clone Jonas’ repository to your Docker server
  2. Edit the “emcecs-config.yml” file to reflect your ECS server settings
  3. Run ‘docker-compose up’ to start the service
  4. Configure the containers with an HTTP POST
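
For reference, the stack defined in the repository boils down to a small docker-compose file along these rough lines – the image names, file paths, and port mappings here are placeholders of our own, and the repo contains the real definitions:

influxdb:
  image: influxdb                   # time-series database for long-term metric storage
  ports:
    - "8086:8086"
collectd:
  image: example/collectd-ecs       # collectd plus the custom ECS polling script (placeholder image name)
  volumes:
    - ./emcecs-config.yml:/etc/emcecs-config.yml
  links:
    - influxdb
grafana:
  image: grafana/grafana            # dashboards that query the metrics back out of InfluxDB
  ports:
    - "3000:3000"
  links:
    - influxdb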

Once the containers are running and configured, you can access the Grafana dashboard by connecting to port 3000 on your Docker host using the login name ‘admin’ with the password ‘admin’.

Here’s a short video to demonstrate:

Give it a try on your own ECS deployment, or if you don’t have a production ECS deployment to call your own, feel free to download and experiment with the free-for-non-production-use Community version. If you run into problems, there are often people lurking on our {code} Community Slack who might be able to help.

What you need to know about Storage in Docker 1.12

Congratulations Docker on another fantastic announcement – Docker 1.12!

DockerCon 2016 is here, and with it another well-timed Docker announcement! With the 1.12 release coming, there are some nice enhancements for storage. Some of the notable changes enhance support for volume drivers, including the ability to identify whether volumes are locally or globally accessible and to get driver-specific details about available volumes. There are also other nice-to-haves and fixes included in 1.12 that are detailed below. It’s clear that some of these changes will help prepare Docker Swarm to scale. With that said, Docker at scale is likely to be a key theme at DockerCon this week.

Notable Changes

Support for Volume Scopes (local/global) #22077
This doesn’t change anything noticeable; however, services that utilize docker volumes (ex: swarm) are able to identify available volumes as local to a specific host, or global to all hosts. In the past when you ran “docker volume ls” from a swarm manager, any global volume that was available to all swarm agents was listed once for each host. This has posed a challenge when trying to build scalable Docker Swarm clusters. Now, this should be easier to fix with the ability to know which volumes are global instead of local.

docker-swarm-volume-ls

Support for Volume Status #21006
In the past, the only details available for each Docker volume were the volume name, driver name, where it’s mounted, and basic labels (if used).

docker-1.11-volume-inspect

With v1.12, it’s now possible to get more driver-provided details (nested under Status) for each volume (as seen in the image below).

docker-1.12-volume-inspect
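
For example, inspecting a volume backed by an external volume driver in 1.12 might return something along these lines – the exact keys under “Status” are entirely up to the driver, and the driver name and values shown here are purely illustrative:

$ docker volume inspect mydata
[
    {
        "Name": "mydata",
        "Driver": "rexray",
        "Mountpoint": "/var/lib/rexray/volumes/mydata",
        "Status": {
            "availabilityZone": "us-east-1a",
            "size": 16
        },
        "Labels": {},
        "Scope": "global"
    }
]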

Other Changes

Support for ZFS Volume Size #21946
Prior to Docker 1.12, there was no way to enforce the size of ZFS volumes; there is now support via the "--storage-opt" flag.

Support for Disk Quotas with BTRFS #19651
If you’re using BTRFS instead of devicemapper as your default Docker storage driver, you can now set a maximum size or quota for individual Docker containers.
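
Assuming your daemon is running on the btrfs (or zfs) storage driver, the per-container usage looks roughly like this – the size value and image are just examples:

$ docker run -it --storage-opt size=10G ubuntu /bin/bash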

Volume Name / Driver Filters #21361
There is now enhanced filtering available for “docker volume” commands / api requests. This allows you to get details for a specific volume name or all volumes accessible by a specific volume driver.
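
A couple of examples (the volume name and driver name here are just illustrations):

$ docker volume ls -f name=mydata
$ docker volume ls -f driver=rexray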

Opaque ID Sent with Volume Mount / Unmount Requests #21015
When volume mount/unmount requests are sent to volume drivers, a unique ID is now included with each request, allowing a volume driver to match an unmount to its corresponding mount and better track individual requests.
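
In plugin API terms, the JSON body of a /VolumeDriver.Mount (and /VolumeDriver.Unmount) request now carries an ID alongside the volume name – something along these lines, with the values below purely illustrative:

POST /VolumeDriver.Mount
{
    "Name": "mydata",
    "ID": "b87d7442095999a92b65b3d9691e697b61713829cc0ffd1bb72e4ccd51aa4d6c"
}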

Minor Fix for SELinux Users #17262
If you’re using SELinux on your docker hosts, #17262 fixes the usage of z/Z permission options when attaching a local directory to a new container. Previously, the start of a new container would result in a failure if the folder did not exist prior to launching.
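
The z/Z options in question are the ones you pass on a bind mount, for example (the image name here is a placeholder):

$ docker run -v /opt/appdata:/data:Z example/myapp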

See You at DockerCon 2016!

EMC {code} will be all around DockerCon so come check us out and talk to us about Docker, REX-Ray, Polly, and libStorage – or if you can’t make it to DockerCon head to the {code} Community on slack!

Containers Were Huge at OSCON 2016


We just returned from OSCON, the annual convention for discussion of all things open source, organized by O’Reilly Media. This event presented a great opportunity to survey the broad spectrum of open source. From our perspective, the hot trend continues to be containers, with cloud and IoT as contenders for the second billing. Microkernels, big data, DevOps, and security were also addressed in many sessions. It seemed like every time slot had at least one container-related presentation.

The container space is akin to virtualization circa 2003. In that era, a couple of startups were battling over the space. Early adopters were starting to use the technology while most enterprises were watching from the sidelines. Some saw virtualization as a developer tool, although ultimately it was adding features to enable enterprise use at scale that put virtualization on the map with a pin for every datacenter on the planet. Will this storyline be repeated with containers?

EMC {code} has a number of open source projects directed at the future of the container space

For an enterprise just getting started with containers, the smart move is not to base plans on a snapshot of container technology as it exists at this exact moment in time. This is a rapidly moving space, and the right attitude is to consider possible second and third moves on the chessboard. For example, we would be shocked if Intel doesn’t have a sizable development team focused on hardware features for containers. Most of the slide decks at OSCON addressed current features, but there were opportunities for one-on-one conversations with CTOs and founders of many of the major actors in the container space. Flexibility and keeping your options open will be the bywords for this space.

Highlighted Projects


REX-Ray on display

Continuing with the theme of containers, project REX-Ray took a very prominent role at the show. It was the center of discussion for attendees asking how container use cases could be extended to include persistent applications. Lots of interest was focused on the major application platforms.

RackHD at OSCON

Next up, RackHD also drew attention at the conference. RackHD is an open source tool for automating the installation of hypervisors, container hosts, or general purpose OS stacks on physical hardware. It can be used to orchestrate the installation of application platforms such as Docker, Mesos, Kubernetes, and Cloud Foundry.

Building Community

Throughout the conference we had long but fun conversations around lots of open source projects. Over lunch, Brandon Philips and Brian “Redbeard” Harrington of CoreOS described how CoreOS and rkt can utilize the virtualization hardware features of an Intel Xeon to deliver containers on bare metal hosts with low overhead and improved security features.

On that same day Redbeard tweeted this: “Most effective trolling: walk the OSCON expo hall and ask vendors about the projects they’ve open sourced. Hint: crickets.” Challenge accepted! It was Steve Wong who originally saw the tweet and asked Redbeard to stop by the EMC {code} booth.


Brian “Redbeard” Harrington of CoreOS with members of the {code} team. From left to right, Jonas Rosland, Steve Wong, and Drew Smith

So of course our favorite moment at OSCON was when we proved Redbeard wrong by discussing our current open source projects. Shortly after, he tweeted this: “Let me eat crow: talked to @cantbewong & co with @emccode about Polly, Rex-Ray, & RackHD, all fascinating.” Obviously we don’t want Redbeard to eat crow, but all we do is work on open source projects.

Hottest event at OSCON: EMC{code} takes over the Container Bar

The {code} team hosted a great party at the Container Bar – an Austin watering hole constructed from shipping containers. As a general rule, what goes on at an EMC {code} party stays at the party… but we know at least one guest, Guillaume Quintard of Varnish Cache, had a great time. His tweet the next day read: “Great party yesterday with @cantbewong. Unrelated note: can someone bring some coffee at the @varnishcache booth?”

At the party we renewed acquaintances with Jérôme Petazzoni and Richard Mortier of Docker. A few team members met them both back in Los Angeles in January at a Docker Meetup, hosted in conjunction with the acquisition of Unikernel Systems, during SCALE 14x. You don’t often get a chance to talk about the future of unikernels with an expert from Cambridge University!

Next week (June 1-2) the team is a platinum sponsor at MesosCon. We can’t wait to see you there!


Stateless apps on Cloud Foundry using stateful services on Mesos


The value of having a platform for stateless applications that ties together with a platform for stateful services is greater than the sum of its parts. Using the cloud-native nature of Cloud Foundry to create new, innovative products, while making sure your customer data and internal company information lives in a stable, reliable and trusted environment, is of the utmost importance. In this blog post we’d like to show you how to merge these concepts together.

If you’re familiar with Cloud Foundry, then you probably know it’s focused on running stateless applications – hundreds or even thousands of them. These applications can be scaled up or down to handle a myriad of different workloads, all while storing data outside the application. This makes the applications adhere to the twelve-factor manifesto, where configuration and data should always be kept outside of where the application runs.

When talking with customers we’ve noticed a common theme around users gravitating towards a common platform for new applications, and that platform is Mesos. Mesos is also heavily focused on stateless applications but thanks to work we’ve done with REX-Ray and mesos-module-dvdi we’ve seen an increasing interest in running stateful services as well.

Can we somehow merge these concepts together? Running stateful services in Mesos and stateless applications in Cloud Foundry, with some awesomeness sprinkled in?

Of course we can. :)

First, let’s get Cloud Foundry up and running. There’s a great micro-version of Cloud Foundry that can be found here. Download it, unzip the file and start Cloud Foundry by following the instructions in the repo.

Once you have Cloud Foundry deployed and you can push stateless apps to it, you’ll want to add in databases and other stateful services to the mix. We’ll host those stateful services on Mesos. To do this, head over to our vagrant repo – we’ll be using the playa-mesos setup in there.

In the config.json file for playa-mesos there’s a setting to enable Consul; make sure that’s set to true on all nodes. We’ll get back to that in a bit. Next, start your Mesos environment by running vagrant up.

You should now have Cloud Foundry, Mesos and Marathon running in your local environment, awesome!

Next, it’s time to add a stateful service. We’ll use the key-value store Redis for this. For proper persistence we of course have REX-Ray and mesos-module-dvdi already installed and enabled in the Mesos environment, so we should be all set. If you don’t have them quite yet, you can download them here: REX-Ray and mesos-module-DVDI.

To launch Redis with proper datastore backing you can use this JSON manifest:
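
The linked manifest has the full details; below is a rough sketch of its shape. The DVDI_VOLUME_* environment variables are how mesos-module-dvdi asks REX-Ray for an external volume; the specific values, the command line, and the consul label are illustrative assumptions that depend on your setup and on how marathon-consul is configured (in the real manifest, Redis’ data directory would point at the mounted external volume):

{
  "id": "redis",
  "instances": 1,
  "cpus": 0.5,
  "mem": 256,
  "cmd": "redis-server --port $PORT0",
  "env": {
    "DVDI_VOLUME_NAME": "redisdata",
    "DVDI_VOLUME_DRIVER": "rexray",
    "DVDI_VOLUME_OPTS": "size=5,newfstype=xfs,overwritefs=false"
  },
  "labels": {
    "consul": ""
  }
}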

Launch Redis using the following command:

curl -X POST http://10.141.141.10:8080/v2/apps -d @redis.json -H "Content-type: application/json"

By looking at the Marathon interface over at http://10.141.141.10:8080/ui/#/apps you should see Redis being deployed.


Connect them together!

Now let’s see how we can connect our stateless apps to this Redis service.
Should we do this manually by adding IPs and ports to the code? Absolutely not. :)

Instead, we’ll use Consul – a great piece of software that enables automatic service discovery of everything that’s in your data center. Cloud Foundry uses Consul internally to keep track of everything it runs, and here we use another instance of Consul to keep track of everything we’re running through Marathon.

When you started up the Mesos cluster you actually also created a Consul cluster (thanks to the true setting above) and another piece of software called marathon-consul that ties Marathon and Consul together. Everything you run through Marathon with the tag “consul” gets registered in Consul; you can see an example of how that’s done in the redis.json sketch above.

Now, let’s have a look at what happens here:

  1. You push a Redis manifest up to Marathon
  2. Marathon reads the manifest and asks Mesos to run the application
  3. Mesos sees the resource requirements (including storage), and allocates them using posix/cgroups, DVDI and REX-Ray
  4. When the application is running, Marathon reports back with a green status light
  5. marathon-consul now adds information about Redis into Consul

The information about Redis stored in Consul can be pulled by running the following:

$ http GET 10.141.141.10:8500/v1/catalog/service/redis
HTTP/1.1 200 OK
Content-Length: 279
Content-Type: application/json
Date: Fri, 22 Apr 2016 19:53:19 GMT
X-Consul-Index: 312
X-Consul-Knownleader: true
X-Consul-Lastcontact: 0

[
 {
 "Address": "10.141.141.14",
 "CreateIndex": 312,
 "ModifyIndex": 312,
 "Node": "mesos-slave4",
 "ServiceAddress": "10.141.141.14",
 "ServiceEnableTagOverride": false,
 "ServiceID": "redis.2ced6feb-08c0-11e6-98db-024280a1e134",
 "ServiceName": "redis",
 "ServicePort": 31683,
 "ServiceTags": [
 "marathon"
 ]
 }
]

You can see we have an IP address and a port registered for the Redis service. Now how can we use that with our application?

Application time!

First, we need to make sure that the apps on Cloud Foundry are allowed and able to talk to the services on Mesos. To do that we update the security-group and the dnsmasq service. In the pcfdev folder, run the following commands:

$ wget -nc -nv https://gist.githubusercontent.com/jonasrosland/c70cce115ddcb2422ee5f71800aed1d5/raw -O public_networks.json
$ cf update-security-group public_networks public_networks.json
Updating security group public_networks as admin
OK
$ vagrant ssh -c "sudo tee /etc/dnsmasq.d/10-consul >/dev/null << EOF
server=/consul/10.141.141.10#8600
EOF"
$ vagrant ssh -c "sudo service dnsmasq restart"
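
If you want to sanity-check the forwarding, you can query Consul’s DNS interface directly from your workstation – Consul answers DNS on port 8600, and registered services show up under the .service.consul domain:

$ dig @10.141.141.10 -p 8600 redis.service.consul SRV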

Now your Cloud Foundry applications are allowed and ready to connect to Mesos services. Let’s take our example application for a spin!

The application we’re using can be found here: simplehttp. Let’s have a look at it:

import os
import uuid
import redis
import consulate
import json
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def hello():
    # Read the Consul connection details from the environment
    CONSUL_HOST = os.getenv("CONSUL_HOST", "no.host")
    CONSUL_PORT = os.getenv("CONSUL_PORT", 0)
    CONSUL_DC = os.getenv("CONSUL_DC", "nodc")

    consul = consulate.Consul(host=CONSUL_HOST, port=CONSUL_PORT, datacenter=CONSUL_DC)

    # Ask Consul where the Redis service is currently running
    data = consul.catalog.service('redis')

    json_str = json.dumps(data)
    resp = json.loads(json_str)

    redis_address = (resp[0]['Address'])
    redis_port = (resp[0]['ServicePort'])

    # Create a Redis client using the address and port we just discovered
    r_server = redis.Redis(redis_address, redis_port)

    r_server.incr('counter')
    counter = r_server.get('counter')
    return render_template("index.html", CONSUL_HOST=CONSUL_HOST, CONSUL_PORT=CONSUL_PORT, CONSUL_DC=CONSUL_DC, redis_address=redis_address, redis_port=redis_port, counter=counter)

@app.errorhandler(500)
def internal_server_error(error):
    app.logger.error('Server Error: %s', (error))

if __name__ == "__main__":
    app.run(debug=False, host='0.0.0.0', port=int(os.getenv('PORT', '5000')))

The application flow looks like this:

  1. Queries the local DNS in Cloud Foundry to look up the Consul environment
  2. Sends a query to Consul to gather information about the Redis service
  3. Connects to Redis, adds an incremental value to the key “counter”
  4. Retrieves the key and publishes the content on the webpage

You might notice that the service lookup happens inside the request handler itself. This is a quick-and-dirty way to add resiliency to the application, since it will automatically pick up new Redis connection information if the database has to move, restart, or recover from a failure somewhere else.

Make sure to git clone the entire application, then all you need to do is a cf push to push it to the Cloud Foundry environment. It will look something like this:

$ cf push
Using manifest file /Users/jonas/Developer/jonasrosland/demos/simplehttp/manifest.yml

Creating app simple in org pcfdev-org / space pcfdev-space as admin...
OK

Creating route simple.local.pcfdev.io...
OK

Binding simple.local.pcfdev.io to simple...
OK

Uploading simple...
Uploading app files from: /Users/jonas/Developer/jonasrosland/demos/simplehttp
Uploading 18K, 10 files
Done uploading
OK

Starting app simple in org pcfdev-org / space pcfdev-space as admin...
Downloading python_buildpack...
Downloaded python_buildpack (254M)
Creating container
Successfully created container
Downloading app package...
Downloaded app package (5.9K)
Staging...
<snip>
Uploading complete

1 of 1 instances running

App started


OK

App simple was started using this command `python simple.py`

Showing health and status for app simple in org pcfdev-org / space pcfdev-space as admin...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: simple.local.pcfdev.io
last uploaded: Fri Apr 22 21:05:58 UTC 2016
stack: unknown
buildpack: python_buildpack

 state since cpu memory disk details
#0 running 2016-04-22 05:06:55 PM 0.0% 0 of 256M 0 of 512M

You can now go to simple.local.pcfdev.io and you should see the following screen:


As you can see, the application automatically finds its database, and when you refresh the page you’ll see the Redis counter go up, pretty cool!

Now for some resiliency testing, let’s restart Redis. To do that, head to the Marathon interface at http://10.141.141.10:8080/ui/#/apps/%2Fredis and click “Restart”.


It might end up on the same host but on a different port, or on a completely different host. Feel free to experiment. :)

After restarting, you can refresh the simplehttp page and see both the new connection information for Redis and the value increasing!


Here’s what we just did together…

  1. Ran a stateful service (Redis) on Mesos
  2. The stateful service had proper storage support thanks to REX-Ray and mesos-module-dvdi
  3. Restarted the stateful service, with the data moving along with the service
  4. The connection information for Redis was automatically stored in Consul
  5. Launched a stateless application on Cloud Foundry with no information on where the stateful service was located
  6. The application queried Consul for connection information
  7. Once the application had the information, it connected to Redis
  8. The application is ready to store and retrieve data

If you liked this demo, please check out the repos that made it possible:

https://github.com/emccode/vagrant
https://github.com/emccode/rexray
https://github.com/emccode/mesos-module-dvdi
https://github.com/jonasrosland/simplehttp

CJ Desai, President of EMC Emerging Technologies Division talking about EMC {code}, our projects and values

EMC World 2016: Taking the world by {code}

Another EMC World is now under our belt and we feel that we’ve accomplished more than ever. We had a large presence with 21 breakout sessions, a Guru session together with Tobi Knaup (CTO and Co-Founder of Mesosphere), 8 partners in our booth, a self-paced vLab, a new initiative release and a cubic meter of rubber ducks!

Rubber ducks at the {code} booth

Kenny at our booth with just a few of the rubber ducks we handed out to help with “Rubber duck debugging”

New Initiative Release and Update

During the conference we launched the initiative Polly, and updated REX-Ray to 0.4. In short, Polly is an open source storage scheduler designed to provide storage resources for Cloud Foundry, Docker, Kubernetes and Mesos, and you’re among the first to see the official logo!

Key features of Polly include:

  • Centralized control and distribution of storage resources
  • Offer-based mechanism for advertising storage to container schedulers
  • Framework supporting direct integration to any container scheduler, storage orchestrator, and storage platform

You can learn more about Polly in the press release and get involved on the GitHub project page.

Self-Paced vLab

At EMC World a common thing to do is to get your hands dirty with the hardware and software. That’s done through our large vLab area, with over 3,614 labs completed by 1,500 unique attendees using 24,902 VMs. That’s huge! And it gets better. Our self-paced vLab, “Docker, Mesos, and ScaleIO for your Persistent Applications” was the 5th most popular vLab at EMC World and had 165 people complete it!

Mikkel Bernhof worked in the booth and vLab and shares his experience below:

I had the pleasure of joining the EMC {code} team to help out in the {code} booth and hands-on labs for the duration of the EMC World conference. This gave me a chance both to talk to EMC customers and partners about the challenges they are facing with all these new emerging open source technologies, as well as picking the brains of my {code} colleagues.
It was inspiring to see the level of dedication and skill in this group of technologists and Open Source Software (OSS) enthusiasts, whose outgoing and friendly nature obviously captures people’s attention – my own included. This week with EMC {code} was full of great learning experiences and has armed me with curiosity and excitement around EMC’s efforts in the OSS space as well as the fast evolving world of OSS in general.

Thank you to Mikkel and everyone who took time out of their busy schedules to sit down and get real hands-on experience with what our team has built and focused on over the past year!

Sessions

During our 21 breakout sessions our topics included:

  • Open Source Infrastructure
  • Infrastructure as Code
  • Persistent Applications and Containers
  • Open Source, Community, and Collaboration

Some of our sessions were so popular that, due to fire code, we scrambled to make repeat sessions possible. The most popular ones were the sessions on containers, as we had predicted, and next year we definitely need bigger rooms.

Kenny Coleman and Mano Marks doing an Introduction to Docker to a full house

We’ve received many requests already for our material, and since we’re all for “open everything,” you can view all of our presentations on Slideshare; they are of course free to download.

During the Wednesday Guru session Joshua Bernstein (VP of Technology, ETD) and Tobi Knaup (CTO and Co-Founder, Mesosphere) took the stage in front of over 350 people to talk about Data Persistence in the New Container World with huge success!

Josh Bernstein and Tobi Knaup presenting at the Guru session

Booth

In our booth we were showing off demos, projects and of course handing out freebies like {code}-branded cups for your favorite beverage, rubber ducks for debugging, open source stickers for your laptops and prizes from our sweepstakes and sessions. We had many interesting and worthwhile conversations with customers, partners and EMCers who are interested in what we do and how we are driving the open source agenda forward at EMC. Again, thank you everyone who came up and talked to us during these busy hours!

8 Partners

Our partners were nothing short of amazing, answering questions, giving demos and talking to customers while also gathering large crowds of people for their interesting booth talks on containers, configuration management, modernized networking, datacenter-scale deployments and open source. Huge thanks to everyone from AVI Networks, Cloud 66, Docker, GitHub, Mesosphere, Puppet, RackHD, and Rancher!

Mano Marks, Developer Relations Director at Docker, presenting at the booth in front of a large crowd

A Personal Reflection

Steve Wong was inspired earlier this week following his experience at EMC World and took some time to write down his thoughts. Check out his reflection on EMC World and how impactful this experience was on an EMC veteran, but an EMC World newbie.

Thank You

A big thank you to the entire EMC {code} team for pulling everything together to make this an amazing show for everyone, and for all you readers we hope to see you next year in Vegas!

Dell EMC World 2017