OpenDaylight in Hyperglance with Flows and Hosts

I have just extended the OpenDaylight collector to add flows and hosts. It was relatively easy, but OpenDaylight is currently acting a little weird; for instance, the Northbound API shows all the flows as being attached to all the switch links. Maybe an issue with OpenDaylight, Mininet or both; I'm not sure.

To be honest I am amazed that OpenDaylight is as stable as it is. Anyway, I have added flows to all the links using "statistics/default/flowstats". I just iterate through the JSON and add each flow to the relevant links and endpoints. Another annoying thing is that there is no unique identifier per flow, so I had to assign a random number to uniquely identify each flow.
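To make the tagging step concrete, here is a minimal Python sketch of how a collector might walk the flow-stats JSON and stamp each flow with a synthetic identifier. The payload shape (`flowStatistics`/`flowStatistic`) is an assumption based on the Hydrogen-era northbound API and may differ between builds; I use a UUID here, which serves the same purpose as the random number mentioned above.

```python
import uuid

def tag_flows(stats_json):
    """Walk a flow-statistics payload and give every flow a synthetic
    unique id, since the controller does not expose one per flow.
    The key names below are assumptions about the northbound JSON."""
    tagged = []
    for node in stats_json.get("flowStatistics", []):
        node_id = node.get("node", {}).get("id")
        for flow in node.get("flowStatistic", []):
            tagged.append({
                "flow_id": str(uuid.uuid4()),  # synthetic identifier
                "node": node_id,
                "flow": flow,
            })
    return tagged
```

The output of a function like this is what gets attached to the relevant links and endpoints in the model.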

I will be adding commands so you will be able to POST API calls to the controller. Right now you can ping, SSH to, and pull up a webpage for the IP address. That kind of thing is trivial in Hyperglance and I will be adding more as I get user feedback.

I have created a short video to show it in action.

OpenDaylight and Hyperglance

I attended the Open Networking Summit (ONS) in Santa Clara last week and managed to catch some OpenDaylight forums and talks. Nearly all the Silicon Valley heavyweights have joined the OpenDaylight project, including Cisco, IBM, Citrix, RedHat and Microsoft. The list of companies that have joined and contributed code raises some interesting points. One is the diversity: many are not network vendors. Another is that many are competitors, and no one knows how they will play together.

My take on the situation is that customers can see SDN is the future and are pressuring vendors to standardize so they don't end up going down a vendor lock-in route. Because there are so many different SDN controllers, and different companies have differing views on exactly what SDN is, customers are not buying SDN solutions and are also holding off buying traditional solutions until the situation becomes clearer. Consequently, both hardware and software vendors need some kind of open standard to work off to allay customer fears and move their tin.

I am pretty excited by OpenDaylight as I think the networking industry needs a standard controller to innovate off and this might just be the one to do it. Open was a big catchword at the ONS and I can see the tie up between OpenDaylight and OpenStack being a big driver in the coming months and years. I thought I would hack up an OpenDaylight collector for Hyperglance over the weekend as I am excited to start working with it. There is still quite a bit of work to do on the collector; I need to add flows, hosts and commands as well as some other bits and bobs. I am hoping OpenDaylight will iterate rapidly so I can iterate with it.

As a side note, I met Ed Warnicke at ONS; he is heading up the Cisco dev effort for OpenDaylight. Ed is a super nice guy and was pretty amped up when we met; that may have been down to the Bulletproof Coffee he raved about. Bulletproof Coffee sounds like some pretty serious stuff! I told him I would have an OpenDaylight collector in a week and have lived up to that promise.

The OpenDaylight UI showing 63 switches

Visualising the cloud - Amazon EC2 in HyperGlance

The old adage “If You Don't Monitor It, You Can't Manage It” holds true just as much today as in the past. People are pushing compute load to the cloud, both public and private, without implementing the robust measuring/monitoring solutions they do with their current infrastructure.

We hear the incumbent solution providers say they support the cloud, while telling themselves and anyone else who will listen that no one will truly use the public cloud, so you needn't bother. I am seeing both public and private cloud projects being kicked off by everyone from banks to government. Private clouds will certainly come first, but public cloud is everyone's goal.

Security is usually touted as the main reason large companies are reluctant to use something like Amazon’s EC2 and that is true with the current mind-set around security. But rest assured very soon we will need to use a new mind-set, one which recognizes that nothing can be deemed as being safe or trusted. The military are wrestling with this new paradigm, where they are not sure if another country or state has already hacked into and currently owns their data. The days of thinking a section of a company’s infrastructure, much less the entire company’s infrastructure, is trusted are gone.  We need to embrace the ethos of not trusting any data source, not even your own!

Amazon’s EC2 cloud is of course the current 800-pound gorilla of the market, and with good reason. They created the current version of cloud (remember old versions like grid computing?) and do an exceptional job of pushing new features and keeping the system up and running (although no one is perfect, as their recent downtime shows). OpenStack, CloudStack, Eucalyptus and others are desperately trying to gain market share; some have been more successful than others.

To enable better management of the cloud, we have built a HyperGlance collector to pull in data from the Amazon EC2 API to visualise it.

HyperGlance can pull in any structured data and create a topology, as long as we have relationship data or something we can postulate connections from. Amazon EC2 is no different: it has a very nice, well-known API that specifies a Region and Availability Zone per VM instance, so we can create a topology that maps to those attributes. Once we have that data, we can query the API for any metrics and attributes for the instances and then overlay them onto the model.
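As a sketch of that mapping step, assuming a DescribeInstances-style response (the `Reservations` → `Instances` → `Placement.AvailabilityZone` shape the EC2 API returns), grouping instances into availability zones might look like this:

```python
from collections import defaultdict

def topology_from_reservations(reservations):
    """Build an availability-zone -> instance-id map from data shaped
    like an EC2 DescribeInstances response. Each zone then becomes a
    container node in the topology, with its instances inside it."""
    zones = defaultdict(list)
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            az = instance["Placement"]["AvailabilityZone"]
            zones[az].append(instance["InstanceId"])
    return dict(zones)
```

In a real collector the `reservations` list would come from an EC2 client call; the grouping logic itself is this simple.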

We can also connect the topology to a physical device like a firewall, which gives you a hybrid cloud view. I see people using Nagios or similar tools to monitor the internal state of each instance, as Amazon can't (and shouldn't) see inside. We can also pull in the Nagios data and overlay it onto the Amazon topology.

Next up is OpenStack with Quantum.  Networking is taking its place in the core of the stack, as it should. We are working on a collector to pull in OpenStack data via the API, then we will add on SDN data to that mix. We are working towards a true end-to-end view of I.T., from the applications down to the hardware.

Visualising OpenFlow/SDN - Big Switch Floodlight controller data in Hyperglance (EDIT: We no longer support Floodlight)

A lot has been said about OpenFlow in the last couple of years: what it can or cannot achieve and what it will or will not change. I believe it is a technology whose time has come. Looking at the advances that compute virtualisation and automation have wrought, people are ready to automate and control the networking side of things to gain the same type of benefits: scale, control and the massive cost savings that go with it.

Being a networking geek, I have been looking at OpenFlow for a while now and have been monitoring the progress. I decided, as a side project, to visualise OpenFlow data in HyperGlance. Being able to visualise the end-to-end flow seemed like a very useful thing to be able to do. One of the advantages of OpenFlow, I always thought, was being able to know the actual data path and therefore being able to discount everything not in the path when troubleshooting. In traditional networks, every switch and router is an island and you rarely know the complete end-to-end path without tracing it hop by hop.

I have been looking for a good candidate to bring OpenFlow data into HyperGlance for months and I finally found Floodlight. Floodlight is the open source side of Big Switch Networks, one of the luminaries of the Silicon Valley OpenFlow scene. They have just raised a $25 million funding round, so they must be doing something right! Floodlight seems to be getting some mind share and has a nice RESTful API, so it's a good candidate to create a collector for HyperGlance. I want to show how visualisation could help monitor and manage OpenFlow networks just as it does for traditional networks. It's all just structured data to me. So as long as I can get relationship data, and then attributes and metrics to lay over the top, I am happy. Luckily, Floodlight gives me everything I need.

First things first: I needed to create the data. I don't have any OpenFlow switches lying around, so I used Mininet to emulate the switches, workstations and the data flow between them.

Mininet is software that allows "rapid prototyping for software defined networks" (their words) and comes out of Stanford, where OpenFlow was born. The plan was to run both Mininet and Floodlight in a VM so I could then just hook HyperGlance into it and voilà, an end-to-end monitoring and visualisation proof of concept. (Didn't I just mention the great things compute virtualisation has enabled?)

Things went pretty smoothly. Mininet has a ready-to-go VM to download, and Floodlight is Java, like HyperGlance, so I knew how to handle that. All that was left was to create the collector to pull the data into HyperGlance. Both Mininet and Floodlight were pretty lightweight resource-wise, so I ran both in the same VM. Once I had them running and talking to each other, building the collector was straightforward; Floodlight's API is nice and simple (it exports JSON). I am a pretty poor programmer and had a little help from one of our development guys, but if I can create a collector, just about anyone can.
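For illustration, the link-parsing part of such a collector might look like the sketch below. The `src-switch`/`dst-switch` field names are assumptions based on Floodlight's `/wm/topology/links/json` endpoint, so check them against your build:

```python
def edges_from_links(links_json):
    """Turn Floodlight-style topology-links JSON (a list of link
    objects) into (src, dst) edge tuples, each endpoint keyed by
    switch DPID and port. These tuples map directly onto links
    between switch nodes in the topology model."""
    edges = []
    for link in links_json:
        edges.append(((link["src-switch"], link["src-port"]),
                      (link["dst-switch"], link["dst-port"])))
    return edges
```

In practice you would fetch the list with a plain HTTP GET against the controller and feed the decoded JSON straight into a function like this.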

It turned out pretty well, if I do say so myself. I ramped up the collection cycle for demo purposes, so the adding and deleting of flows is very snappy, as you can see from the video. In real life, I can't see why you would need it to be more frequent than every minute or so (and even then only when troubleshooting).

In the future, I plan to use Python with Mininet to start pushing and deleting flows on the switches using HyperGlance's GUI command functionality to see the workflow, and also to see how far I can push Mininet and Floodlight scale-wise.

I am actively looking for monitoring use cases and workflows in regards to OpenFlow.  So if you have a live network or have done some research around this, please get in touch.

London 3D Printshow thoughts

I went along to the 3D Printshow in London yesterday, mainly to check out the current status of consumer 3D printing. I am positive that this technology will change many industries in the coming years, but what I have seen of it in the consumer space has produced little I consider useful. Manufacturers seem to concentrate on trinkets and the occasional bracket.

There were quite a few consumer 3D printer manufacturers, including MakerBot, who seem to be the best at marketing (to be fair, they make a great product and were one of the first out there). I couldn't see anything actually useful on any of the consumer printer stands; they had mainly printed toys. I was looking for something that would actually compel someone to have a 3D printer in the home.

There were some printed lampshades I thought were neat; home decoration is something people spend money on, but I am not sure how often you want to change your lampshades (some people would do this often, I guess).

After mooching around on the bottom floor and not finding much, I went upstairs. Much to my delight I came across what I think was the London School of Design exhibit. They had some weird and wonderful stuff, including 3D printed shoes! To be fair, only half of each shoe was printed; the base was made of what seemed like a ceramic material. This was the type of stuff I was looking for; printing clothes at home would certainly revolutionise the fashion industry. The rubber-like material didn't seem that comfortable, but I was excited by the possibilities.

Not exactly my style :) but the tops of these shoes were printed

Around the corner was another exhibit that focused on recycling materials: the designer had reused tin cans, jars and bottles by printing useful items that could be added to them. There was a juicer that sat on top of a jar, a handle that turned another type of jar into a very nice mug, and they had even made dumbbells by reusing tin cans. Very smart, and something I would actually want to use and pay for.

Actual useful 3D printed stuff! Green credentials too. 

All this got me thinking. I had initially thought of 3D printing as standalone: people would just download a design, print it out and voilà, something useful. That now doesn't seem very practical, especially as you would need at least two types of material to make most things. What I now think is the way forward is to add the 3D printed part to something you already have or buy from a third party. For example, being able to print on top of the shoe base would let you reuse the same base if you wanted to change the style or the top wore out. The base would be inexpensive, and I would buy new designs or just use open-sourced designs.

One such idea is the 3D printed headphones from Teague Labs. The shell is printed and the electronics are bought online for $10 or so. It turns out the electronics in headphones cost only a fraction of what the headphones sell for.

I expect to see many more ideas like this in the coming months and years and can't wait for this technology to come to maturity in the consumer space.


CentOS 6.3 minimal install, I like! (update - now not so much)

We don't need much to run the Hyperglance server, just Java and PostgreSQL. Looking for a stable, minimal-size Linux distro has been a bit of a mission. We have been running on CentOS 5.x, but I thought I would look at the minimal install for CentOS 6.x. After installing I had a look at the size and was very pleasantly surprised: zipped up, a minimal install was only 217 meg. After installing Java, PostgreSQL and Hyperglance it was 650 meg or so. Not too bad, though I'm not sure where the hell all the extra used space came from; I will have to look into that.

A few config changes are needed to actually get a usable system; by default eth0 is not set to start! I had to modify /etc/sysconfig/network-scripts/ifcfg-eth0 and change it to ONBOOT="yes".

iptables is also enabled by default, so I added a line to /etc/sysconfig/iptables to allow the port we use.
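The two changes boil down to something like this (paths are the stock CentOS 6 locations; the port number is a placeholder, not Hyperglance's actual port):

```shell
# Bring eth0 up at boot (the minimal install defaults to ONBOOT="no").
sed -i 's/^ONBOOT=.*/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth0

# Open the port the server listens on; 7070 here is a placeholder.
iptables -I INPUT -p tcp --dport 7070 -j ACCEPT
service iptables save   # persists the rule to /etc/sysconfig/iptables
```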

All in all a pretty good setup. I know and like CentOS, so I am happy we can stay with it.


I had to go back to 5.8 due to DHCP issues. It seems as though v6+ udev hard-codes the MAC address in /etc/sysconfig/network-scripts/ifcfg-eth0, so whenever you clone the VM, eth0 doesn't come up! You could delete the MAC entry and add a script to delete /etc/udev/rules.d/70-persistent-net.rules every time, but what a hassle. I went back to 5.8 and did a minimal install. It turns out I could make a minimal install just over 200 meg, so all good on that front.
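If you do want to stay on 6.x, the workaround amounts to running something like this in the template before each clone (paths assumed from a stock install):

```shell
# Drop the hard-coded MAC so the clone's new NIC is still seen as eth0.
sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0

# Remove the cached udev name mapping; it is regenerated on next boot.
rm -f /etc/udev/rules.d/70-persistent-net.rules
```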

Stopping to smell the flowers

Time seems to be on fast forward when working at a start-up; the weeks and months just flash by. It's now way past Christmas, and since we released last April it's been a whirlwind of learning, customer meetings and quite a bit of stress. It's typical that you sometimes don't stop and realise how great things are. I am working at a great start-up with great people, Hyperglance is gaining a lot of traction and people love the software, but it's been a roller coaster ride. Things don't always go to plan and I have been frantically trying to learn the software business.

Yesterday I suddenly thought about what we had achieved in the last 12 months and how lucky I am. It brought back a memory from last year:

I sometimes like to go on nice long walks alone. It gives me time to think and get back to nature (or the nearest to it I can get living in the South East of the UK). I remember vividly one time I was about halfway through, at 6 or 7 miles, looking at a quite large, steep incline in the Chilterns. I was determined to get up there without stopping, so I set off, head down and tail up. It was very steep, and back then I wasn't very fit, so I started to flag about halfway up. I looked to my left and there was a field just to the side of the track. It was a beautiful sunny spring day; the field was covered in wild flowers and lush green grass. I thought of stopping and having lunch in the field; it really was perfect. But the hill was there and I was determined to conquer it, so I kept going. I thought, "I am sure there will be a very nice place at the top with a great view and it will be just as nice", so I kept pushing forward. It was a pretty steep hill, but I made it, although not in one go like I wanted! When I got to the top, guess what: it was extremely disappointing. Not much of a view, a nasty ugly barn, and the sun had gone. Of course I had the achievement of conquering the hill, but I really wish I had stopped and spent some time in that field.

Who knows where the future will lead but I need to keep in mind the now, not just concentrate on where I will be next year or the year after. Life is good, I need to stop sometimes and smell the flowers.


VMware visualization using HyperGlance 1.3

Being able to visualize IT environments is something I have been working on for quite a while now, and to kick things off I have created a video that ran longer than I expected, 10 minutes or so! I show off our new dynamic filtering capability. It really is easy to use and makes day-to-day tasks so much easier. Have a look and see what you think.


Hyperglance 1.3 is out!

We have released HG 1.3, bringing great new features including Dynamic Filters, which allow the user to create filters using the GUI. It's very intuitive and works really fluidly. I love how the physics engine sorts everything out, and the movement really catches the eye.

On the nodes you can do the following actions:


  1. Exclude (From the physics and Render)
  2. Glow
  3. Colour
  4. Icon Size
  5. Partially Hide
  6. Repel
  7. Add to set




A smaller subset is available on the interfaces/endpoints, but we will increase it over time.


I am planning to add some YouTube videos in the coming days to show it off!

Don’t lose visibility when moving your application load to the cloud

Automation and management: that is what VMware is concentrating on at the moment. The reasons are clear: as you scale up and move more application load to the "cloud", you have issues managing the scale of change, and you lose visibility into what is happening with the applications and especially the infrastructure they run over.

Whether you move your application load to a virtual machine, a colo outsourcer, a VM container provider like Amazon EC2, an application container provider like Heroku, or a pure SaaS solution, you lose visibility and control. Of course, you gain efficiency, scale and the ability to be agile.

Traditional monitoring solutions concentrated on infrastructure, not applications. More and more people are realising that the applications and their dependencies are what they should be worrying about. Agent-based monitoring is the only way to get real data when you move your application to a container totally outside your infrastructure, but what do you do with that data? It soon gets overwhelming trying to make sense of dynamically changing load and containers that are created automatically.

Of course I am going to say that visualisation is the key; I helped create a way to visualise applications and their dependencies. For me, you have to get a handle on what depends on what in order to make sure your users aren't affected when the inevitable happens and something fails. Finding outliers and looking for trends is also a great way to predict what might cause issues in the future and what can be removed to save resources.

Pretty much all cloud providers offer an API in some shape or form. Software providers are slowly realising that they can no longer wall off the data they collect, because it is only a part of the whole. Integration, correlation and visualisation: all three enable insight.

Humans see patterns and recognise correlations in milliseconds; why not utilise that powerful ability?


OpenFlow will enable IPv6 migration

The depletion of the IPv4 address space has been news for quite some time now, but we are not seeing the migration to IPv6 that was wanted and expected. There are many reasons for this: technology exists to mitigate the increased difficulty of obtaining IPv4 addresses; there are security concerns; the IPv6 stack is incomplete; and, the elephant in the room, there is no ROI in migrating to IPv6.

Migrating to IPv6 will cost a company a lot of time and effort and create a whole new risk position. Many networking devices like firewalls are not IPv6 aware, and the number of software platforms that haven't even started to address IPv6 is legion. Sure, Microsoft enables it by default in Windows 7 (opening those platforms up to hackers), but the infrastructure is not equipped to handle IPv6, from the basic passing of packets to the management of them. OpenFlow-enabled devices can mitigate many of these barriers to adoption.

IPv6 can be passed either natively or encapsulated within another protocol like IPv4. Most implementations will initially use the latter method and keep IPv6 at the edges where the IPv6 clients are. This is done for a variety of reasons, mainly because every device in the path must be IPv6 aware if it is to pass native IPv6 traffic (obviously), and the cost of upgrading and/or testing every device that must pass, inspect or act on a packet is massive for even medium-size companies. If companies roll out OpenFlow devices, they only need to test and authenticate the control plane, and that could be just one software device.

The ability to innovate on a standardised platform is one of OpenFlow's main advantages. It will enable agile and lean development on the networking stack, and all sorts of great things to be done in the data path instead of having to rely on a choke point, as many applications do today. The distributed platform has worked wonders in the big data world; why not the networking world? Firewall and IPS/IDS functionality will be built in along the path instead of either forcing all traffic through one point or being slapped alongside it. Changes and enhancements to the standards only need to be rolled out once, instead of to each device.

OpenFlow will enable IPv6 to be more reliably and cheaply rolled out, it may even enable new services that leverage IPv6’s strengths and mitigate IPv6’s current weaknesses.



OpenFlow could disrupt the networking world in a big way


The last time I looked at OpenFlow it didn't seem that interesting: there were no networking platforms that supported it, and I didn't get why they were bothering. My company was presenting at the GigaOM Structure event, so I thought I should have a look at what they were saying and came across a panel on network virtualisation and OpenFlow.

A light bulb went off in my head: nothing much has changed in the networking space in the 10+ years since MPLS was rolled out. Incremental speed updates, but nothing as game-changing as virtualisation. Virtualisation radically changed the compute space and then the storage space in a very short period of time.

OpenFlow lets you detach the application from the switch or router, just like the hypervisor did for compute. OpenFlow is just an API to the inner workings of the switch; it's not fancy, but the applications it enables are. Add a virtualisation layer on top and you enable the kind of innovation that has been lacking in the networking space. Detach the control plane and centralise it, and you enable a whole-datacentre view across multiple vendors.
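A toy Python sketch of that separation: the switch becomes a dumb match/action engine, and everything interesting lives in the flow table a remote controller installs. The field names here are illustrative, not any controller's actual schema:

```python
def make_switch(flow_table):
    """Return a forwarding function driven purely by the flow table,
    mimicking how an OpenFlow switch matches packets against rules
    installed by a controller. Rules are checked in list order,
    standing in for priority order."""
    def forward(packet):
        for rule in flow_table:
            # A rule matches if every field it names agrees with the packet.
            if all(packet.get(k) == v for k, v in rule["match"].items()):
                return rule["action"]
        return "send-to-controller"  # table miss: punt to the controller
    return forward
```

The point of the sketch is that changing network behaviour means changing the table (data), not the switch (code), which is exactly what centralising the control plane buys you.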

One idea from the video that also stuck in my head is deploy once, use in many different ways. When you build an infrastructure, wouldn't it be great to build it once and not have to change anything physically when a use case or technology changes? SPB (Shortest Path Bridging) is one such example.

SPB will change the datacentre by getting rid of outdated and misunderstood concepts such as spanning tree. It will allow flat architectures to be built: no more core, distribution and access layers; the network will be a layer 2 mesh. This has many benefits: faster point-to-point transfer, faster convergence after failure and better use of available links, to name a few. Having to upgrade all the current switches to support this functionality is a major issue: each switch has either to be upgraded to newer firmware that supports it, have a new line card added, or even have the whole chassis replaced. If the control plane were just software on an appliance, only one application would need to be updated. Because the control plane and data plane are separated, the control plane can be changed without any impact on traffic.

I will be keeping an eagle eye on OpenFlow in the coming months. It will take a little while to gain traction, but as more vendors get on board and commodity switches are released with the same or better performance at a much lower price point, people will start to take notice.

With the advent of commodity networking silicon, I think we are going to see an explosion of Chinese and Taiwanese switches onto which you can load an open source or proprietary image for control. The big networking vendors must either innovate or see a big reduction in revenue.


Big data, Hadoop, Cassandra; what the hell are they and why should I care?

I have been hearing about big data for a while now and even went to a meetup the other day to see what all the fuss was about. While a lot of the talk didn't mean much to me, one thing caught my attention: the promise of fast, measurable ROI.

The example I homed in on was Amazon and their recommendation feature. Amazon tracks everything you look at and buy on their site, does some analytics and then recommends other things you might like; the better they get at this, the more you buy from Amazon. If you imagine the number of people that visit the Amazon website and the number of items in their catalogue, that works out to be a lot of data to crunch! Centralised databases find it hard to handle such volume, so you need to distribute the load, and that is what Hadoop does.

Hadoop comprises two basic applications: HDFS (Hadoop Distributed File System), which looks after the distributed database, and MapReduce, which looks after the jobs that run on it. Each instance of the database is generally a commodity server or VM, so each has CPU and RAM attached; this allows MapReduce to run many small jobs, each close to the data it needs to access.
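The split is easy to see in miniature. Here is the canonical MapReduce example, a word count, sketched in plain Python:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit (word, 1) pairs. In Hadoop, this step runs in
    # parallel on whichever nodes hold the data blocks.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: gather pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(documents):
    return reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
```

Hadoop does the same thing at scale: the map calls run near the data, the framework handles the shuffle between the phases, and the reducers produce the final result.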

As I understand it, HDFS is more for batch processing, so you would get your recommendation back in a couple of hours. That is quite a long time; people want that recommendation back in real time. That's where Cassandra comes in: it is a distributed database like HDFS, but it's built for real time. People are starting to use Hadoop's MapReduce with the Cassandra database; quicker matching means quicker buying!

There are a lot of applications people use for the storage and analysis of big data, and information about them is all over the place, so I made a list of the most common ones I have heard about:


Hadoop:

A combination of a distributed file system (HDFS) and MapReduce. Hadoop is not a real-time technology. Web giants such as Facebook use in-house Hadoop clusters to crunch epic amounts of data that can later be applied to live web services.


MapReduce:

MapReduce is a framework for processing huge datasets on certain kinds of distributable problems using a large number of computers (nodes), collectively referred to as a cluster (if all nodes use the same hardware) or a grid (if the nodes use different hardware). Computational processing can occur on data stored either in a file system (unstructured) or within a database (structured).


HDFS:

The Hadoop Distributed File System is the primary storage system used by Hadoop applications. HDFS creates multiple replicas of data blocks and distributes them on compute nodes throughout a cluster to enable reliable, extremely rapid computations.


Cassandra:

A distributed database based on a piece of Google's backend; a database for real-time applications.


A Cassandra competitor.


Hive:

A SQL-like query language designed for use by programming novices.

Apache Pig:

Apache Pig is a platform for analysing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.


Memcached:

A distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.


Chukwa:

A Hadoop subproject devoted to large-scale log collection and analysis. Chukwa is built on top of the Hadoop distributed file system (HDFS) and MapReduce framework.



Silos of data are a big problem

I was reading a great blog by Archie Hendryx about the issue of VMware people not caring about storage, and how sharing knowledge and information between the storage and compute people is vital:


Silos will prevent Tier 1 Apps reaching the Cloud — On a recent excursion to a tech event I had the pleasure of meeting a well-known ‘VM Guru’, (who shall remain nameless). Having read some of this individual’s material I was excited and intrigued to know his thoughts on how he was tackling the Storage


Gone are the days when you didn't have to care about what resources are upstream or downstream. Storage is a black art to most network or compute guys (or gals). That data silo needs to be brought into the light, and people need to be educated that it doesn't "just work" anymore. Capacity management and monitoring are needed to keep applications within SLA.

I know in the last few years I have had to be dragged from my "network"-centric view and have had to learn about storage and the application space. I.T. is an ecosystem; ignore one aspect at your peril.