Visualising OpenFlow/SDN - Big Switch Floodlight controller data in HyperGlance (EDIT: we no longer support Floodlight)

A lot has been said about OpenFlow in the last couple of years: what it can or cannot achieve and what it will or will not change. I believe it is a technology whose time has come. Looking at the advances that compute virtualisation and automation have wrought, people are ready to automate and control the networking side of things to gain the same kind of benefits: scale, control and the massive cost savings that go with them.

Being a networking geek, I have been looking at OpenFlow for a while now and have been monitoring the progress. I decided, as a side project, to visualise OpenFlow data in HyperGlance. Being able to visualise the end-to-end flow seemed like a very useful thing to be able to do. One of the advantages of OpenFlow, I always thought, was being able to know the actual data path and therefore being able to discount everything not in the path when troubleshooting. In traditional networks, every switch and router is an island and you rarely know the complete end-to-end path without tracing it hop by hop.

I have been looking for a good candidate to bring OpenFlow data into HyperGlance for months and I finally found Floodlight. Floodlight is the open source side of Big Switch Networks, one of the luminaries of the Silicon Valley OpenFlow scene. They have just raised a $25 million funding round, so they must be doing something right! Floodlight seems to be gaining mind share and has a nice RESTful API, so it's a good candidate for a HyperGlance collector. I want to show how visualisation can help monitor and manage OpenFlow networks just as it does traditional networks. It's all just structured data to me, so as long as I can get relationship data, and then attributes and metrics to lay over the top, I am happy. Luckily, Floodlight gives me everything I need.

First things first, I needed to create the data. I don’t have any OpenFlow switches hanging around so I used Mininet to emulate the switches, workstations and data flow between them. 

Mininet is software that allows “rapid prototyping for software defined networks” (their words) and comes out of Stanford, where OpenFlow was born. The plan was to run both Mininet and Floodlight in a VM so I could then just hook HyperGlance into it and voilà, an end-to-end monitoring and visualisation proof of concept. (Didn't I just mention the great things compute virtualisation has enabled?)
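
For anyone who wants to recreate the setup, here is a minimal sketch of the kind of Mininet script involved. It assumes Floodlight is running in the same VM and listening on its default OpenFlow port (6633); the topology size and addresses are just placeholders to adjust for your environment.

```python
#!/usr/bin/env python
# Minimal Mininet sketch: a small tree topology attached to a remote
# controller, i.e. Floodlight running on the same VM.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

topo = TreeTopo(depth=2, fanout=2)   # 3 switches, 4 hosts
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(name,
                                                       ip='127.0.0.1',
                                                       port=6633))
net.start()
net.pingAll()   # generate some traffic so Floodlight learns hosts and installs flows
net.stop()
```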

Things went pretty smoothly. Mininet has a ready-to-go VM to download, and Floodlight is Java, like HyperGlance, so I knew how to handle that. All that was left was to create the collector to pull the data into HyperGlance. Both Mininet and Floodlight were pretty lightweight resource-wise, so I ran both in the same VM. Once I had them running and talking to each other, building the collector was straightforward; Floodlight's API is nice and simple (it exports JSON). I am a pretty poor programmer and had a little help from one of our development guys, but if I can create a collector, just about anyone can.
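
To give a flavour of what the collector does, here is a rough sketch of the collection logic rather than the real HyperGlance code. The endpoint paths and field names are taken from Floodlight's documented REST API (default port 8080), so treat them as assumptions to check against your Floodlight version; the controller address is a placeholder.

```python
# Rough sketch of a Floodlight poller: pull switches and inter-switch
# links as JSON, ready to be turned into nodes and relationships.
# Requires the 'requests' library.
import requests

CONTROLLER = 'http://127.0.0.1:8080'   # the Mininet/Floodlight VM

def get(path):
    """Fetch one JSON document from the Floodlight REST API."""
    return requests.get(CONTROLLER + path).json()

switches = get('/wm/core/controller/switches/json')   # the nodes
links = get('/wm/topology/links/json')                # switch-to-switch relationships

for sw in switches:
    print('switch', sw.get('dpid'))

for link in links:
    print(link.get('src-switch'), '->', link.get('dst-switch'))
```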

It turned out pretty well, if I do say so myself. I ramped up the collection cycle for demo purposes, so the adding and deleting of flows is very snappy, as you can see from the video. In real life I can't see why you would need it to run more often than every minute or so (and even then only when troubleshooting).

In the future, I plan to use Python with Mininet to start pushing flows to and deleting flows from the switches using HyperGlance's GUI command functionality, to see the workflow and also to see how far I can push Mininet and Floodlight scale-wise.
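
As a taste of that, pushing and deleting a flow can be done straight against Floodlight's static flow pusher REST API. The sketch below uses the endpoint and field names from the Floodlight documentation of the time (later versions renamed the endpoint and several fields), and the DPID, flow name and ports are just examples, so treat all of it as an assumption to verify against your controller.

```python
# Hedged sketch: install a static flow on one switch via Floodlight's
# static flow entry pusher, then delete it again by name.
import requests

PUSHER = 'http://127.0.0.1:8080/wm/staticflowentrypusher/json'

flow = {
    'switch': '00:00:00:00:00:00:00:01',   # DPID of a Mininet switch (example)
    'name': 'demo-flow-1',                 # hypothetical flow name
    'priority': '32768',
    'ingress-port': '1',
    'active': 'true',
    'actions': 'output=2',                 # forward traffic from port 1 out port 2
}

requests.post(PUSHER, json=flow)                       # install the flow
requests.delete(PUSHER, json={'name': 'demo-flow-1'})  # remove it again
```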

I am actively looking for monitoring use cases and workflows around OpenFlow, so if you have a live network or have done some research in this area, please get in touch.

OpenFlow will enable IPv6 migration

The depletion of the IPv4 address space has been news for quite some time now, but we are not seeing the migration to IPv6 that was wanted and expected. There are many reasons for this: technology exists to mitigate the increased difficulty of obtaining IPv4 addresses, there are security concerns, the IPv6 stack is still incomplete in many products and, the elephant in the room, there is no ROI in migrating to IPv6.

Migrating to IPv6 will cost a company a lot of time and effort and create a whole new risk position. Many networking devices, such as firewalls, are not IPv6 aware, and the number of software platforms that haven't even started to address IPv6 is legion. Sure, Microsoft enables it by default in Windows 7 (opening those machines up to attackers), but the infrastructure is not equipped to handle IPv6, either in the basic passing of the packets or in their management. OpenFlow-enabled devices can mitigate many of these barriers to adoption.

IPv6 can be passed either natively or encapsulated within another protocol such as IPv4. Most implementations will use the latter method at first and just keep IPv6 at the edges where the IPv6 clients are. This is done for a variety of reasons, mainly because every device in the path must be IPv6 aware if it is to pass native IPv6 traffic (obviously), and the cost of upgrading and/or testing every device that must pass, inspect or act on a packet is massive for even medium-sized companies. If companies roll out OpenFlow devices, they only need to test and validate the control plane, and that could be just one software device.

The ability to innovate on a standardised platform is one of OpenFlow's main advantages. It will enable agile and lean development on the networking stack, and it will allow all sorts of useful things to be done in the data path instead of relying on a choke point, as many applications have to today. The distributed platform has worked wonders in the big data world, so why not in the networking world? Firewall and IPS/IDS functionality will be built in along the path instead of all traffic being forced through one point or the function being bolted on alongside the path. Changes and enhancements to the standards only need to be rolled out once, instead of to each device.

OpenFlow will enable IPv6 to be rolled out more reliably and cheaply; it may even enable new services that leverage IPv6's strengths and mitigate its current weaknesses.

OpenFlow could disrupt the networking world in a big way

The last time I looked at OpenFlow it didn't seem that interesting: there were no networking platforms that supported it and I didn't get why anyone was bothering. My company was presenting at the GigaOM Structure event, so I thought I should have a look at what people were saying, and came across a panel on network virtualisation and OpenFlow: http://www.livestream.com/gigaomstructure/video?clipId=pla_08af6c92-1426-4058-8921-a8e391f4ed0d&utm_source=lslibrary&utm_medium=ui-thumb

A light bulb went off in my head: nothing much has changed in the networking space in the ten-plus years since MPLS was rolled out. There have been incremental speed increases, but nothing as game-changing as virtualisation, which radically transformed the compute space and then the storage space in a very short period of time.

OpenFlow enables you to detach the application from the switch or router, just like the hypervisor did for compute. OpenFlow is just an API to the inner workings of the switch; it's not fancy, but the applications it enables are. Add a virtualisation layer on top and you enable the kind of innovation that has been lacking in the networking space. Detach the control plane and centralise it, and you get a whole-datacentre view across multiple vendors.

One idea from the video that also stuck in my head is deploy once, use in many different ways. When you build an infrastructure, wouldn't it be great to build it once and not have to change anything physically when a use case or technology changes? SPB (Shortest Path Bridging) is one such example.

SPB will change the datacentre by getting rid of outdated and misunderstood concepts such as spanning tree. It will allow flat architectures to be built: no more core, distribution and access layers, just a layer 2 mesh. This has many benefits: faster point-to-point transfer, faster convergence after failure and better use of available links, to name a few. The major issue is having to upgrade all the current switches to support this functionality: each switch either has to be upgraded to newer firmware that supports it, needs a new line card added, or even needs its whole chassis replaced. If the control plane were just software on an appliance, only one application would need to be updated, and because the control plane and data plane are separated, the control plane can be changed without any impact on the traffic.

I will be keeping an eagle eye on OpenFlow in the coming months. It will take a little while to gain traction, but as more vendors get on board and commodity switches are released with the same or better performance at a much lower price point, people will start to take notice.

With the advent of commodity networking silicon, I think we are going to see an explosion of Chinese and Taiwanese switches onto which you can load an open source or proprietary image for control. The big networking vendors must either innovate or watch a big reduction in their revenue.