
Humanised security and the real world

TIL opening up the possibility of abstracted security policies probably produces just as many questions as it does answers.

There is often a big bonus to be had from getting people from multiple areas of expertise into the same room when they share a common goal but each bring their own institutionalised view of what is unique about their organisation and the IT platform(s) hosting their apps.  In the case of a particular meeting I was involved in (and I will mention that I'm not typecasting the people involved with that rather blunt description!), it made it much easier to approach the realities of moving from a data centre security model historically defined by the traditional coupling of endpoint 'location' and 'identity' to one that doesn't carry that "burden".  This new option could potentially be put under the huge umbrella of 'software definition' which, metaphorically speaking, is now an umbrella with Niagara Falls crashing on top of its two-square-mile surface area while a four-foot-tall man cowers underneath!

On a basic level, the advent of distributed virtual firewalling (and the diminishing number of designs based on a multi-context service module approach) has addressed:

  1. The problem of hairpinning through an aggregation-layer/central stateful device to support a virtualised server and desktop workload environment.
  2. The need to size firewall appliance(s) larger than we have historically (i.e. paying more than we have in the past) in order to deal with the additional East-West [10GE] firewalling in a virtual environment hosting multiple security [sub-]zones.
  3. The previous inability to position multiple firewall 'contexts' close to the virtual workload on commodity x86 hosts… tied to the workload, in fact… meaning they're also mobile and potentially multi-tenant in nature.

In addition, the close proximity to the virtual workload has also added the ability to understand, and tie in better with, the platform the workload sits upon – i.e. the hypervisor.  It's possible to define policy based on 'humanised' items such as the definitions we put into names and IDs.  For example, very simply put, the name given to a virtual machine could automatically cause it to inherit a pre-defined security policy.  The identity of the endpoint and its location are then separate.  We're no longer talking about IP host addresses + prefixes when agreeing on where to position or drop VMs.  A server/virtualisation subject matter expert (SME) is no longer tempted towards the path of least resistance because 'we know that a virtual network in the drop-down list for vNICs is pinned to a firewall ACL more open than the one above it in the drop-down list'… and 'I just want to get this working' is perfectly understandable.
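To make that last point a little more concrete, here's a minimal sketch in plain Python (this isn't any Cisco API, and the naming convention and profile names are invented for illustration) of what 'the VM name drives the security policy' could look like:

```python
# Minimal sketch: derive a security policy from a 'humanised' attribute such
# as the VM name, rather than from its IP address or the virtual network it
# happens to land in.  Naming convention and profile names are hypothetical.

VM_NAME_POLICY_MAP = {
    "web-": "WEB-DMZ-PROFILE",        # e.g. web-frontend-01
    "app-": "APP-TIER-PROFILE",       # e.g. app-orders-03
    "db-":  "DB-RESTRICTED-PROFILE",  # e.g. db-finance-01
}

DEFAULT_POLICY = "QUARANTINE-PROFILE"  # anything unrecognised gets locked down


def security_profile_for(vm_name: str) -> str:
    """Return the security profile a VM should inherit, based on its name."""
    for prefix, profile in VM_NAME_POLICY_MAP.items():
        if vm_name.lower().startswith(prefix):
            return profile
    return DEFAULT_POLICY


if __name__ == "__main__":
    for vm in ["web-frontend-01", "db-finance-01", "random-test-vm"]:
        print(vm, "->", security_profile_for(vm))
```

The point isn't the code itself; it's that the matching criteria are things humans already agree on (names, IDs, groupings) rather than addressing.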

The questions though… How do I take my existing East-West firewall policy and migrate it into this new world?  Do I mimic it all?  Do I standardise on more general options and drop workloads into those?  Would that actually work?  When the automation engine or an administrator drops a VM into a given virtual network, how can I wrap a security policy around it without adding new rules and policy each time?

It’s much easier when I have a blank canvas-style shiny new data centre or all of the services/apps/’tenants’ are completely new!

 

An attempt at answering the above from a Systems Engineer’s perspective…

…based on currently shipping Cisco products such as Cisco Nexus 1000v and Cisco Prime Network Services Controller + Cisco Virtual Security Gateway.

There is a possibility that this effort is part of a larger DC-related project.  A move of security policies may have to carry across historical definitions from the firewall rulebase, as the other, bigger shifts in the DC architecture are quite frankly enough to handle.  We have existing VMs with a naming standard, but it's not always as granular or 'selectable' as we'd prefer.

My feeling is that the process could look like this:

Migration to VSG

The idea above tags workload so that it can be identified easily, while not initially linking it to any definition of policy.  Nothing is broken by the names of the port-profiles, but you have an identifier readily available for when you need it (which comes in the latter stages of the flow above).  We can collect the port-profiles and other attributes into groupings (which go by the name of "vZones"), ready for use when needed, without a 'big bang' approach being forced up front.
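As a rough illustration of the grouping idea (this is not PNSC/VSG syntax; the port-profile names, prefixes and vZone names are invented), a vZone-style grouping is essentially a named match on workload attributes rather than on IP addresses:

```python
# Rough illustration: workload is tagged by the port-profile it attaches to,
# and those tags (plus other attributes such as VM name) are later collected
# into vZone-style groupings that a firewall rule can reference.
# All names below are hypothetical.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class VZone:
    """A named grouping of workload, matched on attributes rather than IPs."""
    name: str
    match_port_profiles: set[str] = field(default_factory=set)
    match_vm_name_prefixes: set[str] = field(default_factory=set)

    def contains(self, port_profile: str, vm_name: str) -> bool:
        return (port_profile in self.match_port_profiles
                or any(vm_name.startswith(p) for p in self.match_vm_name_prefixes))


# Tag first (port-profile names), attach policy later (vZone membership).
WEB_ZONE = VZone("vz-web", match_port_profiles={"pp-web-tier"},
                 match_vm_name_prefixes={"web-"})
DB_ZONE = VZone("vz-db", match_port_profiles={"pp-db-tier"},
                match_vm_name_prefixes={"db-"})

if __name__ == "__main__":
    print(WEB_ZONE.contains("pp-web-tier", "web-frontend-01"))  # True
    print(DB_ZONE.contains("pp-web-tier", "web-frontend-01"))   # False
```

Because membership is attribute-based, nothing about the rulebase has to change when a VM moves host or picks up a new IP address.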

Screenshots: vCenter drop-down list, Nexus 1000v view, PNSC.

Opinions may differ:

I'm fully aware that my thought process around this subject may go against the thoughts of my peers, of individuals or collectives more observant than I am, and of those that have visited this subject a few more times than I have.  I'd be interested to hear just what you think about this.  If we look at the future of application hosting, you have to feel that this step change is necessary; how different organisations choose to get there will of course differ.

 

Supporting background information in the blogosphere and t’internet:

A brief port-profile introduction
A more in-depth overview
IP Space VSG 101

 

p.s.  For those looking at CLI and thinking ‘Tut’, there are APIs available!

A blogger’s perspective (1st post)…

The year is 2003 and "the boy" is handed his first console access to a Cisco switch by a chap going by the name of Mr Ken Worthy.  He gets shown a basic configuration and then a mission starts: a mission to know every little detail about that switch and every other switch and router they have, along with their capabilities; to nail down the perfect configurations for the particular organisation he works for; and to make the process of adds, moves and changes as optimised and 'catalogued' as possible (some network monitoring sensors were disabled in the making of…!).

The need for one such optimisation arises because he is fed up with being pestered to move interface configuration lines from one port on a switch to another port on another switch when he has more pressing and proactive work to do; there's a team dedicated to moving IT equipment between desks/buildings, and the network changes are the only part of the process where they need to involve someone else.  He starts by defining some smart port macros for different types of endpoints and pushes them out to every switch.  These macros have variables in them, most notably for the VLAN(s) and the L2 security toolkit options that should be configured – different endpoints, for many reasons, sit in different segments.  This makes the process a little faster and more standardised, but the network team are still involved in this routine and basic task.  He then works with a developer in the IT team to write a web-based application (after evaluating the market for such a tool… which of course would have to be zero or next-to-zero cost…).  This application gives the 'move team' their own means of doing the same task, notably without involving said "boy" or his colleagues.  The app user selects switches plus pre-authorised port numbers; the app accesses those switches, defaults the port configurations and, lastly, applies a macro with the relevant variables defined.  With the advent of app/server virtualisation and shared-services initiatives, the same app becomes relevant to work in the DCs too.
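For a flavour of what that tool boiled down to on a per-port basis, here's a simplified sketch (the macro name, variable and interface are illustrative, and actually pushing the lines to a switch – e.g. via an SSH/automation library – is left out):

```python
# Simplified sketch of the 'move team' app's per-port job: default the
# interface, then re-apply a standard smartport-style macro with the VLAN
# variable filled in.  This only builds the CLI lines; it does not talk to
# a switch.  Macro, variable and interface names are illustrative.

def build_port_move_config(interface: str, macro_name: str, vlan: int) -> list[str]:
    """Return the config lines to reset a port and apply an endpoint macro."""
    return [
        f"default interface {interface}",            # wipe whatever was there
        f"interface {interface}",
        f" macro apply {macro_name} $VLAN {vlan}",   # fill in the macro variable
        " no shutdown",
    ]


if __name__ == "__main__":
    for line in build_port_move_config("GigabitEthernet1/0/10", "DESKTOP", 20):
        print(line)
```

Even in this form the shape of the problem is visible: a static, device-specific script with a couple of variables swapped in at the point of change.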

What he didn't appreciate at the time was that he and his colleagues were providing a five-star example of what was wrong with the provisioning and changing of ICT services.  Work-day time, personal time, overtime; time and money put into customising against, and optimising, a very basic process that had quite possibly been solved by someone else already, or was a standard need everywhere.  He also later found out, while blogging at http://rbcciequest.wordpress.com, that one of the config lines in one of the macros was incorrect.  How?!  He knew the switches and their options inside out, he'd tested it, he'd asked an expert… human error was obviously still a possibility, and now it was being repeated within a non-standard app.  There are many other stories to tell, especially when looking at the to-ing and fro-ing in the DC in the aftermath of this period!

The business's simple requirement was to move staff.  An app had been built; the app had a requirement of the network; and that requirement was for the app itself to trigger configuration changes across the network by accessing multiple touch points and then dropping a static script written in a network-device-specific language (CLI), replacing variables at a point in time.  It was all too complicated and specific.  Yet it was still beneficial and worthwhile.

It's now 2014.  The apps are more complicated and demanding, the criticality of ICT to core business is on a different level compared to then, uptime is vital, and there are often much bigger inefficiencies than the one described above.  Data centres are at the core of ICT services, and ICT is an enabler (and disabler!) of business more than ever before.  We've got a lot to look forward to: much non-trivial learning has been brought with us from the past, and the top-down push of cloud consumption models shows in the innovation and [application programming] interfaces that are here today.  This blog is about looking at this new wave of service consumption models, technologies and dedicated solutions.  Let's cross the chasm and destroy the hyperbole!

FYI: the next couple of posts will hold a theme of 'Real-world programmability across the DC stack'.  They also won't be written in the 3rd person!
