
Model-based?

“Model-based” sounds like an IT marketing term if ever I heard one.  It has a systematic, structured and business-like ring to it…

mr-burns-picture

If you asked one hundred different people what model-based means as far as data centre IT systems + management is concerned you’d get…hmmm… I’d guess about… 63 different answers?  1 of which would be noted down as a catch-all ‘not sure’ covering 38 of the people!?

Even with this in mind, I think we’ve actually touched on a system that comes as close as it gets to ‘model-based’ in the data centre without causing the person stating the point to blush with a ‘did I really just say that’ feeling!
Cisco Unified Computing System (UCS), a system comprised of HW+SW, empowers its users by providing re-usable and hierarchical ‘building block items’, with linkages between those items framing HW+SW infrastructure delivery in a differentiated, yet uniform, way.  Put another way, it’s what would generally be seen as sound SW development techniques applied to provisioning and monitoring actions against both the SW and HW of a compute environment.  We touched on the UCS PowerTool‘s hook-in (MS PowerShell) to the UCS XML API in the last post and that’s essentially the starting point for this one.
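As a quick taster of that hook-in, here’s a minimal sketch of opening a PowerTool session against a UCS Manager instance.  The module name, address and credentials are illustrative assumptions on my part rather than anything lifted from a real environment:

# Import the UCS PowerTool module (the module name has varied between PowerTool releases)
Import-Module CiscoUcsPs

# Open an HTTP[S] session to UCS Manager; the address and credentials are placeholders
$cred   = Get-Credential
$handle = Connect-Ucs -Name "ucsm.example.local" -Credential $cred

# ...provisioning and monitoring cmdlets would be issued against this session here...

# Close the session when finished
Disconnect-Ucs -Ucs $handle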

So, let’s take a look at an image from the last post: UCS PowerShell Network

What we essentially have above is an application written in PowerShell.  Its purpose is to instruct a UCS system, via its HTTP-based API (‘connection-to-UCS’ lines are omitted), to create policy whose characteristics are defined through variables supplied by the application’s user.  Some of those variables refer to the pre-defined ‘modular and re-usable building blocks’ mentioned earlier -> e.g. a “MSWin-Ethernet” QoS Policy can be picked out in the above, and it may be one of many pre-defined QoS policies…  In other words, we have an example of a ‘boxed-up’ automation of a provisioning action, one which induces a shift towards a provisioning model that aligns to the points below (a rough sketch of the variable-driven pattern follows the list):

  • Speeding up the deployment and re-purposing of infrastructure.
  • Increased accuracy and utilisation of infrastructure.
  • Reduced downtime and Mean Time to Repair (MTTR) by modelling against the dependencies of lower-level items -> i.e. packaged, abstracted and portable.
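To make the ‘static text plus variables’ idea a little more concrete, here’s a rough, hedged sketch of the pattern.  The org, template and policy names are invented for illustration, and the parameter names follow the usual PowerTool convention of mirroring the underlying managed-object properties, so treat them as indicative rather than gospel:

# Variables supplied by the application's user
$qosPolicyName = "MSWin-Ethernet"   # a pre-defined, re-usable QoS policy 'building block'
$vnicTmplName  = "Win-vNIC-A"       # name for the new vNIC template

# Create a vNIC template under the root org that references the pre-defined QoS policy by name
Get-UcsOrg -Level root |
    Add-UcsVnicTemplate -Name $vnicTmplName -QosPolicyName $qosPolicyName -SwitchId "A"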

 

However, the application itself is making use of a number of ‘code packages’ or modules that simply aren’t visible above.  In this case it’s PowerShell, and the commands packaged within PowerShell modules are called cmdlets.  cmdlets are generally compiled before you download and use them, and they are essentially what provide the portability + simplification that make PowerShell so useful and beneficial to [Data Centre] Infrastructure SMEs.  To a certain extent they help bridge the gap between the expertise of [Data Centre] Infrastructure SMEs and that of Software Development/Programming SMEs.
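If you want to see what a module hands you before writing anything, PowerShell’s own discovery cmdlets do the job.  A small sketch, again assuming the CiscoUcsPs module name:

# Load the UCS PowerTool module and see what it gives us
Import-Module CiscoUcsPs

# Roughly how many cmdlets are in the module, and what do the 'Add' ones look like?
Get-Command -Module CiscoUcsPs | Measure-Object
Get-Command -Module CiscoUcsPs -Verb Add | Select-Object -First 10

# Built-in help and examples ship with each cmdlet
Get-Help Add-UcsServiceProfile -Examples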

To provide a view of what’s going on under the hood, take a look at the following:

1. An example PowerShell line (create Service Profile Template – 1st PS line):

SPTPowerShell

2. ‘Expanded’ representation:

ModelBasedBreakdown

3. Linkages between the expanded items and the underlying UCS model:

Model Linkage
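Since the screenshots don’t reproduce well here, the single PowerTool line in step 1 would look something along these lines.  This is an approximation rather than the exact line in the image, the policy/pool names are made up, and the parameter names mirror the Service Profile (lsServer) properties in the model:

# Create an updating Service Profile Template under the root org,
# referencing pre-defined 'building block' policies and pools by name
Get-UcsOrg -Level root | Add-UcsServiceProfile -Name "ESXi-Gold-SPT" `
    -Type "updating-template" `
    -BiosProfileName "ESXi-BIOS" `
    -BootPolicyName "Boot-From-SAN" `
    -IdentPoolName "UUID-Pool-DC1"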

Now, if I wanted to work back the other way, in a way that could be thought of as being closer to a developer’s viewpoint, we need to be aware that the cmdlet being referenced above would in essence be part of a wider action.  The action would be to set up a HTTP[s] session with the UCS system and then send pre-crafted XML with text defined above inserted as variables.  To really see ‘under the hood’ we would need visibility of the XML that the cmdlet is actually causing PowerShell to send.
There are ways to do this.  One way is to capture what gets passed between UCS Manager and a UCS system’s API.  The easiest option for capturing this XML goes by the name of “goUCS” and I’ll be showing a use of that particular tool in my next post.  At least I assume that you would prefer not to be spending hours with Wireshark captures… or indeed poring over ones that somebody else has done…

The XML gives us the ‘raw code’ to tap into an underlying model-based environment -> i.e. in this case it’s a Service Profile Template along with its ‘building block’ dependencies.  The XML and the underlying system-level abstraction are the magic stuff!
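For a feel of what that ‘raw code’ looks like, here’s a hedged sketch of the sort of XML a configuration request carries, wrapped in PowerShell so you can see how it would be posted to the API.  The DN, names and endpoint details are placeholders, certificate handling is omitted, and in practice you’d lift the exact payload from a goUCS capture rather than hand-crafting it:

# Cookie obtained from an earlier aaaLogin exchange (placeholder here)
$cookie = "<cookie-from-aaaLogin>"

# The kind of XML that creates a Service Profile Template (an lsServer managed object)
$body = @"
<configConfMos cookie="$cookie" inHierarchical="false">
  <inConfigs>
    <pair key="org-root/ls-ESXi-Gold-SPT">
      <lsServer dn="org-root/ls-ESXi-Gold-SPT"
                name="ESXi-Gold-SPT"
                type="updating-template"
                status="created" />
    </pair>
  </inConfigs>
</configConfMos>
"@

# Post it to the UCS Manager XML API endpoint (typically /nuova)
Invoke-WebRequest -Uri "https://ucsm.example.local/nuova" -Method Post -Body $body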

What’s the bigger picture?  It’s not enough to ‘software-define’ systems simply by adding common/standard ways of interfacing and communicating with them; there also need to be links in place between the ‘instructions’ sent and the different hierarchies of variable items underneath the interface.  An inability to add new ‘links’ to unique/innovative attributes that sit at a hardware level will also limit innovation in the future and restrict a sustained direction of statelessness and abstraction.  Commoditisation shouldn’t be an end goal in itself; it should be part of a wider plan.  Adding an HTTP-based SOAP/REST API and expecting it to mask the fundamental architecture of a system is essentially flawed.  To sum up: ‘Software-defined’ will not always necessitate a ‘Software-only’ mindset or be benefited by it.


Run that last bit by me again

Near and far: UCS Service Profiles and Roles

The image above is a simple representation of the ‘true and absolute’ technical convergence that Cisco’s Unified Computing System (UCS) introduced in 2009.  This led to some considerations regarding roles and demarcations between areas of subject-matter expertise (SME) within ICT Departments/Organisations.

Consolidation, rationalisation, convergence… whatever apt/buzz word you want to use, ICT has continuously made use of this general concept to move things forward and be more efficient.  From Cisco’s Architecture for Voice, Video and Integrated Data (AVVID) way back when, to LAN and SAN convergence underpinned by the innovation around Data Center Bridging (DCB) and, to a certain extent, IP-based storage protocol evolution, there are benefits to customers and vendors when moving forward using this general construct.  Vendors can focus their R’n’D, engineering and support efforts on what matters (and also monetise innovation); customers and providers can ‘do more with less’ and more easily adapt to the ever-changing nature of their business or sector.

A couple of general technical themes that slim technology down are 1) modularisation (inc. ‘re-use’) and 2) taking an [often physical] element and emulating it in a new logical form, whether that be abstracted over a [new?] common foundation or by merging two elements, keeping the ‘pros’ of both/all existing paradigms and [hopefully] dropping the things that aren’t so good.

Other than the maturing of these technical shifts, humans are without doubt the main hurdle to deal with.  If we take Voice, Video and Data convergence in the ‘noughties’, we were taking very distinct areas and bringing them together, with one area appearing more influential; a case of adapt or risk becoming irrelevant -> individuals with positive and/or negative intent went against the grain…  Back in the DC, UCS didn’t necessitate anything quite as drastic as that, but there is/was potentially at least some blurring of the lines.

One point of control, three areas of expertise… you choose the demarcation lines between humans (if any): UCSM1

 

Holistically speaking:

In addition to some obvious reasons why severe changes around the alignment + skills of people weren’t needed when adopting UCS, there was also a shift in how we interfaced with the infrastructure… and that’s really the crux of this post and what will make new systems and market shifts easier and easier to adopt…

Skills Meeting

UCS introduced a clear single point of control with an associated API for Compute, LAN (Access) and SAN (Edge/Initiator).  Beyond the obvious consumers of this API (Unified Computing System Manager itself, i.e. the tabs above, and other mainstream software packages with a wider remit), we have seen ‘raw’ applications of the native HTTP-based interface and also some adoption of a Microsoft PowerShell option that wraps common API calls into “cmdlets”.
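To illustrate the ‘one point of control’ idea, here’s a small sketch of a single PowerTool session being used to read across all three areas.  The property names picked out by Select-Object are indicative; check the actual objects returned in your environment:

# One session against UCS Manager...
$handle = Connect-Ucs -Name "ucsm.example.local" -Credential (Get-Credential)

# ...three traditionally separate areas of expertise queried the same way
Get-UcsBlade | Select-Object Dn, Model, NumOfCpus   # Compute inventory
Get-UcsVlan  | Select-Object Name, Id               # LAN (Access) segments
Get-UcsVsan  | Select-Object Name, Id               # SAN (Edge/Initiator) fabrics

Disconnect-Ucs -Ucs $handle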

One of the notable differences between convergence today and the convergence of the past is the ‘alleviation’ offered by programmability and standardised scripting + automation.  Taking a broader look at expertise areas, there has been a ‘meet in the middle’ occurring between Infrastructure teams and Programming & Development teams (i.e. not only within the infrastructure bit).  That effort encompasses skills development around common, universal ways of working that make people from different ‘infrastructure’ SME backgrounds more alike than they were in the past.

i.e. Less of this 😉 (image courtesy of a very talented colleague…):

Traditional Roles

 

Ok ok I get it!… an example please?

Let’s take the creation of a UCS Service Profile.  I’m a Network SME… I might create these items so that they can be used within one or many Service Profiles (a hedged PowerTool sketch of a few of them follows the list):

  • A new org/container.
  • Segments (aka VLANs today) to be supported northbound of UCS and made available within the system.
  • MAC Address Pools – Using ‘my own’ prefixes so that I can identify zones/workload-types in a granular and structured way vs. standard non-hierarchical defaults.
  • virtual Network Interface Card (“vNIC”) templates and their associated characteristics such as the VLANs trunked to the OS/Hypervisor, QoS policy, pinning policy, etc.
  • “Dynamic Connection Policies” to bring together a multi-vNIC connection profile that can be associated with a given x86 node/service profile/service profile template (e.g. ‘I want those pre-defined 6 x vNIC templates and those pre-defined 2 x vHBAs as an over-arching template’).
  • etc.
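Here’s that hedged sketch of a few of the Network SME items above.  Names, IDs and MAC ranges are invented for illustration, the cmdlet parameter names mirror the underlying managed-object properties, and in a real environment you’d sanity-check each line against Get-Help first:

# A new org/container for this line of business
$org = Get-UcsOrg -Level root | Add-UcsOrg -Name "Prod"

# A VLAN made available within the system (defined against the LAN cloud)
Get-UcsLanCloud | Add-UcsVlan -Name "App-Tier" -Id 110

# A MAC address pool using 'my own' prefix for identifiable, structured addressing
$macPool = $org | Add-UcsMacPool -Name "Prod-MACs"
$macPool | Add-UcsMacMemberBlock -From "00:25:B5:0A:00:00" -To "00:25:B5:0A:00:FF"

# A vNIC template referencing a pre-defined QoS policy by name
$org | Add-UcsVnicTemplate -Name "Prod-vNIC-A" -QosPolicyName "MSWin-Ethernet" -SwitchId "A"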

I’m a Compute SME… I will make use of the items created by the Network SME (and others from a Storage SME) and add to them to complete a Service Profile (or SP Template), again with a rough sketch after the list:

  • UUID Pools – Using ‘my own’ prefixes so that I can identify zones/workload-types in a granular and structured way vs. less-structured ‘burned-in’ defaults.
  • Re-usable BIOS policies for different workloads.
  • Boot-order configuration templates inc. boot-from-SAN for different workloads.
  • Full firmware packages.
  • A Service Profile Template including an option from each of the above and a pre-defined dynamic connection policy (or selected vNIC/vHBA templates).
  • Individual Service Profiles spawned from a Service Profile template.
  • etc.
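And an equally hedged sketch of the Compute SME’s side, layered over the Network SME’s items.  Every name is invented, and the exact cmdlet/parameter spellings should be confirmed against the PowerTool help for your release:

# Work within the org the Network SME created
$org = Get-UcsOrg -Name "Prod"

# A UUID pool using 'my own' prefix/suffix structure
$org | Add-UcsUuidSuffixPool -Name "Prod-UUIDs"

# Re-usable BIOS and boot policies for a given workload type
$org | Add-UcsBiosPolicy -Name "LowLatency-BIOS"
$org | Add-UcsBootPolicy -Name "Boot-From-SAN"

# A Service Profile Template that pulls one option from each of the above
$org | Add-UcsServiceProfile -Name "SQL-Gold-SPT" -Type "updating-template" `
    -IdentPoolName "Prod-UUIDs" `
    -BiosProfileName "LowLatency-BIOS" `
    -BootPolicyName "Boot-From-SAN"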

However, if both SMEs wished to interface using the UCS HTTP-based API they could adopt an approach using Microsoft PowerShell (aka “UCS PowerTools” in this case).  Here’s a subset of configuration from each of the lists above:

Network SME:

UCS PowerShell Network

Compute SME:

UCS PowerShell Compute

It all looks pretty consistent to everyone involved now, doesn’t it?  Static text mixed with variables for the bits that we want to define… all ‘translatable’, on a read-through, to most of those involved.  The same would apply if we used XML/JSON and a REST/SOAP mechanism instead… which I will detail further in my next post (a bit too much for this one).  These common and universal ways of interfacing with the system(s) can often make it easier for people from different backgrounds to interpret what other SMEs are having to consider and therefore configure.  The view is of the ‘basic requests’ and not of the complexity associated with the old/existing views into technology silos… inc. GUIs and the frightening introductory view that they give!

A blogger’s perspective (1st post)…

The year is 2003 and “the boy” is handed his first view of console access to a Cisco switch by a chap going by the name of Mr Ken Worthy.  He gets shown a basic configuration and then a mission starts: a mission to know every little detail about that switch and every other switch and router that they have, along with their capabilities, to nail down the perfect configurations for the particular organisation that he works for, and to make the process of adds, moves and changes as optimised and ‘catalogued’ as possible (some network monitoring sensors were disabled in the making of…!).

The need for one such optimisation arises because he is fed up with being pestered to move interface configuration lines from one port on a switch to another port on another switch when he has more pressing and pro-active work to do; there’s a team dedicated to moving IT equipment between desks/buildings and the network changes are the only bit of the process where they need to involve someone else.  He starts by defining some smart port macros for different types of endpoints and pushes them out to every switch.  These macros have variables in them, most notably for the VLAN(s) and L2 security toolkit options that should be configured – different endpoints, for many reasons, sit in different segments.  This makes the process a little faster and more standardised, but the network team are still involved in this routine and basic task.  He then works with a developer in the IT team to write a web-based application (after evaluating the market for such a tool… that of course would have to be zero or next to zero cost…).  This application will give the ‘move team’ their own means of doing the same task, notably without involving said “boy” or his colleagues.  The app user selects switches + pre-authorised port numbers and the app accesses those switches, defaults the port configurations, and then, lastly, applies a macro with the relevant variables defined.  With the advent of app/server virtualisation and shared services initiatives, the same app becomes more relevant to work in the DCs.

What he didn’t appreciate at the time was that he and his colleagues were performing a 5* example of what was wrong with the provisioning and changing of ICT services.  Work-day time, personal time, overtime; time and money put into customising-against and optimising a very basic process that could well have been solved before by someone else, or be a standard need everywhere.  He also later found out, while blogging at http://rbcciequest.wordpress.com, that one of the config lines in one of the macros was incorrect.  How?!  He knew the switches and their options inside out, he’d tested it, he’d asked an expert… human error was obviously still a possibility, and now it was being repeated within a non-standard app.  There are many other stories to tell, especially when looking at the to-ing and fro-ing in the DC in the aftermath of this time!

The business’s simple requirement was to move staff.  An app had been built; the app had a requirement of the network; that requirement was for the app itself to trigger configuration changes across the network by accessing multiple touch points and then dropping a static script written in a network-device-specific language (CLI), replacing variables at that point in time.  It was all too complicated and specific.  Yet, it was still beneficial and worthwhile.

It’s now 2014, the apps are more complicated and demanding, the criticality of ICT to core business is on a different level compared to then, uptime is vital and there are often much bigger inefficiencies than the one described above.  Data Centres are at the core of ICT services, and ICT is an enabler (and disabler!) of business more than ever before.  We’ve got a lot to look forward to, much non-trivial learning has been brought with us from the past, and the top-down push of cloud consumption models is reflected in the innovation and [application programming] interfaces that are here today.  This blog is about looking at this new wave of service consumption models, technologies and dedicated solutions.  Let’s cross the chasm and destroy the hyperbole!

FYI: the next couple of posts will hold a theme of ‘Real-world programmability across the DC stack’.  They also won’t be written in the 3rd person!
