
Model-based?

“Model-based” sounds like an IT marketing term if ever I heard one.  It has a systematic, structured and business-like ring to it…

[Image: mr-burns-picture]

If you asked one hundred different people what model-based means as far as data centre IT systems and their management are concerned, you’d get… hmmm… I’d guess about… 63 different answers?  One of which would be noted down as a catch-all ‘not sure’ covering 38 of the people!?

Even with this in mind, I think we’ve actually touched on a system that comes as close as it gets to ‘model-based’ in the data centre without causing the person stating the point to blush with a ‘did I really just say that?’ feeling!
Cisco Unified Computing System (UCS), a system comprised of HW+SW, empowers by providing re-usable, hierarchical ‘building block’ items; the linkages between those items frame HW+SW infrastructure delivery in a differentiated, yet uniform, way.  Put another way, it’s what would generally be recognised as sound SW development technique applied to provisioning and monitoring actions against both the SW and HW of a compute environment.  We touched on the UCS PowerTool’s hook-in (MS PowerShell) to the UCS XML API in the last post, and that’s essentially the starting point for this one.
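As a reminder of that hook-in, here’s a minimal sketch of opening a PowerTool session.  The module name and address are illustrative assumptions (the module name in particular varies between PowerTool releases):

    # Load the UCS PowerTool module and open an HTTP[S] session to UCS Manager.
    Import-Module CiscoUcsPS                                   # module name varies by release
    $cred   = Get-Credential                                   # prompts for UCSM credentials
    $handle = Connect-Ucs -Name "ucsm.example.local" -Credential $cred

    # ...provisioning calls (like the ones further below) go here...

    Disconnect-Ucs -Ucs $handle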

So, let’s take a look at an image from the last post:

[Image: UCS PowerShell Network]

We essentially have an application above, written in PowerShell.  The application’s purpose is to instruct a UCS system, via its HTTP-based API (‘connection-to-UCS’ lines are omitted), to create policy whose characteristics are defined, using variables, by the application’s user.  Some of those variables refer to the pre-defined ‘modular and re-usable building blocks’ mentioned -> e.g. an “MSWin-Ethernet” QoS Policy can be picked out in the above, and it may be one of many pre-defined QoS policies…  We have an example ‘boxed-up’ automation of a provisioning action, one which induces a shift towards a provisioning model that aligns to (a sketch of such a call follows the list):

  • Speeding up the deployment and re-purposing of infrastructure.
  • Increased accuracy and utilisation of infrastructure.
  • Reduced downtime and Mean Time to Repair (MTTR) by modelling using dependencies of lower-level items -> i.e. packaged, abstracted and portable.
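To make that concrete, here’s a hedged sketch of the kind of PowerTool call the application wraps up.  Apart from “MSWin-Ethernet”, every name and value below is an illustrative assumption, and parameter names mirror the underlying model’s attributes, so check Get-Help against your PowerTool release for the exact set:

    # Assumes an existing session ($handle) from Connect-Ucs.
    # Create a vNIC template that references the pre-defined "MSWin-Ethernet"
    # QoS policy as one of its re-usable 'building blocks'.
    Get-UcsOrg -Level root |
        Add-UcsVnicTemplate -Ucs $handle `
            -Name "eth-MSWin" `
            -QosPolicyName "MSWin-Ethernet" `
            -SwitchId "A" `
            -TemplType "updating-template"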


However, the application itself is making use of a number of ‘code packages’, or modules, that simply aren’t visible above.  In this case it’s PowerShell, and the commands that PowerShell modules package up are called cmdlets.  cmdlets are generally compiled before you download and use them, and they essentially provide the portability + simplification that makes PowerShell so useful and beneficial to [Data Centre] Infrastructure SMEs.  To a certain extent, they can help bridge the gap between the expertise of [Data Centre] Infrastructure SMEs and that of Software Development/Programming SMEs.
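If you want a feel for how much of the UCS model those cmdlets cover, PowerShell’s own discovery commands will do; the module name here is, again, an assumption that varies by PowerTool release:

    # Count and inspect the cmdlets the UCS PowerTool module exposes.
    Get-Command -Module CiscoUcsPS | Measure-Object       # rough cmdlet count
    Get-Command -Module CiscoUcsPS -Noun UcsQosPolicy     # just the QoS policy ones
    Get-Help Add-UcsServiceProfile -Examples              # built-in usage examples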

To provide a view of what’s going on under the hood, take a look at the following:

1. An example PowerShell line (create Service Profile Template – 1st PS line; a stand-in sketch follows after point 3):

[Image: SPTPowerShell]

2. ‘Expanded’ representation:

[Image: ModelBasedBreakdown]

3. Linkages between the expanded items and the underlying UCS model:

[Image: Model Linkage]
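The actual PS line from point 1 lives in its image; as a stand-in for anyone who can’t see it, a representative sketch of such a line might look like this — every name and value here is an assumption, not a transcription:

    # Create a Service Profile Template under org-root (illustrative values).
    Get-UcsOrg -Level root |
        Add-UcsServiceProfile -Ucs $handle `
            -Name "SPT-Example" `
            -Type "updating-template"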

Now, if we wanted to work back the other way, in a way that could be thought of as closer to a developer’s viewpoint, we need to be aware that the cmdlet referenced above is, in essence, part of a wider action.  That action is to set up an HTTP[S] session with the UCS system and then send pre-crafted XML, with the text defined above inserted as variables.  To really see ‘under the hood’ we need visibility of the XML that the cmdlet is actually causing PowerShell to send.
There are ways to do this.  One way is to capture what gets passed between UCS Manager and a UCS system’s API.  The easiest option for capturing this XML goes by the name of “goUCS”, and I’ll be showing a use of that particular tool in my next post.  At least, I assume you would prefer not to be spending hours with Wireshark captures… or indeed looking at some that somebody else has done…

The XML gives us the ‘raw code’ to tap into an underlying model-based environment -> in this case, a Service Profile Template along with its ‘building block’ dependents.  The XML and the underlying system-level abstraction are the magic stuff!
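For illustration only, here’s a hand-written approximation of the kind of payload that flows over the wire for the template above — the DN, names and cookie are all made up, so capture the real exchange with goUCS before trusting any of it:

    <!-- Log in first: aaaLogin returns the outCookie used below. -->
    <aaaLogin inName="admin" inPassword="example-password" />

    <!-- Then push the Service Profile Template as a managed object. -->
    <configConfMo cookie="REAL-COOKIE-HERE" dn="org-root/ls-SPT-Example"
                  inHierarchical="false">
      <inConfig>
        <lsServer dn="org-root/ls-SPT-Example"
                  name="SPT-Example"
                  type="updating-template" />
      </inConfig>
    </configConfMo>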

What’s the bigger picture?  It’s not enough to ‘software-define’ systems simply by adding common/standard ways of interfacing and communicating with them; there must also be links in place between the ‘instructions’ sent and the different hierarchies of variable items underneath the interface.  An inability to add new ‘links’ to unique/innovative attributes that sit at the hardware level will limit future innovation and restrict a sustained direction of statelessness and abstraction.  Commoditisation shouldn’t be an end goal in itself; it should be part of a wider plan.  Adding an HTTP-based SOAP/REST API and expecting it to mask the fundamental architecture of a system is essentially flawed.  To sum up: ‘Software-defined’ will not always necessitate a ‘Software-only’ mindset, nor always be benefited by one.


A blogger’s perspective (1st post)…

The year is 2003 and “the boy” is handed his first view of console access to a Cisco switch by a chap going by the name of Mr Ken Worthy.  He gets shown a basic configuration and then a mission starts: a mission to know every little detail about that and every other switch and router they have, along with their capabilities; to nail down the perfect configurations for the particular organisation he works for; and to make the process of adds, moves and changes as optimised and ‘catalogued’ as possible (some network monitoring sensors were disabled in the making of…!).

The need for one such optimisation arises because he is fed up with being pestered to move interface configuration lines from one port on a switch to another port on another switch when he has more pressing and pro-active work to do; there’s a team dedicated to moving IT equipment between desks/buildings, and the network changes are the only part of the process where they need to involve someone else.  He starts by defining some smart port macros for different types of endpoint and pushes them out to every switch.  These macros have variables in them, most notably for the VLAN(s) and the L2 security toolkit options that should be configured – different endpoints, for many reasons, sit in different segments.  This makes the process a little faster and more standardised, but the network team are still involved in this routine and basic task.

He then works with a developer in the IT team to write a web-based application (after evaluating the market for such a tool… which of course would have to be zero, or next-to-zero, cost…).  This application gives the ‘move team’ their own means of doing the same task, notably without involving said “boy” or his colleagues.  The app user selects switches and pre-authorised port numbers; the app accesses those switches, defaults the port configurations and then, lastly, applies a macro with the relevant variables defined.  With the advent of app/server virtualisation and shared-services initiatives, the same app becomes more relevant to work in the DCs.

What he didn’t appreciate at the time was that he and his colleagues were performing a 5* example of what was wrong with the provisioning and changing of ICT services.  Work-day time, personal time, overtime; time and money put into customising against, and optimising, a very basic process that could well have been done before by someone else, or be a standard need everywhere.  He also later found out, while blogging at http://rbcciequest.wordpress.com, that one of the config lines in one of the macros was incorrect.  How?!  He knew the switches and their options inside out, he’d tested it, he’d asked an expert… human error was obviously still a possibility, and now it was being repeated within a non-standard app.  There are many other stories to tell, especially when looking at the to-ing and fro-ing in the DC in the aftermath of this time!

The business’ simple requirement was to move staff.  An app had been built; the app had a requirement of the network; the requirement was for the app itself to trigger configuration changes across the network by accessing multiple touch points and then dropping a static script, written in a network-device-specific language (CLI), with variables replaced point-in-time.  It was all too complicated and too specific.  Yet it was still beneficial and worthwhile.

It’s now 2014: the apps are more complicated and demanding, the criticality of ICT to core business is on a different level compared to then, uptime is vital, and there are often much bigger inefficiencies than the one described above.  Data Centres are at the core of ICT services, and ICT is an enabler (and disabler!) of business more than ever before.  We’ve got a lot to look forward to; much non-trivial learning has been brought with us from the past, and the top-down push of cloud consumption models is displayed in the innovation and [application programming] interfaces that are here today.  This blog is about looking at this new wave of service consumption models, technologies and dedicated solutions.  Let’s cross the chasm and destroy the hyperbole!

FYI: the next couple of posts will hold a theme of ‘Real-world programmability across the DC stack’.  They also won’t be written in the 3rd person!
