The Autonomous Data Center – CIO’s End Goal

Most progressive CIOs want out of the IT market, at least the IT market as we have known it for the past few decades. CIOs desire the autonomous data center: a data center that effectively operates without the constant supervision of storage managers, network managers, database managers, and Outlook managers. They want what the public cloud provides, yet on their terms. Employees focused on applications, not servers. Skilled IT personnel focused on workloads, not spinning disks. Ultimately, they want cloud administrators versed in how IT can respond more quickly to the company’s needs. They want out of the legacy architecture space and into the autonomous data center.

A realistic expectation? Yes, and it’s coming quicker than you might think. Go back only a few short years and look at how applications were provisioned compared to today, and you recognize the exponential pace of automation. Many small to medium-sized businesses run entirely in the cloud. Large enterprises are looking to this model as well, just not in an entirely public-cloud-dependent form. Data centers are increasingly reliant not on the hardware, but on the software. Software Defined Everything. And as software becomes more intuitive, the role of the traditional IT staff will evolve. Ultimately, the software will manage applications and workloads based on parameters of compute, storage, availability, compliance, and BC/DR requirements.

The autonomous data center will be a reality. Are you building toward it?

Private Cloud – Entering the “Slope of Enlightenment”?

Many of you are aware of Gartner’s Hype Cycle and its relevance to technology trends.  I’ve been speaking with analysts, researchers, and customers trying to understand where private cloud should be positioned.  After a lull in private cloud development in 2014, we’re seeing the potential beginnings of the “Slope of Enlightenment” for 2015.  For reference, here is how the hype cycle generally works.

Hype Cycle

As time evolves, new technology ideas or products gain a lot of attention (hype) and enter what’s called the “Peak of Inflated Expectations”.  After the attention starts to wane, the new technology tends to get less visibility as companies start to examine how adoption will work for them; referred to as the “Trough of Disillusionment”.  As companies see competitors utilizing the new technology, increased adoption occurs and we enter into the “Slope of Enlightenment”. At some point, the technology becomes widely adopted and enters what is referred to as the “Plateau of Productivity”.

Although I anticipated that 2014 would represent a coming-out party for private cloud adoption, it appears that the significant advances and offerings in the public cloud space actually limited the adoption curve.  And although public cloud adoption has by no means hit a plateau, investment in private cloud will pick up in 2015 as more companies try to reproduce the benefits of public cloud in their own data centers.  We may just be catching the wave of the “Slope of Enlightenment” in private cloud.

Private Cloud Levers

In building your private cloud, there are many trade-offs and determinants that must be considered. In working with customers, I’ve identified the five main levers that go into any company’s decision criteria: Agility, Efficiency, Standards, Governance, and Security.

Agility – Key to all decisions is “how do I create agility for the organization?” How empowered do I want to make other departments to stand up applications or gain access to compute resources? Recognize that if I don’t create an avenue for greater agility for the business, the business will find it elsewhere.
Efficiency – At the very least, I’m trying to do more with less across my IT infrastructure. Creating greater operational efficiency within my IT organization enables me to offer greater agility to the business.
Standards – Do I have the manpower and expertise to adopt open standards, or do I prefer the conveniences of established platforms like VMware or Microsoft? Open standards can provide lower-cost licensing, but do I pay more to bring the expertise in-house when I’ve already invested money and training in the established players?
Governance – On top of everything, I want control. I want to provide for agility, I want to maintain efficiency, but ultimately, I have to manage IT. Who has access, how they get access, and what they have access to ultimately need to reside in my domain.
Security – I want to support the business’s need to move at the speed of the web, but I need to understand the trade-offs among different clouds and where my data lies. Security probably sits at the top of IT’s interests, but it is of less concern to the departments. They assume it’s taken care of. But security ultimately falls on IT.

The value and importance of each lever will differ for each company. Understanding these levers and their respective trade-offs will be key in building your private cloud.

Private Cloud OS – Fundamental Shift?

In speaking with a number of customers lately about their intentions for a private cloud OS, the beginnings of a fundamental shift may be occurring.  Customers are considering moving away from VMware to either Microsoft or OpenStack for their cloud OS.  The recurring theme is a perception of a more cost-efficient and simplified hypervisor licensing structure, a perceived simpler path to hybrid cloud, or a general interest in adopting a more open-standards position.

To be fair, this is only preliminary insight, and moving away from the cloud OS of choice (and, by extension, the hypervisor) is a significant change for the enterprise, but a change that many are starting to experiment with in their data centers.

Many customers I speak to cite Windows Server 2012, coupled with its integration with Microsoft Azure, as having the features and functionality to give them a path to an integrated hybrid cloud.  They see a path to eliminating the cost of functionality they already own on premises with Server 2012.

Equally, customers who have migrated applications to Linux recognize that the same trajectory is in place for OpenStack.  Take a look at enterprise job boards for open-standards developers, specifically OpenStack developers, and you can begin to see the trend.  Enterprises are looking to add open-standards expertise to their data centers.

Whether this fundamental shift will occur, or whether market forces will interrupt it, remains to be seen.  Microsoft, which holds more than 70% market share in the data center OS space, is doing everything it can to provide an efficient and agile path to hybrid clouds.  Red Hat is quickly putting into place the capability for rapid adoption of Red Hat OpenStack.  Both enjoy an advantage over VMware due to their connection to the data center OS.  VMware owns the market share of the hypervisor market, but will that be enough to control the private cloud OS as well?

It’s about to get interesting.

OpenStack for the CXO

 

Trend:  The momentum generated by cloud, and more specifically OpenStack, has gained the interest of IT leadership due to the premise of being built on an open-standards platform and the promise of significantly reduced licensing fees in the data center.  OpenStack nomenclature, however, can be confusing to the uninitiated and downright a second language to the average CXO.  OpenStack uses code names for each release and a specific taxonomy for the services offered.  As of April 2014, we are on the Icehouse release, which replaced the Havana release, with Juno due in October.  Don’t look for logic here.

Define:  OpenStack is to cloud as Linux is to the enterprise operating system.  Where in the past you had Microsoft, UNIX, AIX, Solaris, HP-UX, plus a few other OSs, today only Microsoft and Linux survive and thrive.  In the cloud OS space, you have a choice of three: Microsoft, VMware, and OpenStack (CloudStack is dead – sorry).  This is not to imply that OpenStack is as mature as Microsoft or VMware.  OpenStack is rapidly maturing and is reaching the point of leading-edge adoption, but it is not yet in the mainstream of enterprise adoption.

Services:  OpenStack can be viewed in terms of the services it runs.  The three main services are Compute, Networking, and Storage (see the OpenStack diagram below).  These services are often referred to by their code names: Compute (Nova), Networking (Neutron), and Storage (Swift – Object Storage and Cinder – Block Storage).

OpenStack Diagram

Source: http://www.openstack.org/software/

 

Compute (Nova) provides the compute space where your applications will be developed, provisioned, and run.  It allows you to provision and manage large networks of virtual machines and ramp up more VMs as necessary to handle increased workload demand.
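
To make that concrete, here is a minimal sketch of booting a single VM against Nova, assuming the openstacksdk Python library and a pre-configured “mycloud” entry in clouds.yaml; the image, flavor, and network names are hypothetical placeholders for your environment.

```python
# A minimal sketch of provisioning a VM through Nova, assuming openstacksdk
# and a "mycloud" entry in clouds.yaml (all names are hypothetical).
import openstack

conn = openstack.connect(cloud="mycloud")

# Look up an image, flavor, and network by name (placeholders).
image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private-net")

# Boot the server and wait for it to become ACTIVE.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```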

Networking (Neutron) manages networks and IP addresses and ensures that the network will not be the bottleneck or limiting factor in a cloud deployment.  Spinning up compute clusters requires an intelligent network that can actively adapt to your dynamic compute environment.
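
A comparable Neutron sketch, again assuming openstacksdk; the network and subnet names and the CIDR are hypothetical.

```python
# A minimal sketch of creating a tenant network and subnet through Neutron,
# assuming openstacksdk (names and addressing are hypothetical).
import openstack

conn = openstack.connect(cloud="mycloud")

# An isolated network for the application tier.
network = conn.network.create_network(name="app-tier-net")

# A subnet so instances on that network receive IP addresses.
subnet = conn.network.create_subnet(
    network_id=network.id,
    ip_version=4,
    cidr="10.10.0.0/24",
    name="app-tier-subnet",
)
print(network.id, subnet.cidr)
```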

OpenStack utilizes a slightly different method of storing data: Block (Cinder) and Object (Swift).  Block-based storage is utilized for performance-sensitive scenarios (databases), while object storage replaces the traditional file storage system with a more robust, distributed storage system.
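
The contrast shows up in code as well; a minimal sketch assuming openstacksdk, with the volume size, container, and file names as hypothetical placeholders.

```python
# A minimal sketch contrasting a Cinder block volume with a Swift object,
# assuming openstacksdk (sizes and names are hypothetical).
import openstack

conn = openstack.connect(cloud="mycloud")

# Block storage (Cinder): a raw 50 GB volume to attach to a database server.
volume = conn.block_storage.create_volume(size=50, name="db-data")

# Object storage (Swift): a container holding an unstructured backup object.
conn.object_store.create_container(name="backups")
with open("db-dump.sql.gz", "rb") as f:
    conn.object_store.upload_object(
        container="backups",
        name="db-dump-2014-10-01.sql.gz",
        data=f.read(),
    )
print(volume.id)
```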

The Dashboard (Horizon) provides administrators and users a graphical interface to access, provision and automate cloud-based resources.

There are many more services included with OpenStack including Orchestration (Heat), Image Service (Glance), Telemetry (Ceilometer) and Identity Service (Keystone).

Final Note: OpenStack will quickly become a significant option in the cloud space.  The IT industry has adopted open standards into its data centers, and OpenStack could potentially displace one of the other two cloud OSs over time, as was the case with Linux.  It is still considered a leading-edge technology but will quickly approach mainstream adoption as companies like Intel and Boeing integrate OpenStack into their data center models.

Segmenting the Enterprise Cloud User

Cloud expansion is being driven not by IT, but by everyone but IT.  Gartner predicts that by 2017, CMOs will spend more on IT than CIOs.  Therefore, in looking at how to speak cloud, it’s important to recognize that you have three distinct customer segments to communicate with (and sell to): Organizational Departments, DevOps, and IT.  These three audiences require different messages.  Let’s examine each.

Organizational Departments – Focused on buying solutions that benefit the business.  This is typically seen in purchasing Software as a Service applications, but it’s much broader.  In the past, it was engineering departments purchasing departmental servers for specific computational needs. Now, it’s the marketing department contracting with cloud-based companies to gain increased insight into customer purchases, assist with social media marketing, drive mobile marketing, and manage business intelligence, CRM, e-commerce, and so on.  They are trying to find the best ways to market to customer segments like millennials, and IT is not their first stop for finding solutions.  Organizational Departments speak solutions, not platforms and infrastructure.  The key conversation needs to revolve around how each solution needs to integrate to gain maximum value for the organization.

DevOps – Previously thought of as application developers, DevOps is a more IT-independent group focused on building applications that positively impact the business. Long ago, they discovered that IT was not responsive in delivering a timely compute environment and quickly turned to the cloud for computational cycles.  One critical point to recognize is that applications developed in an environment external to IT often don’t work “as developed” in the internal IT environment and therefore remain in the cloud forever; no different from applications built on the mainframe decades ago.  DevOps speaks platforms for creating solutions, not infrastructure.  Key conversations revolve around how applications developed on one platform will work with and integrate into the corporate platform.

IT – Focused on the infrastructure and the technology, not on how that technology is part of the corporate solution to drive business value.  IT has been in the business of managing the infrastructure for so long that it has missed the fact that the infrastructure is secondary to the applications driving business value.  And those applications don’t necessarily need to live in IT.  IT, however, is still the logical point for housing many corporate applications.  IT ultimately has the insight and vision for how mobility, security, and corporate governance can impact and enhance corporate applications and, ultimately, business solutions.  IT speaks infrastructure, but not necessarily solutions.  Key conversations need to revolve around how IT can become a service to the organization.

Ultimately, IT needs to evolve into IT as a Service (ITaaS).  But for most, that evolution is only just beginning.  Until then, it’s critical that you segment your messaging to each key player, focused on their primary drivers and motivations, with a view toward solutions integration, platform integration, and ITaaS.

Designing the Private Cloud – Top Down or Bottom Up

In the quest to build a private cloud, it’s important to note how the individual parts fit.  Designed well, you’ve created an automated, cloud-optimized data center allowing the enterprise to respond to the dynamic pace of the changing business landscape.  Designed poorly, you’ve added another layer of data center controls, further emphasizing IT as a cost center, not a business enabler.

There are two ways to go about the design of your private cloud.  Top down, choosing an orchestration component like OpenStack and building out with commodity parts; or bottom up, utilizing hardware components already in place and determining the best orchestration layer to fit those components.  For many large enterprises, the typical choice is bottom up: utilizing the existing architecture and infrastructure already in the production environment and centering on the corporate-standard hypervisor and embedded staff expertise.  Others, however, have shed any preconceived notions of their existing infrastructure and are building from the ground up, based on open standards, with a plan to migrate existing applications (or build all-new applications) to function in the open-standards cloud environment.  For purposes of this article, I’m going to focus on the bottom-up approach.  Information on building an OpenStack environment can be found at OpenStack.org and will be discussed in a future article.

 Cloud Management Picture

Virtualization-enabled Core Infrastructure:  Embedded and entrenched infrastructure can be the cornerstone of your cloud build.  However, that infrastructure must be able to take advantage of the enhanced capabilities of the hypervisor.  Capabilities like vMotion, Live Migration, Replication, VDS, vShield, and virtual switches are only supported on specific vendor hardware.  If your storage, servers, or network can’t take advantage of these capabilities, you pay for unutilized features in both CapEx and OpEx.  Or worse, you pay extra for software and services to make up for the deficiencies in your hardware.

Converged Infrastructure:  A recent trend in the industry is converged infrastructure.  In simple terms, a converged infrastructure (CI) allows you to manage your storage, networks, and servers as a single unit, from a single console, gaining data center automation and management simplification.  With CI, you can provision a VM from a single console, including all storage provisioning, memory allocation, and network connections.  This solution, though, poses the very real risk of vendor lock-in.  Many vendors sell their version of a converged infrastructure that ties you to their compute, network, and storage choices.  They want you to purchase their version of the client-server mainframe.  Vendors need to recognize that customers have entrenched infrastructure that needs to be integrated into the CI, not excluded.  As the consumer, you need to ask how your existing infrastructure will integrate with the CI, and what happens in three years when you need to refresh.  Some vendors currently offer a more open CI while others are moving in that direction.  Caveat emptor.

Virtualization Layer:  The two leaders in virtualization, for better or worse, are Microsoft (Hyper-V) and VMware (ESXi).  To discuss which is better is to get into a political debate.  Beyond these entrenched hypervisors, Xen and KVM stand out as excellent alternatives.  Although Hyper-V and ESXi are feature-rich and the industry leaders in hypervisor technology, Xen and KVM typically don’t carry the same price tag and fit better into a more open-standards environment.  Most enterprises are already invested in the skill set and standardized around one of these four hypervisors, making the decision easy.
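
For teams leaning toward KVM, the libvirt Python bindings give a quick view of what the hypervisor is actually running; a minimal sketch, assuming libvirt-python is installed and a local qemu:///system connection is available.

```python
# A minimal sketch of inspecting a KVM host through libvirt, assuming the
# libvirt-python bindings and a local qemu:///system connection.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print("{0}: {1}".format(dom.name(), state))
finally:
    conn.close()
```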

Orchestration Layer:  Quite simply, the orchestration layer, or cloud layer if you will, provides the five essential characteristics of the cloud.  That doesn’t mean that your cloud must include all five to be considered a cloud; only that a cloud is made up of at least some of these characteristics.  I would argue, though, that a fully functioning private cloud comprises the first three (On-Demand Self-Service, Broad Network Access, Resource Pooling), with only the fourth (Rapid Elasticity) and fifth (Measured Service) being optional.  A sketch of template-driven self-service follows the list below.

Cloud Essential Characteristics (per NIST)

  • On Demand Self Service
  • Broad Network Access
  • Resource Pooling
  • Rapid Elasticity or Expansion
  • Measured Service
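
As promised above, here is a minimal sketch of what on-demand self-service looks like in practice: a user submits a template and the orchestration layer (Heat, in the OpenStack case) provisions the resources.  It assumes openstacksdk, that the orchestration proxy accepts an inline HOT template (as the underlying Heat API does), and hypothetical image and flavor names.

```python
# A minimal sketch of template-driven self-service through Heat, assuming
# openstacksdk; the template contents and resource names are hypothetical.
import openstack

# A tiny HOT template describing what a user may request on demand.
hot_template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "demo_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-14.04",   # hypothetical image name
                "flavor": "m1.small",      # hypothetical flavor name
            },
        }
    },
}

conn = openstack.connect(cloud="mycloud")
stack = conn.orchestration.create_stack(
    name="self-service-demo",
    template=hot_template,
)
conn.orchestration.wait_for_status(
    stack, status="CREATE_COMPLETE", failures=["CREATE_FAILED"]
)
```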

Cloud Management:  It is only when we reach hybrid clouds that cloud management becomes a critical construct.  Cloud management allows for both the management of multiple clouds and the movement of workloads between clouds in an automated fashion.  This is really the holy grail of a strategic, cloud-optimized data center.  The degree to which you can move workloads from one private cloud to another is made possible by the cloud management layer.

 

Ultimately, designing your private cloud is dependent on your entrenched infrastructure and software, the skill set of the team managing the data center, and your future vision.  Whether top down or bottom up, the only wrong choice is status quo.

 

Future Data Centers are Built for Change, Not Built to Last – Gartner

I recently sat through a Gartner presentation “Automation: The Linchpin for Cloud and Data Centers” where I believe they hit upon something profound.  “Future Data Centers are built for Change, Not Built to Last”.  I’ve brought this up with a few large enterprise business leaders and positioned it in this way; if there was a disaster and your entire data center was lost including all applications, how would you design your new data center?  Most when hit with this question sit back, smile, and dream of the possibilities.

To address this, we can look at the start-up market for inspiration.  Most new companies today are completely in the cloud.  They’ve outsourced infrastructure to public cloud providers.  They don’t want to be in the infrastructure business; they want to be in the “thing that creates revenue” business.  As they have grown into mid-size businesses, they are now looking at migrating to hosted private clouds to gain a greater sense of security and operate in more of a hybrid mode.

Large enterprises, for right or wrong, need their infrastructure; how they would approach building out new infrastructure, however, would be vastly different.  Gone would be many of the monolithic compute structures; in would be a more flexible, automated, open-standards environment with applications built for cloud (i.e., built on the premise that infrastructure fails but the application must live on).  They would build a more organic, flexible, automated environment that allows for change.  They would deliver services-centric IT.

So, how can we detach ourselves from the infrastructure of today?  The answer, typically, is that we can’t; at least not immediately.  But we can plan for a future built upon these principles.  To achieve this, we have to adopt platforms, applications, and infrastructure designed to take advantage of a more heterogeneous environment.  We have to envision a future data center that is built to change, not built to last.

The Path to Private Cloud

Whether due to the risk of having corporate data on the public cloud, visions of IT self-service for the organization, the promise of big data and how to pull it all together, or just the recognition that IT must become more relevant to the organization, the move toward private cloud is now.  This movement, though, requires a fundamental shift in how you build out your data center over the next couple of years.

Let’s begin by acknowledging that you are on the public cloud, whether intentionally or via black-ops IT.  Public cloud experience is a necessary prerequisite to understanding how you want your private cloud to operate.  Essentially, in developing your private cloud, you are creating IT as a Service, and experience with the public cloud helps you understand and answer questions such as:

  • To what degree will you offer self-service to the organization?  Will it be open only to developers, or will you provide access to end users in a Software as a Service model?
  • How will you provide for charge-back (or show-back)?  Will departments have responsibility for their IT spend?  (A toy show-back sketch follows this list.)
  • What limitations (or standardizations) will you place on the platforms supported?  Will you standardize on a specific set of development tools?
  • Will BYOD and Desktop as a Service be a part of your offering?
  • What applications are cloud ready, and what work is required to get them ready?
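
As a toy illustration of the show-back question above, the sketch below rolls up hypothetical per-department VM hours against a flat hourly rate; every department name, usage figure, and the rate itself are made-up placeholders.

```python
# A toy show-back report: roll up hypothetical per-VM usage by department
# and price it at a flat hourly rate (all figures are made up).
RATE_PER_VM_HOUR = 0.12  # hypothetical blended rate in dollars

usage_records = [
    # (department, vm_name, hours_this_month) -- hypothetical sample data
    ("Marketing", "campaign-web-01", 720),
    ("Marketing", "analytics-db-01", 360),
    ("Engineering", "ci-runner-01", 720),
]

totals = {}
for department, _vm, hours in usage_records:
    totals[department] = totals.get(department, 0) + hours

for department, hours in sorted(totals.items()):
    cost = hours * RATE_PER_VM_HOUR
    print("{0}: {1} VM-hours -> ${2:,.2f}".format(department, hours, cost))
```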

These are all questions you will need to address as you build out your private cloud offering.  And that is only one component of the equation.  You’ll need to decide between a self-managed and an outsourced private cloud and determine whether to build the private cloud on-premises or off.  There are many factors to weigh; the following are some of the key points to consider.

 

Private Cloud Locations:  Private clouds exist in two formats: either operated and controlled by IT on-premises, or hosted by a third-party provider either on-premises or remotely.  It is a common misconception that a hosted private cloud must exist at a remote location.  In reality, many companies are exploring a hosted, on-premises private cloud solution, placing the security of the infrastructure behind the corporate firewall while allowing for a monthly consumption model.  This can be an excellent solution for companies that have determined they want to move to an OpEx financial model for IT.

 

Skill Set:  On one hand, the capability to manage a cloud-based infrastructure is lacking in many organizations, especially around OpenStack.  On the other hand, IT leadership is moving toward automating routine infrastructure processes, allowing their employees to perform more strategic work that creates direct value for the organization.  The question you ultimately need to ask is: “Is IT a core competency for the success of our business?”  If the answer is no, an outsourced private cloud may be the answer.  Outsourcing the private cloud allows IT to deliver value to the company through highly available infrastructure and platforms that enable rapid application development and deployment, while pushing day-to-day infrastructure management to a third party.

Growth:  Many companies are experiencing unprecedented growth, faster than IT can respond.  They simply can’t stand up infrastructure and provision applications fast enough.  A managed private cloud allows IT to provide the immediate infrastructure to provision applications and support the company’s growth.

Data Center Capacity:  Do we enlarge or upgrade our facility, purchase denser servers and storage, or look toward hosted private clouds as the solution?  For most companies, CapEx spend on additional data centers is not an option.  If anything, they would prefer to reduce the size of the data center.  For those companies, external private clouds are the preferred solution.

CapEx vs. OpEx:  CFOs are making hard choices between paying for up-front investments in technology and adopting a pay-as-you-go model for technology acquisition.  Private cloud fits extremely well into both categories, with managed private cloud being a key solution for OpEx funding.  A managed private cloud allows the organization to have infrastructure on site while paying in an OpEx format.
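
To make the trade-off concrete, here is a back-of-the-envelope comparison of an up-front CapEx purchase versus a managed OpEx subscription over one refresh cycle; every figure is a hypothetical placeholder, not a quote.

```python
# A back-of-the-envelope CapEx vs. OpEx comparison over one refresh cycle.
# All figures are hypothetical placeholders for illustration only.
YEARS = 3

capex_purchase = 300000        # hypothetical up-front hardware/software buy
capex_annual_support = 45000   # hypothetical annual maintenance contract
opex_monthly_fee = 11000       # hypothetical managed private cloud fee

capex_total = capex_purchase + capex_annual_support * YEARS
opex_total = opex_monthly_fee * 12 * YEARS

print("CapEx total over {0} years: ${1:,.0f}".format(YEARS, capex_total))
print("OpEx total over {0} years:  ${1:,.0f}".format(YEARS, opex_total))
print("Difference (CapEx - OpEx):  ${0:,.0f}".format(capex_total - opex_total))
```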

 

Ultimately, private clouds in the data center will become the standard, similar to how virtualization is now ubiquitous throughout the data center.  And organizations that implement them will possess a strategic advantage in the agility they provide for their organizations.
