Hyperconvergence and how to sell it upstream

27 June 2019

Selling the concept of hyperconverged infrastructure to the people who hold the purse strings is something of an uphill battle. The technically minded will always be excited by the infrastructure management capabilities that hyperconvergence promises, but for those with only a grasp of the basics, the benefits aren’t as clear cut.

Some of that reluctance to jump onto the fully hyperconverged infrastructure wave comes from a lack of understanding of the technology, of course, but even a complete grasp of the vagaries of Fibre Channel protocols wouldn’t necessarily convince anyone to start writing checks. And despite a great deal of hype from the marketing departments of software and hardware manufacturers, no one is ever going to believe that hyperconvergence is the silver bullet that will zap every helpdesk ticket in the average data center.

That’s because technological evolution is stepwise, and anyone who’s been in the IT business for more than a few years knows full well that the complexities of what is essentially the provision of a service to the enterprise aren’t going to be made child’s play by the arrival of a special box with flashing LEDs on its front.

So why the hype over hyperconvergence? As its name suggests, hyperconvergence is the coming-together of the different elements that make up a data center: compute, storage, and interconnects. Within each of those three small words lies, naturally, vast complexity. Coping with that complexity is something IT professionals do every day, and HCI is the next stepwise progression in making the provision and management of IT more manageable for the organization. Exactly how much easier is a moot point: hyperconvergence doesn’t need selling to IT staff at the coal-face; it’s the rest of the enterprise that needs convincing the investment is worthwhile.

Combined increases in speed, power, and capacity meant it was inevitable that the trio of data center components (storage, compute, and connectivity) would begin to come together behind abstraction layers. Cisco’s (now ten-year-old) UCS platform was a case in point, pulling compute and connectivity together.

The massive scale required by public clouds demanded new technologies, like GFS (Google’s proprietary file system), capable of ingesting and storing data at enormous scale. One of the brains behind GFS, incidentally, went on to found Nutanix, which claims to be the first manufacturer of the technology around which the term ‘hyperconvergence’ came into use.

However, while the technological evolution might be interesting (or not!), it is the reasons behind the new technologies’ emergence that give us the business motives for adopting hyperconvergence; the reasons why the checks should be signed, if you like. First, we should conceptualize hyperconvergence not as a wave, but as a rising tide. Marketing functions are fond of their waves, stampedes, and exponential-growth hyperbole, but such claims are rarely accurate with hindsight. We needed to process a lot of data, so we needed more cores. We needed to gather data from many more sources, so we needed more connectivity. And we needed to store it all: enter the massive storage arrays. From there, hyperconvergence was inevitable as a way of provisioning and managing those needs. So, what capabilities does hyperconvergence offer the enterprise in the future?

Cloudbursts and hybrids

Business has always been cyclical. Most enterprises, even those comprising multiple business verticals, talk about “quiet patches” or “off-seasons”. Increasingly, there are also artificial events (like Black Friday or Singles’ Day in retail) that create bursts of demand. Coping with peaks, in IT terms, has always meant a degree of delicately balanced over-provisioning: for much of the financial year, resources lie underutilized (at best), waiting to be brought on-stream when necessary.

With hyperconverged infrastructures, demand peaks can be absorbed through the rapid (and often automated) integration of cloud services, with little in the way of delays caused by manual provisioning. In theory at least, that means cloud services can be turned on and off as required, either freeing up in-house capacity or reducing capital expenditure.
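
To make the idea concrete, here is a minimal sketch of such a burst/release rule in Python. The function names (get_cluster_utilization, provision_cloud_nodes, release_cloud_nodes) are hypothetical stand-ins for an HCI management layer and a cloud provider, not any vendor’s actual API, and the thresholds are purely illustrative.

```python
# Minimal sketch of automated "cloud bursting": rent extra cloud capacity when
# on-premises utilization crosses a threshold, and release it when demand falls.
# All provider calls below are hypothetical stand-ins, not a real vendor API.

BURST_THRESHOLD = 0.80    # add cloud capacity above 80% utilization
RELEASE_THRESHOLD = 0.50  # hand it back below 50% utilization

def get_cluster_utilization() -> float:
    """Stand-in for an HCI management query reporting overall utilization (0.0-1.0)."""
    return 0.85  # hard-coded for illustration

def provision_cloud_nodes(count: int) -> list[str]:
    """Stand-in for a cloud provider call that adds temporary capacity."""
    return [f"cloud-node-{i}" for i in range(count)]

def release_cloud_nodes(nodes: list[str]) -> None:
    """Stand-in for returning rented capacity so it stops costing money."""
    print(f"Releasing {len(nodes)} cloud nodes")

def rebalance(cloud_nodes: list[str]) -> list[str]:
    """One pass of the burst/release rule; returns the cloud nodes currently rented."""
    utilization = get_cluster_utilization()
    if utilization > BURST_THRESHOLD and not cloud_nodes:
        cloud_nodes = provision_cloud_nodes(count=4)
        print(f"Peak detected ({utilization:.0%}): added {len(cloud_nodes)} cloud nodes")
    elif utilization < RELEASE_THRESHOLD and cloud_nodes:
        release_cloud_nodes(cloud_nodes)
        cloud_nodes = []
    return cloud_nodes

if __name__ == "__main__":
    rebalance([])
```

In a real deployment the rule would run continuously against live telemetry, but the shape of the logic, a threshold, a provisioning call, and a release call, is the whole story.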

ROBOs and edges

Logistics companies talk a great deal about “truck rolls.” Edge installations and remote offices have made the term a common one in IT departments, too. Deploying new facilities quickly and easily via HCI technologies brings the ideal of plug-and-play significantly closer. A couple of cables and a power lead are often all that’s necessary to spin up a new deployment, with seamless integration into the overarching infrastructure handled by HCI management.

Companies can also start to use edge-based resources to process data on site, rather than eating into precious bandwidth allowances. That opens the way, for example, for IoT traffic to be dealt with locally: self-contained, powerful resources managed centrally but acting largely independently.
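
As a rough illustration of what “dealt with locally” can look like, the sketch below aggregates raw sensor readings at the edge and forwards only a compact summary. The send_upstream function and the sensor naming are assumptions made for the example, not part of any product.

```python
# Sketch of edge-side aggregation: keep raw IoT readings local and ship only a
# small summary upstream, preserving WAN bandwidth. send_upstream is an
# illustrative stub for whatever transport reports back to the data center.
from statistics import mean

def send_upstream(payload: dict) -> None:
    """Stand-in for the central reporting channel."""
    print("Sending summary upstream:", payload)

def summarize(readings: list[float], sensor_id: str) -> dict:
    """Reduce many raw samples to a handful of numbers worth sending over the WAN."""
    return {
        "sensor": sensor_id,
        "samples": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

if __name__ == "__main__":
    raw = [21.4, 21.7, 22.1, 21.9, 35.0, 21.6]  # e.g. local temperature samples
    send_upstream(summarize(raw, sensor_id="edge-site-7/temp-1"))
```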

Tier One without tears

The underlying technology that enables resources to be allocated from the cloud (or clouds) according to demand peaks applies in exactly the same way to running Tier One enterprise applications. According to rulesets, triggers, or manual oversight, resources, from extra cores to duplicate database instances, can be swiftly brought up and deployed as required.
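
A ruleset of that kind can be surprisingly small. The sketch below, with made-up metric names and action labels rather than any real product’s schema, shows the general shape: declarative thresholds mapped to scaling actions.

```python
# Sketch of a declarative scaling ruleset for Tier One workloads: each rule maps
# a metric threshold to an action such as adding a database read replica.
# Metric names and action labels are illustrative only.

RULES = [
    {"metric": "db_query_latency_ms", "above": 200, "action": "add_read_replica"},
    {"metric": "web_cpu_percent",     "above": 85,  "action": "add_web_node"},
]

def evaluate(rules: list[dict], metrics: dict) -> list[str]:
    """Return the actions triggered by the current metric readings."""
    return [r["action"] for r in rules if metrics.get(r["metric"], 0) > r["above"]]

if __name__ == "__main__":
    current = {"db_query_latency_ms": 340, "web_cpu_percent": 60}
    print(evaluate(RULES, current))  # -> ['add_read_replica']
```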

While customer satisfaction is often a difficult metric to estimate, it can receive a boost from this aspect of HCI alone: if there’s a danger of a new customer clicking away from a site or service because of background lag, hyperconvergence can mitigate that risk. Similarly, call center operators and support staff can respond more quickly, with resources allocated dynamically according to demand rather than constricted by bottlenecks laid at the door of the IT department.

Faster creativity

Giving DevOps teams access to the type of facilities that hyperconvergence offers makes their daily activities more effective and efficient. For the enterprise, that means time-to-production for new applications and services is quicker, and QA and testing cycles are significantly shortened.

Creating test-beds, sandboxes, or duplicate data sets is a more straightforward matter with the layer of abstraction that convergence brings. HCI’s speed of deployment means the right tools and facilities can be presented at the right time, according to production and development schedules. Additionally…

Clever compression

On-the-fly data compression and de-duplication lower storage overheads. HCI also performs the incredibly useful task of making backups and disaster recovery massively easier. In fact, DR is effectively built into the technology’s storage abstraction, so potential recovery costs are also slashed.
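
To see why de-duplication cuts the physical footprint, here is a toy content-addressed chunk store in Python: repeated chunks are stored once and referenced by hash. The chunk size and sample data are invented for the example; production systems are far more sophisticated.

```python
# Toy block-level de-duplication: identical chunks are stored once and referenced
# by their SHA-256 hash, so physical storage shrinks whenever data repeats.
import hashlib

CHUNK_SIZE = 4096  # bytes per chunk; real systems tune this carefully

def store(data: bytes, chunk_store: dict[str, bytes]) -> list[str]:
    """Split data into chunks, keep only unseen chunks, and return the recipe of hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # a repeated chunk costs nothing extra
        recipe.append(digest)
    return recipe

if __name__ == "__main__":
    chunks: dict[str, bytes] = {}
    payload = b"A" * 8192 + b"B" * 4096 + b"A" * 8192  # 20 KB logical, heavily repetitive
    store(payload, chunks)
    logical = len(payload)
    physical = sum(len(c) for c in chunks.values())
    print(f"logical {logical} bytes -> physical {physical} bytes ({logical / physical:.1f}:1)")
```

The same recipe-of-hashes idea is what makes snapshot-style backups and DR copies cheap: only chunks never seen before need to be written or shipped.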

The efficiency of storage and network management inherent in a hyperconverged infrastructure means that resources and facilities are freed up for practical uses (such as availability for DevOps, as above), so the entire enterprise benefits.

Freedom to innovate

While convergent solutions like UCS made deployment of compute and interconnects much simpler, adding the storage layer to the resource pool means the IT department has the facilities to support new initiatives: virtualized desktop environments, arrays of cores for ML-based projects, big data heavy lifting, or experimental prototyping of new services.

With IT departments now at the core of digital business, hyperconverged infrastructure’s continued evolution means that, as the technology matures further, IT’s role as an enabler, rather than a limiting factor, comes much closer.

There are several dozen HCI technology producers on the market at present, ranging from the smaller players upwards. Here are three of the go-to names in enterprise hyperconvergence that you will want in your Rolodex.

CISCO

When Cisco announced it was producing a server line, Cisco UCS (Unified Computing System), now a data center stalwart the world over, many in the industry wondered whether it was the right time for the de facto networking infrastructure provider to enter a new market with deeply entrenched competitors.

But as it turns out, it was the perfect time. Cisco UCS delivered the complete abstraction and automation of hardware in crucial ways that hypervisor and cloud technology still can’t achieve alone. In the ten years since, the company has produced a range of interoperable products that continue to power the internet.

Cisco’s industry-leading hyperconverged platform is aptly named HyperFlex. Designed around a high-performance, software-defined storage file system, it builds on the optimized UCS compute and networking architecture. It’s available in node configurations using hybrid HDD/SSD, all-SSD, and high-end all-NVMe storage.

Even small edge installations have a two-node deployment option. This ubiquity and interoperability ensure that discrete data centers, public cloud provisions, and every remote office or edge deployment deliver an IT platform that can be controlled and provisioned seamlessly.

Data security is foundational for Cisco, as you might expect from a blue-chip company. HyperFlex delivers market-leading performance, as independently validated against three competing solutions.

The HyperFlex platform is designed to serve businesses, DevOps teams, and distributed hybrid computing models in today’s enterprise, such as modular, containerized app development, deployment, and scaling. You can read more about HyperFlex on TechHQ.

HPE

Alongside Cisco, HPE is one of those household-name suppliers of data center infrastructure in everyday use. Over 80 percent of Fortune 500 companies, for example, use some element of hardware and software from the company.

For the IT department, HCI offers many advantages in provisioning, managing, and controlling highly-scalable IT architectures, and the overall business benefits are apparent in just about every area of the enterprise.

HPE’s endgame is ITaaS (IT-as-a-service), towards which HCI is a significant step. With a malleable infrastructure on which the business can build as market conditions and strategies dictate, IT becomes an engine of expansion rather than a hindrance to creativity.

When an enterprise adopts an ITaaS model, IT resources can be quickly provisioned for any type of workload, while the management and control needed across the entire infrastructure (compute, storage, and connections) is maintained. This capability allows the business to dictate change as it requires, with IT responding as a strategic player that underpins change rather than creating problems.

In purely IT management terms, HPE’s hyperconvergence technology improves both recovery point objectives (RPOs) and recovery time objectives (RTOs), thanks to the DR facilities that come baked in. Reducing backup and recovery times to seconds is but one advantage; another is a vast improvement in the ratio of logical data to physical storage.

NUTANIX

The same technologies that powered the first instances of the public cloud are now available to any enterprise or IT organization, thanks to Nutanix’s ground-breaking products. They are delivered as full-stack solutions that provide every infrastructure element needed to support a broad range of workloads and applications. Hyperconvergence technology can be deployed initially for a single use case and then scaled very simply, and lower costs and commodity hardware make it accessible even to small startups.

As a prime example, the Nutanix solution unites clouds regardless of type, platform, or geography. With central management, all uses, from a single app to multiple complex deployments across various edge installations and data centers, can be handled seamlessly.

Nutanix claims to have been first to market with hyperconverged data center systems, with competitors slower off the mark to produce full HCI capabilities. These days, of course, there’s a great deal more competition, but many still prefer to go ‘to the source’. Nutanix’s stated aim is to make IT infrastructure invisible, which it claims its AHV hypervisor delivers across multiple in-house and cloud topologies.

*Some of the companies featured in this editorial are commercial partners of TechHQ