September 29, 2016
Louis Cadier

Exploding 6 Myths about High Availability

There’s a lot of confusion out there about high availability solutions — confusion that might keep you from ensuring greater system resiliency.

So, let’s take a look at those common HA myths and separate truth from fiction.


Myth 1. Identical hardware is a requirement for high availability

Not anymore. Gone are the days when high availability meant something like IBM's Parallel Sysplex, with fault tolerance built into system firmware.

Even though hyperconverged solutions require identical boxes to deliver high availability (unless complemented by a software appliance), you can still achieve recovery point and recovery time objectives (RPOs and RTOs) of milliseconds across dissimilar hardware. All you need is a virtual or physical Windows/Linux server, storage, and network access.


Myth 2. Block-level, image-based ‘availability’ will do

It won’t, and it never did. Using a block-level, image-based backup tool to snapshot VMs every five minutes consumes considerable system resources, is costly to scale, and still only provides RPOs and RTOs of minutes.

Real-time, byte-level replication, however, sends only the bytes that actually change, so it uses LAN/WAN bandwidth far more efficiently and puts less strain on system performance. And that means failover can be automatic and genuinely instant.
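To make the contrast concrete, here is a minimal sketch of the byte-level idea in Python. It is hypothetical, not Arcserve's implementation: the file names and one-second polling interval are illustrative, and it handles only appended bytes for brevity (real products also journal in-place writes and deletes).

```python
import os
import time

SOURCE = "app.log"       # hypothetical file being protected
REPLICA = "replica.log"  # local stand-in for the remote copy

def replicate_forever(poll_seconds=1.0):
    """Continuously ship only newly written bytes from SOURCE to REPLICA."""
    offset = 0  # number of bytes already replicated
    while True:
        if os.path.exists(SOURCE):
            size = os.path.getsize(SOURCE)
            if size > offset:
                # Read just the delta, not the whole file or disk image.
                with open(SOURCE, "rb") as src:
                    src.seek(offset)
                    delta = src.read(size - offset)
                with open(REPLICA, "ab") as dst:
                    dst.write(delta)
                offset = size
        time.sleep(poll_seconds)

if __name__ == "__main__":
    replicate_forever()
```

The point of the sketch is the cost model: replication traffic is proportional to what changed, not to the size of the volume, which is why the replica can stay seconds (or less) behind the source instead of minutes.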


Myth 3. High availability is for enterprise organizations

Not these days. Sure, the larger your company, the greater the absolute cost of downtime. But that doesn't mean smaller organizations can afford to shrug it off.

Minutes of downtime add up and, in a competitive landscape, they can make the difference between winning and losing business. This is especially true in the mid-market, where IT services are delivered across locations and where the cost of downtime is significant enough to measure in seconds.


Myth 4. Hypervisor high availability is as good as it gets

Not so. Not only are RPOs and RTOs longer, with sync and recovery times measured in minutes, but VMware or Hyper-V high availability is also restricted to its own hypervisor. Out of the box, it cannot detect application failure.

Application-level high availability is your better bet (see the sketch after this list). It:

  • Syncs/recovers immediately
  • Provides cross-hypervisor compatibility
  • Delivers virtual to physical (and vice versa) support
  • Detects application failure automatically
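As a rough illustration of what automatic application-failure detection means in practice, here is a minimal, hypothetical Python monitor. The host, port, thresholds, and trigger_failover placeholder are all assumptions for the sketch; the key idea is that it probes the application itself, so a healthy VM running a dead app still trips it.

```python
import socket
import time

APP_HOST, APP_PORT = "primary.example.com", 5432  # hypothetical app endpoint
FAILURES_BEFORE_FAILOVER = 3  # consecutive failed probes before acting

def app_is_healthy(timeout=2.0):
    """Probe the application's own port, not just the host or hypervisor."""
    try:
        with socket.create_connection((APP_HOST, APP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def trigger_failover():
    # Placeholder: promote the standby, repoint the VIP/DNS, restart services.
    print("Application unresponsive; failing over to standby.")

def monitor(poll_seconds=5.0):
    failures = 0
    while True:
        if app_is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                trigger_failover()
                return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()
```

Hypervisor HA, by contrast, typically watches the guest's heartbeat: it can restart a VM when a host dies, but a hung database inside an otherwise healthy VM looks fine to it.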


Myth 5. High availability is expensive

You guessed it… not anymore. Liberating high availability from identical hardware gives you flexibility in how you provision it. What's more, non-identical redundant hardware is easier to refresh, and it lets you repurpose (or tier) older servers and storage.

Virtualization drives the cost down further, with public clouds like AWS, Azure, and Arcserve Cloud offering different rates for cold and hot VMs. That means you only pay for high availability when you actually use it.


Myth 6. High availability and backup are separate tools

Yeah, that’s a no. Arcserve UDP V6 lets users manage file-based, image-based, and application-level high availability from a single console. That means you can turn features on and off according to priority, and pay only for the features you use. For mission-critical services, that means activating high availability. (Check this out for a purely technical demonstration.)


Ensure business uptime with high availability

In this 24/7/365 economy, your business can’t afford to be down. And high availability, shed of all its false limitations, is just what you need to ensure business continuity.