How to meet the high availability imperative


Data is driving revolutionary changes in computing and the Internet, including new opportunities for generating revenue and more efficient use of existing business processes and infrastructure. Data is now the most important and valuable component of modern applications and websites, and downtime or poor performance carries a major cost to a business's bottom line, impacting customers, reputation, and revenue.

Achieving increased service capacity with high service availability and low response time is mission critical for many key classes of businesses, including eCommerce, social media, gaming, finance, telecommunications, and enterprise. The business opportunities are substantial, but the demands they place on the datacenter are daunting.

Data centers can fuse together advances in database architecture and commodity server and storage technology to achieve high availability (HA) with excellent performance, scalability and cost.

For the web and application tiers, IT managers of data centers and clouds already use virtualized machine instances hosted on high-density commodity servers and storage, with load balancing and simple timeouts, to achieve HA, better performance, and scalability in a cost-effective manner. But achieving these goals with databases in scaled production environments presents severe challenges.
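The load-balancing-with-timeouts pattern used at the web tier can be sketched roughly as follows. This is a minimal illustration only; the backend names and the `call` function are invented for the example, not part of any particular product:

```python
import itertools

def handle_request(request, backends, call, attempts=3):
    """Round-robin load balancing with a simple timeout policy:
    if one backend times out, retry the request on the next one."""
    pool = itertools.cycle(backends)
    for _ in range(attempts):
        backend = next(pool)
        try:
            return call(backend, request)
        except TimeoutError:
            continue  # treat the timed-out instance as failed and move on
    raise RuntimeError("all backends failed")

# Example: the first backend "times out", so the second one serves the request.
def call(backend, request):
    if backend == "web-1":
        raise TimeoutError
    return f"{backend} served {request}"

print(handle_request("/index", ["web-1", "web-2"], call))  # web-2 served /index
```

Because web servers are stateless, this simple skip-the-failed-instance policy is enough; the rest of the article explains why the stateful data tier needs more than this.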

For the data access tier, the fundamental HA database approach to maintaining service availability is to create and maintain a replica of the database that can be switched to when the master database is down, whether due to failure or routine maintenance.
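The replica-switch idea can be sketched as follows. This is a hypothetical illustration of the control logic only; the `Node` class and its health check are assumptions made for the example, not any vendor's implementation:

```python
class Node:
    """A database node with a (hypothetical) health flag."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def is_alive(self):
        return self.healthy

def failover(master, replicas):
    """Return the node that should serve traffic: the master if it is
    healthy, otherwise the first healthy replica, promoted to master."""
    if master.is_alive():
        return master
    for replica in replicas:
        if replica.is_alive():
            return replica  # promoted to new master
    raise RuntimeError("no healthy node available")

# Example: the master is down, so service switches to a replica.
master = Node("master", healthy=False)
replicas = [Node("replica-1"), Node("replica-2")]
active = failover(master, replicas)
print(active.name)  # replica-1
```

The hard part, as the following paragraphs argue, is not the switch itself but guaranteeing that the promoted replica holds every committed transaction.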

Traditional loosely coupled HA database architectures are based on asynchronous replication, which results in data inconsistency, lost data, long failover times (often requiring manual intervention), and very poor performance scalability.

Tightly coupled database architectures utilizing parallel synchronous replication on commodity multi-core servers can achieve 99.999% availability with full data integrity; unlimited scaling with exceptional performance and high data consistency; and greatly simplified administration, including instantaneous, automatic failover and online scaling and upgrades. These databases can yield major improvements in data center QoS and TCO for scaled production services.

To achieve these availability benefits with high performance, the database software implementing transaction execution and synchronous replication must be optimized for high thread parallelism and granular concurrency control. Multi-core thread parallelism is used to concurrently communicate, replicate, and apply master update transactions on all replicas with extremely high throughput and low latency. This high degree of parallelism and concurrency control also enables effective exploitation of flash memory, providing linear vertical scaling with the cores in modern commodity servers, thereby enabling capital expense reduction (through consolidation) and operating expense reduction (through reduced power and space requirements). Unlimited scaling is achieved through transparent partitioning. Geographic scaling and disaster recovery with HA and high data integrity are achieved by using parallel asynchronous replication coupled to the base synchronous cluster, eliminating WAN data loss and providing automated WAN failover.
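As a rough illustration of the synchronous step, the sketch below applies an update to every replica in parallel worker threads and reports the transaction committed only once all replicas acknowledge it, which is what prevents data loss on failover. The function names and the dict-per-replica store are invented for the example; a real engine would do durable, ordered log shipping:

```python
from concurrent.futures import ThreadPoolExecutor

def apply_on_replica(replica_store, txn):
    """Apply one transaction (a key/value write) to a replica's store
    and return an acknowledgement."""
    key, value = txn
    replica_store[key] = value
    return True

def commit_synchronously(replicas, txn):
    """Replicate to all replicas in parallel and commit only after
    every replica has applied and acknowledged the transaction."""
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        acks = list(pool.map(lambda store: apply_on_replica(store, txn), replicas))
    return "committed" if all(acks) else "aborted"

# Example: one update, three replicas, all applied in parallel before commit.
replicas = [{}, {}, {}]
print(commit_synchronously(replicas, ("balance:42", 100)))  # committed
```

The thread pool is what makes the synchronous wait affordable: replicas are updated concurrently rather than one after another, so added replicas cost latency of the slowest acknowledgement, not the sum.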

With this approach, data centers can fuse together the benefits of architectural improvements in HA software with commodity high-density computing to achieve very high service availability with excellent performance, scalability, and cost-effectiveness. And they can achieve this without sacrificing existing application/data SQL compatibility.

Dr. John Busch is the founder, Chairman, and CTO of Schooner, which provides OLTP database software compatible with MySQL.

Image courtesy of Flickr user mandiberg.

