Next gen NoSQL: The demise of eventual consistency?


The vast selection of NoSQL solutions available today share qualities that set them apart from their relational counterparts, including shared-nothing, distributed architectures with fault tolerance and scalability. However, to provide these benefits, many NoSQL solutions have given up the strong data consistency and isolation guarantees of relational databases, coining a new term – “eventually consistent” – to describe their weak data consistency guarantees.

Eventual consistency pushes the pain and confusion of inconsistent reads and unreliable writes onto software developers. Building the complex, scalable systems demanded by today’s highly connected world with such weak guarantees is exceptionally difficult. We need to stop accepting eventual consistency and aggressively explore scalable, distributed database designs that provide strong data consistency.

The concept of eventual consistency comes up frequently in the context of distributed databases. Leading NoSQL databases like Riak, Couchbase, and DynamoDB provide client applications with a guarantee of “eventual consistency”. Others, like MongoDB and Cassandra, are eventually consistent in some configurations.

Eventual consistency means exactly that: the system is eventually consistent. If no updates are made to a given data item for a “long enough” period of time, then sometime after hardware and network failures heal, all reads of that item will eventually return the same consistent value. It’s also important to understand that if a client doesn’t wait “long enough,” it isn’t guaranteed consistency at all.
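To make that definition concrete, here is a minimal sketch (hypothetical code, not any real database’s API) of two replicas with delayed replication: a write is acknowledged after reaching one replica, a read served by the lagging replica returns a stale value, and only a background repair pass makes all reads agree.

```python
# Hypothetical sketch of eventual consistency: two replicas with
# asynchronous replication. Reads from the lagging replica are stale
# until the update "eventually" propagates.

class Replica:
    def __init__(self):
        self.value = "v0"

class EventuallyConsistentStore:
    def __init__(self):
        self.replicas = [Replica(), Replica()]
        self.pending = []  # updates not yet applied to every replica

    def write(self, value):
        # The write is acknowledged after reaching only one replica.
        self.replicas[0].value = value
        self.pending.append(value)

    def read(self, replica_index):
        return self.replicas[replica_index].value

    def anti_entropy(self):
        # Background repair: propagate pending updates to all replicas.
        for value in self.pending:
            for r in self.replicas:
                r.value = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("v1")
print(store.read(0))  # "v1" - the replica that accepted the write
print(store.read(1))  # "v0" - stale read from the lagging replica
store.anti_entropy()  # enough time passes / failures heal
print(store.read(1))  # "v1" - now all reads agree
```

The window between the write and the repair pass is exactly the window in which a client that doesn’t wait “long enough” gets no consistency guarantee at all.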

The problem with eventual consistency

Though eventual consistency is touted as a new model, the word “eventual” should carry the same negative connotation that it carries in nearly every other context: “eventual” honesty, “eventual” fidelity, or “eventually” paying you back. Doesn’t sound very appealing, right? Well, it’s the same with distributed systems.

When an engineer builds an application on an eventually consistent database, they need to answer several tough questions every time that data is accessed from the database:

  • What is the effect on the application if a database read returns an arbitrarily old value?
  • What is the effect on the application if the database sees modifications happen in the wrong order?
  • What is the effect on the application of another client modifying the database as I try to read it?
  • And what is the effect that my database updates have on other clients trying to read the data?

That’s an onerous list, and working through it takes up a lot of developer time. Essentially, the engineer must manually do the hard work of ensuring that multiple clients don’t step on each other’s toes, and must cope with stale data.
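The classic way clients “step on each other’s toes” is the lost update. A minimal sketch (hypothetical code, not a real database client) of two clients doing an unisolated read-modify-write on the same counter:

```python
# Hypothetical sketch of a lost update: two clients increment a counter
# via read-modify-write with no isolation, and one increment vanishes.

counter = {"views": 100}

def read(key):
    return counter[key]

def write(key, value):
    counter[key] = value

# Client A and client B interleave:
a_seen = read("views")      # A reads 100
b_seen = read("views")      # B reads 100 (A's increment not yet visible)
write("views", a_seen + 1)  # A writes 101
write("views", b_seen + 1)  # B also writes 101, clobbering A's update

print(counter["views"])  # 101, not the expected 102: a lost update
```

Under ACID transactions the database detects or prevents this interleaving; under eventual consistency, answering the four questions above and guarding against races like this one falls entirely on the application developer.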
Eventual consistency represents a dramatic weakening of the guarantees that traditional databases provide and places a huge burden on software developers. Designing applications that maintain correct behavior even if the accuracy of the database cannot be relied upon is a huge challenge! In fact, Google addressed the pain points of eventual consistency in a recent paper on its F1 database and noted:

“We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date. We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level.”

How we got here

Building an eventually consistent database has two advantages over building a strongly-consistent database: (1) It’s much easier to build a system with poor guarantees, and (2) database servers separated from the larger database cluster by a network partition can still accept writes from applications. Unsurprisingly, the second justification is the one given by the creators of the first generation NoSQL systems that adopted eventual consistency. Let’s explore that justification more carefully.

Many of the first-generation NoSQL systems which adopted eventual consistency were designed in the context of an early understanding of Eric Brewer’s CAP Theorem. The popular but misleading summary was that developers had to “pick two out of three” of (C)onsistency, (A)vailability, and (P)artition-tolerance.

The theorem applies to any distributed system where communications channels can fail, and it appears to have dramatic consequences. With the assumption that system availability was essential, early NoSQL databases abandoned consistency (i.e. adopted eventual consistency) using the CAP theorem as their justification.

How do we get out?

“Availability” in the CAP sense, however, means that every node remains able to read and write even when it is not able to communicate with the rest of the system. Surely that would be desirable, but it is simple to see the impossibility highlighted by the CAP theorem: if a node cannot communicate with anything else, of course it cannot remain consistent.

Yet, an excellent alternative is possible: A system that keeps some, but not all, of its nodes able to read and write during a partition is not available in the CAP sense but is still available in the sense that clients can talk to the nodes that are still connected. In this way fault-tolerant databases with no single point of failure can be built without resorting to eventual consistency.
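This “majority stays live” design can be sketched in a few lines (a hypothetical illustration; real systems such as Spanner implement this with consensus protocols like Paxos): in a five-node cluster split by a partition, only the side holding a majority of nodes keeps serving requests, so the system stays consistent while remaining available to clients who can reach that side.

```python
# Hypothetical sketch: a 5-node cluster split by a network partition.
# Only a group holding a majority of nodes serves reads and writes;
# the minority side refuses writes rather than let replicas diverge.

CLUSTER_SIZE = 5
MAJORITY = CLUSTER_SIZE // 2 + 1  # 3 of 5

def can_serve(reachable_nodes):
    """A node group serves requests only if it forms a majority."""
    return len(reachable_nodes) >= MAJORITY

partition_a = {"n1", "n2", "n3"}  # majority side of the partition
partition_b = {"n4", "n5"}        # minority side

print(can_serve(partition_a))  # True: still consistent and writable
print(can_serve(partition_b))  # False: unavailable, but never inconsistent
```

Because any two majorities overlap in at least one node, two conflicting writes can never both be accepted; this is the sense in which such a system trades CAP-style “every node” availability for strong consistency while remaining available in practice.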

Developers shouldn’t have to deal with eventual consistency. Vendors should stop hiding behind the CAP theorem as a justification for it. New distributed, consistent systems like Google Spanner concretely demonstrate that the supposed trade-off between strong consistency and high availability is false.

The next generation of commercial distributed databases with strong consistency won’t be as easy to build, but they will be much more powerful than their predecessors. Like the first generation, they will have true shared-nothing distributed architectures, fault tolerance and scalability. However, rather than accepting eventual consistency, they will adopt far stronger models like ACID transactions, making them more powerful and productive tools in the enterprise.

Dave Rosenthal is a co-founder of FoundationDB.



But what is FoundationDB doing with consistency that, for example, Riak isn’t?


I read elsewhere that system scalability is a function of consistency, contention and coherency, and of course the number of users. Sure, I am not contesting this at this time. However, it would be good to have a scalability model with all these variables well-defined. Does anyone have any reference to share on this? Thanks a mill.

Gary Kraft

There is another possible approach very few cloud architects are discussing.

Creating one’s own infinitely scaling, low-cost, zero-maintenance, multi-user NoSQL solution using Amazon’s own S3.

I did it and so can you. Check out the case study at:


There is nothing funnier than marketing people from garbageDB ( aka all noSQL vendors ) pontificating on topics they actually know nothing about.

Eventual consistency is BS. If your db is anything other than the caching layer, it should be consistent.


Extremely simplistic answer to a very complex problem, and using quotes in a misguided way. Many inaccuracies in this article too, and displays a lack of knowledge in this space.

David Mytton

The problem with moving this up a level in the stack is that it requires the database to have knowledge and intelligence about what is needed from the application. Different use cases require different levels of consistency. In very high throughput environments you might not care about eventual consistency because you get very high performance. In others you may care that data gets written to a majority of data centers.

The point is that the developer knows this, and can configure it at the application level in the code. If you put it into the database it still has to be configured by the developers, just somewhere different. Or let the database decide, which seems like a very complex problem.


While I can fully appreciate your premise, I don’t think you can blindly dismiss the inherent challenge of latency from global and/or large transactions. This isn’t a programming problem but a physics problem, and a non-trivial one. Never mind the speed of light over copper or fiber; the very design of the Internet that makes it so robust and scalable also adds overhead to each transaction, and ACID, especially locking, just doesn’t mix with that.

Comments are closed.