
Antoine works as a Technical Manager at MongoDB Inc, helping companies architect systems with MongoDB. Previously he worked as a kernel engineer on the MongoDB core and the Java driver. He also spent many years in the CDN industry, at Panther CDN and then CDNetworks, designing and developing one of the largest and fastest Content Delivery and Application Acceleration Networks. Prior work includes the development of a new network protocol in the Linux kernel to multiplex several interfaces, used in wireless devices. Antoine received an MS in Computer Science from Stevens Institute (Hoboken) and a BS in Computer Science from Epita (Paris, France).

How to Implement Robust and Scalable Transactions Across Documents with MongoDB

08.06.2014

The transactional issue

There are some good reasons why databases have historically provided support for transactions across pieces of data. The typical scenario is that the application needs to modify several independent pieces of data, and it would likely end up in a bad state if only some of those changes actually made it to the datastore. Hence the long-revered concept of ACID:

  • Atomicity: all changes are made, or none
  • Consistency: data remains in a consistent state
  • Isolation: other clients cannot see partial changes
  • Durability: once the transaction is acknowledged back to the client, the data is in a safe spot (typically in a journal on disk)

With the introduction of NoSQL databases, support for ACID transactions across documents was typically thrown away. Many key/value stores still offer ACID, but it only applies to a single entry. The main reason for dropping it is that it just does not scale: if documents are spread over several servers, transactions become extremely difficult to implement and resource intensive. Imagine how difficult and slow a transaction would be if it spread across dozens of servers, some of them distant, some of them unreliable!

MongoDB supports ACID at a single document level. More precisely, you get “ACI” by default and get the “D” if you turn on the “j” WriteConcern option. Mongo has a rich query language that spans across documents, and consequently people are longing for multi-document transactions to port over their SQL code. One natural workaround is to leverage the power of documents: instead of many rows and relationships you can embed things together into a single larger document. Denormalization brings you back the transactions!
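For example, here is a minimal sketch, assuming a hypothetical user collection where addresses are embedded in the user document (the writeConcern option shown requires a MongoDB 2.6+ shell):

// hypothetical denormalized schema: addresses embedded in the user document
db.user.insert({ _id: userId, name: "John", addresses: [ { city: "Hoboken" } ] })

// a single-document update is atomic: both changes apply together or not at all;
// writeConcern { j: true } waits for the journal, adding the "D" of durability
db.user.update(
    { _id: userId },
    { $set: { name: "Johnny" }, $push: { addresses: { city: "Paris" } } },
    { writeConcern: { j: true } }
)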

This technique actually solves a large number of transactional issues for one-to-one and some one-to-many relationships. It also tends to make your application simpler and the datastore faster, so it’s a win-win! But for other cases where data must be split, how can you deal with it?

Reducing the ACID

It really boils down to this for most applications:

  • Atomicity: really you just want ALL changes to be made.
  • Consistency: it is fine if the system is inconsistent for a short time, as long as it is eventually consistent
  • Isolation: lack of isolation exposes temporary inconsistency which is not ideal, but most users are used to it in the world of online services (e.g. customer support: “it takes a few seconds to propagate”)
  • Durability: it is important and supported

The problem, then, really comes down to having robust and scalable eventual consistency!

Solution 1: Synchronization Field

This use case is the simplest and most common: there are fields that need to be kept “in sync” between documents. For example, say you have a user document with username “John”, and documents representing comments that John has posted. If the user is allowed to change their username, the change needs to be propagated to all of those documents, even if there is an application or database failure in the middle of the process.

To achieve this, an easy way is to use a new field (e.g. “syncing”) in the main document (in this case the user doc). Set the “syncing” field to a Date timestamp within the update of the user document:

db.user.update({ _id: userId }, { $set: { syncing: currentTime } /* , rest of updates ... */ })

Now the application can go ahead with modifying all comment documents. As a minimal sketch, assuming the comments live in a comment collection that stores a denormalized username field, and newName holds the new username:
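// propagate the new username to every comment by this user (multi-document update)
db.comment.update({ userId: userId }, { $set: { username: newName } }, { multi: true })

When done, the flag should be unset: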

db.user.update({ _id: userId }, { $unset: { syncing: 1 } })

Now imagine there is a failure during the process: there are some comments left with the old username. Thankfully the flag is still set and the application has a way to know that the process should be retried. For this, you need a background thread that checks for dangling “syncing” documents that have not finished in a conservative time (e.g. 1h). Finding those documents can be made very efficient by using an index on the “syncing” flag. This index should be “sparse” so that only the few documents where it is set are actually indexed, keeping it very small.

db.user.ensureIndex({ syncing: 1 }, { sparse: true })
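As a minimal sketch, the background thread's check could look like this (the one-hour threshold and the follow-up fix are application-specific assumptions):

// find syncs that started more than an hour ago and never finished;
// documents missing the field cannot match $lt, so the sparse index is usable
var oneHourAgo = new Date(Date.now() - 60 * 60 * 1000)
db.user.find({ syncing: { $lt: oneHourAgo } }).forEach(function(user) {
    // re-run the username propagation for this user, then $unset "syncing"
})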

As a result, your system typically keeps things in sync within a short period of time, or up to 1 hour in the case of a system failure. If timing is not important, you could even have the application fix the documents in a lazy fashion, when the “syncing” flag is detected upon reading.

Solution 2: Job Queue

The principle above works well if the application does not need much context and just reapplies a generic process (e.g. copying a value). Some transactions need to make specific changes that would be difficult to identify later on. For example, say the user document contains a list of friends:

{ _id: userId, friends: [ userId1, userId2, ... ]}

Now users A and B decide to become friends: you need to add B to A’s list and vice versa. It is fine if it does not happen at exactly the same time (as long as it doesn’t vex one of them :)). A solution to this, and to most transactional problems, is to use a job queue, also stored in MongoDB. A job document can look like:

{ _id: jobId, ts: timeStamp, state: "TODO", type: "ADD_FRIEND", details: { users: [ userA, userB ]} }

Either the original thread can insert the job and go forward with the changes, or multiple “worker” threads can be dedicated to picking up jobs. The worker fetches the oldest unprocessed job using findAndModify() which is fully atomic. In the operation it marks the job as being processed and also specifies the worker name and current time for tracking. An index on { state: 1, ts: 1 } makes those calls very fast.

db.job.findAndModify({ query: { state: "TODO" }, sort: { ts: 1 }, update: { $set: { state: "PROCESSING", worker: { name: "worker1", ts: startTime } } } })
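For reference, the supporting index mentioned above is created like this:

db.job.ensureIndex({ state: 1, ts: 1 })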

Then the worker makes changes to both user documents in a way that is idempotent. It is important that those changes can be reapplied many times with the same effect! Here we will just use $addToSet for that purpose. A more generic alternative is to add a test on the query side to check whether the change has already been made.

db.user.update({ _id: userA }, { $addToSet: { friends: userB } })
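The symmetric update is then applied for user B. As a sketch, the more generic query-side guard mentioned above could look like this:

db.user.update({ _id: userB }, { $addToSet: { friends: userA } })

// generic alternative (sketch): only apply the change if it has not been made yet
db.user.update({ _id: userA, friends: { $ne: userB } }, { $push: { friends: userB } })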

The last step is to either delete the job or mark it as done. It may be good to keep jobs around for a while as a safety measure. The only downside is that the previous index gets larger over time, though you could also make use of a nifty sparse index on a special field { undone: 1 } instead (and change the queries accordingly), as sketched below.

db.job.update({ _id: jobId }, { $set: { state: "DONE" } })
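If you go with that sparse-index variant instead, a minimal sketch (the fetch query would then filter on the undone field):

db.job.ensureIndex({ undone: 1 }, { sparse: true })
// jobs are inserted with undone: true; completing a job removes the marker
db.job.update({ _id: jobId }, { $set: { state: "DONE" }, $unset: { undone: 1 } })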

If the process dies at any point in time, the job is still in the queue but marked as processing. After a period of inactivity a background thread can mark the job as needing processing again, and the job just starts again from the beginning.
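A sketch of that recovery pass, assuming any job stuck in "PROCESSING" for more than an hour is considered dead:

var staleTime = new Date(Date.now() - 60 * 60 * 1000)
db.job.update(
    { state: "PROCESSING", "worker.ts": { $lt: staleTime } },
    { $set: { state: "TODO" }, $unset: { worker: 1 } },
    { multi: true }
)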

Published at DZone with permission of Antoine Girbal, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)