
MongoDB's Approach to Database Synchronization

05.03.2010

I went to MongoSF today – quite an event, and I hope to have a chance to write more about it. This post is about one replication problem and how MongoDB solves it.

If you're using MySQL replication and your master goes down, it is possible for some writes to have been executed on the master but not on the slave that gets promoted to master. When the old master comes back up, it carries updates that make it inconsistent with the data on the new master. In the MySQL world we can choose to ignore this problem (or maybe even replay those changes on the new master and hope it works out), re-clone the old master from the promoted slave, or use mk-table-checksum to find the inconsistencies and re-sync them with mk-table-sync. The latter two operations can be very expensive for large databases.
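For illustration, here is a minimal Python sketch of the checking part of that last option: compare per-table checksums on the old master and the promoted slave. The host names, credentials and table names are made up, and mk-table-checksum itself works chunk by chunk and pairs with mk-table-sync to actually repair the differences, which this sketch does not attempt.

    # Simplified consistency check between the old master and the new master
    # (the promoted slave). Hosts, credentials and table names are hypothetical.
    import mysql.connector

    OLD_MASTER = dict(host="old-master", user="check", password="secret", database="app")
    NEW_MASTER = dict(host="new-master", user="check", password="secret", database="app")
    TABLES = ["orders", "customers"]  # hypothetical table names

    def table_checksum(conn_params, table):
        conn = mysql.connector.connect(**conn_params)
        try:
            cur = conn.cursor()
            cur.execute("CHECKSUM TABLE `%s`" % table)
            _, checksum = cur.fetchone()
            return checksum
        finally:
            conn.close()

    for table in TABLES:
        if table_checksum(OLD_MASTER, table) != table_checksum(NEW_MASTER, table):
            print("table %s differs between old and new master" % table)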

The MongoDB approach, used in Replica Sets, is for the failed master to scan its log files to find all object ids that were modified after the point at which the slave last synchronized successfully, and to retrieve those objects back from the new master (or delete them if they no longer exist). Such an approach allows quick resynchronization without any complex support for rolling back changes. There is a catch with this approach in MongoDB: because there is no local durability yet, it only works when the network goes down but the server stays up. Once Single Server Durability is implemented, though, it will be pretty cool.
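To make that concrete, here is a rough Python sketch (using pymongo) of the refetch idea. The oplog fields used (ts, op, ns, o, o2) are real, but the host names, the last-synced timestamp and the overall flow are simplified assumptions for illustration; this is not MongoDB's actual internal rollback code.

    from pymongo import MongoClient
    from bson import Timestamp

    failed_master = MongoClient("mongodb://old-master:27017")
    new_master = MongoClient("mongodb://new-master:27017")

    # Assumed known: the last oplog timestamp the slave had applied before failover.
    last_synced_ts = Timestamp(0, 0)

    # 1. Scan the failed master's oplog for everything written after that point.
    oplog = failed_master.local["oplog.rs"]
    touched = set()
    for entry in oplog.find({"ts": {"$gt": last_synced_ts}}):
        if entry["op"] not in ("i", "u", "d"):   # inserts, updates, deletes
            continue
        ns = entry["ns"]                         # "db.collection"
        doc = entry.get("o2") or entry["o"]      # update ops keep the _id in "o2"
        touched.add((ns, doc["_id"]))

    # 2. For every touched object, take the new master's version of it
    #    (or delete it locally if it no longer exists there).
    for ns, _id in touched:
        db_name, coll_name = ns.split(".", 1)
        current = new_master[db_name][coll_name].find_one({"_id": _id})
        local = failed_master[db_name][coll_name]
        if current is None:
            local.delete_one({"_id": _id})
        else:
            local.replace_one({"_id": _id}, current, upsert=True)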

What is really interesting is that it should be possible to apply the same concept to MySQL replication, possibly with the help of tools like MMM. Row-based replication makes it possible to identify the objects that were changed on the master after the failover to the slave happened; they can be dumped to a local file (in case one wants to synchronize them manually) and then fetched again from the new master, as the sketch below illustrates. This will of course require the IDEMPOTENT slave mode, but otherwise it should work, unless you have DDL operations in between.
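Here is a Python sketch of how that could look on the MySQL side. The changed_rows() helper is hypothetical; in reality it would be a parser over the row-based binlog starting from the failover position. Host names, credentials, file names and the binlog coordinates are made up as well.

    import json
    import mysql.connector

    def changed_rows(binlog_file, start_position):
        """Hypothetical helper: decode the old master's row-based binlog from
        start_position and yield (schema, table, pk_column, pk_value) for every
        row that was modified there after the failover."""
        raise NotImplementedError

    # Connect to the new master to fetch the authoritative versions of those rows.
    new_master = mysql.connector.connect(host="new-master", user="sync", password="secret")
    cur = new_master.cursor(dictionary=True)

    with open("diverged_rows.json", "w") as dump:
        for schema, table, pk_col, pk_val in changed_rows("mysql-bin.000042", 1234):
            cur.execute(
                "SELECT * FROM `%s`.`%s` WHERE `%s` = %%s" % (schema, table, pk_col),
                (pk_val,),
            )
            row = cur.fetchone()
            # Keep a local record of the diverged keys and the new master's version,
            # so they can be reviewed or re-applied manually.
            dump.write(json.dumps({"table": "%s.%s" % (schema, table),
                                   "pk": pk_val,
                                   "new_master_row": row},
                                  default=str) + "\n")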

In general, after listening to the great presentation on MongoDB replication by Dwight Merriman, as well as having previously looked at how replication is done in Redis, I have to say that things can be done in a much simpler way when there is no schema and you do not have to deal with complex features like triggers or multiple storage engines.

Published at DZone with permission of Peter Zaitsev, author and DZone MVB. (source)
