
Cody Powell (@codypo) is the cofounder and CTO of Famigo. Famigo's main offering is a cross-platform recommendation engine for mobile content, helping families find things like the best Android apps, best iPad apps, and free apps. He's a graduate of Trinity University, an ardent supporter of the Texas Rangers, and he makes a mean mojito. Cody is a DZone MVB; you can read more from him at his website.

A Reliable, Scalable, and (Kinda) Cheap Cloud Hosting Architecture for MongoDB

11.25.2011

I did a talk at the Austin NoSQL group about hosting MongoDB in EC2, and I completely forgot to post anything here on it. I will correct that now! Slides here.

At Famigo, we house all of our valuable data in MongoDB and we also serve all requests from Amazon EC2 instances. We've devoted many mental CPU cycles to finding the right architecture for our data in the cloud, focusing on 3 main factors: cost, reliability, and performance.

Reliability
If you're hosting anything in the cloud, you must keep one scary question in mind: what if this node disappears? The node could disappear because of an availability zone outage (wuzzup, EC2?), an actual hardware failure (remember, it's still hosted somewhere on something), or a werewolf attack on a datacenter. Regardless of the cause, it's safe to assume that this will happen at some point.

Since our data is crucial to our business and hard downtime is not an option, we want our data replicated across multiple nodes. Not only that, the ideal scenario would have automatic failover, so that if one node dies, another can take its place without any human involvement. In addition, it'd be great if, when a flaky node comes back up, it automagically rejoined the replica set and caught up.

MongoDB replica sets satisfy all of these requirements, and they're quite easy to set up. Replica sets are like asynchronous master/slave replication on steroids. A replica set has one primary node and at least one secondary node. Each node has a special oplog collection, which is an ordered list of the writes performed on the data. All writes occur first on the primary node (and are recorded in its oplog). Secondary nodes tail the primary node's oplog and replay those operations, so all data changes propagate quickly and in the right order.
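
If you're curious, you can peek at the oplog yourself; it's a capped collection in each member's local database. A quick, purely illustrative look from the mongo shell:

    > use local
    > db.oplog.rs.find().sort({ $natural: -1 }).limit(1)   // the most recent operation applied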

One cool feature of replica sets is automatic failover. When a primary node goes down, an election is held and a new primary node is elected within a couple of seconds. Yep, within seconds; there are no electoral college shenanigans here. Another great feature is automatic recovery. When a node falls behind, it catches up by iterating through the primary node's oplog.

Replica sets sound awesome, but they can be a little bit complicated for the following reason: a set must contain at least 3 nodes. Why? The primary node is determined by voting, so you want an odd number to break any ties. Thankfully for our bank account, there are special, lightweight nodes called arbiters that don't actually store any data, but exist solely to vote. While you do need 3 nodes, you don't need 3 full, high-performance nodes.
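
To make that concrete, here's a rough sketch of bringing up the two data-bearing nodes (the set name famigo and the hostnames are made up; the arbiter gets added further down):

    # on each data-bearing node, start mongod with the replica set name
    mongod --replSet famigo --dbpath /data/db --port 27017 --fork --logpath /var/log/mongod.log

    # then, from the mongo shell on one of those nodes
    > rs.initiate({
        _id: "famigo",
        members: [
          { _id: 0, host: "db1.example.com:27017" },
          { _id: 1, host: "db2.example.com:27017" }
        ]
      })
    > rs.status()    // confirm both members are up and a primary has been elected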

Replica Set Performance
Replica sets sound as if they could be slow, since all writes must occur on one particular node first, then filter down to every other node. Also, by default, you can't even read from non-primary nodes. So, that means all writes AND all reads must occur on this one node that's also orchestrating all of the replication. Heavens to Betsy, this sounds terrible!

Thankfully, there is a setting that allows reads from your secondary nodes. In the mongo shell it's rs.slaveOk(), a per-connection toggle entered at the command line; most drivers expose an equivalent option. You can't guarantee that the data from these reads is completely up-to-date with the canonical dataset on your primary node, but practically speaking, it's good enough. I've found this to be a very worthwhile trade-off, and so we always set rs.slaveOk().
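
Here's what that looks like in the shell; the collection and query are hypothetical:

    // connected to any member of the set
    > rs.slaveOk()                                       // allow this connection to read from secondaries
    > db.apps.find({ platform: "android" }).limit(10)    // e.g., fetch some app documents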

Thus, with just a few keystrokes, a replica set allows you to scale your reads across many nodes. We scalability nerds should find that pretty exciting.

Node Performance
What are the ideal specs on each node in the replica set? At the very least, each node needs a 64-bit processor. This is because MongoDB uses memory-mapped files for performance, and so any instance with a 32-bit processor can only access about 2.5 GB of data.

With regard to RAM, the best rule of thumb I've seen is that a node should be able to fit its working set into memory for best performance, and that's not necessarily your whole database. If you watch mongod in a tool like top, it looks like Mongo is using a boatload of memory. That's largely an illusion: because of the memory-mapped files, much of that figure is really the operating system's file cache.
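
A rough way to sanity-check this from the mongo shell is to compare data plus index size against the node's RAM:

    > db.stats()              // dataSize and indexSize for the current database
    > db.serverStatus().mem   // mongod's resident vs. virtual memory usage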

If each node needs a 64 bit processor and (probably) a lot of RAM, it's starting to sound like it can get expensive. Thankfully, there's a shortcut here. Remember, if your node is an arbiter and only useful for voting, it does NOT need a 64 bit processor, nor does it need a lot of RAM. An arbiter can be the tiniest instance you can possibly get away with. If you have a Casio watch that you bought for $3.50 at a garage sale, I'd seriously consider using that as your arbiter.
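
Sketching that out (again, the hostname is hypothetical): start a minimal mongod on the cheap box, then add it to the set from the primary:

    # on the arbiter box; it needs a dbpath, but it won't store your data
    mongod --replSet famigo --dbpath /data/arb --port 27017 --fork --logpath /var/log/mongod-arb.log

    # from the mongo shell on the current primary
    > rs.addArb("arb.example.com:27017")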

Cost
On Amazon's EC2 service, I've found that a Large instance works best for each primary or secondary node. These are 64 bit instances with around 8GB of RAM. Currently, you can spin one of those up on-demand for $0.34 an hour. Sadly, these are Amazon's cheapest 64 bit instances.

One cool thing about Rackspace Cloud instances is that they're all 64-bit. For a primary or secondary node, you can choose the cheapest instance with enough RAM. If you're looking for something equivalent to the EC2 Large instance, it's currently $0.48 an hour.

Yes, they are expensive, but you don't have to devote these instances solely to MongoDB. They can also host your API, web server, etc.

For an arbiter, I recommend finding the smallest instance you possibly can. I use an EC2 micro instance here, which you can get for free under certain conditions or spin up on-demand for $0.02 an hour. The Rackspace equivalent is $0.015 an hour.
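
Back-of-the-envelope, using the EC2 on-demand prices above and roughly 720 hours in a month:

    2 Large data nodes:  2 x $0.34/hr = $0.68/hr  ->  ~$490/month
    1 micro arbiter:     1 x $0.02/hr = $0.02/hr  ->   ~$14/month
    Total:                              $0.70/hr  ->  ~$504/month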

I should note that if you're willing to prepay for these instances, the price goes down significantly. Also, if you have incriminating photos with which to blackmail executives at Rackspace or Amazon, the price goes down further still.

Result
The end result here is that you get your data replicated across multiple nodes, automated failover and recovery, and scaled reads on highly-performant machines for less than $1 an hour. Not too shabby, MongoDB!


source: http://www.codypowell.com/taods/2011/08/a-cloud-hosting-architecture-for-mongodb.html

Published at DZone with permission of Cody Powell, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Dharshan Rangegowda replied on Sun, 2013/03/17 - 10:28pm

MongoDirector (www.mongodirector.com) is a great hosting solution for MongoDB on Amazon EC2. It completely automates the entire process of deploying and managing Mongo replica sets and shards using a simple two-step wizard. You can pick the number of replicas and shards and the regions in which you want to place them. Provisioned IOPS and RAID can be used for optimal performance. Automatic backups can also be configured. LVM snapshots are used for backup, so backups take the same amount of time irrespective of the size of data.
