

Tuning MongoDB Performance with MMS

12.11.2013

At MongoLab we manage thousands of MongoDB clusters and regularly help customers optimize system performance. Some of the best tools available for gaining insight into our MongoDB deployments are the monitoring features of MongoDB Management Service (MMS). MMS allows us to quickly determine the health of a MongoDB system and identify the root cause of performance issues. This post covers our general approach to using MMS and MongoDB log files and provides techniques to help you optimize your own MongoDB deployment, whether you’re in the cloud or on your own hardware.

First, we will define the key metrics that we use to guide any performance investigation. Then we will go through the various combinations of metric values, discuss what they mean, and explore how to address the problems they indicate.

Key Metrics

Here we focus primarily on the metrics provided by MMS but augment our analysis with specific log file metrics as well.

MMS Metrics

MMS collects and reports metrics from both the MongoDB server process and the underlying host machine using an agent you install on the same network as your system. All metrics are of interest, but we will focus on the key metrics we use at MongoLab to begin any investigation.

  • PF/OP (derived from the Page Faults and Opcounters graphs)
  • CPU Time (IOWait and User)
  • Lock Percent and Queues

We find that by examining these key metrics you can quickly get a good picture of what is going on inside a MongoDB system and which computing resources (CPU, RAM, disk) are the performance bottlenecks. Let’s look a little more closely at each of these metrics.

PF/OP (Page Faults / Opcounters)
[Figure: Between 5 and 10 page faults per second (left) compared to more than 4,000 operations per second (right). A PF/OP of roughly 0.001 (5 / 4,000) is close enough to zero to classify as a low disk I/O requirement.]

MongoDB manages data in memory using memory mapped files. As indexes and documents are accessed, the data file pages containing them are brought into memory. Meanwhile, data that isn’t accessed remains on disk. If a given memory-mapped page is not in memory when the data in it is needed, a page fault is counted because the OS loads the page from disk. But if a page is already in memory, a page fault does not occur.

The documents and indexes that tend to persist in memory because of regular access (and therefore don’t require page faults to access) are called the working set. As of version 2.4, MongoDB can estimate the working set size using the command:

db.serverStatus( { workingSet: 1 } )
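
For quick spot checks, here is a minimal sketch of reading that estimate from the mongo shell (field names are as reported by 2.4-era servers; the 4 KB page size is an assumption typical of Linux):

// workingSet is only present when explicitly requested, as above
var ws = db.serverStatus({ workingSet: 1 }).workingSet;
print("pages in memory: " + ws.pagesInMemory);      // estimated pages touched recently
print("sampled over:    " + ws.overSeconds + "s");  // window the estimate covers
print("approx. size MB: " + ws.pagesInMemory * 4096 / (1024 * 1024));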

Page faults are not necessarily a problem; they can occur on any machine in a cluster that doesn’t have sufficient RAM to hold the working set. If page faults are sporadic, or are consistent and predictable, and in either case don’t result in queues, they can be considered part of normal operations.

That being said, high-load databases and databases with latency-sensitive apps are often optimized with large amounts of RAM with the specific intention of avoiding page faults.

Because the exact number of page faults depends on the current load and on what’s currently in memory, a better comparative metric is the ratio of page faults per second to operations per second. Calculate this PF/OP ratio using a ballpark sum of your per-second opcounters (queries, inserts, updates, deletes, getmores, and commands); a shell sketch for estimating it follows the list below.

If PF/OP is…

  • near 0 – reads rarely require disk I/O
  • near 1 – reads regularly require disk I/O
  • greater than 1 – reads require heavy disk I/O
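
If you want to spot-check this ratio outside of MMS, here is a minimal shell sketch (assuming a Linux host, where serverStatus reports extra_info.page_faults; both counters are cumulative, so we sample twice and use the deltas):

// Sum the cumulative opcounters and pair them with cumulative page faults
function sample() {
  var s = db.serverStatus();
  var ops = 0;
  for (var k in s.opcounters) { ops += s.opcounters[k]; }
  return { faults: s.extra_info.page_faults, ops: ops };
}
var a = sample();
sleep(10 * 1000);   // shell helper: wait 10 seconds between samples
var b = sample();
print("PF/OP ~ " + (b.faults - a.faults) / (b.ops - a.ops));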

Note: In Windows environments, page fault counts will be higher because they include “soft” page faults that don’t involve disk access. When running MongoDB in Windows, be prepared to rely more heavily on Lock, Queue, and IOWait metrics to determine the severity of page faults.

CPU Time (IOWait and User)
[Figure: CPU graphs from two different instances, one experiencing high CPU IOWait (left) and the other experiencing high CPU User (right)]

The CPU Time graph shows how the CPU cores are spending their cycles. CPU IOWait reflects the fraction of time the CPU spends waiting for disk or network I/O, while CPU User measures time spent on computation. Note that to view CPU Time in MMS, you must also install munin-node on the monitored host.

CPU User time is usually the result of:

  • Accessing and maintaining (updating, rebalancing) indexes
  • Manipulating (rewriting, projecting) documents
  • Scanning and ordering query results (see the sketch after this list)
  • Running server-side JavaScript, Map/Reduce, or aggregation framework commands
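
One of the causes above, scanning and ordering query results, can be spot-checked from the shell with explain(). A sketch using a hypothetical collection and query (the cursor, nscanned, and scanAndOrder fields shown are from the 2.4-era explain output):

// BasicCursor plus scanAndOrder: true means the query walks documents and sorts in memory,
// both of which show up as CPU User time
var plan = db.events.find({ status: "open" }).sort({ created: -1 }).explain();
print("cursor:       " + plan.cursor);
print("nscanned:     " + plan.nscanned);
print("scanAndOrder: " + plan.scanAndOrder);
print("millis:       " + plan.millis);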

Lock Percent and Queues
[Figure: Lock Percent fluctuating with daily load (left) and the corresponding queues (right)]

Lock Percent and Queues tend to go hand in hand: the longer locking operations take, the more other operations wait on them. The formation of locks and queues isn’t necessarily cause for alarm in a healthy system, but they are very good severity indicators when you already know your application is slow.

Concurrency control in MongoDB is implemented through a per-database reader-writer lock. The Lock Percent graph shows the percentage of time MongoDB held the write lock for the database selected in the drop-down at the top of the graph. Note that if “global” is selected, the graph displays a virtual metric: the highest database lock percent on the server at that time. This means two lock spikes at different times might come from different databases, so when you’re investigating a specific database, remember to select that database to avoid confusion.
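
For a point-in-time look at lock activity on a particular database from the shell, a minimal sketch (the locks section and its timeLockedMicros/timeAcquiringMicros fields are as reported by 2.4-era servers; "mydb" is a hypothetical database name):

// Cumulative microseconds the read (r) and write (w) locks for "mydb" were held or waited on
var dbLocks = db.serverStatus().locks["mydb"];
printjson(dbLocks.timeLockedMicros);
printjson(dbLocks.timeAcquiringMicros);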

Because read and write operations to a database queue when that database’s write lock is held and because all operations queue while the server’s global lock is held, locking is undesirable. Yet, locking is a necessary part of many operations, so it is allowable to an extent and expected when the database is under load.

The Queues graph counts the operations waiting for a lock at any given time and therefore provides additional information about the severity of the congestion. Because each queued operation likely represents an affected application process, the time ranges of queue spikes are excellent guides for examining log files.
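
You can also see the same queue counts from the shell at a single point in time (globalLock fields as reported by serverStatus):

// Operations currently queued behind a lock, plus clients actively reading/writing
var gl = db.serverStatus().globalLock;
printjson(gl.currentQueue);    // { total: ..., readers: ..., writers: ... }
printjson(gl.activeClients);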

Published at DZone with permission of Eric Sedor, author and DZone MVB. (source)
