
Dharshan is the founder of MongoDirector.com. MongoDirector is a MongoDB management solution for public and private clouds that manages the entire lifecycle of MongoDB servers, including provisioning, scaling, high availability, disaster recovery, upgrades, patching, monitoring, and backup & restore. The supported cloud platforms are AWS, Joyent, DigitalOcean & CloudStack.

When to use GridFS on MongoDB

02.27.2014

GridFS is a simple file-system abstraction on top of MongoDB. If you are familiar with Amazon S3, GridFS offers a very similar abstraction. Now why does a document-oriented database like MongoDB provide a file-layer abstraction? It turns out there are some very good reasons.

1. Storing user generated file content

A large number of web applications allow users to upload files. Historically, when working with relational databases, these user-generated files are stored on the file system, separate from the database. This creates a number of problems: how do you replicate the files to all the servers that need them? How do you delete all the copies when a file is deleted? How do you back the files up for safety and disaster recovery? GridFS solves these problems by storing the files alongside the database, so you can leverage your database backup to back up your files. Also, thanks to MongoDB replication, a copy of your files is stored in each replica. Deleting a file is as easy as deleting an object in the database.

2. Accessing portions of file content

When a file is uploaded to GridFS, it is split into chunks of 256KB, which are stored separately. So when you need to read only a certain range of bytes of the file, only the corresponding chunks are brought into memory, not the whole file. This is extremely useful when dealing with large media content that needs to be selectively read or edited.
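The chunking and range-read behavior can be sketched locally in plain Python. This is an illustration of the idea, not the driver's actual implementation; the helper names and the in-memory list of chunks are assumptions for the sketch:

```python
CHUNK_SIZE = 256 * 1024  # 262144 bytes, the chunk size shown in the examples below

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a byte string into fixed-size chunks, as GridFS does on upload."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def read_range(chunks, start: int, length: int, chunk_size: int = CHUNK_SIZE):
    """Read a byte range by touching only the chunks that cover it."""
    first = start // chunk_size
    last = (start + length - 1) // chunk_size
    window = b"".join(chunks[first:last + 1])   # only these chunks are loaded
    offset = start - first * chunk_size
    return window[offset:offset + length]

# A file the same size as the conn.log example further down:
data = bytes(1644981)
chunks = split_into_chunks(data)
print(len(chunks))  # 7 chunks: ceil(1644981 / 262144)
```

Reading, say, 1KB from the middle of that file touches a single 256KB chunk rather than the full 1.6MB.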

3. Storing documents greater than 16MB in MongoDB

By default, MongoDB document size is capped at 16MB. If you have documents that are greater than 16MB, you can store them using GridFS.
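A quick back-of-the-envelope calculation shows how a file well over the 16MB cap maps onto ordinary-sized chunk documents (the 100MB figure is just an example):

```python
import math

chunk_size = 262144                 # 256KB GridFS chunk size
file_size = 100 * 1024 * 1024      # a 100MB file, well over the 16MB document cap
num_chunks = math.ceil(file_size / chunk_size)
print(num_chunks)  # 400 chunk documents, each comfortably under 16MB
```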

4. Overcoming file system limitations

If you are storing a large number of files, you will need to consider file system limitations like the maximum number of files per directory. With GridFS you don't need to worry about file system limits. Also, with GridFS and MongoDB sharding you can distribute your files across different servers without significantly increasing operational complexity.

Underneath the covers

GridFS uses two collections to store the data:

> show collections;
fs.chunks
fs.files
system.indexes
>

The fs.files collection contains metadata about the files, and the fs.chunks collection stores the actual 256KB chunks. If you have a sharded collection, the chunks are distributed across different servers, and you might get better performance than from a filesystem!
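The driver-computed fields in that metadata document (md5, length) can be reproduced with a few lines of plain Python. This is a local sketch mirroring the findOne() output below; _id and uploadDate are generated at insert time and omitted here:

```python
import hashlib

def file_metadata(data: bytes, filename: str, chunk_size: int = 262144):
    """Build an fs.files-style metadata document for a byte string."""
    return {
        "filename": filename,
        "chunkSize": chunk_size,
        "md5": hashlib.md5(data).hexdigest(),  # checksum of the whole file
        "length": len(data),                   # total size in bytes
    }

meta = file_metadata(b"hello gridfs", "./conn.log")
print(meta["length"])  # 12
```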

> db.fs.files.findOne();
{
    "_id" : ObjectId("530cf1bf96038f5cb6df5f39"),
    "filename" : "./conn.log",
    "chunkSize" : 262144,
    "uploadDate" : ISODate("2014-02-25T19:40:47.321Z"),
    "md5" : "6515e95f8bb161f6435b130a0e587ccd",
    "length" : 1644981
}
>

MongoDB also creates a compound index on files_id and the chunk number (n) to help quickly access the chunks:

> db.fs.chunks.getIndexes();
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "files.fs.chunks",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "files_id" : 1,
            "n" : 1
        },
        "ns" : "files.fs.chunks",
        "name" : "files_id_1_n_1"
    }
]
>
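A driver reads a file back by querying fs.chunks on files_id, sorting by n, and concatenating the data fields; that is the query shape the files_id_1_n_1 index above serves. A minimal local sketch over plain dicts (not real driver code):

```python
def reassemble(chunk_docs):
    """Reconstruct a file's bytes from fs.chunks-style documents,
    ordering by the chunk number n."""
    ordered = sorted(chunk_docs, key=lambda doc: doc["n"])
    return b"".join(doc["data"] for doc in ordered)

# Chunks may arrive in any order; n restores the original sequence.
docs = [
    {"files_id": 1, "n": 1, "data": b"world"},
    {"files_id": 1, "n": 0, "data": b"hello "},
]
print(reassemble(docs))  # b'hello world'
```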

Examples

MongoDB has a built-in utility called "mongofiles" to help exercise the GridFS scenarios from the command line. Please refer to your driver documentation on how to use GridFS with your driver.

Put
# mongofiles -h <hostname> -u <username> -p <password> --db files put /conn.log
connected to: 127.0.0.1
added file: { _id: ObjectId('530cf1009710ca8fd47d7d5d'), filename: "./conn.log", chunkSize: 262144, uploadDate: new Date(1393357057021), md5: "6515e95f8bb161f6435b130a0e587ccd", length: 1644981 }
done!

Get
# mongofiles -h <hostname> -u <username> -p <password> --db files get /conn.log
connected to: 127.0.0.1
done write to: ./conn.log

List
# mongofiles -h <hostname> -u <username> -p <password> --db files list
connected to: 127.0.0.1
/conn.log 1644981

Delete
# mongofiles -h <hostname> -u <username> -p <password> --db files delete /conn.log
connected to: 127.0.0.1
done!

Modules

If you would like to serve the file data stored in MongoDB directly from your web server or file system, there are several GridFS plugin modules available.

Limitations
  • Working set: Serving files along with your database content can significantly churn your memory working set. If you would rather not disturb your working set, it may be best to serve your files from a different MongoDB server.
  • Performance: File-serving performance will be slower than serving files natively from your web server and file system. However, the added management benefits might be worth the slowdown.
  • Atomic update: GridFS does not provide a way to atomically update a file. If this scenario is necessary, you will need to maintain multiple versions of your files and pick the right version.
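The versioning workaround in the last point amounts to keeping several same-named entries in fs.files and picking the newest by uploadDate. A local illustration over fs.files-style dicts with hypothetical data (real GridFS tooling resolves versions server-side):

```python
from datetime import datetime

def latest_version(files_docs, filename):
    """Return the newest revision of a file, by uploadDate, or None."""
    revisions = [d for d in files_docs if d["filename"] == filename]
    return max(revisions, key=lambda d: d["uploadDate"]) if revisions else None

# Two revisions of the same file name, as would accumulate under this scheme:
history = [
    {"filename": "./conn.log", "uploadDate": datetime(2014, 2, 25, 19, 40), "length": 1644981},
    {"filename": "./conn.log", "uploadDate": datetime(2014, 2, 26, 9, 0), "length": 1650000},
]
print(latest_version(history, "./conn.log")["uploadDate"])  # 2014-02-26 09:00:00
```

Older revisions can then be pruned once readers have moved to the newest one.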
Published at DZone with permission of its author, Dharshan Rangegowda.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)