In-memory databases: Use a standard product, or roll your own?
Here are a few questions I would like to ask the community:
As memory gets cheaper, more and more application datasets can now be kept fully in memory, with only changes flushed to disk. No reads from disk (except at startup), only writes.
Some of the applications I think could benefit from keeping all data in memory, never reading from disk but only flushing changes to disk (perhaps asynchronously), are:
*) Multiplayer game server
*) Search engine live index
*) Live analysis systems
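To make the pattern concrete, here is a minimal sketch (in Python, with hypothetical names) of the setup described above: reads are pure in-memory lookups, while writes update memory immediately and are flushed asynchronously to an append-only log by a background thread.

```python
import json
import queue
import threading

class WriteBehindStore:
    """Toy key-value store illustrating the write-behind pattern:
    reads never touch disk; writes hit memory immediately and are
    appended to a log file by a background flusher thread."""

    def __init__(self, log_path):
        self._data = {}
        self._pending = queue.Ueue() if False else queue.Queue()  # pending changes to flush
        self._log_path = log_path
        flusher = threading.Thread(target=self._flush_loop, daemon=True)
        flusher.start()

    def get(self, key):
        # Pure in-memory read; no disk I/O.
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value          # visible to readers right away
        self._pending.put((key, value))  # flushed to disk later

    def _flush_loop(self):
        with open(self._log_path, "a") as log:
            while True:
                key, value = self._pending.get()
                log.write(json.dumps({"k": key, "v": value}) + "\n")
                log.flush()
                self._pending.task_done()

    def sync(self):
        # Block until every queued change has reached the log.
        self._pending.join()
```

At startup you would replay the log to rebuild the in-memory state; that replay is the only read from disk in the whole lifecycle.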
1) Have you had any experiences with this kind of setup? All data in memory for your application?
2) Did you use a standard database product?
3) If yes, which product did you use, and what were the benefits?
4) Did you end up just keeping the data as objects in memory, and create your own indexes etc. to search through it?
5) If yes, what are your experiences with that?
6) Does anyone have any speed comparisons?
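For question 4, a hand-rolled index over plain in-memory objects does not have to be elaborate. A sketch of one possible approach (all names here are hypothetical): a dict mapping an attribute value to the set of matching object ids, giving O(1) lookups instead of a full scan.

```python
from collections import defaultdict

class IndexedObjects:
    """Objects kept in memory with one hand-rolled secondary index:
    a dict from attribute value to the ids of matching objects."""

    def __init__(self, index_attr):
        self._objects = {}              # id -> object (a plain dict)
        self._index = defaultdict(set)  # attribute value -> {ids}
        self._attr = index_attr

    def add(self, obj_id, obj):
        old = self._objects.get(obj_id)
        if old is not None:
            # Keep the index consistent when an object is replaced.
            self._index[old[self._attr]].discard(obj_id)
        self._objects[obj_id] = obj
        self._index[obj[self._attr]].add(obj_id)

    def find(self, value):
        # Constant-time index lookup; no scan over all objects.
        return [self._objects[i] for i in self._index.get(value, ())]
```

For example, a game server might index players by zone: `players.find("forest")` returns every player object in that zone without touching the rest of the dataset.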
I remember working for a data warehouse once that exported part of its index to KDB, a lightning-fast in-memory database. Indices were rebuilt every night and remained unchanged for 24 hours. Searches were roughly 1000x faster than in MS SQL Server at the time (2001).
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)