Mitch Pronschinske is the Lead Research Analyst at DZone. Researching and compiling content for DZone's research guides is his primary job. He likes to make his own ringtones, watches cartoons/anime, enjoys card and board games, and plays the accordion. Mitch is a DZone Zone Leader and has posted 2577 posts at DZone. You can read more from him at his website.

How to Clone Wikipedia and Index it with Solr

It took six weeks, but blogger Fred Zimmerman has just completed a very cool use case for Solr indexing. He cloned all of Wikipedia and then indexed it with Solr:

1.  "Hardware. I found out the hard way that 32-bit Ubuntu machines with 613 MB RAM (Amazon's EC2 'micro' instances) were not big enough; they created timeout errors that disappeared when I upgraded to 1.7 GB single-core instances. You will also need at least 200 GB of disk space; 300 GB is a safe figure."

2.  "Software. You will need MediaWiki 1.17 or greater, several extensions (listed in this good page by Metachronistic), either mwimport or, MySQL, and Apache Solr 3.4. Install the necessary MediaWiki extensions now; it will make it easier later on to verify that your database import was successful."

3.  "Data. Get the latest Wikipedia dump from  You probably want the pages-articles file, which is ~8 GB compressed and ~33 GB uncompressed."
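Before kicking off a multi-hour import, it can be worth checking your machine against the sizing figures in step 1. The following Python sketch is my own illustration, not part of Zimmerman's write-up; the thresholds come from his numbers above, and the RAM check only works on platforms (such as Linux) that expose `SC_PHYS_PAGES`.

```python
import os
import shutil

# Thresholds taken from the write-up above: 200+ GB of free disk, ~1.7 GB RAM.
MIN_DISK_GB = 200
MIN_RAM_GB = 1.7

def check_resources(path="/"):
    """Return free-disk and RAM figures plus pass/fail flags."""
    free_gb = shutil.disk_usage(path).free / 1e9
    try:
        # SC_PHYS_PAGES is available on Linux but not on every platform.
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    except (ValueError, OSError):
        ram_gb = None
    return {
        "free_disk_gb": round(free_gb, 1),
        "disk_ok": free_gb >= MIN_DISK_GB,
        "ram_gb": None if ram_gb is None else round(ram_gb, 1),
        "ram_ok": None if ram_gb is None else ram_gb >= MIN_RAM_GB,
    }

if __name__ == "__main__":
    print(check_resources())
```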


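A cheap way to sanity-check a pages-articles dump before the real import is to stream it and pull out page titles; Python's `ElementTree.iterparse` can do this without ever holding the ~33 GB uncompressed file in memory. This is a sketch of my own, not a step from the tutorial:

```python
import xml.etree.ElementTree as ET

def iter_page_titles(source):
    """Stream page titles out of a MediaWiki pages-articles XML dump.

    Uses iterparse so the file is processed incrementally rather than
    loaded whole; works on a path or an open file-like object.
    """
    for _event, elem in ET.iterparse(source):
        tag = elem.tag.rsplit("}", 1)[-1]  # dump tags carry an xmlns prefix
        if tag == "title":
            yield elem.text
        elif tag == "page":
            elem.clear()  # release finished pages so memory stays flat
```

For example, `sum(1 for _ in iter_page_titles("pages-articles.xml"))` gives a page count you can later compare against the rows that actually landed in MediaWiki's database.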
I think this is a great real-world tutorial that could help any developer get familiar with Solr, the open-source search platform, or sharpen skills they already have. A good read.
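To give a flavor of the indexing side, here is a minimal sketch of building a Solr 3.x-style XML update message from parsed pages. The field names (`id`, `title`, `text`) are placeholders I chose for illustration; they must match whatever is declared in your `schema.xml`, and nothing here comes from the original write-up:

```python
import xml.etree.ElementTree as ET

def solr_add_xml(docs):
    """Build a Solr XML <add> message from an iterable of field dicts.

    Field names are assumptions for this sketch; align them with
    the fields defined in your own schema.xml.
    """
    add = ET.Element("add")
    for doc in docs:
        doc_el = ET.SubElement(add, "doc")
        for name, value in doc.items():
            field = ET.SubElement(doc_el, "field", name=name)
            field.text = str(value)
    return ET.tostring(add, encoding="unicode")
```

The resulting string would typically be POSTed with `Content-Type: text/xml` to the update handler of a running Solr instance (in the stock Solr example setup, `http://localhost:8983/solr/update`), followed by a `<commit/>` message to make the documents searchable.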