
How to Clone Wikipedia and Index it with Solr

10.23.2011
It took six weeks, but Fred Zimmerman, who blogs at Nimblebooks.com, has just completed a very cool use case for Solr indexing. He cloned all of Wikipedia and then indexed it with Solr:

1.  "Hardware. I found out the hard way that 32-bit Ubuntu machines with 613 MB RAM (Amazon’s ECS “micro” instances) were not big enough—they created time out errors that disappeared when I upgraded to 1.7GB / single cores. You will also need at least 200 GB disk space, 300 is a safe figure."

2.  "Software.  You will need MediaWiki 1.17 or greater, several extensions (listed in this good page by Metachronistic), either mwimport or http://www.mediawiki.org/wiki/Manual:MWDumper, mySQL, and Apache Solr 3.4. Install the necessary MediaWiki extensions now, it will make it easier later on verify that your database import was successful."

3. "Data.  Get the latest Wikipedia dump from http://en.wikipedia.org/wiki/Wikipedia:Database_download#English-language_Wikipedia.  You probably want the pages-articles file which is ~ 8 GB compressed and ~ 33 GB uncompressed."

4. ...

    -- Nimblebooks.com
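Step 2's advice about verifying the database import is worth taking seriously with a dump this size. Here is a minimal sketch of one way to spot-check it from Python, assuming the pymysql package is available, a wiki database named wikidb with no table prefix, and placeholder credentials and sample title; none of these values come from Zimmerman's post.

```python
# Spot-check a MediaWiki database after importing the Wikipedia dump.
# Assumptions: pymysql is installed, the database is named "wikidb",
# the tables use no prefix, and the credentials below are placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="wikiuser",
                       password="secret", db="wikidb")
try:
    with conn.cursor() as cur:
        # Total rows in MediaWiki's page table; a full pages-articles
        # import should leave this in the millions.
        cur.execute("SELECT COUNT(*) FROM page")
        print("page rows:", cur.fetchone()[0])

        # Articles live in namespace 0; titles are stored with underscores.
        cur.execute(
            "SELECT page_id, page_len FROM page "
            "WHERE page_namespace = 0 AND page_title = %s",
            ("Albert_Einstein",))  # hypothetical sample title
        row = cur.fetchone()
        print("sample article:", row if row else "not found - import incomplete?")
finally:
    conn.close()
```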
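And once the MediaWiki content has been indexed, a quick query confirms that documents actually made it into Solr. The sketch below assumes a default single-core Solr 3.x install on localhost:8983 and a title field in the schema; the field name and query are illustrative, so adjust them to whatever schema the indexing setup actually uses.

```python
# Spot-check the Solr index with a simple query.
# Assumptions: a default single-core Solr 3.x install at localhost:8983
# and a "title" field in the schema -- adjust both to your setup.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "title:Einstein",   # illustrative query against an assumed field
    "rows": 5,
    "wt": "json",            # ask Solr for a JSON response
})
url = "http://localhost:8983/solr/select?" + params

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print("matches:", data["response"]["numFound"])
for doc in data["response"]["docs"]:
    print(" -", doc.get("title"))
```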


I think this is a great real-world tutorial that could help any developer get familiar with Solr, the open source search platform, or just sharpen their skills. A good read.

Source: http://www.nimblebooks.com/wordpress/2011/10/how-to-clone-wikipedia-and-index-it-with-solr/


