
Want to Get Rid of Documents with Duplicate Content?

10.30.2012

Whether you’re combining data from two different sources, handling multiple purchases from the same customer, or dealing with the same data entered twice in a web form, everyone faces the problem of duplicate data at one point or another.

In this blog post, we'll look at using views in Couchbase Server 2.0 to find matching fields among documents and retain only the non-duplicate documents. For the sake of this example, assume each document has three common user-specified fields: first_name, last_name, and postal_code. Using the Ruby client for Couchbase Server and the Faker Ruby gem, you can build a simple data generator to load some sample duplicate data into Couchbase. To use Ruby as a programming language with Couchbase, download the Ruby SDK here.
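The generator itself might look something like the sketch below; it uses Couchbase.connect and set from the Ruby client together with Faker. The id#{i} key scheme and the duplication logic are assumptions, and the command-line options are simplified to hard-coded values:

require 'couchbase'
require 'faker'

connection = Couchbase.connect(:hostname => "127.0.0.1", :bucket => "default")

total = 1000
duplicate_rate = 5  # every 5th record repeats the previous person

person = nil
total.times do |i|
  # Generate a fresh person unless this record should duplicate the previous one
  unless person && (i % duplicate_rate).zero?
    person = {
      'first_name'  => Faker::Name.first_name,
      'last_name'   => Faker::Name.last_name,
      'postal_code' => Faker::Address.zip_code
    }
  end
  connection.set("id#{i}", person)
end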

Here is an execution sample:

$ ruby ./generate.rb --help
Usage: generate.rb [options]
   -h, --hostname HOSTNAME           Hostname to connect to (default: 127.0.0.1:8091)
   -u, --user USERNAME               Username to log with (default: none)
   -p, --passwd PASSWORD             Password to log with (default: none)
   -b, --bucket NAME                 Name of the bucket to connect to (default: default)
   -t, --total-records NUM           The total number of the records to generate (default: 10000)
   -d, --duplicate-rate NUM          Each NUM-th record will be duplicate (default: 30)
   -?, --help                        Show this message

$ ruby ./generate.rb -t 1000 -d 5
     1000 / 1000
Each document in Couchbase has a user-specified key, which is accessible as meta.id in the map function of the view. In Figure 1 below, multiple documents have been loaded into Couchbase Server using the data generator client above.

[Figure 1: sample documents, including duplicates, loaded into Couchbase Server]
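For instance, two documents that the view should flag as duplicates might look like this (the keys id19 and id20 and the field values are illustrative, chosen to match the example used later in this post):

id19 => { "first_name": "John", "last_name": "Smith", "postal_code": "02134" }
id20 => { "first_name": "John", "last_name": "Smith", "postal_code": "02134" }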
Step 1

Write a custom map function that emits the document ID (meta.id) of each document, keyed by a particular duplicate pattern (first_name, last_name, and postal_code in this case).
function (doc, meta) {
  // Emit a compound key built from the three fields that define a duplicate,
  // with the document ID as the value
  emit([doc.first_name + '-' + doc.last_name + '-' + doc.postal_code], meta.id);
}
The map function defines when two documents are duplicates. According to the map function defined above, two documents are duplicates when the first name, last name, and postal code all match. The ‘-’ separator prevents aliasing when the fields are concatenated: without it, for example, first name "Ann" with last name "aLee" would produce the same key as first name "Anna" with last name "Lee".

 

Step 2

The reduce function looks like this:

function (keys, values, rereduce) {
  if (rereduce) {
    // Combine the partial lists of document IDs into a single flat list
    var res = [];
    for (var i = 0; i < values.length; i++) {
      res = res.concat(values[i]);
    }
    return res;
  } else {
    // First pass: values is already the list of document IDs for this key
    return values;
  }
}

After grouping, if there is more than one meta.id value for a key, the reduce function concatenates them into a single list of meta.ids, each referring to a duplicate document.
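Before the cleaner can query the view, the map and reduce functions have to be stored in a design document on the server. Here is a minimal sketch using the Ruby client's save_design_doc; the design document name dedup and the view name duplicates are assumptions:

require 'couchbase'

connection = Couchbase.connect(:hostname => "127.0.0.1", :bucket => "default")

# Store the map and reduce functions shown above in a design document
connection.save_design_doc(<<-JSON)
{
  "_id": "_design/dedup",
  "views": {
    "duplicates": {
      "map": "function (doc, meta) { emit([doc.first_name + '-' + doc.last_name + '-' + doc.postal_code], meta.id); }",
      "reduce": "function (keys, values, rereduce) { if (rereduce) { var res = []; for (var i = 0; i < values.length; i++) { res = res.concat(values[i]); } return res; } else { return values; } }"
    }
  }
}
JSON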

Step 3

The core part of the data cleaner is written in Ruby:

require 'couchbase'

# options is assumed to be parsed from the command line
# (hostname, credentials, bucket, design document and view names)
connection = Couchbase.connect(options)
ddoc = connection.design_docs[options[:design_document]]
view = ddoc.send(options[:view])

connection.run do
  view.each(:group => true) do |doc|
    dup_num = doc.value.size
    if dup_num > 1
      # Keep the first document and delete the rest
      doc.value[1..-1].each { |id| connection.delete(id) }
      puts "left doc #{doc.value[0]}, removed #{dup_num - 1} duplicate(s)"
    end
  end
end
Connect to Couchbase Server and query the view. The value field is an array of meta.id values that correspond to duplicate documents (matching first name, last name, and postal code). If the array size is greater than 1, we delete all the documents except the one corresponding to the first meta.id.

If the number of meta.id values in the value array is greater than 1, there are duplicate documents for that key. As shown in Figure 1 above, id19 and id20 are duplicate documents.
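Queried with grouping enabled, the view row for that pair might look like this (the key and IDs are illustrative, following the example above):

{"key": ["John-Smith-02134"], "value": ["id19", "id20"]}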

The output of the data cleaner script shows, for each duplicated key, which document was kept and how many copies were removed.
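A run against the sample data might print something like this (document IDs are illustrative):

left doc id19, removed 1 duplicate(s)
left doc id42, removed 2 duplicate(s)

[Figure 2: the bucket after duplicate documents are removed]

Duplicate documents are now eliminated. Enjoy!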
Published at DZone with permission of its author, Baxter Denney. (source)
