
Running Hadoop MapReduce Application from Eclipse Kepler

By Hardik Pandya · Feb. 21, 14 · 144.3K Views


It's very important to learn Hadoop through hands-on practice.

One of the early hurdles is writing your first MapReduce application and debugging it in your favorite IDE, Eclipse. Do we need any Eclipse plugins? No, we do not; Hadoop development works fine without a MapReduce plugin.

This tutorial will show you how to set up Eclipse and run your MapReduce project and job right from your IDE. Before you read further, you should have a Hadoop single-node cluster set up and running on your machine. (Launched this way, without the cluster's configuration files on the classpath, the job will typically run in Hadoop's local standalone mode, which is convenient for debugging.)

You can download the Eclipse project from GitHub.

Use Case:

We will explore weather data to find the maximum temperature per year, following Chapter 2 of Tom White's book Hadoop: The Definitive Guide (3rd edition), and run the job using ToolRunner.

I am using Linux Mint 15 on a VirtualBox VM instance.

In addition, you should have:

  1. A Hadoop (MRv1; I am using 1.2.1) single-node cluster installed and running. If you have not done so, I would strongly recommend you do it from here.
  2. The Eclipse IDE. As of this writing, the latest version of Eclipse is Kepler.

1. Create a New Java Project

(Screenshot: New Java Project wizard)

2. Add Dependency JARs

Right-click the project, open Properties, and select Java Build Path.

Add all JARs from $HADOOP_HOME/lib and from $HADOOP_HOME itself (where the Hadoop core and tools JARs, hadoop-core-1.2.1.jar and hadoop-tools-1.2.1.jar, live).

(Screenshots: Java Build Path with the Hadoop JARs added)

3. Create the Mapper

package com.letsdobigdata;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper extends
    Mapper<LongWritable, Text, Text, IntWritable> {

  private static final int MISSING = 9999;

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {

    String line = value.toString();
    String year = line.substring(15, 19);
    int airTemperature;
    if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
      airTemperature = Integer.parseInt(line.substring(88, 92));
    } else {
      airTemperature = Integer.parseInt(line.substring(87, 92));
    }
    String quality = line.substring(92, 93);
    if (airTemperature != MISSING && quality.matches("[01459]")) {
      context.write(new Text(year), new IntWritable(airTemperature));
    }
  }
}
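
Since everything runs from Eclipse anyway, it is easy to unit-test the mapper before touching real data. Below is a minimal sketch using MRUnit and JUnit; this is my addition, not part of the original walkthrough, and it assumes you add the MRUnit and JUnit JARs to the build path (they do not ship in $HADOOP_HOME/lib). The synthetic record simply matches the offsets the mapper reads.

package com.letsdobigdata;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class MaxTemperatureMapperTest {

  @Test
  public void processesValidRecord() throws Exception {
    // Build a synthetic 93-character record matching what the mapper
    // expects: year at [15,19), sign at 87, temperature digits at
    // [88,92), quality flag at 92. Filler positions are zeros.
    StringBuilder record = new StringBuilder();
    for (int i = 0; i < 15; i++) record.append('0');
    record.append("1949");                        // year
    for (int i = 19; i < 87; i++) record.append('0');
    record.append("+0111");                       // sign + temperature (11.1 °C)
    record.append('1');                           // quality flag: good reading

    new MapDriver<LongWritable, Text, Text, IntWritable>()
        .withMapper(new MaxTemperatureMapper())
        .withInput(new LongWritable(0), new Text(record.toString()))
        .withOutput(new Text("1949"), new IntWritable(111))
        .runTest();
  }
}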

4. Create the Reducer

package com.letsdobigdata;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {

    int maxValue = Integer.MIN_VALUE;
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}
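
The reducer can be checked the same way. Here is a companion MRUnit sketch (again my addition, with the same classpath assumption as the mapper test):

package com.letsdobigdata;

import java.util.Arrays;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Test;

public class MaxTemperatureReducerTest {

  @Test
  public void returnsMaximumIntegerInValues() throws Exception {
    // Feed two readings for one year and expect the larger one back.
    new ReduceDriver<Text, IntWritable, Text, IntWritable>()
        .withReducer(new MaxTemperatureReducer())
        .withInput(new Text("1950"),
            Arrays.asList(new IntWritable(10), new IntWritable(5)))
        .withOutput(new Text("1950"), new IntWritable(10))
        .runTest();
  }
}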

5. Create the Driver for the MapReduce Job

The MapReduce job is executed via ToolRunner, a useful Hadoop utility class:

package com.letsdobigdata;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/* This class is responsible for running the MapReduce job. */
public class MaxTemperatureDriver extends Configured implements Tool {

  public int run(String[] args) throws Exception {

    if (args.length != 2) {
      System.err.println("Usage: MaxTemperatureDriver <input path> <output path>");
      System.exit(-1);
    }

    // Pass the Configuration that ToolRunner populated, so generic
    // options such as -D key=value actually reach the job.
    Job job = new Job(getConf());
    job.setJarByClass(MaxTemperatureDriver.class);
    job.setJobName("Max temperature");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Wait for the job and report success through the return value;
    // ToolRunner turns it into the process exit code in main().
    boolean success = job.waitForCompletion(true);
    return success ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    MaxTemperatureDriver driver = new MaxTemperatureDriver();
    int exitCode = ToolRunner.run(driver, args);
    System.exit(exitCode);
  }
}
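
One optional tweak to run(), my own suggestion rather than part of the original tutorial: because taking a maximum is associative and commutative, the reducer class can safely double as a combiner, which cuts down the data shuffled between the map and reduce phases. A single extra line alongside the other job.set* calls does it:

// Reuse the reducer as a combiner; safe here because max() is
// associative and commutative, so partial maxima combine correctly.
job.setCombinerClass(MaxTemperatureReducer.class);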

6. Supply Input and Output

We need to supply the input file that will be used during the map phase; the final output will be generated in the output directory by the reduce task. Edit the run configuration and supply the two command-line arguments, the input path and the output path (for example, sample.txt output). sample.txt resides in the project root. Your Project Explorer should contain the following:

(Screenshot: Project Explorer)

(Screenshot: run configuration with input and output arguments)

7. MapReduce Job Execution

(Screenshot: MapReduce job console output)

8. Final Output

If you managed to come this far: once the job is complete, it will create an output directory containing a _SUCCESS marker and a part-r-nnnnn file. Double-click the part file to view it in the Eclipse editor. We supplied 5 rows of weather data (downloaded from NCDC, the National Climatic Data Center) and wanted the maximum temperature for each year in the input file, so the output contains 2 rows, one per supplied year, with temperatures in tenths of a degree Celsius:

1949 111 (11.1 °C)
1950 22 (2.2 °C)

(Screenshot: job output in the Eclipse editor)

Make sure you delete the output directory before the next run of your application; otherwise Hadoop will fail with an error saying the directory already exists.
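
If you would rather have the driver clean up for you, here is a small sketch (my addition, not in the original article) that removes the output path at the start of run(); note that it silently discards the previous results:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Inside run(), before submitting the job:
Path outputPath = new Path(args[1]);
FileSystem fs = FileSystem.get(getConf());
if (fs.exists(outputPath)) {
  fs.delete(outputPath, true); // true = delete recursively
}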

Happy Hadooping!


Published at DZone with permission of Hardik Pandya, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.

