MAPREDUCE ppt big data computing fall 2014 indranil gupta.ppt


About This Presentation

**MapReduce: A Comprehensive Overview**

MapReduce is a programming model and associated implementation for processing and generating large datasets with a parallel, distributed algorithm on a cluster. It was pioneered by Google and has become a cornerstone of big data processing due to its scalabil...


Slide Content

CS 425 / ECE 428
Distributed Systems
Fall 2014
Indranil Gupta (Indy)
Lecture 3: Mapreduce and Hadoop
All slides © IG

What is MapReduce?
•Terms are borrowed from Functional Languages (e.g., Lisp)
Sum of squares:
•(map square '(1 2 3 4))
–Output: (1 4 9 16)
[processes each record sequentially and independently]
•(reduce + '(1 4 9 16))
–(+ 16 (+ 9 (+ 4 1)))
–Output: 30
[processes set of all records in batches]
•Let's consider a sample application: Wordcount
–You are given a huge dataset (e.g., a Wikipedia dump or all of Shakespeare's works) and asked to list the count for each of the words in each of the documents therein
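The same two steps can be written directly in Java (the language of the Hadoop code later in this deck); this is only a minimal sketch of the functional idea, not MapReduce/Hadoop code, and the class name SumOfSquares is just illustrative:

import java.util.List;
import java.util.stream.Collectors;

public class SumOfSquares {
    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3, 4);

        // "map": process each record independently -> (1 4 9 16)
        List<Integer> squares = input.stream()
                                     .map(x -> x * x)
                                     .collect(Collectors.toList());

        // "reduce": combine all records with + -> 30
        int sum = squares.stream().reduce(0, Integer::sum);

        System.out.println(squares + " -> " + sum);   // [1, 4, 9, 16] -> 30
    }
}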

Map
•Process individual records to generate intermediate key/value pairs.
Input <filename, file text>:
  "Welcome Everyone"
  "Hello Everyone"
Intermediate (Key, Value) pairs:
  (Welcome, 1)
  (Everyone, 1)
  (Hello, 1)
  (Everyone, 1)

Map
•Parallelly process individual records to generate intermediate key/value pairs.
Input <filename, file text>:
  "Welcome Everyone"   (MAP TASK 1)
  "Hello Everyone"     (MAP TASK 2)
Intermediate (Key, Value) pairs:
  (Welcome, 1), (Everyone, 1)   from MAP TASK 1
  (Hello, 1), (Everyone, 1)     from MAP TASK 2

Map
•Parallelly process a large number of individual records to generate intermediate key/value pairs.
Input <filename, file text>, split across MAP TASKS:
  "Welcome Everyone"
  "Hello Everyone"
  "Why are you here"
  "I am also here"
  "They are also here"
  "Yes, it's THEM!"
  "The same people we were thinking of"
  ...
Intermediate (Key, Value) pairs:
  (Welcome, 1)
  (Everyone, 1)
  (Hello, 1)
  (Everyone, 1)
  (Why, 1)
  (Are, 1)
  (You, 1)
  (Here, 1)
  ...

Reduce
•Reduce processes and merges all intermediate values associated per key
Intermediate (Key, Value) pairs:
  (Welcome, 1)
  (Everyone, 1)
  (Hello, 1)
  (Everyone, 1)
Output (Key, Value) pairs:
  (Everyone, 2)
  (Hello, 1)
  (Welcome, 1)

Reduce
•Each key assigned to one Reduce
•Parallelly processes and merges all intermediate values by partitioning keys
•Popular: Hash partitioning, i.e., key is assigned to reduce # = hash(key) % number of reduce servers (a small sketch follows below)
Intermediate (Key, Value) pairs:
  (Welcome, 1)
  (Everyone, 1)
  (Hello, 1)
  (Everyone, 1)
Output (Key, Value) pairs, split across REDUCE TASK 1 and REDUCE TASK 2:
  (Everyone, 2)
  (Hello, 1)
  (Welcome, 1)
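For concreteness, a minimal hash-partitioning sketch in Java; the class and method names are illustrative, but the mask-then-modulo pattern matches what Hadoop's default HashPartitioner does:

// Hash partitioning: assign each key to reduce task hash(key) % R.
// Masking with Integer.MAX_VALUE keeps the result non-negative.
public class SimpleHashPartitioner {
    static int choosePartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // Every occurrence of the same key lands on the same reduce task.
        for (String k : new String[] {"Welcome", "Everyone", "Hello"}) {
            System.out.println(k + " -> reduce task " + choosePartition(k, 2));
        }
    }
}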

Hadoop Code - Map
public static class MapClass extends MapReduceBase implements
    Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();
  public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, one);
    }
  }
} // Source: http://developer.yahoo.com/hadoop/tutorial/module4.html#wordcount

Hadoop Code - Reduce
public static class ReduceClass extends MapReduceBase implements
    Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
} // Source: http://developer.yahoo.com/hadoop/tutorial/module4.html#wordcount

Hadoop Code - Driver
// Tells Hadoop how to run your Map-Reduce job
public void run(String inputPath, String outputPath)
    throws Exception {
  // The job. WordCount contains MapClass and ReduceClass.
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("mywordcount");
  // The keys are words (strings)
  conf.setOutputKeyClass(Text.class);
  // The values are counts (ints)
  conf.setOutputValueClass(IntWritable.class);
  conf.setMapperClass(MapClass.class);
  conf.setReducerClass(ReduceClass.class);
  FileInputFormat.addInputPath(conf, new Path(inputPath));
  FileOutputFormat.setOutputPath(conf, new Path(outputPath));
  JobClient.runJob(conf);
} // Source: http://developer.yahoo.com/hadoop/tutorial/module4.html#wordcount
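For completeness, the three snippets above assume they live in a single WordCount class built against the old org.apache.hadoop.mapred API; a minimal sketch of the surrounding scaffolding (an assumption for illustration, not shown on the slides) is:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {
  // MapClass, ReduceClass, and run() from the previous three slides go here.

  public static void main(String[] args) throws Exception {
    // args[0] = input directory in HDFS, args[1] = output directory in HDFS
    new WordCount().run(args[0], args[1]);
  }
}

The compiled class is then typically packaged into a jar and submitted to the cluster with the hadoop jar command.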

Some Applications of MapReduce
Distributed Grep:
–Input: large set of files
–Output: lines that match pattern
–Map – Emits a line if it matches the supplied pattern
–Reduce – Copies the intermediate data to output
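A possible sketch in the same old-API style as the WordCount code above; GrepMap, GrepReduce, and the pattern string are illustrative, and the imports mirror the WordCount class plus org.apache.hadoop.io.NullWritable:

// Map: emit a line only if it matches the supplied pattern.
public static class GrepMap extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, NullWritable> {
  private static final String PATTERN = "error";   // illustrative pattern
  public void map(LongWritable key, Text value,
      OutputCollector<Text, NullWritable> output, Reporter reporter)
      throws IOException {
    if (value.toString().contains(PATTERN)) {
      output.collect(value, NullWritable.get());
    }
  }
}

// Reduce: identity; just copies each matching line to the output.
public static class GrepReduce extends MapReduceBase
    implements Reducer<Text, NullWritable, Text, NullWritable> {
  public void reduce(Text key, Iterator<NullWritable> values,
      OutputCollector<Text, NullWritable> output, Reporter reporter)
      throws IOException {
    output.collect(key, NullWritable.get());
  }
}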

Some Applications of MapReduce (2)
Reverse Web-Link Graph
–Input: Web graph: tuples (a, b) where page a links to page b
–Output: For each page, list of pages that link to it
–Map – processes the web graph and, for each input <source, target>, outputs <target, source>
–Reduce – emits <target, list(source)>
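A possible old-API sketch, assuming the web graph is already parsed into <source, target> Text pairs (e.g., via KeyValueTextInputFormat); the class names are illustrative:

// Map: for each link (source -> target), emit (target, source).
public static class ReverseLinkMap extends MapReduceBase
    implements Mapper<Text, Text, Text, Text> {
  public void map(Text source, Text target,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    output.collect(target, source);
  }
}

// Reduce: emit (target, comma-separated list of sources that link to it).
public static class ReverseLinkReduce extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {
  public void reduce(Text target, Iterator<Text> sources,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    StringBuilder list = new StringBuilder();
    while (sources.hasNext()) {
      if (list.length() > 0) list.append(",");
      list.append(sources.next().toString());
    }
    output.collect(target, new Text(list.toString()));
  }
}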

Some Applications of MapReduce (3)
Count of URL access frequency
–Input: Log of accessed URLs, e.g., from proxy server
–Output: For each URL, % of total accesses for that URL
–Map – Processes web log and outputs <URL, 1>
–Multiple Reducers – Emits <URL, URL_count>
(So far, like Wordcount. But still need %)
–Chain another MapReduce job after above one
–Map – Processes <URL, URL_count> and outputs <1, (<URL, URL_count>)>
–1 Reducer – Sums up URL_count's to calculate overall_count.
Emits multiple <URL, URL_count/overall_count>
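A possible sketch of the second job's single reducer; it assumes the chained Map stage emits the constant key 1 with each value encoded as "URL<TAB>count" (this encoding, the class name, and the extra imports java.util.ArrayList/List and org.apache.hadoop.io.FloatWritable are assumptions for illustration):

// Second-stage Reduce (one reducer): sees every <URL, URL_count> pair under key 1,
// sums the counts into overall_count, then emits <URL, URL_count/overall_count>.
// Buffering everything in memory is fine for a sketch, not for huge inputs.
public static class FrequencyReduce extends MapReduceBase
    implements Reducer<IntWritable, Text, Text, FloatWritable> {
  public void reduce(IntWritable key, Iterator<Text> values,
      OutputCollector<Text, FloatWritable> output, Reporter reporter)
      throws IOException {
    List<String> urls = new ArrayList<String>();
    List<Long> counts = new ArrayList<Long>();
    long overallCount = 0;
    while (values.hasNext()) {
      String[] parts = values.next().toString().split("\t");
      urls.add(parts[0]);
      counts.add(Long.parseLong(parts[1]));
      overallCount += Long.parseLong(parts[1]);
    }
    for (int i = 0; i < urls.size(); i++) {
      output.collect(new Text(urls.get(i)),
          new FloatWritable(counts.get(i) / (float) overallCount));
    }
  }
}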

Some Applications of MapReduce (4)
Sort
–Input: Series of (key, value) pairs
–Output: Sorted <value>s
–Map – <key, value> → <value, _> (identity)
–Reducer – <key, value> → <key, value> (identity)
–Partitioning function – partition keys across reducers based on ranges (can't use hashing!); a sketch follows below
•Take data distribution into account to balance reducer tasks
(Each Map task's output is sorted, e.g., via quicksort; each Reduce task's input is sorted, e.g., via mergesort)
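A minimal range-partitioner sketch in the old-API style; the cut points are hard-coded for illustration, whereas a real sort job would derive them by sampling the input (roughly what Hadoop's TotalOrderPartitioner does):

// Range partitioning: keys go to reducers by value range, so concatenating the
// reducers' sorted outputs yields one globally sorted result.
public static class RangePartitioner implements Partitioner<Text, Text> {
  // Illustrative cut points giving 3 ranges: [..h), [h..p), [p..]
  private static final String[] CUTS = {"h", "p"};

  public void configure(JobConf job) { }

  public int getPartition(Text key, Text value, int numPartitions) {
    String k = key.toString();
    int p = 0;
    while (p < CUTS.length && k.compareTo(CUTS[p]) >= 0) {
      p++;
    }
    return Math.min(p, numPartitions - 1);
  }
}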

Programming MapReduce
Externally: For user
1.Write a Map program (short), write a Reduce program (short)
2.Specify number of Maps and Reduces (parallelism level)
3.Submit job; wait for result
4.Need to know very little about parallel/distributed programming!
Internally: For the Paradigm and Scheduler
1.Parallelize Map
2.Transfer data from Map to Reduce
3.Parallelize Reduce
4.Implement Storage for Map input, Map output, Reduce input, and Reduce output
(Ensure that no Reduce starts before all Maps are finished. That is, ensure the barrier between the Map phase and Reduce phase)

Inside MapReduce
For the cloud:
1.Parallelize Map: easy! Each map task is independent of the others!
2.Transfer data from Map to Reduce:
• All Map output records with same key assigned to same Reduce task
• Use partitioning function, e.g., hash(key) % number of reducers
3.Parallelize Reduce: easy! Each reduce task is independent of the others!
4.Implement Storage for Map input, Map output, Reduce input, and Reduce output
• Map input: from distributed file system
• Map output: to local disk (at Map node); uses local file system
• Reduce input: from (multiple) remote disks; uses local file systems
• Reduce output: to distributed file system
local file system = Linux FS, etc.
distributed file system = GFS (Google File System), HDFS (Hadoop Distributed File System)

[Figure: Input blocks (1-7) from the DFS are assigned by the Resource Manager to Map tasks (I, II, III) running on servers; each Map task writes its output to local disk, Reduce tasks read it remotely (local write, remote read), and the Reduce tasks' output files (A, B, C) are written into the DFS.]

The YARN Scheduler
•Used in Hadoop 2.x+
•YARN = Yet Another Resource Negotiator
•Treats each server as a collection of containers
–Container = fixed CPU + fixed memory
•Has 3 main components
–Global Resource Manager (RM)
•Scheduling
–Per-server Node Manager (NM)
•Daemon and server-specific functions
–Per-application (job) Application Master (AM)
•Container negotiation with RM and NMs
•Detecting task failures of that job

YARN: How a job gets a container
In this figure: 2 servers (A, B), each running a Node Manager (Node Manager A, Node Manager B); 2 jobs (1, 2), with Application Master 1 on Node A, and Application Master 2 plus a Task (App2) on Node B. The Resource Manager runs the Capacity Scheduler.
Steps shown in the figure:
1. Need container
2. Container Completed
3. Container on Node B
4. Start task, please!

Fault Tolerance
•Server Failure
–NM heartbeats to RM
•If server fails, RM lets all affected AMs know, and AMs take
action
–NM keeps track of each task running at its server
•If task fails while in-progress, mark the task as idle and restart it
–AM heartbeats to RM
•On failure, RM restarts AM, which then syncs up with its
running tasks
•RM Failure
–Use old checkpoints and bring up secondary RM
•Heartbeats also used to piggyback container requests
–Avoids extra messages

Slow Servers
Slow tasks are called Stragglers
•The slowest task slows the entire job down (why?)
•Due to Bad Disk, Network Bandwidth, CPU, or Memory
•Keep track of “progress” of each task (% done)
•Perform proactive backup (replicated) execution of straggler
task: task considered done when first replica complete. Called
Speculative Execution.
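In the old JobConf API used in the code slides above, speculative execution can be toggled per job; a brief sketch, reusing the conf object from the WordCount driver:

// Enable speculative (backup) execution of straggler map and reduce tasks.
conf.setMapSpeculativeExecution(true);
conf.setReduceSpeculativeExecution(true);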

Locality
•Locality
–Since cloud has hierarchical topology (e.g., racks)
–GFS/HDFS stores 3 replicas of each chunk (e.g., 64 MB in size)
•Maybe on different racks, e.g., 2 on a rack, 1 on a different rack
–Mapreduce attempts to schedule a map task on
•a machine that contains a replica of the corresponding input data, or failing that,
•on the same rack as a machine containing the input, or failing that,
•anywhere

Mapreduce: Summary
•Mapreduce uses parallelization + aggregation to
schedule applications across clusters
•Need to deal with failure
•Plenty of ongoing research work in scheduling and
fault-tolerance for Mapreduce and Hadoop

Announcements
•HW1 released today, due Sep 18th
•Start early!
•Please fill out Student Survey (course webpage).