Uses of Class
org.apache.hadoop.mapreduce.RecordReader

Packages that use RecordReader
org.apache.hadoop.mapreduce   
org.apache.hadoop.mapreduce.lib.db   
org.apache.hadoop.mapreduce.lib.input   
 

Uses of RecordReader in org.apache.hadoop.mapreduce
 

Methods in org.apache.hadoop.mapreduce that return RecordReader
abstract  RecordReader<K,V> InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for a given split.
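Every concrete InputFormat must implement this method; the framework calls it once per split and then calls initialize() on the returned reader. A minimal sketch (the class name SimpleLineInputFormat is hypothetical) that satisfies the contract by reusing the stock LineRecordReader:

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

/** Hypothetical InputFormat that reads each split line by line. */
public class SimpleLineInputFormat
    extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    // The framework calls initialize(split, context) on the returned
    // reader before the first call to nextKeyValue(), so the reader
    // can be constructed without any arguments here.
    return new LineRecordReader();
  }
}
```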
 

Constructors in org.apache.hadoop.mapreduce with parameters of type RecordReader
MapContext(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split)
           
Mapper.Context(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split)
           
 

Uses of RecordReader in org.apache.hadoop.mapreduce.lib.db
 

Subclasses of RecordReader in org.apache.hadoop.mapreduce.lib.db
 class DataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a SQL table, using data-driven WHERE clause splits.
 class DBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a SQL table.
 class MySQLDataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a MySQL table via DataDrivenDBRecordReader.
 class MySQLDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a MySQL table.
 class OracleDataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from an Oracle table via DataDrivenDBRecordReader.
 class OracleDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from an Oracle SQL table.
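These readers are normally created for you by DBInputFormat; a job only supplies the connection details and a DBWritable record class. A minimal sketch, assuming a MySQL database mydb with an employees(id, name) table (all connection strings and names here are hypothetical):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBJobSetup {

  /** Hypothetical record type mapping one row of the employees table. */
  public static class EmployeeRecord implements Writable, DBWritable {
    long id;
    String name;

    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1);
      name = rs.getString(2);
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
    public void readFields(DataInput in) throws IOException {
      id = in.readLong();
      name = in.readUTF();
    }
    public void write(DataOutput out) throws IOException {
      out.writeLong(id);
      out.writeUTF(name);
    }
  }

  public static Job configure(Configuration conf) throws IOException {
    // Connection settings that DBRecordReader reads back at runtime.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/mydb", "user", "password");
    Job job = new Job(conf);
    job.setInputFormatClass(DBInputFormat.class);
    // Table name, optional WHERE conditions, ORDER BY column, field names.
    DBInputFormat.setInput(job, EmployeeRecord.class,
        "employees", null, "id", "id", "name");
    return job;
  }
}
```

With this configuration the framework picks a plain DBRecordReader; the data-driven and vendor-specific subclasses above are substituted automatically when DataDrivenDBInputFormat or an Oracle/MySQL connection string is used.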
 

Methods in org.apache.hadoop.mapreduce.lib.db that return RecordReader
protected  RecordReader<LongWritable,T> DBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, Configuration conf)
           
protected  RecordReader<LongWritable,T> DataDrivenDBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, Configuration conf)
           
protected  RecordReader<LongWritable,T> OracleDataDrivenDBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, Configuration conf)
           
 RecordReader<LongWritable,T> DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for a given split.
 

Uses of RecordReader in org.apache.hadoop.mapreduce.lib.input
 

Subclasses of RecordReader in org.apache.hadoop.mapreduce.lib.input
 class CombineFileRecordReader<K,V>
          A generic RecordReader that can hand out different RecordReaders for each chunk in a CombineFileSplit.
 class DelegatingRecordReader<K,V>
          A delegating RecordReader, which forwards all calls to the underlying record reader specified in a TaggedInputSplit.
 class KeyValueLineRecordReader
          This class treats a line in the input as a key/value pair separated by a separator character.
 class LineRecordReader
          Treats the key as the byte offset in the file and the value as the line of text.
static class SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
          Read records from a SequenceFile as binary (raw) bytes.
 class SequenceFileAsTextRecordReader
          This class converts the input keys and values to their String forms by calling their toString() methods.
 class SequenceFileRecordReader<K,V>
          A RecordReader for SequenceFiles.
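Outside of a running job, any of these readers can also be driven by hand: initialize it with a split, then iterate with nextKeyValue(). A sketch using LineRecordReader (the method and variable names are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class ReaderDriver {

  /** Reads every line in one split and returns the line count. */
  public static long countLines(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    LineRecordReader reader = new LineRecordReader();
    reader.initialize(split, context);  // must precede nextKeyValue()
    long count = 0;
    try {
      while (reader.nextKeyValue()) {
        LongWritable offset = reader.getCurrentKey();  // byte offset of line
        Text line = reader.getCurrentValue();          // line contents
        count++;
      }
    } finally {
      reader.close();
    }
    return count;
  }
}
```

The same initialize/nextKeyValue/close sequence applies to every RecordReader subclass in this package; only the key and value types differ.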
 

Fields in org.apache.hadoop.mapreduce.lib.input declared as RecordReader
protected  RecordReader<K,V> CombineFileRecordReader.curReader
           
 

Fields in org.apache.hadoop.mapreduce.lib.input with type parameters of type RecordReader
protected  Class<? extends RecordReader<K,V>> CombineFileRecordReader.rrClass
           
protected  Constructor<? extends RecordReader<K,V>> CombineFileRecordReader.rrConstructor
           
 

Methods in org.apache.hadoop.mapreduce.lib.input that return RecordReader
 RecordReader<K,V> SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<LongWritable,Text> TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<K,V> SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for the given split.
 RecordReader<BytesWritable,BytesWritable> SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<LongWritable,Text> NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
           
 RecordReader<Text,Text> KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
           
abstract  RecordReader<K,V> CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          This is not implemented yet.
 RecordReader<K,V> DelegatingInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<Text,Text> SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 

Constructor parameters in org.apache.hadoop.mapreduce.lib.input with type arguments of type RecordReader
CombineFileRecordReader(CombineFileSplit split, TaskAttemptContext context, Class<? extends RecordReader<K,V>> rrClass)
          A generic RecordReader that can hand out different RecordReaders for each chunk in the CombineFileSplit.
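This constructor is typically invoked from a CombineFileInputFormat subclass, since CombineFileInputFormat.createRecordReader is abstract. A sketch, where CombinedLineInputFormat and ChunkReader are hypothetical names; the reader class passed in must expose a (CombineFileSplit, TaskAttemptContext, Integer) constructor, which CombineFileRecordReader invokes by reflection for each chunk:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class CombinedLineInputFormat
    extends CombineFileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) throws IOException {
    // One ChunkReader is created per chunk, with the chunk index
    // passed as the Integer constructor argument.
    return new CombineFileRecordReader<LongWritable, Text>(
        (CombineFileSplit) split, context, ChunkReader.class);
  }

  /** Hypothetical per-chunk reader: delegates one chunk to LineRecordReader. */
  public static class ChunkReader extends RecordReader<LongWritable, Text> {
    private final LineRecordReader delegate = new LineRecordReader();
    private final FileSplit chunk;

    public ChunkReader(CombineFileSplit split, TaskAttemptContext context,
                       Integer idx) throws IOException {
      // Carve the idx-th chunk out of the combined split.
      chunk = new FileSplit(split.getPath(idx), split.getOffset(idx),
                            split.getLength(idx), split.getLocations());
    }
    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
        throws IOException {
      delegate.initialize(chunk, context);  // read our chunk, not the whole split
    }
    @Override
    public boolean nextKeyValue() throws IOException {
      return delegate.nextKeyValue();
    }
    @Override
    public LongWritable getCurrentKey() { return delegate.getCurrentKey(); }
    @Override
    public Text getCurrentValue() { return delegate.getCurrentValue(); }
    @Override
    public float getProgress() throws IOException {
      return delegate.getProgress();
    }
    @Override
    public void close() throws IOException { delegate.close(); }
  }
}
```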
 



Copyright © 2009 The Apache Software Foundation