Hadoop

Hadoop Interview Questions – Part 6

By | January 10th, 2015|Hadoop|

51. What is Identity Mapper and Identity Reducer in MapReduce? Ans: ◦ org.apache.hadoop.mapred.lib.IdentityMapper: Implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the Mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default. ◦ org.apache.hadoop.mapred.lib.IdentityReducer: Performs no reduction, writing all input values directly to the output. If [...]
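
For context, a minimal old-API driver sketch that sets these identity classes explicitly (the class name IdentityJob and the command-line paths are hypothetical, not from the post; omitting the two set*Class calls would give the same pass-through behaviour, since these classes are the defaults):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class IdentityJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(IdentityJob.class);
        conf.setJobName("identity-passthrough");

        // Explicitly setting the identity classes; leaving these calls out
        // has the same effect, because they are the framework defaults.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);

        // With the default TextInputFormat, keys are byte offsets and
        // values are lines of text, passed through unchanged.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```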

Hadoop Interview Questions – Part 5

By | December 23rd, 2014|Hadoop|

41. What do you mean by Task Instance? Ans: Task instances are the actual MapReduce tasks that run on each slave node. The Task Tracker starts a separate JVM process to do the actual work (called a Task Instance); this ensures that a process failure does not take down the entire Task Tracker. Each [...]

Hadoop Interview Questions – Part 4

By | December 10th, 2014|Hadoop|

31. Explain the Reducer’s reduce phase? Ans: In this phase the reduce(MapOutKeyType, Iterable, Context) method is called for each pair in the grouped inputs. The output of the reduce task is typically written to the file system via Context.write(ReduceOutKeyType, ReduceOutValType). Applications can use the Context to report progress, set application-level status messages [...]
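
A short sketch of a new-API reducer following this pattern (SumReducer and the word-count-style key/value types are illustrative assumptions, not code from the post):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// reduce() is called once per key with the grouped values for that key;
// results are written back through the Context object.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        total.set(sum);
        // Output is written to the file system via Context.write.
        context.write(key, total);
        // The Context can also report progress or set a status message.
        context.setStatus("reduced key: " + key);
    }
}
```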

Hadoop Interview Questions – Part 3

By | December 4th, 2014|Hadoop|

21. Which object can be used to get the progress of a particular job? Ans: Context 22. What is the next step after the Mapper or MapTask? Ans: The output of the Mapper is sorted and partitions are created for it. The number of partitions depends on the number of reducers. 23. How can we control [...]
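
A hedged sketch of how those partitions can be controlled with a custom partitioner in the new API (FirstLetterPartitioner is a hypothetical name; the framework calls getPartition with numPartitions equal to the configured number of reducers):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each intermediate (key, value) pair to one of numPartitions
// reduce partitions; all keys starting with the same letter go together.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String k = key.toString();
        char first = k.isEmpty() ? 'a' : Character.toLowerCase(k.charAt(0));
        return first % numPartitions;
    }
}
```

In a driver this would be wired in with job.setPartitionerClass(FirstLetterPartitioner.class) alongside job.setNumReduceTasks(n).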

Hadoop Interview Questions – Part 2

By | December 3rd, 2014|Hadoop|

11. What does the Mapper do? Ans: Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs. 12. What is the Input Split in MapReduce [...]
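
As an illustration of that input-to-intermediate transformation, a minimal new-API mapper sketch (TokenMapper and the word-count types are assumptions for the example):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Input records are (byte offset, line of text); intermediate records are
// (word, 1). One input line may emit zero or many output pairs.
public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}
```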

Hadoop Interview Questions – Part 1

By | November 13th, 2014|Hadoop|

1. What is Hadoop framework? Ans: Hadoop is an open-source framework written in Java by the Apache Software Foundation. This framework is used to write software applications that need to process vast amounts of data (it can handle multiple terabytes of data). It works in parallel on large clusters, which could have 1000 [...]