Big Data Interview Questions

Here is a list of the most important Big Data interview questions and answers, prepared by experts to cover fresher-level through experienced-level technical interviews.

1) What is Big Data?
Big data is data that exceeds the processing capacity of traditional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.

Can you name any organizations that generate Big Data?
Facebook, Google

2) What is NoSQL?
NoSQL is a whole new way of thinking about a database. NoSQL is not a relational database. The reality is that a relational database model may not be the best solution for all situations. The easiest way to think of NoSQL is as a database that does not adhere to the traditional relational database management system (RDBMS) structure. Sometimes you will also see it referred to as ‘not only SQL’.

3) We already have SQL, so why NoSQL?
NoSQL offers high performance with high availability, a rich query language, and easy scalability.
NoSQL is gaining momentum, and is supported by Hadoop, MongoDB and others. The NoSQL Database site is a good reference for someone looking for more information.

4) What is Hadoop and where did Hadoop come from?
By Mike Olson: The underlying technology was invented by Google back in their earlier days so they could usefully index all the rich textural and structural information they were collecting, and then present meaningful and actionable results to users. There was nothing on the market that would let them do that, so they built their own platform. Google’s innovations were incorporated into Nutch, an open source project, and Hadoop was later spun-off from that. Yahoo has played a key role developing Hadoop for enterprise applications.

5) What problems can Hadoop solve?
By Mike Olson: The Hadoop platform was designed to solve problems where you have a lot of data — perhaps a mixture of complex and structured data — and it doesn’t fit nicely into tables. It’s for situations where you want to run analytics that are deep and computationally extensive, like clustering and targeting. That’s exactly what Google was doing when it was indexing the web and examining user behavior to improve performance algorithms.

6) What is the Difference between Hadoop and Apache Hadoop?
There is no difference: Hadoop, formally called Apache Hadoop, is an Apache Software Foundation project.

7) What is the difference between SQL and NoSQL?
SQL databases are relational: data is stored in tables with fixed schemas and queried with SQL. NoSQL databases are non-relational: they store data in flexible formats such as key-value pairs, documents or wide columns, and are designed to scale out horizontally.

Does NoSQL follow the relational DB model?
No

8) Why would NoSQL be better than using a SQL Database? And how much better is it?
It would be better when your site needs to scale so massively that the best RDBMS running on the best hardware you can afford and optimized as much as possible simply can’t keep up with the load. How much better it is depends on the specific use case (lots of update activity combined with lots of joins is very hard on “traditional” RDBMSs) – could well be a factor of 1000 in extreme cases.

9) Name the modes in which Hadoop can run?
Hadoop can be run in one of three modes:
i. Standalone (or local) mode
ii. Pseudo-distributed mode
iii. Fully distributed mode

10) What do you understand by Standalone (or local) mode?
There are no daemons running and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.

11) What is Pseudo-distributed mode?
The Hadoop daemons run on the local machine, thus simulating a cluster on a small scale.

12) What does /var/hadoop/pids do?
It stores the process IDs (PIDs) of the running Hadoop daemons.

13) What is the full form of HDFS?
Hadoop Distributed File System

14) What is the idea behind HDFS?
HDFS is built around the idea that the most efficient approach to storing data for processing is to optimize it for a write-once, read-many access pattern.

15) Where does HDFS fail?
HDFS cannot support a large number of small files, because the filesystem metadata grows with every new file, and hence it is not able to scale to billions of files. This filesystem metadata is loaded into memory, and since memory is limited, so is the number of files supported.

16) What are the ways of backing up the filesystem metadata?
There are two ways of backing up the filesystem metadata (which maps filenames to the blocks stored on the various datanodes):
– Writing the filesystem metadata persistently onto a local disk as well as onto a remote NFS mount.
– Running a secondary namenode.

17) What is Namenode in Hadoop?
Namenode is the node which stores the filesystem metadata i.e. which file maps to what block locations and which blocks are stored on which datanode.

18) What is DataNode in Hadoop?
A DataNode is a node which stores the actual data blocks of HDFS files and serves read and write requests from clients. It periodically reports the blocks it stores to the NameNode through block reports and heartbeats.

19) What is Secondary NameNode?
The Secondary NameNode (SNN) is an assistant daemon for monitoring the state of the cluster's HDFS. Like the NameNode, each cluster has one SNN, and it typically resides on its own machine as well.

20) What is JobTracker in Hadoop?
The JobTracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that have the data, or at least are in the same rack.

21) What are the functions of JobTracker in Hadoop?
Once you submit your code to your cluster, the JobTracker determines the execution plan by determining which files to process, assigns nodes to different tasks, and monitors all tasks as they are running.
If a task fails, the JobTracker will automatically relaunch the task, possibly on a different node, up to a predefined limit of retries.
There is only one JobTracker daemon per Hadoop cluster. It is typically run on a server as a master node of the cluster.

22) What is MapReduce in Hadoop?
Hadoop MapReduce (Hadoop Map/Reduce) is a software framework for distributed processing of large data sets on compute clusters of commodity hardware. It is a sub-project of the Apache Hadoop project. The framework takes care of scheduling tasks, monitoring them and re-executing any failed tasks.

23) What are the Hadoop configuration files?
1. hdfs-site.xml
2. core-site.xml
3. mapred-site.xml

24) Name the most common InputFormats defined in Hadoop? Which one is the default?

The most common InputFormats defined in Hadoop are:

  • TextInputFormat
  • KeyValueInputFormat
  • SequenceFileInputFormat

TextInputFormat is the Hadoop default.

25) What is the difference between TextInputFormat and KeyValueInputFormat class?

TextInputFormat: It reads lines of text files and provides the byte offset of the line as the key and the actual line as the value to the Mapper.

KeyValueInputFormat: Reads text files and parses lines into (key, value) pairs. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line is sent as the value.
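
For reference, the class that implements the tab-separated behaviour described above is KeyValueTextInputFormat. A minimal driver sketch using the Hadoop 2.x org.apache.hadoop.mapreduce API (the job name and paths are placeholders, not from the original answer):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InputFormatDemo {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "input-format-demo");
    job.setJarByClass(InputFormatDemo.class);
    // Default: TextInputFormat gives (byte offset, line) pairs to the Mapper.
    job.setInputFormatClass(TextInputFormat.class);
    // Alternative: KeyValueTextInputFormat splits each line at the first tab.
    // job.setInputFormatClass(KeyValueTextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}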

26) What is InputSplit in Hadoop?

When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper to process. Each such chunk is called an InputSplit.

27) How is the splitting of file invoked in Hadoop framework?

It is invoked by the Hadoop framework by running the getSplits() method of the InputFormat class (like FileInputFormat) defined by the user.

28) Consider a case scenario: in an M/R system,

– HDFS block size is 64 MB

– Input format is FileInputFormat

– We have 3 files of size 64 KB, 65 MB and 127 MB

29) How many input splits will be made by the Hadoop framework?

Hadoop will make 5 splits, as follows:

– 1 split for the 64 KB file

– 2 splits for the 65 MB file

– 2 splits for the 127 MB file

30) What is the purpose of RecordReader in Hadoop?

The InputSplit has defined a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

31) After the Map phase finishes, the Hadoop framework does “Partitioning, Shuffle and sort”. Explain what happens in this phase?

Partitioning: It is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.

Shuffle: After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

Sort: Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.

32) If no custom partitioner is defined in Hadoop then how is data partitioned before it is sent to the reducer?

The default partitioner computes a hash value for the key and assigns the partition based on this result.
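
For reference, the default HashPartitioner boils down to the following logic (the class name here is illustrative, not Hadoop's own source):

import org.apache.hadoop.mapreduce.Partitioner;

public class DefaultStylePartitioner<K, V> extends Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask off the sign bit so the result is non-negative, then bucket by
    // the number of reducers.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}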

33)  What is a Combiner?

The Combiner is a ‘mini-reduce’ process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
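
Wiring in a combiner is a single driver call; a common pattern is to reuse the reducer class when the reduce function is commutative and associative, such as summing counts. A hedged fragment, assuming a word-count style job with the illustrative class names WordCountMapper and WordCountReducer, inside a driver like the one sketched under question 25:

// The combiner acts as a mini-reduce on each mapper node's output.
job.setMapperClass(WordCountMapper.class);
job.setCombinerClass(WordCountReducer.class);
job.setReducerClass(WordCountReducer.class);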

34) What is JobTracker?

JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.

35) What are some typical functions of Job Tracker?

The following are some typical functions of the JobTracker:

– Accepts jobs from clients

– It talks to the NameNode to determine the location of the data.

– It locates TaskTracker nodes with available slots at or near the data.

– It submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTracker.

36) What is TaskTracker?

A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.

37) What is the relationship between Jobs and Tasks in Hadoop?

One job is broken down into one or many tasks in Hadoop.

38) Suppose Hadoop spawned 100 tasks for a job and one of the task failed. What will Hadoop do?

It will restart the task on some other TaskTracker, and only if the task fails more than four times (the default setting, which can be changed) will it kill the job.
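
The retry limit itself is configurable per job. A hedged fragment using the classic Hadoop 1.x property names (the value 6 is arbitrary), placed in the driver before job submission:

// Allow up to 6 attempts per map task and per reduce task (default is 4 each).
Configuration conf = job.getConfiguration();
conf.setInt("mapred.map.max.attempts", 6);
conf.setInt("mapred.reduce.max.attempts", 6);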

39) Hadoop achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow the program down. What mechanism does Hadoop provide to combat this?

Speculative Execution.

40) How does speculative execution work in Hadoop?

The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
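
Speculative execution is enabled by default and can be switched off per job. A hedged fragment using the classic Hadoop 1.x property names, placed in the driver:

// Disable speculative execution for map and reduce tasks.
Configuration conf = job.getConfiguration();
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);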

41) Using command line in Linux, how will you

– See all jobs running in the Hadoop cluster

– Kill a job?

hadoop job -list

hadoop job -kill jobID

42)  What is Hadoop Streaming?

Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

43) What is the characteristic of the Streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk etc.?

Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.

44) What is Distributed Cache in Hadoop?

Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
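
A hedged sketch of the classic DistributedCache API (the HDFS path and variable names are illustrative; newer releases expose the same idea through Job.addCacheFile):

// (1) In the driver: register an HDFS file to be copied to every slave node.
// Requires: java.net.URI, org.apache.hadoop.filecache.DistributedCache,
// org.apache.hadoop.fs.Path.
DistributedCache.addCacheFile(new URI("/user/hadoop/lookup.txt"),
    job.getConfiguration());

// (2) In the Mapper subclass: read the node-local copy in setup().
@Override
protected void setup(Context context) throws IOException {
  Path[] cached =
      DistributedCache.getLocalCacheFiles(context.getConfiguration());
  // cached[0] now points at the local copy of lookup.txt on this node
}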

45) What is the benefit of Distributed cache? Why can we just have the file in HDFS and have the application read it?

This is because the distributed cache is much faster. It copies the file to all TaskTrackers at the start of the job. Now if a TaskTracker runs 10 or 100 Mappers or Reducers, they will all use the same local copy from the distributed cache. On the other hand, if the code reads the file from HDFS inside the MR job, then every Mapper accesses it from HDFS, so if a TaskTracker runs 100 map tasks it will read this file 100 times from HDFS. HDFS is also not very efficient when used like this.

46) What mechanism does Hadoop framework provide to synchronise changes made in Distribution Cache during runtime of the application?

This is a tricky question. There is no such mechanism. Distributed Cache by design is read-only during the time of job execution.

47) Have you ever used Counters in Hadoop. Give us an example scenario?

Anybody who claims to have worked on a Hadoop project is expected to have used counters. A typical scenario is counting bad or malformed input records, so that data-quality problems show up in the job statistics without failing the job.
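
A hedged example of that scenario (the group and counter names, and the isMalformed helper, are illustrative):

// In the Mapper's map() method: count bad records instead of failing the job.
if (isMalformed(line)) {
  context.getCounter("DataQuality", "MALFORMED_RECORDS").increment(1);
  return;  // skip the bad record
}

// In the driver, after job.waitForCompletion(true):
long bad = job.getCounters()
              .findCounter("DataQuality", "MALFORMED_RECORDS").getValue();
System.out.println("Malformed records skipped: " + bad);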

48) Is it possible to provide multiple input to Hadoop? If yes then how can you give multiple directories as input to the Hadoop job?

Yes, the input format class provides methods to add multiple directories as input to a Hadoop job.
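
A hedged fragment showing the two usual routes (the paths, LogMapper and EventMapper are illustrative; the classes live under org.apache.hadoop.mapreduce.lib.input):

// Option 1: several directories, same InputFormat and Mapper for all of them.
FileInputFormat.addInputPath(job, new Path("/data/2014"));
FileInputFormat.addInputPath(job, new Path("/data/2015"));

// Option 2: a different InputFormat and Mapper per directory.
MultipleInputs.addInputPath(job, new Path("/data/logs"),
    TextInputFormat.class, LogMapper.class);
MultipleInputs.addInputPath(job, new Path("/data/events"),
    SequenceFileInputFormat.class, EventMapper.class);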

49) Is it possible to have Hadoop job output in multiple directories? If yes, how?

Yes, by using the MultipleOutputs class.
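
A hedged sketch of MultipleOutputs from org.apache.hadoop.mapreduce.lib.output (the named output "errors" and the Text/IntWritable types are illustrative):

// In the driver: declare an extra named output alongside the default one.
MultipleOutputs.addNamedOutput(job, "errors",
    TextOutputFormat.class, Text.class, IntWritable.class);

// In the Reducer subclass: route selected records to the named output.
private MultipleOutputs<Text, IntWritable> mos;

@Override
protected void setup(Context context) {
  mos = new MultipleOutputs<Text, IntWritable>(context);
}

// inside reduce(): mos.write("errors", key, value);

@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
  mos.close();  // flush the files of the named output
}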

50) What will a Hadoop job do if you try to run it with an output directory that is already present? Will it

– Overwrite it

– Warn you and continue

– Throw an exception and exit

The Hadoop job will throw an exception and exit.

51) How can you set an arbitrary number of mappers to be created for a job in Hadoop?

You cannot set it directly. The number of mappers is determined by the number of input splits; the mapred.map.tasks property is only a hint to the framework.

52) How can you set an arbitrary number of Reducers to be created for a job in Hadoop?

You can either do it programmatically by using the setNumReduceTasks method of the JobConf class, or set it up as a configuration setting (mapred.reduce.tasks), as sketched below.
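
Both routes, sketched (the reducer count of 10 and the jar/driver names are illustrative):

// Programmatically, in the driver:
job.setNumReduceTasks(10);

// Or as a configuration setting, e.g. via the generic -D option (works when
// the driver uses ToolRunner/GenericOptionsParser):
// hadoop jar myjob.jar MyDriver -D mapred.reduce.tasks=10 <in> <out>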

53) How will you write a custom partitioner for a Hadoop job?

To have Hadoop use a custom partitioner you will have to do, at a minimum, the following three things (a minimal sketch follows this list):

– Create a new class that extends the Partitioner class

– Override the getPartition method

– In the wrapper that runs the MapReduce job, either add the custom partitioner to the job programmatically using the setPartitionerClass method, or add the custom partitioner to the job as a config file (if your wrapper reads from a config file or Oozie).
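
A minimal sketch, assuming Text keys that begin with a four-digit year and IntWritable values (the class name and key layout are illustrative):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class YearPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Route all records of the same year to the same reducer.
    int year = Integer.parseInt(key.toString().substring(0, 4));
    return year % numPartitions;
  }
}

// In the driver: job.setPartitionerClass(YearPartitioner.class);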

54)  How did you debug your Hadoop code?

There can be several ways of doing this but most common ways are:-

– By using counters.

– The web interface provided by Hadoop framework.

55) How will you add/delete a Node to the existing cluster?
Add: Add the hostname/IP address to the dfs.hosts/slaves file and refresh the cluster with $hadoop dfsadmin -refreshNodes
Delete: Add the hostname/IP address to dfs.hosts.exclude, remove the entry from the slaves file, and refresh the cluster with $hadoop dfsadmin -refreshNodes

56) What is SSH? What is the use of it In Hadoop?
SSH (Secure Shell) is a protocol for secure remote login and command execution. Hadoop's start/stop scripts (such as start-dfs.sh and start-mapred.sh) use SSH to launch and stop the daemons on the slave nodes listed in the slaves file.
57) How will you setup Password-less SSH?
Generate an RSA key pair with ssh-keygen -t rsa (with an empty passphrase), append the public key to ~/.ssh/authorized_keys on every node, and verify with ssh localhost that no password prompt appears.
58) How will you format the HDFS? How frequently it will be done?
$hadoop namenode -format
Note: formatting is done only once, during the initial cluster setup.

59) Do you know about cron jobs? How will you Setup?
 In Ubuntu, go to the terminal and type:
$ crontab -e

This will open our personal crontab (cron configuration file). The first line in that file explains it all: in every line we can define one command to run, and the format is quite simple. The structure is:

minute hour day-of-month month day-of-week command

For all the numbers you can use lists, e.g., 5,34,55 in the first field means run at 5, 34 and 55 minutes past whatever hour is defined.

60) What is the role of /etc/hosts file in setting up of HDFS cluster?
For hostname to IP address mapping.
61) What is dfsadmin command in Hadoop?
hadoop dfsadmin is the HDFS administration command. It reports basic filesystem information and statistics (-report), manages safe mode (-safemode), and refreshes the list of permitted datanodes (-refreshNodes) after commissioning or decommissioning nodes, after which the cluster may need to be rebalanced.

62) What is the impact if namenode fails and what are the necessary action items now?
The entire HDFS will be down; we need to restart the NameNode after copying the fsimage and edits files from the Secondary NameNode.
63) What is Log4j?
Log4j is the logging framework used by Hadoop for its daemon and application logs.
64) How do we set the logging level for Hadoop daemons/commands?
In the log4j.properties or hadoop-env.sh file, e.g. hadoop.root.logger=INFO,console (the level can be changed to WARN etc., and DRFA selects the daily rolling file appender).
65) Is there any impact on MapReduce jobs if there is no mapred-site.xml file created in the HADOOP_HOME/conf directory but all the necessary properties are defined in yarn-site.xml?
No
66) How does Hadoop’s CLASSPATH play a vital role in starting or stopping Hadoop daemons?
The classpath contains the list of directories holding the jar files required to start/stop the daemons; for example, HADOOP_HOME/share/hadoop/common/lib contains all the common utility jar files.

67) What is the default logging level in hadoop?
hadoop.root.logger=INFO,console.
68) What does the ‘hadoop.tmp.dir’ configuration parameter default to?
It defaults to /tmp/hadoop-${user.name}, i.e. it includes user.name. We need a directory that a user can write to and that does not interfere with other users. If we didn’t include the username, different users would share the same tmp directory. This can cause authorization problems if folks’ default umask doesn’t permit write by others. It can also result in folks stomping on each other when they are, e.g., playing with HDFS and re-formatting their filesystem.
69) How do we verify the status and health of the cluster?
The quickest checks are the NameNode web UI and the hadoop dfsadmin -report command. If the cluster cannot be reached and there is no configuration error at the client machine or namenode machine, a common cause is that the Hadoop services aren’t running. Also check that there isn’t an entry for our hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts.
70) How do we set a configuration property to be unique/constant across the cluster nodes and no slave nodes should override this?
We can achieve this by defining the property in the core/hdfs/mapred/yarn-site.xml file on the namenode with the final tag, as shown below.

<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>512</value>
  <final>true</final>
</property>

71) Does the name-node stay in safe mode till all under-replicated files are fully replicated?
No. The name-node waits until all or a majority of the data-nodes report their blocks, and it stays in safe mode until a specific percentage of the blocks in the system are minimally replicated. Minimally replicated is not the same as fully replicated.