
Big Data Interview Questions and Answers - HBase

1. When should we use HBase?

HBase is the right choice when you need random, real-time read/write access to very large datasets, on the order of billions of rows and millions of columns.

2. What is the difference between HBase and HDFS?

HDFS is a distributed file system for storing and managing large data sets across a cluster; it is optimized for sequential batch access, not for random lookups. HBase is a distributed database built on top of HDFS that provides fast lookups and updates of individual records.

3. Why use HBase?

High-capacity storage system.
Distributed design that caters to very large tables.
Column-oriented storage.
Horizontally scalable.
High performance and availability.
Supports random, real-time CRUD operations.
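To make the data model behind these points concrete, here is a minimal Python sketch of a toy in-memory store (an illustration only, not the real HBase API): each cell is addressed by a row key and a `family:qualifier` column, and keeps multiple timestamped versions, with the newest version winning on reads.

```python
import time
from collections import defaultdict

class ToyHBaseTable:
    """Toy in-memory model of HBase's versioned cell map.

    Cells are addressed by (row_key, 'family:qualifier') and each cell
    keeps a list of (timestamp, value) versions, newest first. This only
    illustrates the data model; it is not the HBase client API.
    """

    def __init__(self):
        # row_key -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row_key, column, value, ts=None):
        ts = ts if ts is not None else time.time()
        versions = self.rows[row_key][column]
        versions.append((ts, value))
        versions.sort(key=lambda v: v[0], reverse=True)

    def get(self, row_key, column):
        # Return the newest version of the cell, or None if absent.
        versions = self.rows[row_key].get(column)
        return versions[0][1] if versions else None

    def delete(self, row_key, column):
        self.rows[row_key].pop(column, None)

    def scan(self):
        # Rows come back sorted by row key, as in HBase.
        for row_key in sorted(self.rows):
            yield row_key, {c: v[0][1] for c, v in self.rows[row_key].items()}

table = ToyHBaseTable()
table.put("user#1", "info:name", "Alice", ts=1)
table.put("user#1", "info:name", "Alicia", ts=2)  # newer version wins
table.put("user#2", "info:name", "Bob", ts=1)
print(table.get("user#1", "info:name"))  # Alicia
```

The point of the sketch is that "column-oriented" and "random real-time CRUD" come from this cell map: a put or get touches one small cell, not a whole record file.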

5. How many operational commands are there in HBase?

HBase has five main operational (data-manipulation) commands: Get, Put, Delete, Scan, and Increment.


6) What are column families in HBase?

A column family is the basic unit of physical storage in HBase: all columns in a family are stored together on disk, and settings such as compression are applied at the column-family level.

7) What is the use of a row key?

The row key provides a logical grouping of cells and ensures that all cells with the same row key are co-located on the same region server. Rows are sorted lexicographically by row key.
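A short Python sketch of why co-location works: a table is split into regions by row-key range, and each region is served by one server. The split points below are made up purely for illustration.

```python
import bisect

# Hypothetical region split points; each region serves a half-open
# key range: [-inf, "g"), ["g", "n"), ["n", "t"), ["t", +inf).
split_points = ["g", "n", "t"]

def region_for(row_key):
    """Return the index of the (toy) region serving this row key."""
    return bisect.bisect_right(split_points, row_key)

# Every cell of a row carries the same row key, so the whole row
# always maps to a single region (and thus a single server).
print(region_for("apple"))   # region 0
print(region_for("hadoop"))  # region 1
print(region_for("zebra"))   # region 3
```

Because keys are sorted, rows that share a key prefix usually land in the same region too, which is what makes prefix range scans efficient.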

8) Explain deletion in HBase. What are the three types of tombstone markers in HBase?

When you delete a cell in HBase, the data is not removed immediately; instead, a tombstone marker is written, which makes the deleted cells invisible to reads. The deleted cells are physically removed later, during major compaction.

The three types of tombstone markers are:

Version delete marker: marks a single version of a column for deletion.
Column delete marker: marks all versions of a column for deletion.
Family delete marker: marks all columns of a column family for deletion.

9) How does HBase actually delete a row?

Writes go first to memory (the MemStore) and are then flushed to immutable files (HFiles) on disk. Because these files are immutable, a delete cannot modify them in place; it only writes a tombstone marker. Minor compactions merge files but keep the tombstones; major compactions remove both the deleted cells and the delete markers themselves.
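The behavior above can be sketched in a few lines of Python (a toy model, not HBase internals): a delete only appends a tombstone to a new file; minor compaction merges files but keeps tombstones; major compaction drops both the tombstones and the cells they cover.

```python
# Toy model of HBase deletion: immutable files plus tombstones.
# Each "HFile" is a list of (key, value) entries; TOMBSTONE marks a
# key as deleted without rewriting the older, immutable files.
TOMBSTONE = object()

hfiles = [
    [("row1", "v1"), ("row2", "v2")],  # older flushed file (immutable)
    [("row1", TOMBSTONE)],             # newer file holding the delete marker
]

def read(key, files):
    """Newest entry wins; a tombstone hides older versions of the key."""
    for hfile in reversed(files):  # newest file first
        for k, v in hfile:
            if k == key:
                return None if v is TOMBSTONE else v
    return None

def minor_compact(files):
    """Merge files into one, newest entries first, keeping tombstones."""
    return [[entry for hfile in reversed(files) for entry in hfile]]

def major_compact(files):
    """Rewrite everything: drop tombstones and the cells they cover."""
    deleted = {k for hfile in files for k, v in hfile if v is TOMBSTONE}
    kept = [(k, v) for hfile in files for k, v in hfile
            if v is not TOMBSTONE and k not in deleted]
    return [kept]

print(read("row1", hfiles))                 # None: hidden by the tombstone
print(read("row1", major_compact(hfiles)))  # None: now physically removed
```

After `minor_compact` the tombstone still exists (so `row1` stays hidden even if older files survive elsewhere); only `major_compact` reclaims the space.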

10) What happens if you alter the block size of a column family on an already occupied database?

New data is written with the new block size, while existing data remains in blocks of the old size. During the next compaction, the old data is rewritten using the new block size.
