Ques: 1). What is Apache Hive, and how does it work?
Apache Hive is a data warehouse project built on top of Hadoop. It focuses on data analysis and provides data query capabilities. Hive offers an SQL-like interface (HiveQL) for querying data stored in files and database systems, and it is a popular data analysis and querying technology used by Fortune 500 companies around the world. When it is cumbersome or inefficient to express the logic in HiveQL, Hive allows standard MapReduce programs to plug in custom mappers and reducers, as well as user-defined functions (UDFs).
Ques: 2). What is the purpose of Hive?
Hive is a Hadoop tool that allows you to organise and query data in a database-like fashion by writing SQL-like queries. It can be used to access and analyse Hadoop data using SQL syntax.
Ques: 3). What are the differences between local and remote meta stores?
Local meta store: In the local metastore configuration, the metastore service runs in the same Java Virtual Machine (JVM) as the Hive service and connects to a database running in a separate process, either on the same machine or on a remote machine.
Remote meta store: In the remote metastore configuration, the metastore service and the Apache Hive service run in separate JVMs. All other processes connect to the metastore server using the Thrift network API. You can run multiple metastore servers in remote metastore mode for high availability.
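As a sketch, clients are pointed at a remote metastore through the `hive.metastore.uris` property in `hive-site.xml` (the host names and port here are placeholders, not values from this document):

```xml
<property>
  <name>hive.metastore.uris</name>
  <!-- Comma-separated list: multiple servers give high availability -->
  <value>thrift://metastore1.example.com:9083,thrift://metastore2.example.com:9083</value>
</property>
```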
Ques: 4). Explain the core difference between the external and managed tables?
The following are the fundamental distinctions between managed and external tables:
When a managed table is dropped, both the metadata and the table data are deleted. When an external table is dropped, Hive deletes only the metadata associated with the table and leaves the table data in HDFS.
By default, when you create a table, Hive manages the data, which means it moves the data into its warehouse directory. Alternatively, you can create an external table, which instructs Hive to refer to data stored somewhere other than the warehouse directory.
The semantics of LOAD and DROP show the difference between the two table types. Let's start with a managed table: data loaded into a managed table is stored in Hive's warehouse directory.
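As a minimal sketch (the table names and the HDFS path are illustrative), the two table types are created like this; dropping the managed table removes its data, while dropping the external table leaves the files in place:

```sql
-- Managed table: Hive owns the data under its warehouse directory
CREATE TABLE managed_logs (id INT, msg STRING);

-- External table: Hive records only metadata; the data stays at LOCATION
CREATE EXTERNAL TABLE ext_logs (id INT, msg STRING)
LOCATION '/data/logs';

DROP TABLE managed_logs;  -- deletes the metadata AND the data files
DROP TABLE ext_logs;      -- deletes only the metadata; /data/logs survives
```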
Ques: 5). What is the difference between schema on read and schema on write?
A table's schema is enforced at data load time in a conventional database. The data being loaded is rejected if it does not conform to the schema. Because the data is validated against the schema when it is written into the database, this architecture is frequently referred to as schema on write.
Hive, on the other hand, verifies data when it is loaded, rather than when it is queried. This is referred to as schema on read.
Between the two approaches, there are trade-offs. Because the data does not have to be read, parsed, and serialized to disk in the database's internal format, schema on read allows for a very quick initial load: a file copy or move is all that is required. It is also more flexible: think of having two schemas for the same underlying data, depending on the analysis. (External tables can be used in Hive for this; see Managed Tables and External Tables.)
Because the database can index columns and compress the data, schema on write makes query time performance faster. However, it takes longer to load data into the database as a result of this trade-off. Furthermore, in many cases, the schema is unknown at load time, thus no indexes can be applied because the queries have not yet been formed. Hive really shines in these situations.
Ques: 6). Write a query to insert a new column? Can you add a column with a default value in Hive?
You can add a new column with:
ALTER TABLE test1 ADD COLUMNS (access_count1 int);
However, you cannot add a column with a default value in Hive. Adding the column has no effect on the files that back your table; Hive treats the value of the new column as NULL for every existing row in order to deal with the "missing" data.
In Hive, you must effectively recreate the entire table, this time with the column filled in. It may be easier to rerun your original query with the additional column. Alternatively, you can add the column to the table you already have, then insert a copy of the data selecting all of its columns plus a value for the new column.
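A minimal sketch of that recreate-and-backfill approach (the table and column names are illustrative):

```sql
-- Add the column; existing rows will read as NULL
ALTER TABLE test1 ADD COLUMNS (access_count1 INT);

-- Backfill by rewriting the table, substituting a "default" for NULLs
INSERT OVERWRITE TABLE test1
SELECT col1, col2, COALESCE(access_count1, 0) AS access_count1
FROM test1;
```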
Ques: 7). What is the purpose of Hive's DISTRIBUTE BY clause?
DISTRIBUTE BY determines how map output is divided among reducers. By default, MapReduce computes a hash of the keys output by the mappers and uses the hash values to distribute the key-value pairs evenly among the available reducers. Suppose we want all of the rows for each value in a column to end up on the same reducer. We can use DISTRIBUTE BY to ensure that the records for each value go to the same reducer. DISTRIBUTE BY controls how reducers receive rows for processing, much as GROUP BY controls how rows are grouped for aggregation.
If the DISTRIBUTE BY and SORT BY clauses appear in the same query, Hive requires the DISTRIBUTE BY clause to come before the SORT BY clause. DISTRIBUTE BY is also a helpful workaround for memory-intensive jobs because it forces Hadoop to use reducers instead of running a map-only job: mappers partition the data on the DISTRIBUTE BY columns supplied and hand it to reducers, spreading the overall workload across the framework.
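A sketch combining the two clauses (the table and column names are illustrative); all rows with the same `user_id` land on the same reducer and are sorted within it:

```sql
SELECT user_id, event_time, action
FROM clicks
DISTRIBUTE BY user_id      -- routes each user's rows to one reducer
SORT BY user_id, event_time;  -- orders rows within each reducer
```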
Ques: 8). What happens when you run a query in Hive?
The query planner examines the query and converts it into a DAG (Directed Acyclic Graph) of Hadoop MapReduce jobs.
The jobs are submitted to the Hadoop cluster in the order that the DAG suggests.
Only mappers are used for simple queries. The input/output format is in charge of splitting the input and reading data from HDFS. The data is then passed to a layer called the SerDe (Serializer/Deserializer). In this case, the deserializer part of the SerDe converts the data from a byte stream to a structured format.
For aggregate queries, the MapReduce jobs will also include reducers. In this case, the serializer part of the SerDe converts structured data into a byte stream, which is handed to the input/output format, which writes it to HDFS.
Ques: 9). What is the importance of STREAMTABLE?
When you need information from several tables, joins are useful, but when you have 1.5 billion or more rows in one table and want to join it to a master table, the order of the joined tables is crucial.
Consider the following scenario:
select foo.a,foo.b,bar.c from foo join bar on foo.a=bar.a;
This matters because Hive streams the right-most table (bar) and buffers the other tables (foo) in memory before executing the map-side/reduce-side join. As a result, if Hive has to buffer 1.5 billion or more records, your join query will very likely fail with a Java heap space exception.
So, to overcome this limitation and free the user from having to remember the order of the joined tables based on their record size, Hive provides the hint /*+ STREAMTABLE(foo) */, which tells the Hive analyzer to stream table foo.
select /*+ STREAMTABLE(foo) */ foo.a,foo.b,bar.c from foo join bar on foo.a=bar.a;
Hence, in this way, the user does not have to remember the order of the joined tables.
Ques: 10). When is it appropriate to use SORT BY instead of ORDER BY?
When working with huge volumes of data in Apache Hive, we use SORT BY instead of ORDER BY. One reason for using SORT BY is that it can run with multiple reducers, each producing sorted output, which cuts down the time it takes to complete the job. ORDER BY, on the other hand, enforces a total ordering and therefore uses only a single reducer, so the job takes longer than usual to complete.
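As a sketch (table and column names are illustrative): SORT BY gives per-reducer ordering, while ORDER BY produces a single globally sorted output:

```sql
-- Each reducer's output is sorted, but there is no global order
SELECT id, amount FROM sales SORT BY amount DESC;

-- A single reducer produces one globally sorted result
SELECT id, amount FROM sales ORDER BY amount DESC;
```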
Ques: 11). What is the purpose of Hive's Partitioning function?
Partitioning allows users to arrange data in the Hive table in the way they want it. As a result, the system would be able to scan only the relevant data rather than the complete data set.
Consider the following scenario: assume we have transaction log data from a business website for years such as 2018, 2019, and 2020. If the table is partitioned by year, you can use the partition key to find data for a given year, say 2019, which reduces data scanning by skipping the 2018 and 2020 partitions.
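A minimal sketch of that scenario (the table and column names are illustrative):

```sql
CREATE TABLE txn_logs (txn_id BIGINT, amount DOUBLE)
PARTITIONED BY (year INT);

-- Only the year=2019 partition is scanned, not the whole data set
SELECT * FROM txn_logs WHERE year = 2019;
```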
Ques: 12). What is dynamic partitioning and how does it work?
In dynamic partitioning, the values of the partition columns are known only at runtime, i.e. the values are discovered while the data is being loaded into the Hive table. A common use of dynamic partitioning is:
To move data from a non-partitioned table into a partitioned table, which reduces query latency and improves sampling.
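A sketch of such a load (the two SET properties are the ones dynamic partitioning typically requires; the table names are illustrative):

```sql
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- One partition per distinct year is created at load time;
-- the partition column must come last in the SELECT list
INSERT OVERWRITE TABLE txn_logs PARTITION (year)
SELECT txn_id, amount, year FROM raw_logs;
```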
Ques: 13). In hive, what's the difference between dynamic and static partitioning?
Hive partitioning is highly beneficial for pruning data during queries in order to reduce query times.
When data is inserted into a table, partitions are produced. Partitions are required depending on how data is loaded. When loading files (especially large files) into Hive tables, static partitions are usually preferred. When compared to dynamic partition, this saves time when loading data. You "statically" create a partition in the table and then move the file into that partition.
Because the files are large, they are typically created on HDFS. Without reading the entire large file, you can retrieve the partition column value from the filename, date, and so on. In the case of dynamic partitioning, the entire large file is read, i.e. every row of data is read, and the data is partitioned into the target tables using an MR job based on specified fields in the file.
Dynamic partitions are typically handy when doing an ETL operation in your data pipeline. For example, suppose you load a large file into table X with a transfer command. Then you run an insert query into table Y, partitioning the data on table X fields such as day and country. You might then run a further ETL step to take the data in one of table Y's country partitions and write it into a table Z partitioned by city for that country alone, and so on.
Thus, depending on your end table, your requirements for the data, and the form in which the data arrives from the source, you may choose static or dynamic partitioning.
Ques: 14).What is ObjectInspector in Hive?
The ObjectInspector is a feature that allows us to analyze the individual columns and the internal structure of a row object in Hive. It also provides a uniform way to access complex objects that can be stored in memory in varied formats:
- A standard Java object
- An instance of a Java class
- A lazily initialized object
The ObjectInspector lets the users know the structure of an object and also helps in accessing the internal fields of an object.
Ques: 15). How does impala outperform hive in terms of query response time?
Impala should be thought of as "SQL on HDFS," whereas Hive is more "SQL on Hadoop."
Impala, in other words, does not use MapReduce at all. It simply runs daemons on all of your nodes that serve some of the data stored in HDFS, allowing these daemons to return results rapidly without running a full MapReduce job.
The rationale is that running a MapReduce job carries some startup overhead, so short-circuiting MapReduce entirely can significantly reduce runtime.
That said, Impala is not a replacement for Hive; each is useful in different situations. Unlike Hive, Impala does not support fault tolerance, so if a node fails during your query, the query must be rerun from scratch. I would recommend Hive for ETL processes where a single job failure would be costly, while Impala can be great for small ad-hoc queries, such as for data scientists or business analysts who just want to look at and study some data without having to build substantial jobs.
Ques: 16). Explain the different components used in the Hive Query processor?
Below is a list of the components of the Hive query processor:
- Metadata Layer (ql/metadata)
- Parse and Semantic Analysis (ql/parse)
- Map/Reduce Execution Engine (ql/exec)
- Sessions (ql/session)
- Type Interfaces (ql/typeinfo)
- Tools (ql/tools)
- Hive Function Framework (ql/udf)
- Plan Components (ql/plan)
- Optimizer (ql/optimizer)
Ques: 17). What is the difference between Hadoop Buffering and Hadoop Streaming?
Hadoop Streaming refers to implementing your map-reduce logic with custom Python or shell scripts (for example, via Hive's TRANSFORM keyword).
In this context, Hadoop buffering refers to the phase in the map-reduce job of a Hive query with a join when records, after being sorted and grouped by the mappers, are read into the reducers. This is why you should order the join clauses in a Hive query so that the largest table comes last: Hive buffers the other tables in memory and streams the last one, implementing the join more efficiently.
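A sketch of streaming rows through an external script with TRANSFORM (the script name and columns are illustrative; the script must be shipped to the cluster with ADD FILE):

```sql
ADD FILE /tmp/parse_log.py;

-- Each input row is piped to the script on stdin;
-- the script's stdout is parsed back into the AS (...) columns
SELECT TRANSFORM (ip, request)
USING 'python parse_log.py'
AS (ip STRING, url STRING)
FROM access_logs;
```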
Ques: 18). How will the work be optimised by the map-side join?
Let's say we have two tables, one of which is small. Before the original join MapReduce task, a local MapReduce job is generated that reads the small table's data from HDFS and loads it into an in-memory hash table. It then serializes the in-memory hash table into a hash table file.
In the next stage, while the original join MapReduce job is running, the hash table file is moved to the Hadoop distributed cache, which copies the file to each mapper's local disk. As a result, every mapper can load this persistent hash table file back into memory and perform the join as before.
After optimization, the small table only has to be read once. In addition, if many mappers run on the same machine, the distributed cache only needs to push a single copy of the hash table file to that machine.
Advantages of using Map-side join:
Using a map-side join reduces the cost of sorting and merging data in the shuffle and reduce stages. The map-side join also improves task performance by reducing the time it takes to complete the job.
Disadvantages of Map-side join:
It is only suitable for use when one of the tables on which the map-side join operation is performed is small enough to fit into memory. As a result, performing a map-side join on tables with a lot of data in each of them isn't a good idea.
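A sketch of invoking a map join (the table names are illustrative); you can either let Hive convert joins automatically or hint the small table explicitly:

```sql
-- Let Hive auto-convert joins with a small enough table into map joins
SET hive.auto.convert.join = true;

-- Or hint explicitly that `dim` is small enough to load into memory
SELECT /*+ MAPJOIN(dim) */ f.id, d.name
FROM fact f JOIN dim d ON f.dim_id = d.id;
```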
Ques: 19).What type of user defined functions exists in HIVE?
Hive has three types of user-defined functions: regular UDFs, user-defined aggregate functions (UDAFs), and user-defined table-generating functions (UDTFs). A regular UDF operates on a single row and produces a single row as its output. Most functions, such as mathematical functions and string functions, are of this type.
A UDF must satisfy the following two properties:
- A UDF must be a subclass of org.apache.hadoop.hive.ql.exec.UDF.
- A UDF must implement at least one evaluate() method.
A UDAF works on multiple input rows and creates a single output row. Aggregate functions include such functions as COUNT and MAX.
A UDAF must satisfy the following two properties:
- A UDAF must be a subclass of org.apache.hadoop.hive.ql.exec.UDAF.
- A UDAF must contain an evaluator that implements five methods: init(), iterate(), terminatePartial(), merge(), and terminate().
A UDTF operates on a single row and produces multiple rows (a table) as output.
- A UDTF must be a subclass of org.apache.hadoop.hive.ql.udf.generic.GenericUDTF.
- A custom UDTF can be created by extending the GenericUDTF abstract class and then implementing the initialize, process, and possibly close methods.
- The initialize method is called by Hive to notify the UDTF the argument types to expect.
- The UDTF must then return an object inspector corresponding to the row objects that the UDTF will generate.
- Once initialize() has been called, Hive will give rows to the UDTF using the process() method.
- While in process(), the UDTF can produce and forward rows to other operators by calling forward().
- Lastly, Hive will call the close() method when all the rows have passed to the UDTF.
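Once compiled into a JAR, any of these function types is registered and used from HiveQL; a sketch (the JAR path, class name, function name, and table are illustrative):

```sql
-- Ship the JAR containing the UDF class to the cluster
ADD JAR /tmp/my_udfs.jar;

-- Register the class under a name usable in queries
CREATE TEMPORARY FUNCTION my_lower
AS 'com.example.hive.udf.MyLower';

SELECT my_lower(title) FROM books;
```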
Ques: 20). Is the HIVE LIMIT clause truly random?
Although the manual claims that it returns rows at random, this is not the case. Without a WHERE or ORDER BY clause, it returns rows "at random" only in the sense that they come back in whatever order they occur in the underlying storage. This doesn't mean the result is truly random (or randomly sampled), but it does mean the order in which rows are returned can't be predicted.
As soon as you add an ORDER BY x DESC LIMIT 5, it returns the last 5 rows (by x) of whatever you're selecting from. To get rows returned truly at random, you'd have to use something like ORDER BY rand() LIMIT 1.
However, this can slow things down on large tables. I usually do a min/max to get the range of IDs in the table, pick a random number between them, and then select those records (in your case, just one); this is usually faster than letting the database do the work, especially on a huge dataset.