June 25, 2019

Top 20 Oracle PL/SQL Interview Questions and Answers


The following technical interview questions and answers are drawn from my experience working on Oracle PL/SQL projects. They may be useful for both freshers and experienced developers preparing for a PL/SQL interview.

 

Ques: 1. What is a mutating table error? How can you get this error?

Answer:

This error occurs with triggers, whenever a row-level trigger tries to query or modify the same table that the triggering statement is currently modifying. Because the table is in the middle of a transaction, referencing it again from inside the trigger raises the mutating table error (ORA-04091). The usual fix involves restructuring the logic, for example using a view or an intermediate (temporary) table, so the database selects from one object while updating the other.
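A minimal sketch (the emp table and its columns are hypothetical): the row-level trigger below queries the same table it is defined on, so any UPDATE against it would raise ORA-04091.

CREATE OR REPLACE TRIGGER trg_emp_check_salary
  BEFORE UPDATE OF salary ON emp
  FOR EACH ROW
DECLARE
  v_avg_salary NUMBER;
BEGIN
  -- Selecting from emp while emp is being updated makes the trigger "mutate".
  SELECT AVG(salary) INTO v_avg_salary FROM emp;
  IF :NEW.salary > v_avg_salary * 2 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Salary increase is too large.');
  END IF;
END;
/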




Ques: 2. What is the key difference between SQL and PL/SQL?

Answer:

SQL and PL/SQL are both used to access data in Oracle databases. SQL is a limited, declarative language that lets you interact with the database directly: you can write queries and manipulate the objects and data of an Oracle database, but SQL itself does not include procedural programming concepts.

PL/SQL also lets you extract and manipulate the objects and data of an Oracle database, and in addition gives you the constructs that normal programming languages have, such as variables, loops, and conditional (controlled) execution.
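As a small illustration of those procedural constructs (a sketch only; USER_TABLES is a standard data dictionary view):

DECLARE
  v_count NUMBER;
BEGIN
  -- Plain SQL embedded inside a PL/SQL block.
  SELECT COUNT(*) INTO v_count FROM user_tables;
  -- Looping and conditional logic, which SQL alone does not offer.
  FOR i IN 1 .. 3 LOOP
    IF MOD(i, 2) = 0 THEN
      DBMS_OUTPUT.PUT_LINE('Iteration ' || i || ' is even; tables owned: ' || v_count);
    ELSE
      DBMS_OUTPUT.PUT_LINE('Iteration ' || i || ' is odd');
    END IF;
  END LOOP;
END;
/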

 


 

Ques: 3. What is the difference between Form triggers and Database level triggers?

Answer:

Form-level triggers are used in Oracle Forms and fire at whatever level the application requires, such as item level, record level, or block level. Database triggers are written directly in the database and fire automatically on behalf of a transaction such as an INSERT, UPDATE, or DELETE on a table.

The key difference is that form-level triggers fire in response to user or application events, while database triggers fire automatically as part of the triggering DML statement.

 


 

Ques: 4. What is a Pseudocolumn?

Answer:

A pseudocolumn behaves like a table column but is not actually stored in the table: you can select it in queries, but you cannot insert, update, or delete its value.

ROWNUM, ROWID, and ORA_ROWSCN are pseudocolumns in the Oracle database; SYSDATE, SYSTIMESTAMP, UID, and USER are strictly SQL functions, but they are commonly listed alongside the pseudocolumns in interviews.
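A quick sketch of selecting pseudocolumns (emp is a hypothetical table):

SELECT ROWNUM,             -- position of the row in the result set
       ROWID,              -- address of the row
       ORA_ROWSCN,         -- SCN of the last change affecting the row
       e.ename,
       SYSDATE AS today    -- SYSDATE is a function, shown for comparison
FROM   emp e
WHERE  ROWNUM <= 5;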

 



Ques: 5. What is exception handling in Oracle? How are exceptions handled?

Answer:

An exception is an error condition that arises during program execution. When an error occurs, an exception is raised, normal execution stops, and control transfers to the exception-handling part of the block. Exception handlers are routines written to handle the exception. Exceptions can be either internally defined (predefined) or user-defined. In Oracle, exceptions are handled in an EXCEPTION section using WHEN clauses for exceptions such as:
DUP_VAL_ON_INDEX
NOT_LOGGED_ON
TOO_MANY_ROWS
VALUE_ERROR
NO_DATA_FOUND
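A minimal sketch of an exception section, assuming a hypothetical emp table:

DECLARE
  v_ename emp.ename%TYPE;
BEGIN
  SELECT ename INTO v_ename FROM emp WHERE deptno = 10;
  DBMS_OUTPUT.PUT_LINE('Employee: ' || v_ename);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No employee found for that department.');
  WHEN TOO_MANY_ROWS THEN
    DBMS_OUTPUT.PUT_LINE('More than one employee matched; refine the query.');
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
END;
/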

 


 

Ques: 6. What is the key difference between OPEN-FETCH-CLOSE and FOR LOOP in CURSOR?

Answer:

A cursor FOR LOOP implicitly declares its loop index as a %ROWTYPE record, opens the cursor, repeatedly fetches rows of values from the result set into the fields of that record, and closes the cursor when all rows have been processed.

With OPEN-FETCH-CLOSE, we have to declare the record or variables ourselves, explicitly OPEN the cursor, FETCH inside a loop with our own exit condition, and CLOSE the cursor when we are finished.
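The same loop written both ways, as a sketch (dept is a hypothetical table):

DECLARE
  CURSOR c_dept IS SELECT deptno, dname FROM dept;
  r_dept c_dept%ROWTYPE;
BEGIN
  -- Explicit OPEN-FETCH-CLOSE: every step is coded by hand.
  OPEN c_dept;
  LOOP
    FETCH c_dept INTO r_dept;
    EXIT WHEN c_dept%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(r_dept.deptno || ' - ' || r_dept.dname);
  END LOOP;
  CLOSE c_dept;

  -- Cursor FOR loop: the record, OPEN, FETCH, and CLOSE are all implicit.
  FOR r IN (SELECT deptno, dname FROM dept) LOOP
    DBMS_OUTPUT.PUT_LINE(r.deptno || ' - ' || r.dname);
  END LOOP;
END;
/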

 


 

Ques: 7. What is Dynamic SQL? How can we use it in Oracle?

Answer: 

Dynamic SQL lets PL/SQL build and execute SQL statements at runtime. It is used to execute Data Definition Language (DDL) statements, Data Control Language (DCL) statements, or Transaction Control statements within PL/SQL blocks, as well as statements whose text can change from execution to execution. These statements are not fixed in the source code; they are held as character variables in the program.

The SQL statements are created dynamically at runtime using variables and are executed either with native dynamic SQL (EXECUTE IMMEDIATE) or through the DBMS_SQL package. Native dynamic SQL supports all SQL data types.
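A sketch using native dynamic SQL (EXECUTE IMMEDIATE); the table names are hypothetical:

DECLARE
  v_table VARCHAR2(30) := 'EMP';
  v_sql   VARCHAR2(200);
  v_count NUMBER;
BEGIN
  -- DDL can only be issued from PL/SQL through dynamic SQL.
  EXECUTE IMMEDIATE 'CREATE TABLE temp_demo (id NUMBER)';

  -- Build a query whose table name is decided at runtime; bind the predicate value.
  v_sql := 'SELECT COUNT(*) FROM ' || v_table || ' WHERE deptno = :dno';
  EXECUTE IMMEDIATE v_sql INTO v_count USING 10;
  DBMS_OUTPUT.PUT_LINE('Rows found: ' || v_count);

  EXECUTE IMMEDIATE 'DROP TABLE temp_demo';
END;
/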

 


 

Ques: 8. What is the key difference Between Row Level Trigger and Statement Level Trigger?

Answer:

A row-level trigger executes once for each row affected by the DML event, either before or after the row is processed. It is defined with the FOR EACH ROW clause.

A statement-level trigger executes once per DML statement, before or after the statement, no matter how many rows are affected by the DML event.
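A sketch of both kinds of trigger on the same hypothetical emp table (assumed to have a last_updated column):

CREATE OR REPLACE TRIGGER trg_emp_row
  BEFORE UPDATE ON emp
  FOR EACH ROW                        -- row-level: fires once per affected row
BEGIN
  :NEW.last_updated := SYSDATE;
END;
/

CREATE OR REPLACE TRIGGER trg_emp_stmt
  AFTER UPDATE ON emp                 -- statement-level: fires once per statement
BEGIN
  DBMS_OUTPUT.PUT_LINE('An UPDATE on emp has completed.');
END;
/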

 



Ques: 9. What are the various steps included in the compilation process of a PL/SQL block?

Answer:

The compilation of a PL/SQL block involves syntax checking, binding, and p-code generation. Syntax checking verifies the PL/SQL code for compilation errors. Once syntax errors have been corrected, a storage address is assigned to each variable that holds data for Oracle; this process is called binding. After binding, p-code is generated for the PL/SQL block. P-code is a list of instructions to the PL/SQL engine. For named blocks, the p-code is stored in the database and is used the next time the program is executed.

 



Ques: 10. What packages has Oracle provided to PL/SQL developers?

Answer:

Oracle supplies a set of built-in packages (most with the DBMS_ or UTL_ prefix) to PL/SQL developers to make programming easier. A developer should at least be aware of the following packages:

DBMS_DDL

UTL_FILE

DBMS_OUTPUT

DBMS_JOB

DBMS_SQL

DBMS_PIPE

DBMS_TRANSACTION

DBMS_LOCK

DBMS_ALERT

DBMS_UTILITY
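A tiny sketch using two of these supplied packages:

BEGIN
  DBMS_OUTPUT.PUT_LINE('Database time : ' || TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
  DBMS_OUTPUT.PUT_LINE('Platform info : ' || DBMS_UTILITY.PORT_STRING);
END;
/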

 


 

Ques: 11. How can you protect your PL/SQL source code?

Answer:

Oracle provides the wrap utility, which can be used to obfuscate (wrap) PL/SQL source code. This utility was introduced in Oracle 7.

The utility takes human-readable PL/SQL source code as input and writes out portable object code, which can be larger than the original. The wrapped code can be distributed without fear of exposing your algorithms and methods, and Oracle still knows how to execute it. Be careful: there is no supported "unwrap" command, so always keep your original source code safe.
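For example, the wrap utility is invoked from the operating-system command line roughly like this (file names are hypothetical), and the resulting .plb file is then run in the database like any other SQL script:

wrap iname=my_package.sql oname=my_package.plb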

 


 

Ques: 12. What are SQLCODE and SQLERRM in PLSQL? What is their importance in programming in PLSQL?

Answer:

In PL/SQL, SQLCODE returns the error number of the most recently raised exception, whereas SQLERRM returns the corresponding error message. Both are used in exception handling, and they are especially useful inside a WHEN OTHERS handler, where the specific error is not known in advance.
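A sketch of a WHEN OTHERS handler that records both values (error_log is a hypothetical table):

DECLARE
  v_code NUMBER;
  v_msg  VARCHAR2(512);
BEGIN
  RAISE NO_DATA_FOUND;   -- simulate an unexpected failure
EXCEPTION
  WHEN OTHERS THEN
    v_code := SQLCODE;   -- numeric error code of the last error
    v_msg  := SQLERRM;   -- e.g. 'ORA-01403: no data found'
    INSERT INTO error_log (err_code, err_msg, logged_at)
    VALUES (v_code, v_msg, SYSDATE);
    COMMIT;
END;
/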





Ques: 13. What do you understand by cursors in PLSQL?


Answer:

A cursor is a pointer to a private SQL work area in memory that Oracle uses to process a SQL statement. Cursors are used for row-by-row data manipulation in PL/SQL and, used well, can also improve the performance of a PL/SQL block.

Two types of cursors are:

a). Implicit cursor: Oracle opens an implicit cursor automatically for every SQL DML statement in the block; the developer has no direct control over it. Attributes such as SQL%NOTFOUND and SQL%ROWCOUNT can be checked after the statement runs.

b). Explicit cursor: Explicit cursors are declared by the developer, who controls them with the OPEN, FETCH, and CLOSE keywords.

There are five commonly used cursor attributes in PL/SQL:

1). %ISOPEN: verifies whether the cursor is currently open.
2). %FOUND: returns TRUE if the last fetch returned a row.
3). %NOTFOUND: returns TRUE if the last fetch did not return a row.
4). %ROWCOUNT: returns the number of rows fetched (or affected) so far.
5). %BULK_ROWCOUNT: like %ROWCOUNT, but used with bulk (FORALL) operations; it reports the rows affected for each element of the bulk bind.
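A sketch showing the implicit cursor attributes after a DML statement (emp is hypothetical):

BEGIN
  UPDATE emp SET salary = salary * 1.05 WHERE deptno = 20;
  IF SQL%NOTFOUND THEN
    DBMS_OUTPUT.PUT_LINE('No rows were updated.');
  ELSE
    DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows updated.');
  END IF;
END;
/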

 

Ques: 14. What are Index-By Tables in PL/SQL?

Answer:

Index-by tables are also known as associative arrays. They are sets of key-value pairs, where each key is unique and is used to locate the corresponding value in the array. The key can be an integer or a string.

Syntax:  TYPE tab_type_name IS TABLE OF element_type
             INDEX BY BINARY_INTEGER;

Example: TYPE DeptTabTyp IS TABLE OF Dept%ROWTYPE
             INDEX BY BINARY_INTEGER;

         Dept_tab DeptTabTyp;
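A sketch of loading and reading such a table (dept is a hypothetical table):

DECLARE
  TYPE DeptTabTyp IS TABLE OF dept%ROWTYPE INDEX BY PLS_INTEGER;
  dept_tab DeptTabTyp;
BEGIN
  SELECT * BULK COLLECT INTO dept_tab FROM dept;
  FOR i IN 1 .. dept_tab.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(dept_tab(i).deptno || ' - ' || dept_tab(i).dname);
  END LOOP;
END;
/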

 

Ques: 15. What do you understand by bulk binding?

Answer:

The bulk binding technique improves performance by minimizing the number of context switches between the PL/SQL and SQL engines. With bulk binding, a single DML statement can transfer all the elements of a collection in one operation.

For example, if the collection has 100 elements, bulk binding lets you perform the work of 100 SELECT, INSERT, UPDATE, or DELETE statements in a single operation.
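A sketch combining BULK COLLECT (for the read) and FORALL (for the write); emp and emp_archive are hypothetical tables with the same structure:

DECLARE
  TYPE emp_tab_t IS TABLE OF emp%ROWTYPE INDEX BY PLS_INTEGER;
  l_emps emp_tab_t;
BEGIN
  SELECT * BULK COLLECT INTO l_emps FROM emp WHERE deptno = 30;

  -- One context switch sends every element of the collection to the SQL engine.
  FORALL i IN 1 .. l_emps.COUNT
    INSERT INTO emp_archive VALUES l_emps(i);

  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows archived.');
  COMMIT;
END;
/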

 

Ques: 16. What do you understand by AUTONOMOUS_TRANSACTION?

Answer:

The pragma AUTONOMOUS_TRANSACTION instructs the PL/SQL compiler to mark a PL/SQL block as autonomous, i.e. independent. An autonomous transaction lets you suspend the main transaction, perform DML operations, commit or roll back those operations, and then resume the main transaction unaffected.
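A sketch of a logging procedure that commits independently of its caller (app_log is a hypothetical table):

CREATE OR REPLACE PROCEDURE log_message (p_msg IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log (msg, logged_at) VALUES (p_msg, SYSDATE);
  COMMIT;   -- ends only this autonomous transaction; the caller's work is untouched
END log_message;
/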

 

Ques: 17. What are the two components of LOB datatype in PLSQL?

Answer:

The two components of the LOB datatype in PL/SQL are:

1). LOB locator: a pointer stored with the record in the table row that points to the location in the database where the actual LOB value is kept.

2). LOB value: the actual data, for example an image or a file, referred to by the locator.

 

Ques: 18. What is cascading of triggers?

Answer:

If we insert data into a table that has a trigger on it, that trigger fires. If the trigger body in turn performs DML on a second table that also has a trigger, the second trigger fires as well. This chaining of one trigger causing another trigger to fire is called cascading of triggers.

 

Ques: 19. What do you understand by Table Functions in PLSQL?

Answer:

Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable. In a query, we can use the table function, through the TABLE operator, as if it were the name of a database table.
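A sketch of a simple table function (the type and function names are hypothetical):

CREATE OR REPLACE TYPE num_tab_t AS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION first_n_squares (p_n IN PLS_INTEGER)
  RETURN num_tab_t
IS
  l_result num_tab_t := num_tab_t();
BEGIN
  FOR i IN 1 .. p_n LOOP
    l_result.EXTEND;
    l_result(l_result.COUNT) := i * i;
  END LOOP;
  RETURN l_result;
END;
/

-- Query the function result as if it were a table.
SELECT column_value AS square FROM TABLE(first_n_squares(5));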

 

Ques: 20. What is the use of NOCOPY in PLSQL?

Answer:

In PL/SQL programming, NOCOPY is a compiler hint. By default, OUT and IN OUT parameters are passed by value, so when a parameter holds a large data structure such as a collection or a record, all that copying slows down execution. To prevent this, we can specify NOCOPY, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference.
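A sketch of a procedure that takes a large collection as an IN OUT NOCOPY parameter (types and data are hypothetical):

DECLARE
  TYPE name_tab_t IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;

  PROCEDURE upper_all (p_names IN OUT NOCOPY name_tab_t) IS
  BEGIN
    FOR i IN 1 .. p_names.COUNT LOOP
      p_names(i) := UPPER(p_names(i));   -- works on the caller's collection by reference
    END LOOP;
  END upper_all;

  l_names name_tab_t;
BEGIN
  l_names(1) := 'scott';
  l_names(2) := 'king';
  upper_all(l_names);
  DBMS_OUTPUT.PUT_LINE(l_names(1) || ', ' || l_names(2));
END;
/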


June 05, 2019

Top 20 Hadoop Technical Interview Questions & Answers



Ques: 1. What is the need of Hadoop?

Ans: A large amount of unstructured data is being dumped into our machines every single day. The major challenge is not storing these large data sets but retrieving and analyzing the big data across the organization, especially when the data sits on different machines at different locations.
This is where Hadoop comes in. Hadoop can analyze data present on different machines at different locations very quickly and in a very cost-effective way. It uses the MapReduce concept, which enables it to divide a query into small parts and process them in parallel; this is also known as parallel computing.


Ques: 2. What is the Hadoop Framework?

Ans: Hadoop is an open-source framework written in Java by the Apache Software Foundation. The framework is used to write software applications that need to process vast amounts of data; Hadoop can handle multiple terabytes. It works in parallel on large clusters, which can have thousands of computers (nodes), and it processes data in a reliable and fault-tolerant manner.


Ques: 3. What are the various basic characteristics of Hadoop?

Ans: The Hadoop framework has the capability of solving issues involving Big Data analysis. It is written in Java. Its programming model is based on Google's MapReduce, and its infrastructure is based on Google's distributed file system. Hadoop is scalable, and more nodes can be added to a cluster as needed.


Ques: 4. What are the core components of Hadoop?

Ans: HDFS and MapReduce are the core components of Hadoop. Hadoop Distributed File System (HDFS) is basically used to store large data sets and MapReduce is used to process such large data sets.


Ques: 5. What do you understand by streaming access?

Ans: HDFS works on the principle of 'write once, read many'. This streaming-access pattern is extremely important in HDFS: reading a complete data set at high throughput matters more than the time taken to fetch a single record. HDFS therefore focuses not so much on how the data is stored as on how to retrieve it at the fastest possible speed, especially while analyzing logs.


Ques: 6. What do you understand by a task tracker?

Ans: TaskTrackers manage the execution of individual tasks on the slave nodes. The TaskTracker is a daemon that runs on the DataNodes. When a client submits a job, the JobTracker initializes the job, divides the work, and assigns it to different TaskTrackers to perform the MapReduce tasks.
While performing these tasks, each TaskTracker continuously communicates with the JobTracker by sending heartbeats. If the JobTracker does not receive a heartbeat from a TaskTracker within the specified time, it assumes the TaskTracker has crashed and assigns its tasks to another TaskTracker in the cluster.


Ques: 7. If a particular file is 40 MB, Will the HDFS block still consume 64 MB as the default size?

Ans: No, 64 MB is just the default block size. In this particular situation only 40 MB will be consumed by the HDFS block, and the remaining 24 MB is free to store something else; blocks are not padded to their full size. It is the master node (NameNode) that allocates the blocks in an efficient manner.


Ques: 8. What is a Rack in Hadoop?

Ans: A rack is a physical collection of DataNodes stored together at a single location, typically sharing the same network switch. There can be multiple racks in a single location, and the racks of one cluster can be physically located at different places.


Ques: 9. How will the data be stored on a Rack?

Ans: Whenever the client is ready to load a file into the cluster, the content of the file is divided into blocks. The client then consults the NameNode and gets 3 DataNodes for every block of the file, indicating where each block and its replicas should be stored. While placing the replicas, the key rule followed is: "for every block of data, two copies will exist in one rack, and the third copy in a different rack". This rule is known as the Replica Placement Policy.


Ques: 10. Can you explain the input and output data format of the Hadoop Framework?

Ans: The MapReduce framework operates exclusively on key/value pairs: the framework views the input to the job as a set of key/value pairs and produces a set of key/value pairs as the output of the job, conceivably of different types.
The flow looks like: (input) <k1, v1> -> map -> <k2, v2> -> combine/sort -> <k2, list(v2)> -> reduce -> <k3, v3> (output)


Ques: 11. How can you use the Reducer?

Ans: The Reducer reduces a set of intermediate values which share a key to a (usually smaller) set of values. The number of reducers for the job is set by the user via Job.setNumReduceTasks(int).


Ques: 12. How can you explain the core methods of the Reducer?

Ans: The API of Reducer is very similar to that of Mapper: there is a run() method that receives a Context containing the job's configuration, as well as interfacing methods that return data from the reducer back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data by using Context.getConfiguration().
The reduce() method is the heart of any Reducer. It is called once per key; the second argument is an Iterable over all the values associated with that key.


Ques: 13. How can you schedule a Task by a Jobtracker?

Ans: The TaskTrackers send heartbeat messages to the JobTracker, usually every few seconds, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date on where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.


Ques: 14. How many Daemon processes run on a Hadoop cluster?

Ans: There are five daemons that run on a Hadoop cluster, and each of them runs in its own JVM.
The NameNode, Secondary NameNode, and JobTracker daemons run on the master nodes; the DataNode and TaskTracker daemons run on each slave node.
·         NameNode: stores and maintains the metadata for HDFS.
·         Secondary NameNode: performs housekeeping functions for the NameNode.
·         JobTracker: manages MapReduce jobs and distributes individual tasks to the machines running the TaskTracker.
·         DataNode: stores the actual HDFS data blocks.
·         TaskTracker: responsible for instantiating and monitoring individual Map and Reduce tasks.


Ques: 15. What is Hadoop Distributed File System (HDFS)? How it is different from Traditional File Systems?

Ans: The Hadoop Distributed File System (HDFS) is responsible for storing huge data sets on the cluster. It is a distributed file system designed to run on commodity hardware.
It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.
  • HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
  • HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
  • HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.

Ques: 16. What are the IdentityMapper and IdentityReducer in Mapreduce?

Ans:
  • org.apache.hadoop.mapred.lib.IdentityMapper: implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the Mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default.
  • org.apache.hadoop.mapred.lib.IdentityReducer: performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the Reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default.

Ques: 17. What do you mean by commodity hardware? How can Hadoop work on them?

Ans: Commodity hardware refers to average, inexpensive systems; Hadoop can be installed on any of them. Hadoop does not require high-end hardware to function.


Ques: 18. Which one is the Master Node in HDFS? Can it be commodity?

Ans: The NameNode is the master node in HDFS, and the JobTracker runs on it. This node holds the file-system metadata and must be a highly available machine, because it is a single point of failure for HDFS. It should not be commodity hardware, since the entire HDFS depends on it.


Ques: 19. What is the main difference between Mapper and Reducer?

Ans: The map() method is called once for each input key/value pair. It processes input key/value pairs and emits intermediate key/value pairs.
  • The reduce() method is called once for each key and its associated list of values. It processes intermediate key/value pairs and emits final key/value pairs.
  • In both cases, the task is initialized before any map() or reduce() call is made.

Ques: 20. What is the difference between a Map-Side Join and a Reduce-Side Join?

Ans:
Joining multiple data sets on the mapper side is called a map-side join. Note that a map-side join requires the inputs to follow a strict format and to be sorted and partitioned identically; it works best when one of the data sets is small.
Joining multiple data sets on the reducer side is called a reduce-side join. If you have large amounts of data in both tables, with many rows and columns, the join goes through the reduce phase: the data is partitioned, shuffled to the reducers, and joined there. A reduce-side join is the most general way to join multiple tables, although all the data must pass through the shuffle.