Tuesday, 25 June 2019

Top 20 Oracle PL/SQL Interview Questions

The following technical interview questions and answers are based on my experience working on Oracle PL/SQL projects. They may be useful to both freshers and experienced developers preparing for a PL/SQL interview.


Ques: 1. What is a mutating table error? How can you get this error?

Ans: This error (ORA-04091) can occur with triggers, whenever a row-level trigger queries or modifies the very table that fired it. Because the table is in the middle of a transaction, referencing it again from within the triggering statement leaves it in a mutating state. The usual fix involves either views or temporary tables, so the database selects from one object while updating the other, or restructuring the logic into a statement-level or compound trigger.


Ques: 2. What is the key difference between SQL and PL/SQL?

Ans: SQL and PL/SQL are both used to access data within Oracle databases. SQL is a declarative language that lets you interact with the database directly: you can write queries and manipulate the objects and data of an Oracle database. However, SQL does not include programming constructs.
With PL/SQL, you can also extract and manipulate the objects and data of an Oracle database, and in addition use the features of a normal programming language, such as loops and conditional control flow.


Ques: 3. What is the difference between Form triggers and Database level triggers?

Ans: Form-level triggers are used in Oracle Forms and fire at whatever level the application requires, such as item level, record level, or block level. Database triggers are written in the database itself and fire automatically in response to a transaction such as an INSERT, UPDATE, or DELETE on a table.
The key difference is that form-level triggers fire in response to user or application events, while database triggers fire automatically on DML.


Ques: 4. What is Pseudo column?

Ans: Pseudo columns behave like table columns but are not actually stored in the table: you can select from them, but you cannot insert, update, or delete their values.
ROWNUM, ROWID, LEVEL, CURRVAL, NEXTVAL, and ORA_ROWSCN are pseudo columns in the Oracle database. (SYSDATE, UID, USER, and SYSTIMESTAMP are technically functions, but they are often listed alongside pseudo columns.)


Ques: 5. What is exception handling in oracle? How can they be handled?

Ans: An exception is an error situation that arises during program execution. When an error occurs, an exception is normally raised, execution stops, and control transfers to the exception-handling part of the block. Exception handlers are routines written to handle the exception. Exceptions can be internally defined or user-defined. In Oracle, exceptions can be handled with an EXCEPTION WHEN handler using, for example, these predefined exceptions:
DUP_VAL_ON_INDEX
NOT_LOGGED_ON
TOO_MANY_ROWS
VALUE_ERROR
NO_DATA_FOUND
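A minimal sketch of an exception handler, assuming a hypothetical EMPLOYEES table (all names are illustrative):

```sql
DECLARE
    v_name employees.last_name%TYPE;
BEGIN
    SELECT last_name INTO v_name
      FROM employees
     WHERE employee_id = 999;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee found.');
    WHEN TOO_MANY_ROWS THEN
        DBMS_OUTPUT.PUT_LINE('Query returned more than one row.');
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
END;
/
```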


Ques: 6. What is the key difference between OPEN-FETCH-CLOSE and FOR LOOP in CURSOR?

Ans: A FOR LOOP in cursor implicitly declares its loop index as a %ROWTYPE record, opens a cursor, repeatedly fetches rows of values from the result set into fields in the record, and closes the cursor when all rows have been processed.
With OPEN-FETCH-CLOSE, we have to explicitly open the cursor, fetch from it, test for the exit condition, and close it ourselves.
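Both forms can be sketched side by side; this assumes a hypothetical EMPLOYEES table:

```sql
DECLARE
    CURSOR c_emp IS SELECT employee_id, last_name FROM employees;
    r_emp c_emp%ROWTYPE;
BEGIN
    -- Cursor FOR loop: open, fetch, and close are all implicit
    FOR r IN c_emp LOOP
        DBMS_OUTPUT.PUT_LINE(r.last_name);
    END LOOP;

    -- OPEN-FETCH-CLOSE: every step is explicit
    OPEN c_emp;
    LOOP
        FETCH c_emp INTO r_emp;
        EXIT WHEN c_emp%NOTFOUND;
        DBMS_OUTPUT.PUT_LINE(r_emp.last_name);
    END LOOP;
    CLOSE c_emp;
END;
/
```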


Ques: 7. What is Dynamic SQL? How can we use it in Oracle?

Ans: Dynamic SQL is used by PL/SQL to execute Data Definition Language (DDL) statements, Data Control Language (DCL) statements, or Transaction Control statements within PL/SQL blocks. These statements can, and probably will, change from execution to execution, i.e. they are built at runtime. They are not stored as fixed text within the source code but are held as character variables in the program.
    The SQL statements are created dynamically at runtime using variables, either through native dynamic SQL (EXECUTE IMMEDIATE) or through the DBMS_SQL package. Dynamic SQL supports all SQL data types.
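A minimal native dynamic SQL sketch; the table and column names are illustrative:

```sql
DECLARE
    v_table VARCHAR2(30) := 'employees';
    v_count NUMBER;
BEGIN
    -- DDL can only be run from PL/SQL dynamically
    EXECUTE IMMEDIATE 'CREATE TABLE temp_demo (id NUMBER)';

    -- The statement text is built at runtime; bind variables pass values safely
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || v_table ||
                      ' WHERE department_id = :dept'
        INTO v_count USING 10;

    EXECUTE IMMEDIATE 'DROP TABLE temp_demo';
END;
/
```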


Ques: 8. What is the key difference Between Row Level Trigger and Statement Level Trigger?

Ans: A row-level trigger executes once for each row affected by the DML event, either before or after it. It is defined with the FOR EACH ROW clause.
A statement-level trigger executes once before or after the DML event, no matter how many rows the statement affects.


Ques: 9. What are the various steps included in the compilation process of a PL/SQL block?

Ans: The compilation process of a PL/SQL block involves syntax checking, binding, and p-code generation. Syntax checking examines the PL/SQL code for compilation errors. Once syntax errors have been corrected, a storage address is assigned to the variables that hold data for Oracle; this process is called binding. After binding, p-code is generated for the PL/SQL block. P-code is a list of instructions for the PL/SQL engine. For named blocks, the p-code is stored in the database and reused the next time the program is executed.


Ques: 10. What packages has Oracle provided to PL/SQL developers?

Ans: Oracle provides the DBMS_ and UTL_ series of packages to make PL/SQL development easier. A developer should be aware of the following packages:
DBMS_DDL
UTL_FILE
DBMS_OUTPUT
DBMS_JOB
DBMS_SQL
DBMS_PIPE
DBMS_TRANSACTION
DBMS_LOCK
DBMS_ALERT
DBMS_UTILITY


Ques: 11. How can you protect your PL/SQL source code?

Ans: Oracle provides a wrap utility that can be used to scramble PL/SQL source code. This utility was introduced in Oracle 7.
The utility takes human-readable PL/SQL source code as input and writes out portable, obfuscated code, which can be larger than the original. The wrapped code can be distributed without fear of exposing your algorithms and methods, and Oracle still knows how to execute it. Be careful: there is no "unwrap" command, so always keep the original source code saved.


Ques: 12. What are SQLCODE and SQLERRM in PLSQL? What is their importance in programming in PLSQL?

Ans: In PLSQL, SQLCODE returns the value of the error number for the last encountered error. Whereas the SQLERRM returns the actual error message for the last encountered error. They are used in exception handling in PLSQL. These are very useful for the WHEN OTHERS exception.
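A minimal sketch of their use in a WHEN OTHERS handler (the raised error is simulated for illustration):

```sql
DECLARE
    v_code NUMBER;
    v_msg  VARCHAR2(512);
BEGIN
    RAISE NO_DATA_FOUND;  -- simulate an error
EXCEPTION
    WHEN OTHERS THEN
        v_code := SQLCODE;  -- error number of the last error
        v_msg  := SQLERRM;  -- matching error message text
        DBMS_OUTPUT.PUT_LINE('Error ' || v_code || ': ' || v_msg);
END;
/
```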


Ques: 13. What do you understand by cursors in PLSQL?

Ans: A cursor is a pointer to a private SQL work area in memory and is used for data manipulation operations in PL/SQL. It can also be used to improve the performance of a PL/SQL block.

Two types of cursors are:

a). Implicit cursor: Oracle creates and manages implicit cursors internally for every DML statement; developers have no direct control over them. We can read their state through cursor attributes such as SQL%NOTFOUND and SQL%ROWCOUNT.

b). Explicit cursor: Explicit cursors are declared by the developer, who controls them using the OPEN, FETCH, and CLOSE statements.

There are five types of cursors attributes in PLSQL:

1). %ISOPEN: verifies whether the cursor is currently open.
2). %FOUND: returns TRUE if the last fetch returned a row.
3). %NOTFOUND: returns TRUE if the last fetch did not return a row.
4). %ROWCOUNT: returns the number of rows fetched so far.
5). %BULK_ROWCOUNT: like %ROWCOUNT, but used with bulk (FORALL) operations, reporting the rows affected per collection element.


Ques: 14. What are Index-By Tables in PL/SQL?

Ans: Index-By-Tables is also known as Associative Arrays. These are the sets of key-value pairs, where each key is unique and is used to locate a corresponding value in the array. The key can be an integer or a string.

Syntax : TYPE tab_type_name IS TABLE OF element_type
             INDEX BY BINARY_INTEGER;

Ex:  TYPE DeptTabTyp IS TABLE OF Dept%ROWTYPE
         INDEX BY BINARY_INTEGER;
     Dept_tab DeptTabTyp;


Ques: 15. What do you understand by bulk binding?

Ans: The bulk binding technique improves performance by minimizing the number of context switches between the PL/SQL and SQL engines. A DML operation statement can transfer all the elements of a collection in a single operation.

           For example, if the collection has 100 elements, bulk binding lets you perform the equivalent of 100 SELECT, INSERT, UPDATE, or DELETE statements in a single operation.
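A minimal sketch of BULK COLLECT and FORALL, assuming a hypothetical EMPLOYEES table:

```sql
DECLARE
    TYPE id_tab IS TABLE OF employees.employee_id%TYPE;
    v_ids id_tab;
BEGIN
    -- One context switch fetches all matching rows into the collection
    SELECT employee_id BULK COLLECT INTO v_ids
      FROM employees
     WHERE department_id = 10;

    -- One context switch issues all the updates
    FORALL i IN 1 .. v_ids.COUNT
        UPDATE employees SET salary = salary * 1.05
         WHERE employee_id = v_ids(i);
END;
/
```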


Ques: 16. What do you understand by AUTONOMOUS_TRANSACTION?

Ans: The pragma AUTONOMOUS_TRANSACTION instructs the PL/SQL compiler to mark a PL/SQL block as autonomous, i.e. independent. Autonomous transactions let a block suspend the main transaction, perform DML operations, commit or roll back those operations, and then resume the main transaction.
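A common use is an autonomous logging procedure; this sketch assumes a hypothetical APP_LOG table:

```sql
CREATE OR REPLACE PROCEDURE log_message (p_msg VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO app_log (logged_at, message)
    VALUES (SYSTIMESTAMP, p_msg);
    COMMIT;  -- commits only this autonomous transaction,
             -- not the caller's main transaction
END;
/
```

Because the COMMIT here is independent, the log row survives even if the caller later rolls back.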


Ques: 17. What are the two components of LOB datatype in PLSQL?

Ans: The two components of the LOB datatype in PLSQL are:

1). LOB locator: LOB Locator is a locator, which points to the location in the database where the actual value is stored. This value is stored along with the record in the table row and is like a pointer to the actual location of LOB value.

2). LOB value: This refers to the actual image, file, or value of the LOB datatype.


Ques: 18. What is cascading of triggers?

Ans: Suppose we insert data into a table that has a trigger on it, and that trigger fires. If the trigger itself inserts data into a second table that also has a trigger on it, the second trigger fires as well. This chain of one trigger causing another to fire is called cascading of triggers.


Ques: 19. What do you understand by Table Functions in PLSQL?

Ans: A table function can be used like the name of a database table. Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable.


Ques: 20. What is the use of NOCOPY in PL/SQL?

Ans: In PL/SQL programming, NOCOPY is a compiler hint. When parameters hold large data structures such as collections and records, copying them on every call slows down execution. To prevent this, we can specify NOCOPY, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference.






Wednesday, 5 June 2019

Top 20 Hadoop Technical Interview Questions



Ques: 1. What is the need of Hadoop?

Ans: A large amount of unstructured data is getting dumped into our machines every single day. The major challenge is not storing large data sets in our systems but retrieving and analyzing the big data in organizations, especially when that data is present in different machines at different locations.
In this situation, a necessity for Hadoop arises. Hadoop has the ability to analyze the data present in different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce which enables it to divide the query into small parts and process them in parallel. This is also known as parallel computing.


Ques: 2. What is the Hadoop Framework?

Ans: Hadoop is an open-source framework written in Java by the Apache Software Foundation. The framework is used to write software applications that process vast amounts of data; Hadoop can handle multiple terabytes. It works in parallel on large clusters, which can have thousands of computers (nodes), and it processes data in a very reliable and fault-tolerant manner.


Ques: 3. What are the various basic characteristics of Hadoop?

Ans: The Hadoop framework is capable of solving problems involving Big Data analysis. It is written in Java. Its programming model is based on Google's MapReduce, and its infrastructure is based on Google's distributed file system (GFS) and Bigtable. Hadoop is scalable, and more nodes can be added to it.


Ques: 4. What are the core components of Hadoop?

Ans: HDFS and MapReduce are the core components of Hadoop. Hadoop Distributed File System (HDFS) is basically used to store large data sets and MapReduce is used to process such large data sets.


Ques: 5. What do you understand by streaming access?

Ans: HDFS works on the principle of ‘Write Once, Read Many’. This feature of streaming access is extremely important in HDFS. In HDFS, reading the complete data is more important than the time taken to fetch a single record from the data. HDFS focuses not so much on storing the data but how to retrieve it at the fastest possible speed, especially while analyzing logs.


Ques: 6. What do you understand by a task tracker?

Ans: Task Trackers manage the execution of individual tasks on the slave nodes. The Task Tracker is a daemon that runs on the DataNodes. When a client submits a job, the Job Tracker initializes the job, divides the work, and assigns it to different Task Trackers to perform the MapReduce tasks.
While performing this work, each Task Tracker continuously communicates with the Job Tracker by sending heartbeats. If the Job Tracker does not receive a heartbeat from a Task Tracker within the specified time, it assumes that the Task Tracker has crashed and assigns its tasks to another Task Tracker in the cluster.


Ques: 7. If a particular file is 40 MB, Will the HDFS block still consume 64 MB as the default size?

Ans: No, 64 MB is just the unit in which data is stored. In this situation, only 40 MB will be consumed by the HDFS block, and the remaining 24 MB is free to store something else. It is the NameNode that allocates data in this efficient manner.


Ques: 8. What is a Rack in Hadoop?

Ans: A rack is a physical collection of DataNodes stored at a single location; it is a storage area with all these DataNodes put together. There can be multiple racks in a single data center, and the DataNodes of different racks can be physically located in different places.


Ques: 9. How will the data be stored on a Rack?

Ans: The content of the file will be divided into blocks whenever the client is ready to load a file into the cluster. Now the client consults the NameNode and gets 3 DataNodes for every block of the file which indicates where the block should be stored. While placing the DataNodes, the key rule followed is “for every block of data, two copies will exist in one rack, third copy in a different rack”. This rule is known as “Replica Placement Policy”.


Ques: 10. Can you explain the input and output data format of the Hadoop Framework?

Ans: The MapReduce framework operates exclusively on <key, value> pairs: the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.
The flow can be like: (input) <k1, v1> -> map -> <k2, v2> -> combine/sort -> <k2, list(v2)> -> reduce -> <k3, v3> (output)
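The flow above can be sketched in plain Java as a minimal word-count simulation with no Hadoop dependency; the map, shuffle/sort, and reduce phases are modeled here with ordinary collections, and the class and method names are illustrative:

```java
import java.util.*;

// A minimal, framework-free simulation of the MapReduce <key, value> flow.
public class MiniMapReduce {

    // Map phase: one input line -> list of <word, 1> pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) out.add(new AbstractMap.SimpleEntry<>(word, 1));
        }
        return out;
    }

    // Shuffle/sort phase: group intermediate values by key, sorted by key
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    // Reduce phase: <word, list(1, 1, ...)> -> <word, count>
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> out = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            out.put(e.getKey(), sum);
        }
        return out;
    }

    public static Map<String, Integer> wordCount(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) pairs.addAll(map(line));
        return reduce(shuffle(pairs));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(Arrays.asList("big data", "big cluster")));
    }
}
```

In real Hadoop the same phases are split across the cluster, but the data shapes at each arrow are the same.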


Ques: 11. How can you use the Reducer?

Ans: The Reducer reduces a set of intermediate values that share a key to a (usually smaller) set of values. The number of reduces for the job is set by the user via Job.setNumReduceTasks(int).


Ques: 12. How can you explain the core methods of the Reducer?

Ans: The API of Reducer is very similar to that of Mapper: there is a run() method that receives a Context containing the job's configuration, as well as interfacing methods that return data from the reducer back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data using Context.getConfiguration().
The reduce() method is the heart of any Reducer. It is called once per key; the second argument is an Iterable that returns all the values associated with that key.


Ques: 13. How can you schedule a Task by a Jobtracker?

Ans: The TaskTrackers send heartbeat messages to the JobTracker, usually every few seconds, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data; failing that, it looks for an empty slot on a machine in the same rack.


Ques: 14. How many Daemon processes run on a Hadoop cluster?

Ans: Five daemons run on a Hadoop cluster, and each of them runs in its own JVM.
The NameNode, Secondary NameNode, and JobTracker daemons run on the master nodes; the DataNode and TaskTracker run on each slave node.
  • NameNode: stores and maintains the metadata for HDFS.
  • Secondary NameNode: performs housekeeping functions for the NameNode.
  • JobTracker: manages MapReduce jobs and distributes individual tasks to the machines running the TaskTracker.
  • DataNode: stores the actual HDFS data blocks.
  • TaskTracker: responsible for instantiating and monitoring individual map and reduce tasks.


Ques: 15. What is Hadoop Distributed File System (HDFS)? How it is different from Traditional File Systems?

Ans: The Hadoop Distributed File System (HDFS), is responsible for storing huge data on the cluster. This is a distributed file system designed to run on commodity hardware.
It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.
  • HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
  • HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
  • HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.

Ques: 16. What are the IdentityMapper and IdentityReducer in Mapreduce?

Ans:
  • org.apache.hadoop.mapred.lib.IdentityMapper: implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default.
  • org.apache.hadoop.mapred.lib.IdentityReducer: performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default.

Ques: 17. What do you mean by commodity hardware? How can Hadoop work on them?

Ans: Average, inexpensive systems are known as commodity hardware, and Hadoop can be installed on any of them. Hadoop does not require high-end hardware to function.


Ques: 18. Which one is the Master Node in HDFS? Can it be commodity?

Ans: The NameNode is the master node in HDFS, and the JobTracker runs on it. The node contains the metadata, works as a high-availability machine, and is the single point of failure in HDFS. It cannot be commodity hardware, because the entire HDFS depends on it.


Ques: 19. What is the main difference between Mapper and Reducer?

Ans:
  • The map method is called separately for each key/value pair to be processed. It processes input key/value pairs and emits intermediate key/value pairs.
  • The reduce method is called separately for each key/list-of-values pair. It processes intermediate key/value pairs and emits final key/value pairs.
  • In both cases, setup is called before any other method, and neither map nor reduce returns a value directly; output is emitted through the framework.

Ques: 20. What is difference between MapSide Join and ReduceSide Join?

Ans:
Joining multiple data sets on the mapper side is called a map-side join. Note that a map-side join has strict requirements: the inputs must be partitioned and sorted in the same way, or one data set must be small enough to be loaded into memory on every mapper. Because no shuffle is needed, it is fast.
Joining multiple data sets on the reducer side is called a reduce-side join. If both tables hold large amounts of data, for example one with a huge number of rows and columns and another somewhat smaller, the join goes through the reduce phase after the data has been partitioned and shuffled by the join key. It is the most general way to join multiple tables.

Top 50 Advanced Java Interview Questions


Ques: 1. What do you understand by Private, Protected and Public?

Ans: Private, Protected and Public are the accessibility modifiers. Private is the most restrictive, while public is the least restrictive. There is no real difference between protected and the default type (also known as package protected) within the context of the same package, however the protected keyword allows visibility to a derived class in a different package.


Ques: 2. What is a java package? How is it used?

Ans: A Java package is a naming context for classes and interfaces.
  • A package is used to create a separate name space for groups of classes and interfaces.
  • Packages are used to organize related classes and interfaces into a single API unit and to control accessibility to these classes and interfaces.

Ques: 3. What do you understand by EAR, JAR and WAR File?

Ans:
Enterprise Archives (EAR): An EAR file contains all the components that make up a J2EE application.

Java Archives (JAR): A JAR file encapsulates one or more Java classes, a manifest, and a descriptor. JAR files are the lowest level of archive. JAR files are used in J2EE for packaging EJBs and client-side Java Applications.

Web Archives (WAR): WAR files are like JAR files, except that they are specifically for web applications made from Servlets, JSPs, and supporting classes.


Ques: 4. How Java Source Code files are named?

Ans: A source code file may contain at most one public class or interface. A Java source code file takes the name of a public class or interface that is defined within the file. Source code files use the .java extension.
If a public class or interface is defined within a source code file, then the source code file must take the name of the public class or interface. If no public class or interface is defined within a source code file, then the file must take on a name that is different than its classes and interfaces.


Ques: 5. What do you understand by Numeric Promotion?

Ans: Numeric promotion is the conversion of a smaller numeric type to a larger numeric type so that integer and floating-point operations may take place. In numeric promotion, byte, char, and short values are converted to int values. If required, int values are then converted to long values, and long and float values can be converted to double values.
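These rules can be shown in a small, self-contained demonstration (class and method names are illustrative):

```java
// Demonstrates numeric promotion: byte, char, and short are promoted to int
// in arithmetic; int widens to long; long and float widen to double.
public class PromotionDemo {

    // byte + char + short compiles because each operand is promoted to int
    static int promote(byte b, char c, short s) {
        return b + c + s;
    }

    public static void main(String[] args) {
        byte b = 10;
        char c = 'A';            // numeric value 65
        short s = 3;
        int sum = promote(b, c, s); // 10 + 65 + 3
        long big = sum + 1L;        // int promoted to long
        double d = big + 1.5f;      // long and float promoted to double
        System.out.println(sum + " " + big + " " + d);
    }
}
```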


Ques: 6. What is the difference between Time Slicing and Preemptive Scheduling?

Ans: Under time slicing in java, a task executes for a predefined slice of time and then reenters the pool of ready tasks.
But under preemptive scheduling, the highest priority task executes until it enters the waiting or dead states, or a higher priority task comes into existence.

The scheduler then determines which task should execute next, based on priority and other factors.


Ques: 7. What do you mean by a Task's Priority? How is it used in Scheduling?

Ans: A task's priority is an integer value that identifies the relative order in which it should be executed with respect to other tasks. The scheduler attempts to schedule higher priority tasks before lower priority tasks.


Ques: 8. What technologies are included in J2EE?

Ans: The various technologies in J2EE are:
  • Enterprise JavaBeans (EJBs).
  • JavaServer Pages (JSPs).
  • Java Servlets.
  • The Java Naming and Directory Interface (JNDI).
  • The Java Transaction API (JTA).
  • CORBA.
  • The JDBC data access API.

Ques: 9. What is the main purpose of the wait(), notify() and notifyAll() methods in Java?

Ans: These methods are used to provide an efficient way for threads to wait for a shared resource. When a thread in java executes an object's wait() method, it enters the waiting state. It only enters the ready state after another thread invokes the object's notify() or notifyAll() methods.


Ques: 10. What happens when you invoke a Thread's interrupt method while it is Sleeping or Waiting?

Ans: When a task's interrupt() method is executed, the task enters the ready state. The next time the task enters the running state, an InterruptedException is thrown.


Ques: 11. What are the various restrictions placed on method overriding?

Ans:
  • The overriding method may not limit the access of the method it overrides.
  • Overridden methods must have the same name, argument list, and return type.
  • The overriding method may not throw any exceptions that may not be thrown by the overridden method.

Ques: 12. What do you understand by Java Swing?

Ans: Swing is basically a type of Toolkit which is GUI toolkit for Java. It is one part of the Java Foundation Classes (JFC). Swing includes graphical user interface (GUI) widgets such as text boxes, buttons, split-panes, and tables.
Swing supports pluggable look and feel, not by using the native platform's facilities, but by roughly emulating them. This means you can get any supported look and feel on any platform.
Swing widgets provide more sophisticated GUI components than the earlier Abstract Window Toolkit. Since they are written in pure Java, they run the same on all platforms. The advantage is uniform behavior on all platforms. The disadvantage of lightweight components is slower execution.


Ques: 13. What are the differences between Swing and AWT?

Ans: There are many differences between Swing and AWT:
  • AWT components are heavyweight, whereas Swing components are lightweight.
  • AWT is OS-dependent because it uses native components, while Swing components are OS-independent.
  • You can change the look and feel in Swing, which is not possible with AWT.
  • Swing takes less memory compared to AWT.
  • For drawing, AWT uses screen rendering, whereas Swing uses double buffering.

Ques: 14. What are the Types of Scaling?

Ans: In java, there are two types of scaling: Horizontal Scaling and Vertical Scaling.

Horizontal Scaling: When clones of an application server are defined on multiple physical machines, it is called horizontal scaling. The objective is to use several less powerful machines more efficiently.

Vertical Scaling: When multiple server clones of an application server are defined on the same physical machine, it is called vertical scaling. The objective is to use the processing power of that machine more efficiently.


Ques: 15. What are the types of Class Loaders in java?

Ans: In Java, there are three types of class loaders: the bootstrap class loader, the extension class loader, and the system class loader.
Bootstrap Class Loader: loads Java's core classes, such as java.lang and java.util, which are part of the Java runtime environment. The bootstrap class loader is a native implementation, so it may differ across JVMs.
Extension Class Loader: JAVA_HOME/jre/lib/ext contains JAR packages that are extensions of the standard core Java classes. The extension class loader loads classes from this ext folder. Using the system property java.ext.dirs, you can add further 'ext' folders and JAR files to be loaded by the extension class loader.
System Class Loader: Java classes that are available on the Java classpath are loaded by the system class loader.


Ques: 16. What are differences between the boolean & operator and the && operator?

Ans: When an expression involving the boolean & operator is evaluated, both operands are evaluated, and then the & operator is applied to them. When an expression involving the && operator is evaluated, the first operand is evaluated first.
If the first operand returns true, the second operand is evaluated and the && operator is applied to both operands. If the first operand evaluates to false, the evaluation of the second operand is skipped.
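The short-circuit behaviour can be made visible with a side effect; this sketch (names are illustrative) counts how often the right-hand operand actually runs:

```java
// Demonstrates that && short-circuits while & always evaluates both operands.
public class ShortCircuitDemo {
    static int calls = 0;

    // Side effect lets us observe whether the operand was evaluated
    static boolean touch() {
        calls++;
        return true;
    }

    public static void main(String[] args) {
        calls = 0;
        boolean a = false & touch();   // touch() IS evaluated
        int afterAnd = calls;          // 1
        boolean b = false && touch();  // touch() is SKIPPED
        int afterShortCircuit = calls; // still 1
        System.out.println(afterAnd + " " + afterShortCircuit);
    }
}
```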


Ques: 17. What do you mean by Casting?

Ans: In java, there are two types of casting:

1). Casting between primitive numeric types: used to convert larger values, such as double values, to smaller values, such as byte values.

2). Casting between object references: used to refer to an object by a compatible class, interface, or array type reference.


Ques: 18. How does a Try statement determine which Catch clause should be used to handle an exception?

Ans: When an exception is thrown within the body of a try statement, the catch clauses of the try statement are examined in the order in which they appear. The first catch clause that is capable of handling the exception is executed. The remaining catch clauses are ignored.
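This ordering can be shown with two catch clauses, the more specific one listed first (the class and method names here are illustrative):

```java
// Shows that catch clauses are examined in order and the first match wins.
public class CatchOrderDemo {

    static String classify(Object[] arr, int index) {
        try {
            return arr[index].toString();
        } catch (ArrayIndexOutOfBoundsException e) {
            return "bad index";           // more specific clause, listed first
        } catch (RuntimeException e) {
            return "other runtime error"; // catches NullPointerException etc.
        }
    }

    public static void main(String[] args) {
        Object[] arr = {"x", null};
        System.out.println(classify(arr, 5)); // out of bounds -> first clause
        System.out.println(classify(arr, 1)); // NPE -> second clause
        System.out.println(classify(arr, 0)); // no exception at all
    }
}
```

If the clauses were reversed, the compiler would reject the code, because the broader RuntimeException clause would make the narrower one unreachable.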


Ques: 19. What happens if remove() is never invoked on a session bean?

Ans: In the case of a stateful session bean, the bean may be kept in the cache until either the session times out, at which point the bean is removed, or there is a need for memory, in which case the bean's state is passivated and the bean is returned to the free pool.
In the case of a stateless session bean, it does not matter whether we call remove() or not, as nothing is done in either case. The number of beans in the cache is managed by the container.


Ques: 20. What is the Java Reflection API?

Ans: Reflection is one of the most powerful APIs; it helps you work with classes, methods, and variables dynamically. Basically, it inspects class attributes at runtime. We can also say it provides metadata about a class.


Ques: 21. What is the Collection API in java?

Ans: The Collection API is a set of classes and interfaces that support operations on collections of objects. These classes and interfaces are more flexible, more powerful, and more regular than the vectors, arrays, and hashtables they effectively replace.

Example of classes: HashSet, HashMap, ArrayList, LinkedList, TreeSet and TreeMap.

Example of interfaces: Collection, Set, List and Map.


Ques: 22. What is the difference between Session and Entity Beans?

Ans: An entity bean represents persistent global data from the database; a session bean represents transient user-specific data that dies when the user disconnects (ends the session). Generally, session beans implement business methods that call entity beans.


Ques: 23. What do you understand by Abstract Schema in java?

Ans: In java, you can specify the name of the Abstract schema name in the deployment descriptor. The queries written in EJB QL for the finder methods references this name. It is a part of an entity bean’s deployment descriptor which defines the bean’s persistent fields and their relationship. Abstract schema is specified for entity beans with container managed persistence. The information provided in this Abstract Schema is used by the container for persistence management and relationship management.


Ques: 24. What is the use of the finally block? Is the finally block in Java guaranteed to be called? When is it not called?

Ans: The code in a finally block executes even if an exception occurs; finally is the block of code that always executes. The finally block is NOT called in the following situations:
  • If the JVM exits while the try or catch code is being executed, the finally block may not execute. This can happen due to a System.exit() call.
  • If the thread executing the try or catch code is interrupted or killed, the finally block may not execute even though the application as a whole continues.
  • If an exception is thrown in the finally block itself and is not handled, the remaining code in the finally block may not be executed.
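The normal guarantee can be demonstrated on both the exception path and the clean path (class and method names are illustrative):

```java
// Shows that finally runs whether or not the try body throws.
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static String run(boolean fail) {
        log.setLength(0);
        try {
            log.append("try;");
            if (fail) throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            log.append("catch;");
        } finally {
            log.append("finally;");  // executes on both paths
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(false));
        System.out.println(run(true));
    }
}
```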

Ques: 25. What do you mean by Local Interface. How values will be passed to them?

Ans: An EJB can use local client view only if it is really guaranteed that other enterprise beans or clients will only address the bean within a single JVM. With local client view, you can do pass-by-reference, which means your bean, as well as the client, will work directly with one copy of the data. Any changes made by the bean will be seen by the client and vice versa. Pass-by-reference eliminates time/system expenses for copying data variables, which provides a performance advantage.


Ques: 26. What is the main difference between Serializable and Externalizable Interfaces?

Ans: Both interfaces are used to implement serialization. The basic difference is that the Serializable interface does not have any methods (it is a marker interface), while the Externalizable interface has two methods: readExternal() and writeExternal(). Serializable is the superinterface of Externalizable.


Ques: 27. What is the relation between Local Interfaces and Container-managed relationships?

Ans: To be the target of a container-managed relationship, an entity bean with container-managed persistence must provide a local interface. Entity beans that have container-managed relationships with other entity beans, must be accessed in the same local scope as those related beans, and therefore typically provide a local client view.


Ques: 28. What are the differences between Creating String as New() and Literal?

Ans: When we create a string with the new() operator, the object is created on the heap and is not added to the string pool, whereas strings created as literals go into the string pool (which lived in the PermGen area before Java 7 and has been part of the heap since).
String s = new String("Test");
This does not put the object into the string pool; we need to call the intern() method to add it explicitly. Only when you create a String as a literal, e.g. String s = "Test";, does Java automatically put it into the pool.
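The difference is easy to observe with reference comparison (==) versus content comparison (equals()). The class name StringPoolDemo is just illustrative:

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String literal = "Test";              // goes into the string pool
        String heap = new String("Test");     // new object on the heap, not pooled

        System.out.println(literal == heap);          // false: different objects
        System.out.println(literal.equals(heap));     // true:  same characters
        System.out.println(literal == heap.intern()); // true:  intern() returns the pooled copy
    }
}
```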


Ques: 29. What is the main difference between Final, Finally And Finalize?

Ans: 
Final is used to apply restrictions on classes, methods and variables: a final class can’t be inherited, a final method can’t be overridden, and a final variable’s value can’t be changed.

Finally is used to place important code; it executes whether or not an exception is handled.
Finalize is used to perform clean-up processing just before an object is garbage collected.
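The three uses of final can be sketched in one small class (FinalDemo and Config are hypothetical names for illustration):

```java
public class FinalDemo {
    static final int LIMIT = 10;          // final variable: cannot be reassigned

    static final class Config {           // final class: cannot be extended
        final String name;
        Config(String name) { this.name = name; }
        String describe() {               // (a final class's methods are implicitly un-overridable)
            return name + " (limit " + LIMIT + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Config("app").describe()); // app (limit 10)
        // LIMIT = 20;  // would not compile: cannot assign a value to a final variable
    }
}
```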


Ques: 30. How can an Object's finalize() Method be invoked while it is reachable?

Ans: An object's finalize() method cannot be invoked by the garbage collector while the object is still reachable. However, an object's finalize() method may be invoked by other objects.


Ques: 31. What do you mean by an Object’s Lock? Which Object Have Locks?

Ans: A thread may execute a synchronized method of an object only after it has acquired the object’s lock. An object’s lock is a mechanism that is used by multiple threads to obtain synchronized access to the object. All objects and classes have locks. A class’s lock is acquired on the class’s Class object.
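A classic demonstration of the object lock is a synchronized counter shared between threads; without the synchronized keyword the increments could interleave and the final count could come up short. The CounterDemo name is illustrative:

```java
public class CounterDemo {
    private int count = 0;

    // A thread must acquire this object's lock before entering these methods,
    // so increments from multiple threads never interleave.
    synchronized void increment() { count++; }
    synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo counter = new CounterDemo();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) counter.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 20000 — never less, thanks to the lock
    }
}
```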


Ques: 32. What are the uses of Observer and Observable?

Ans: Objects that subclass the Observable class maintain a list of observers. When an Observable object changes state, it calls setChanged() and then notifyObservers(), which invokes the update() method of each registered observer to notify it of the change. The Observer interface is implemented by objects that observe Observable objects.
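A minimal sketch with java.util.Observable and java.util.Observer (note these classes were deprecated in Java 9, but they illustrate the pattern; Thermometer and ObserverDemo are made-up names):

```java
import java.util.Observable;

// Thermometer is the Observable; it must call setChanged() before notifyObservers().
class Thermometer extends Observable {
    void setTemperature(int celsius) {
        setChanged();                 // mark the state as modified
        notifyObservers(celsius);     // triggers update() on every registered Observer
    }
}

public class ObserverDemo {
    static String lastReport = "";

    public static void main(String[] args) {
        Thermometer t = new Thermometer();
        // Observer has a single method update(Observable, Object), so a lambda works.
        t.addObserver((obs, arg) -> lastReport = "temp=" + arg);
        t.setTemperature(21);
        System.out.println(lastReport); // temp=21
    }
}
```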


Ques: 33. How ConcurrentHashMap Works?

Ans: Basically, ConcurrentHashMap divides the map into segments (16 by default in Java 7 and earlier), each of which can be locked independently, so threads writing to different segments do not block each other. Since Java 8 it locks at the level of individual hash bins and uses CAS operations for common updates. The locking is entirely internal: the class is thread-safe without exposing its locks to callers.
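Because the locking is internal and per-key updates such as merge() are atomic, two threads can hammer the same key without any external synchronization. The ChmDemo name is illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                hits.merge("page", 1, Integer::sum); // atomic per-key update, no external lock
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(hits.get("page")); // 20000
    }
}
```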


Ques: 34. What are Transient Variables in java?

Ans: In Java, transient variables are not serialized. If a variable is declared transient in a Serializable class and an instance is written to an ObjectOutputStream, the variable’s value is not written to the stream; when the object is read back, the variable is restored to its default value (null for object references, zero for numeric primitives).
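A round-trip through a byte array shows the transient field coming back as null (Session and TransientDemo are hypothetical names for illustration):

```java
import java.io.*;

class Session implements Serializable {
    String user;
    transient String password;   // excluded from serialization
    Session(String user, String password) { this.user = user; this.password = password; }
}

public class TransientDemo {
    // Serializes a Session and reads it back from the resulting bytes.
    static Session roundTrip(Session s) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(s); }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Session) ois.readObject();
        }
    }
    public static void main(String[] args) throws Exception {
        Session copy = roundTrip(new Session("alice", "secret"));
        System.out.println(copy.user);     // alice
        System.out.println(copy.password); // null — the transient field was not written
    }
}
```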


Ques: 35. What do you understand by JFC?

Ans: JFC stands for Java Foundation Classes. The Java Foundation Classes (JFC) are a set of Java class libraries provided as part of the Java 2 Platform, Standard Edition (J2SE) to support building graphical user interfaces (GUIs) and graphics functionality for client applications that will run on popular platforms such as Microsoft Windows, Linux, and Mac OS X.


Ques: 36. What is the main difference between Find and Select methods In EJB?

Ans: A select method can return a persistent field (or a collection thereof) of a related entity bean. A finder method can return only a local or remote interface (or a collection of interfaces).

A select method is defined in the entity bean class. For bean-managed persistence, a finder method is defined in the entity bean class, but for container-managed persistence it is not.
Because it is not exposed in any of the local or remote interfaces, a select method cannot be invoked by a client. It can be invoked only by the methods implemented within the entity bean class. A select method is usually invoked by either a business or a home method.


Ques: 37. If some new data has entered in the database, how can a servlet refresh automatically?

Ans: Servlets are fundamentally request/response driven, so a servlet cannot refresh on its own. It depends on the scenario how you handle this: typically you handle it in the DAO layer, where the insert operation calls a utility method that reloads the data held in the application context (for example, a context managed through a ServletContextListener), so the next request sees the new data.


Ques: 38. What do you mean by In-memory replication?

Ans: In-memory replication is the process by which the contents of the memory of one physical machine are replicated on all the other machines in the cluster.


Ques: 39. What do you understand by synchronization? Why is it so Important?

Ans: In respect of multithreading, synchronization is the capability to control the access of multiple threads to shared resources. In the absence of synchronization, it is possible for one thread to modify a shared object while another thread is in the process of using or updating that object's value. This often leads to significant errors.


Ques: 40. How many Bits are used to represent Unicode, ASCII, UTF-16 and UTF-8 characters?

Ans: Although the ASCII character set uses only 7 bits, it is usually stored in 8-bit bytes. Java’s char type is a 16-bit UTF-16 code unit, and Unicode itself defines code points up to U+10FFFF, which is more than 16 bits can represent. UTF-8 is a variable-width encoding that represents characters using 8-, 16-, 24-, or 32-bit patterns (one to four bytes). UTF-16 uses a single 16-bit code unit for characters in the Basic Multilingual Plane and a pair of 16-bit surrogates (32 bits) for characters beyond it.
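These sizes can be checked directly with String.getBytes() (the EncodingDemo name is illustrative; the characters are written as Unicode escapes so the source compiles regardless of file encoding):

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String a = "A";                  // U+0041, plain ASCII
        String e = "\u00E9";             // é, U+00E9
        String euro = "\u20AC";          // €, U+20AC
        String smiley = "\uD83D\uDE00";  // U+1F600, outside the BMP

        System.out.println(a.getBytes(StandardCharsets.UTF_8).length);      // 1 byte
        System.out.println(e.getBytes(StandardCharsets.UTF_8).length);      // 2 bytes
        System.out.println(euro.getBytes(StandardCharsets.UTF_8).length);   // 3 bytes
        System.out.println(smiley.getBytes(StandardCharsets.UTF_8).length); // 4 bytes

        System.out.println(smiley.getBytes(StandardCharsets.UTF_16BE).length); // 4: surrogate pair
        System.out.println(a.length());      // 1 UTF-16 code unit
        System.out.println(smiley.length()); // 2 code units for one code point
    }
}
```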


Ques: 41. What is a Clone in java?

Ans: In Java application-server terminology, the copies of a server group are called clones. Unlike a server group, a clone is associated with a node and is a real server process running on that node.


Ques: 42. How does the Garbage Collection guarantee that a program will not run out of memory?

Ans: In java, garbage collection does not guarantee that a program will not run out of memory. It is possible for programs to use up memory resources faster than they are garbage collected. It is also possible for programs to create objects that are not subject to garbage collection.


Ques: 43. What do you mean by AWT?

Ans: AWT stands for Abstract Window Toolkit. AWT enables programmers to develop Java applications with GUI components, such as windows and buttons. The Java Virtual Machine (JVM) is responsible for translating the AWT calls into the appropriate calls to the host operating system.


Ques: 44. What Advantage do Java's Layout Managers provide over traditional Windowing Systems?

Ans: Java uses layout managers to lay out components in a consistent manner across all windowing platforms. Since Java's layout managers aren't tied to absolute sizing and positioning, they are able to accommodate platform-specific differences among windowing systems.


Ques: 45. Is Decorator an EJB design pattern?

Ans: The Decorator design pattern is not an EJB design pattern. It is a general object-oriented pattern that exhibits low-level runtime polymorphism for a specific, single object (an instance of a class) rather than for the class as a whole: it adds specific functionality to one targeted object and leaves other instances of the same class unmodified. In that respect it has close similarities to aspect-oriented techniques such as AspectJ, but nothing specific to EJB.


Ques: 46. What is the main purpose of the enableEvents() method?

Ans: The enableEvents() method is used to enable an event for a particular object. Normally, an event is enabled when a listener is added to an object for a particular event.
The enableEvents() method is used by objects that handle events by overriding their event-dispatch methods.


Ques: 47. What is the main difference between Paint() and PaintComponent()?

Ans: The main point is that the paint() method invokes three methods in the following order:
  • paintComponent()
  • paintBorder()
  • paintChildren()
As a general rule, in Swing we should override paintComponent() unless we know what we are doing. paintComponent() paints only the component itself (e.g. the panel), whereas paint() paints the component and all its children.


Ques: 48. What do you understand by double buffering?

Ans: Double buffering is the process of using two buffers rather than one to temporarily hold data being moved to and from an I/O device. Double buffering increases data transfer speed because one buffer can be filled while the other is being emptied.


Ques: 49. What is the main difference between the File and RandomAccessFile Classes?

Ans: The File class encapsulates the files and directories of the local file system.
The RandomAccessFile class provides the methods needed to directly access data contained in any part of a file.
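A short sketch of the difference in practice: RandomAccessFile can seek() to an arbitrary byte offset and read or write there, which a plain stream cannot. The RafDemo name is illustrative, and the sketch writes to a temporary file so it is self-contained:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class RafDemo {
    // Writes "HelloWorld" to a temp file, then jumps directly to the middle of it.
    static String readMiddle() throws Exception {
        File tmp = File.createTempFile("raf-demo", ".txt");
        tmp.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
            raf.writeBytes("HelloWorld");
            raf.seek(5);                  // jump straight to byte offset 5
            byte[] buf = new byte[5];
            raf.readFully(buf);
            return new String(buf, StandardCharsets.US_ASCII);
        }
    }
    public static void main(String[] args) throws Exception {
        System.out.println(readMiddle()); // World
    }
}
```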


Ques: 50. Can you name some of the LayoutManagers in Java?

Ans: Some of the LayoutManagers in java are:
  • FlowLayout
  • GridLayout
  • BoxLayout
  • BorderLayout
  • CardLayout
  • GridBagLayout