Wednesday, 5 January 2022

Top 20 Apache Struts 2 Interview Questions and Answers

  

Struts 2 is an open-source Java framework for building enterprise web applications. It was created by the Apache Software Foundation and first released in 2006. Written in Java, it is cross-platform and built on the MVC (Model-View-Controller) architecture, a software design pattern for structuring applications. Struts 2 includes features such as simplified testability, Ajax support, thread safety, and template support, among others.

Apache Cassandra Interview Questions and Answers

Ques. 1): What exactly is Struts2?

Answer:

Apache Struts2 is an open-source Java web application framework. Struts2 is built on the OpenSymphony WebWork framework. It is a significant improvement over Struts1, being more flexible, easier to use, and more extensible. The three main components of Struts2 are Actions, Interceptors, and Result pages.

Struts2 offers a variety of ways to create Action classes and configure them via struts.xml or annotations. We can create our own interceptors for common tasks. Struts2 ships with a large number of tags and uses the OGNL expression language. We can write our own type converters, and result pages can be rendered as JSPs or FreeMarker templates.
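As a minimal sketch (class, package, and result names are hypothetical), an action can be a plain POJO with an execute() method:

public class HelloAction {

   private String message;            // exposed to the result page through OGNL

   public String execute() {          // called by the framework for each request
      message = "Hello from Struts2";
      return "success";               // selects the matching <result> in struts.xml
   }

   public String getMessage() {
      return message;
   }
}

The corresponding struts.xml entry would map a URL to this class, e.g. <action name="hello" class="com.example.HelloAction"><result name="success">/hello.jsp</result></action>.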

 Apache Camel Interview Questions and Answers

Ques. 2): What are some of Struts2's features?

Answer:

Here are some of the fantastic features that can persuade you to use Struts2.

POJO forms and POJO actions − Struts2 has done away with the ActionForms that were an integral part of the Struts1 framework. With Struts2, any POJO can receive the form input, and any POJO can serve as an Action class.

Tag support − Struts2 has improved the form tags and the new tags allow the developers to write less code.

AJAX support − Struts2 recognises the shift toward Web 2.0 technologies and integrates AJAX support into the product via AJAX tags that work much like the standard Struts2 tags.

Easy Integration − Integration with other frameworks like Spring, Tiles, and SiteMesh is easier thanks to the variety of integrations available for Struts2.

Template Support − Support for generating views using templates.

Plugin Support − The core Struts2 behaviour can be enhanced and augmented by the use of plugins. A number of plugins are available for Struts2.

Apache Ant Interview Questions and Answers

Ques. 3): What's the difference between Struts 1 and Struts 2?

Answer: 

This is one of the most common Struts 2 interview questions. In Struts 1, the action class is not a POJO: it must extend an abstract Action class. An ActionServlet is used as the front controller, and only JSP is available for the view component. The configuration file can be placed in the WEB-INF directory. Struts 1 uses the RequestProcessor class when processing requests, and actions and models are kept separate.

In Struts 2, the action class is a POJO, so there is no need to extend any class or implement any interface. The view component supports JSP, FreeMarker, and others. The front controller in Struts 2 is the StrutsPrepareAndExecuteFilter. The configuration file must be named struts.xml and placed on the classpath (inside the classes directory). Struts 2 uses interceptors while processing requests, and action and model are combined within the action class.

Apache Tomcat Interview Questions and Answers

Ques. 4): In Struts 2, What Is The Use Of struts.properties?

Answer:

This configuration file lets you override the framework's default behaviour. In fact, every property in the struts.properties configuration file can also be defined in web.xml using init-param, or in the struts.xml configuration file using the constant tag. However, if you prefer to keep things separate and more Struts-specific, you can create this file in the WEB-INF/classes folder. Values configured in this file override the defaults configured in default.properties, which is included in the struts2-core-x.y.z.jar distribution.
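For example, a small struts.properties might override a few well-known framework constants (the values here are illustrative):

struts.devMode=true
struts.i18n.encoding=UTF-8
struts.action.extension=action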

Apache Kafka Interview Questions and Answers

Ques. 5): Explain The Life Cycle Of A Request In A Struts 2 Application?

Answer :

Following is the life cycle of a request in a Struts 2 application −

  • User sends a request to the server for some resource (i.e., a page).
  • The FilterDispatcher looks at the request and determines the appropriate Action.
  • Configured interceptor functionality is applied, such as validation, file upload, etc.
  • The selected action is executed to perform the requested operation.
  • Again, configured interceptors are applied to do any post-processing if required.
  • Finally, the result is prepared by the view and returned to the user.

Apache Tapestry Interview Questions and Answers

Ques. 6): What are the inbuilt themes provided by Struts 2?

Answer:

There are three different inbuilt themes:

Simple theme: A minimal theme with very little markup. The text field tag renders an HTML tag without a label, validation, error reporting, or any other formatting or functionality.

XHTML theme: The default theme in Struts 2. It provides everything the simple theme provides and adds several features, such as a standard two-column table layout, HTML labels for each field, validation, and error reporting.

Css_xhtml theme: Provides everything the simple theme provides and adds several features, such as a standard two-column CSS-based layout using div tags for the HTML Struts tags, with labels for each tag placed according to the CSS style sheet.

Apache Ambari interview Questions & Answers

 Ques. 7): What is internationalization and how does it work?

Answer:

This is one of the most common Struts 2 interview questions. Localization is the process of designing and implementing products and services so that they can easily be adapted to specific local languages and cultures, whereas internationalization is the process of making that localization possible.

Apache Hive Interview Questions & Answers

Ques. 8): What is the difference between an interceptor and a filter?

Answer:

Interceptors are part of Struts 2. They run for every request handled by the framework's front-controller filter, and additional interceptors can be configured to run for specific actions. Interceptor methods can be configured to execute or not via the excludeMethods and includeMethods parameters.

Filters are defined by the Servlet specification. A filter executes whenever the request matches its URL pattern, and its method calls are not configurable per action.

Apache Spark Interview Questions & Answers

Ques. 9): Explain Struts 2's XML-based validation.

Answer:

XML-based validation in Struts 2 provides many validation options, such as email validation, integer range validation, form field validation, expression validation, regex validation, required validation, string length validation, and required string validation, among others. In Struts 2 the XML file must be named 'ActionClass'-validation.xml.
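As a minimal sketch (the action is assumed to be a hypothetical LoginAction with an email property), LoginAction-validation.xml could look like this:

<validators>
   <field name="email">
      <field-validator type="requiredstring">
         <message>Email is required.</message>
      </field-validator>
      <field-validator type="email">
         <message>Please enter a valid email address.</message>
      </field-validator>
   </field>
</validators>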

Apache NiFi Interview Questions & Answers

Ques. 10): How Does Validation in Struts 2 Work?

Answer:

When the user clicks the submit button, Struts 2 runs the validate method; if any of the conditions inside the method are true, Struts 2 calls the addFieldError method. If any errors have been added, Struts 2 will not proceed to invoke the execute method. Instead, the framework returns input as the result of calling the action.

When validation fails and Struts 2 returns input, the framework redisplays the view file. Because we used Struts 2 form tags, the error messages appear directly above the form fields.

These are the error messages we specified in the call to the addFieldError method. addFieldError takes two arguments: the first is the name of the form field the error applies to, and the second is the error message to display above that field.
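A minimal sketch of this flow (field and message text are illustrative):

import com.opensymphony.xwork2.ActionSupport;

public class LoginAction extends ActionSupport {

   private String userName;

   public void validate() {
      // runs before execute(); adding a field error triggers the "input" result
      if (userName == null || userName.trim().isEmpty()) {
         addFieldError("userName", "User name is required.");
      }
   }

   public String execute() {
      return SUCCESS;   // only reached when validate() added no errors
   }

   public String getUserName() { return userName; }
   public void setUserName(String userName) { this.userName = userName; }
}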

 

Ques. 11): What Types Of Validations Are Available In Xml Based Validation In Struts2?

Answer :

Following is the list of various types of field level and non-field level validation available in Struts2 −

  • date validator
  • double validator
  • email validator
  • expression validator
  • int validator
  • regex validator
  • required validator
  • requiredstring validator
  • stringlength validator
  • url validator

 

Ques. 12):  How Does Struts 2's Interceptor Work?

Answer:

The actual action is executed by calling invocation.invoke() from within the interceptor, so depending on your needs you can perform pre-processing before that call and post-processing after it.

The framework initiates the process by calling the invoke() method on the ActionInvocation object. When invoke() is called, ActionInvocation consults its state and executes the next available interceptor. Once all of the configured interceptors have been invoked, invoke() causes the action itself to be executed.
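A minimal sketch of a custom interceptor built around that call (the timing logic is illustrative):

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

public class TimingInterceptor extends AbstractInterceptor {

   public String intercept(ActionInvocation invocation) throws Exception {
      long start = System.currentTimeMillis();       // pre-processing
      String result = invocation.invoke();           // next interceptor, then the action
      long elapsed = System.currentTimeMillis() - start;
      System.out.println("Action took " + elapsed + " ms");   // post-processing
      return result;
   }
}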

 

Ques. 13): What Is Value Stack?

Answer :

The value stack is a set of several objects which keeps the following objects in the provided order −

Temporary Objects − There are various temporary objects which are created during execution of a page. For example the current iteration value for a collection being looped over in a JSP tag.

The Model Object − If you are using model objects in your struts application, the current model object is placed before the action on the value stack.

The Action Object − This will be the current action object which is being executed.

Named Objects − These objects include #application, #session, #request, #attr and #parameters and refer to the corresponding servlet scopes.

 

Ques. 14): What Is The Difference Between Valuestack And OGNL?

Answer:

ValueStack is the storage space where Struts2 stores application data for processing client requests. The information is saved in ActionContext objects that use ThreadLocal to store values that are unique to each request thread.

OGNL (Object-Graph Navigation Language) is a sophisticated expression language for manipulating data on the ValueStack. Both interceptors and result pages can use OGNL to access data stored on the ValueStack.
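As a small sketch (the property and session attribute names are hypothetical), OGNL expressions appear inside Struts 2 tags on a result page:

<s:property value="employee.name" />      <%-- reads employee.getName() from the value stack --%>
<s:property value="#session.userId" />    <%-- named objects use the # prefix --%>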

 

Ques. 15): What Is The Struts-default Package And How Does It Help?

Answer:

Struts-default is an abstract package that defines all of the Struts2 interceptors and the most widely used interceptor stack. It is best to extend this package when configuring our application package, so that we don't have to configure the interceptors again. This is provided to assist developers by making the job of configuring interceptors and result pages in our application much easier.
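For example (package, action, and class names are hypothetical), an application package simply declares extends="struts-default" to inherit the whole default interceptor stack:

<package name="default" namespace="/" extends="struts-default">
   <action name="login" class="com.example.LoginAction">
      <result name="success">/welcome.jsp</result>
   </action>
</package>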

 

Ques. 16): What Is The Purpose Of The @After Annotation?

Answer :

The @After annotation marks an action method that should be called after the main action method and the result have been executed. Its return value is ignored.

import com.opensymphony.xwork2.ActionSupport;
import com.opensymphony.xwork2.validator.ValidationException;
import org.apache.struts2.interceptor.annotations.After;

public class Employee extends ActionSupport {

   @After
   public void isValid() throws ValidationException {
      // validate the model object, throw exception if validation fails
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

 

Ques. 17): What Is The Purpose Of The @Before Annotation?

Answer :

The @Before annotation marks an action method that should be called before the main action method executes. Its return value is ignored.

import com.opensymphony.xwork2.ActionSupport;
import org.apache.struts2.interceptor.annotations.Before;

public class Employee extends ActionSupport {

   @Before
   public void isAuthorized() throws AuthenticationException {
      // authorize the request, throw exception if it fails
      // (AuthenticationException is assumed to be an application-defined exception)
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

 

Ques. 18): What Is The Difference Between Using The Action Interface And The ActionSupport Class For Our Action Classes, And Which Would You Prefer?

Answer:

We can implement the Action interface to develop our action classes. This interface has a single method, execute(), that we must implement. The main advantage of using this interface is that it provides constants that can be used on result pages, such as SUCCESS, ERROR, NONE, INPUT, and LOGIN.

The ActionSupport class implements the Action interface by default, as well as interfaces for validation and i18n support. ActionSupport implements Action, Validateable, ValidationAware, TextProvider, and LocaleProvider. We can override the validate() method of the ActionSupport class to implement field-level validation logic in our action classes.

Depending on the requirements, we can use either approach to create Struts 2 action classes; my favourite is the ActionSupport class because it makes it easy to write validation and i18n logic in action classes.

 

Ques. 19): How Do We Get Servlet Api Requests, Responses, Httpsessions, and Other Objects Into Action Classes?

Answer:

Servlet API components such as the request, response, and session are not directly accessible from Struts2 action classes. However, some action classes need them, for example to check the HTTP method or to add cookies to the response.

For this, the Struts2 API exposes a number of *Aware interfaces through which we can access these objects; Struts2 injects the Servlet API components into action classes using dependency injection. SessionAware, ApplicationAware, ServletRequestAware, and ServletResponseAware are some of the most important Aware interfaces.
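A minimal sketch using one of those interfaces (the HTTP-method check is illustrative):

import javax.servlet.http.HttpServletRequest;
import org.apache.struts2.interceptor.ServletRequestAware;
import com.opensymphony.xwork2.ActionSupport;

public class MethodCheckAction extends ActionSupport implements ServletRequestAware {

   private HttpServletRequest request;

   public void setServletRequest(HttpServletRequest request) {
      this.request = request;   // injected by the framework before execute()
   }

   public String execute() {
      // the injected request gives access to Servlet API details
      return "POST".equalsIgnoreCase(request.getMethod()) ? SUCCESS : ERROR;
   }
}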

 

Ques. 20): Is Struts2 Interceptors And Action Thread Safe?

Answer:

Struts2 Action classes are thread safe because a new action object is instantiated for each request that is processed.

Struts2 interceptors are not thread safe: a single interceptor instance is shared across all request threads, so we must write them carefully to avoid any shared-state issues.

 

 

 

Tuesday, 4 January 2022

Top 20 Apache Cassandra Interview Questions and Answers

  

                Apache Cassandra is one of the most popular NoSQL distributed database management systems. Cassandra is an open-source database designed to store and manage enormous amounts of data without failure. Originally built at Facebook, Apache Cassandra is written in Java, has flexible schemas, and is highly scalable for Big Data models. There is no single point of failure in Apache Cassandra. Cassandra combines column-oriented and key–value store models and is one of the most popular NoSQL databases. In Cassandra, the keyspace is the application's outermost container, and it holds the tables (column families).

 Apache Camel Interview Questions and Answers

Ques. 1): What is the purpose of Cassandra and why should you utilise it?

Answer:

Cassandra was created with the goal of handling large data workloads across several nodes with no single point of failure. Cassandra's use is influenced by a number of things.

  • It's fault-tolerant and reliable.
  • It scales from gigabytes to petabytes.
  • It's a column-oriented database.
  • There is no single point of failure.
  • There is no need for a separate caching layer.
  • Flexible schema design.
  • It offers flexible data storage, simple data distribution, and fast write speeds.
  • Durable writes and tunable consistency (full ACID transactions are not supported).
  • Cloud and multi-data-centre capabilities.
  • Data compression.

 Apache Ant Interview Questions and Answers

Ques. 2): What are Cassandra's applications?

Answer:

When it comes to app development and data management, Cassandra has become the go-to solution for many businesses. Because of the ease with which an operator can work, even fresh start-ups choose it.

Cassandra is a fantastic application for collecting data from a variety of sources at a rapid rate. Cassandra could be used in an internet of things application. It might also be utilised in product and retail apps, as well as messaging, social media analytics, and even a recommendation engine.

 Apache Tomcat Interview Questions and Answers

Ques. 3): What are the advantages of utilising Cassandra?

Answer:

  • Apache Cassandra provides near-real-time performance, which makes the work of developers, administrators, data analysts, and software engineers much easier.
  • Cassandra is built on a peer-to-peer architecture rather than a master–slave design, so there is no single point of failure.
  • It offers great flexibility: nodes can be added to any Cassandra cluster in any data centre, and any client can send a request to any server.
  • Cassandra is elastically scalable and can be scaled up or down as needed. Thanks to its high read and write throughput, this NoSQL application does not need to be restarted while scaling.
  • Cassandra is also known for its powerful data replication across nodes, which lets users store data in multiple locations and recover it from another location if one node fails. Users can configure the number of replicas they want.
  • It performs admirably on large datasets, making it the NoSQL DB of choice for most businesses.
  • It operates on a column-oriented structure, which speeds up and simplifies slicing; data access and retrieval are also more efficient with a column-based data model.
  • Furthermore, Apache Cassandra features a schema-free/schema-optional data model, which eliminates the need to declare all of the columns that your application requires up front.

 Apache Kafka Interview Questions and Answers

Ques. 4): In Cassandra, explain the idea of adjustable consistency.

Answer:

Cassandra's tunable consistency is a standout feature that makes it a popular database among developers, analysts, and big data architects. Consistency means all replicas have up-to-date, synchronised data rows. Tunable consistency lets users choose the consistency level that best suits their needs. Cassandra supports two kinds of consistency: eventual consistency and strong consistency.

The former guarantees that, when no new updates are made to a data item, all accesses eventually return the most recently updated value; systems that achieve this are said to reach replica convergence.

For strong consistency, Cassandra supports the following condition:

R + W > N where,

N – Number of replicas

W – Number of nodes that need to agree for a successful write

R – Number of nodes that need to agree for a successful read
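For example, with N = 3 replicas, QUORUM reads and writes give W = 2 and R = 2; since 2 + 2 > 3, every read overlaps the most recent successful write and strong consistency holds. With W = 1 and R = 1, 1 + 1 < 3, so only eventual consistency is guaranteed.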

 Apache Tapestry Interview Questions and Answers

Ques. 5): What is Cassandra's data storage method?

Answer:

All data is stored as bytes.

When you specify a validator, Cassandra ensures that the bytes are encoded correctly.

A comparator then orders the columns based on the encoding's specific ordering.

Composites are simply byte arrays with a specific encoding: each component holds a two-byte length, the byte-encoded component, and a termination bit.

Apache Ambari interview Questions & Answers

Ques. 6): What is the definition of memtable?

Answer:

A memtable is a place where data is written and temporarily stored. After a write has been appended to the commit log, it is written to the memtable.

The memtable is part of Cassandra's storage engine. Each column family has its own memtable, and data in a memtable is organised by key and retrieved by key. When a memtable is full, its contents are flushed to an SSTable on disk and the memtable is cleared.

 Apache Hive Interview Questions & Answers

Ques. 7): Explain the Bloom Filter concept.

Answer:

A bloom filter is an off-heap (off the Java heap, in native memory) data structure associated with an SSTable. It checks whether the SSTable is likely to contain the requested data before any I/O disk action is performed.

 Apache Spark Interview Questions & Answers

Ques. 8): What are the functions of the shell commands "Capture" and "Consistency"?

Answer:

Cassandra has a number of Cqlsh shell commands. The command "Capture" saves the result of a command to a file, whereas the command "Consistency" shows the current consistency level or sets a new one.

 Apache NiFi Interview Questions & Answers

Ques. 9): What is the purpose of the read repair request?

Answer:

When the coordinator node sends a request, it checks whether any nodes hold outdated data. Outdated replicas are repaired in the background: the stale data is replaced with the updated data. Read repair keeps data current and ensures that the requested row is consistent across all replicas.

 

Ques. 10): How does Cassandra write?

Answer:

Cassandra performs the write operation in two steps: first it appends the write to a commit log on disk, then it writes to an in-memory structure called the memtable. The write is complete once both commits succeed. Writes are later flushed and stored on disk in SSTables (sorted string tables). Cassandra is highly efficient at writes.

 

Ques. 11): What are the best Cassandra monitor tools?

Answer:

Despite its built-in fault-tolerance mechanisms, Cassandra still needs to be monitored for optimal results. Cassandra can be monitored with the following tools:

  • SolarWinds Server and Application Monitor
  • Instana
  • Instaclustr
  • AppDynamics
  • Dynatrace
  • ManageEngine Applications Manager

 

Ques. 12): What is Cassandra- CQL collections?

Answer:

In Cassandra, CQL collections store multiple values in a single column. CQL collections can be used in Cassandra in the following ways.

List: It is used when the order of the elements needs to be maintained and a value may be stored multiple times (duplicates are allowed).

SET: It is used for a group of elements stored without order and returned in sorted order (holds unique elements).

MAP: It is a data type used to store key–value pairs of elements.
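A minimal sketch using the DataStax Java driver (the keyspace, table, and column names are hypothetical):

import com.datastax.oss.driver.api.core.CqlSession;

public class CollectionsExample {
   public static void main(String[] args) {
      try (CqlSession session = CqlSession.builder().withKeyspace("demo").build()) {
         // one column of each collection type
         session.execute("CREATE TABLE IF NOT EXISTS users (id int PRIMARY KEY, "
               + "tasks list<text>, emails set<text>, prefs map<text, text>)");
         // list literals use [..], set literals {..}, map literals {key: value}
         session.execute("INSERT INTO users (id, tasks, emails, prefs) VALUES "
               + "(1, ['write docs'], {'a@example.com'}, {'theme': 'dark'})");
      }
   }
}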

 

Ques. 13): What is Super Column in Cassandra?

Answer:

A Cassandra super column is a unique element consisting of similar collections of data. Super columns are actually key–value pairs with columns as values. A super column is a sorted array of columns, following this hierarchy when in action: keyspace > column family > super column > column data structure in JSON.

Similar to row keys, super column data entries contain no independent values but are used to collect other columns. It is interesting to note that super column keys appearing in different rows do not necessarily match.

 

Ques. 14): Describe the CAP Theorem.

Answer:

With a strong need to scale systems when new resources are required, the CAP theorem is critical to a scaling strategy's success. It is a good way to reason about scaling in distributed systems. The Consistency, Availability, and Partition tolerance (CAP) theorem asserts that in distributed systems such as Cassandra, you can only guarantee two of these three qualities.

It's necessary to sacrifice one of them. Consistency ensures that the client receives the most recent write; availability ensures a sensible response in the shortest time possible; and partition tolerance ensures that the system continues to operate even if network partitions occur. In practice, AP and CP are the two alternatives available.

 

Ques. 15): What is the difference between Column and Super Column?

Answer:

Both elements work on the principle of tuples having name and value. However, the former’s value is a string, while the value of the latter is a map of columns with different data types.

Unlike Columns, Super Columns do not contain the third component of timestamp.

 

Ques. 16): What exactly is a Column Family?

Answer:

A column family, as the name implies, is a structure with an unlimited number of rows. Columns are referred to by key–value pairs, with the key being the column name and the value being the column data. It is equivalent to a hashmap in Java or a dictionary in Python. Remember that the columns in the rows are not confined to a predefined list. Furthermore, the column family is extremely flexible: one row may have 100 columns while another has only two.

 

Ques. 17): Define the management tools in Cassandra.

Answer:

DataStax OpsCenter: It is the Internet-based management and monitoring solution for Cassandra cluster and DataStax. It is free to download and includes an additional edition of OpsCenter.

SPM primarily administers Cassandra metrics and various OS and JVM metrics. Besides Cassandra, SPM also monitors Hadoop, Spark, Solr, Storm, ZooKeeper, and other Big Data platforms. The main features of SPM include correlation of events and metrics, distributed transaction tracing, creating real-time graphs with zooming, anomaly detection, and heartbeat alerting.

 

Ques. 18): In Cassandra, explain the distinctions between a node, a cluster, and a data centre.

Answer:

Cassandra is made up of several parts. A cluster is a collection of nodes that have comparable sorts of data organised together, whereas a node is a single machine running Cassandra. When serving consumers from different parts of the world, data centres are essential components. You can divide a cluster's nodes into various data centres.

 

Ques. 19): What is the purpose of the Bloom Filter in Cassandra?

Answer:

A bloom filter is a space-efficient data structure for determining whether an element belongs to a set. In other words, it is used to check whether an SSTable contains data for a particular row. In Cassandra, it is used to save IO when performing a KEY LOOKUP.

 

Ques. 20): What exactly is SSTable? What makes it unique among relational tables?

Answer:

SSTable stands for 'Sorted String Table'. It refers to a crucial Cassandra data file that persists regularly flushed memtables. SSTables exist for each Cassandra table and are stored on disk. Because of their immutability, SSTables do not allow the insertion or removal of data items once they have been written. Cassandra creates three separate files for each SSTable: a partition index, a partition summary, and a bloom filter.

 

Top 20 Apache Camel Interview Questions and Answers

  

                    Apache Camel is a free and open-source message-oriented middleware framework. Messages are mediated and routed according to rules that you define. Many companies use it for data processing and integration. At a high level, Camel can be thought of as a routing engine: it lets us define routing rules that move messages from a given source to a given destination.

                    Camel has built-in support for a variety of protocols, making it simple to interconnect diverse systems. For example, Camel can easily integrate two separate applications that work over FTP and JMS. Camel handles all of the protocol and data type conversions for us internally.

 Apache Spark Interview Questions & Answers

Ques. 1): What is Apache Camel, and how does it work?

Answer:

An organisation typically runs a variety of disparate systems, some legacy and some new. These systems frequently need to interact with one another and therefore require integration. Integration is often harder than implementation because message formats and protocols differ between systems. One approach is to write custom code that bridges these gaps, but that leads to tight point-to-point coupling: if one system changes tomorrow, the bridging code may have to change too, which is not pleasant. Instead of point-to-point integration and its tight coupling, we can introduce an additional mediation layer, such as Apache Camel, to mediate the differences between the systems and keep them loosely coupled.

 Apache Hive Interview Questions & Answers

Ques. 2): In Apache Camel, what are EIPs?

Answer:

EIP stands for Enterprise Integration Patterns: design patterns for enterprise application integration and message-oriented middleware. Apache Camel makes use of a number of EIPs. Here are a few:

Splitter Pattern: Split the data on the basis of some token and then process it.

Content Based Router: The Content-Based Router inspects the content of a message and routes it to another channel based on the content of the message. Using such a router enables the message producer to send messages to a single channel and leave it to the Content-Based Router to inspect messages and route them to the proper destination. This alleviates the sending application from this task and avoids coupling the message producer to specific destination channels.

Message Filter: A Message Filter is a special form of a Content-Based Router. It examines the message content and passes the message to another channel if the message content matches certain criteria. Otherwise, it discards the message.

Recipient List: A Content-Based Router allows us to route a message to the correct system based on message content. This process is transparent to the original sender in the sense that the originator simply sends the message to a channel, where the router picks it up and takes care of everything.

Wire Tap: Wire Tap allows you to route messages to a separate location while they are being forwarded to the ultimate destination.

 Apache Ambari interview Questions & Answers

Ques. 3): What are Apache Camel Components?

Answer:

In Apache Camel, a component is a factory for Endpoint instances. When using Spring or Guice, we can explicitly configure Component instances and wire them into a CamelContext in an IoC container. Components can also be discovered automatically via URIs.

Apache Camel comes with a lot of pre-built components. Some key Camel components from the core module are listed below.

  • Bean
  • Direct
  • File
  • Log
  • SEDA
  • Timer

 Apache Tapestry Interview Questions and Answers

Ques. 4): What is an exchange in Apache camel?

Answer:

The Exchange holds the message that is routed through the Camel route; it is the container of the message. Apache Camel uses Message Exchange Patterns (MEP). An Apache Camel exchange can hold any type of message, and it accepts a variety of formats, including XML, JSON, and others.

 Apache Kafka Interview Questions and Answers

Ques. 5): What is an ESB? Have you deployed a camel for any ESB?

Answer:

ESB stands for Enterprise Service Bus. It is an infrastructure layer that helps integrate applications using SOA concepts. An ESB is the optimal fit when projects need to integrate a number of endpoints such as web services, JMS, FTP, and other systems. I have deployed Apache Camel on JBoss Fuse ESB.

 Apache Tomcat Interview Questions and Answers

Ques. 6): What is a row in the Apache camel?

Answer:

The Exchange carries the message to be routed via the Camel route; it is the holder of the message. Apache Camel uses Message Exchange Patterns (MEP). An Apache Camel exchange can hold any kind of message, and it supports a multitude of formats, including XML, JSON, and others.

 Apache Ant Interview Questions and Answers

Ques. 7): What are the apache camel endpoints?

Answer:

The Endpoint interface in Camel implements the Message Endpoint pattern. Endpoints are usually established by Components, and their URIs are used to refer to them in the DSL.

 Apache NiFi Interview Questions & Answers

Ques. 8): What is Apache JMeter?

Answer:

JMeter is an Apache project used as a load-testing tool for measuring the performance of a variety of services, including the most sophisticated web applications.

JMeter can also be used as a unit-testing tool for JDBC database connections, FTP, web services, JMS, HTTP, and TCP connections, as well as OS-native processes. JMeter can additionally be configured as a monitor, though it is more of a monitoring tool than a fine-grained control tool, and it can be used for active scrutiny as well. JMeter also integrates with Selenium, allowing it to drive automation scripts in addition to performance and load testing.

 

Ques. 9): What is the Idempotent Consumer pattern in Apache Camel?

Answer:

We employ the Idempotent Consumer pattern in Apache Camel to filter out duplicate messages. Consider a case in which we must process incoming files exactly once: duplicates should be skipped. With Apache Camel we can use the Idempotent Consumer directly within the component, and it will skip files that have already been processed. The idempotent=true option enables this feature. To do this, Apache Camel keeps track of the consumed files using a message id, which is stored in the Idempotent Repository. Apache Camel provides several IdempotentRepository implementations, as sketched below.
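A minimal sketch in Camel's Java DSL (directory names are illustrative; the repository class lives in a different package in older Camel versions):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.support.processor.idempotent.MemoryIdempotentRepository;

public class DedupeRoute extends RouteBuilder {
   public void configure() {
      from("file:data/inbox?noop=true")
         // skip any file whose name has already been consumed
         .idempotentConsumer(header("CamelFileName"),
               MemoryIdempotentRepository.memoryIdempotentRepository(200))
         .to("file:data/outbox");
   }
}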

 

Ques. 10): How does Apache Camel handle exceptions?

Answer:

Exceptions can be handled with doTry/doCatch blocks, with an onException block, or with an errorHandler.

Any uncaught exception thrown during the routing and processing of a message is handled by the errorHandler. onException, on the other hand, is used to handle specific exception types when they are thrown.
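A minimal sketch showing all three mechanisms in the Java DSL (the endpoint URIs are illustrative):

import java.io.IOException;
import org.apache.camel.builder.RouteBuilder;

public class ErrorHandlingRoute extends RouteBuilder {
   public void configure() {
      // catch-all for uncaught exceptions in this builder's routes
      errorHandler(deadLetterChannel("log:dead?level=ERROR"));

      // targeted handling for one exception type
      onException(IOException.class).handled(true).to("log:io-errors");

      from("direct:start")
         .doTry()
            .to("direct:risky")
         .doCatch(IllegalArgumentException.class)
            .to("log:bad-input")
         .end();
   }
}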

 

Ques. 11): What is CamelContext, and how does it work?

Answer:

The CamelContext represents a single Camel routing rulebase. The CamelContext is used in much the same way as the Spring ApplicationContext. It is a public interface that extends SuspendableService and RuntimeConfiguration. This interface represents the context used to configure routes and the policies to use during message exchanges between endpoints.
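A minimal sketch of configuring and starting a context programmatically (the route itself is illustrative):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class Main {
   public static void main(String[] args) throws Exception {
      DefaultCamelContext context = new DefaultCamelContext();
      context.addRoutes(new RouteBuilder() {
         public void configure() {
            from("timer:tick?period=1000").to("log:heartbeat");
         }
      });
      context.start();          // starts all routes
      Thread.sleep(5000);       // let the route run briefly
      context.stop();           // shuts the context down
   }
}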

 

Ques. 12): What is the best way to restart CamelContext?

Answer:

The Camel context provides several methods to govern the Camel lifecycle: start, stop, suspend, and resume. A service can be restarted using one of two techniques. One option is to stop the context and then start it; this is known as a cold restart, and it clears the internal state and cache, rendering all endpoints invalid. The other option is the suspend and resume operations, which keep all endpoints so they can be reused after the restart.
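In code, the two techniques look like this (a sketch; behaviour details vary by Camel version):

context.stop();     // cold restart: clears internal state and cache
context.start();

context.suspend();  // warm restart: endpoints are kept and reused
context.resume();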

 

Ques. 13): What is an URI?

 Answer:

In Camel, a URI is a naming scheme used to refer to an endpoint. A URI tells Camel which component is being used, the context path, and the options applied to it. A URI is made up of three parts:

  • Scheme
  • Context path
  • Options

Example of a file URI working as a consumer –

from("file:src/data?fileName=demo.txt&fileExist=Append");

Here the scheme is file, the context path is "src/data", and "fileName" and "fileExist" are options that can be used with the file component or file endpoint.

 

Ques. 14): In Apache Camel, what is the Idempotent Consumer pattern?

Answer:

We employ the Idempotent Consumer pattern in Apache Camel to filter out duplicate messages. Consider the case where we only need to process each message once: duplicates should be skipped. We can use the Idempotent Consumer directly within Apache Camel to bypass messages that have already been processed; the idempotent=true option enables this feature. To do this, Apache Camel keeps track of consumed messages using a message id, which is stored in the Idempotent Repository. Apache Camel ships with several IdempotentRepository implementations (see the sketch under Ques. 9).

 

Ques. 15): How to connect with Azure from Apache Camel?

Answer:

Yes. We can connect with azure services using Apache camel connector components.

Maven Dependency:

<dependency>

    <groupId>org.apache.camel</groupId>

    <artifactId>camel-azure-storage-queue</artifactId>

    <version>x.x.x</version>

  <!-- use the same version as your Camel core version -->

</dependency>

Code Example:

from("azure-storage-queue://storageAccount/messageQueue?accessKey=yourAccessKey").to("file://queuedirectory");

 

Ques. 16): What is a Message in Apache Camel ?

Answer:

Message implements the Message pattern and represents an inbound or outbound message as part of an Exchange. It contains the data being transferred along Routes and consists of the following fields:

  • Unique Identifier
  • Headers
  • Body
  • Fault Flag

 

Ques. 17): What is the relationship between Apache Camel and ActiveMQ?

Answer:

Apache Camel is embedded in the ActiveMQ broker using the ActiveMQ component. It allows you a lot of versatility when it comes to extending the message broker. Additionally, the ActiveMQ component eliminates the serialisation and network costs associated with connecting to ActiveMQ remotely.

This component is built on the JMS component and allows users to send and consume messages over the JMS Queue.
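As a small sketch (queue and endpoint names are hypothetical), once the component is registered a route can consume directly from a broker queue:

from("activemq:queue:orders")       // consume JMS messages from the broker
   .to("log:orders?level=INFO");    // and log each one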

 

Ques. 18): How do I disable JMX in Apache Camel?

Answer: 

The JMX instrumentation agent is enabled in Camel by default. To disable it, set the following property in the Java VM system properties:

-Dorg.apache.camel.jmx.disabled=true

Another way of disabling is by adding the JMX agent element inside the camel context element in the Spring configuration,

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
<jmxAgent id="agent" disabled="true"/>
</camelContext>

 

Ques. 19): What Jars Do I Need?

Answer :

Camel is designed to be small, lightweight, and extremely modular, so that you only pay for what you use. The core of Camel, camel-core.jar, is small and has minimal dependencies.

On Java 6, camel-core.jar only depends on:

commons-management.jar (for Camel 2.8 or older)

commons-logging.jar (for Camel 2.6 or older)

slf4j-api.jar (from Camel 2.7 onwards)

On Java 5, camel-core.jar also depends on activation.jar and a JAXB2 implementation, which typically involves jaxb-api.jar, jaxb-impl.jar, and a StAX API, which may be stax-api.jar and woodstox.jar.

 

Ques. 20): What Is The Difference Between A Producer And A Consumer Endpoint?

Answer:

A Camel route is similar to a channel through which data is transported. The channel has two endpoints, one at either end: a producer and a consumer.

A consumer endpoint is the route's starting point; writing a Camel consumer endpoint is the first step in defining a Camel route.

A producer endpoint usually (but not always) appears at the end of the route. It consumes the data that is passed through the route.

 


Top 20 Apache Ant Interview Questions and Answers

 

                        According to James Duncan Davidson, ANT stands for "Another Neat Tool". Ants are tiny but can carry a lot of weight, much like Apache Ant. Apache Ant is a command-line tool and Java library for automating software build processes. It emerged from the Apache Tomcat project in the early 2000s and was built as a replacement for Unix's make build tool, which suffered from a number of issues. Ant drives the processes or instructions described in build files as interdependent targets and extension points. Ant's best-known application is building Java programs, and it comes with a number of built-in tasks for compiling, assembling, testing, and running them.

                        Ant can also be used to build non-Java applications, such as those written in C or C++. More generally, Ant can pilot any process that can be described in terms of targets and tasks. For developers looking for Apache Ant interview questions and answers, we've compiled a list of possible Apache Ant interview questions and answers.

Apache Tomcat Interview Questions and Answers

Ques: 1): What does the acronym ANT stand for?

Answer:

The ant, according to James Duncan Davidson, is an acronym for "Another Neat Tool." Ants are little but strong. The Apache ant's job is similar. For automating software build processes, Apache Ant is a Java library and command-line programme. It was developed in the early 2000s as part of the Apache Tomcat project. It was developed to replace Unix's Make build tool, which had a number of flaws. It controls the processes or instructions provided as targets and extension points in build files.

Apache Kafka Interview Questions and Answers

Ques. 2): What Are The Ant Concepts?

Answer:

Ant is a build tool based on Java. Its key characteristics are the following:

Open: Ant is an open source project available under the Apache license. Therefore, its source code can be downloaded and modified.

Additionally, Ant uses XML build files which make its development easy.

Cross Platform: The use of XML along with Java makes Ant the perfect solution for developing programs designed to run or be built across a range of different operating systems.

Extensible: New tasks are used to extend the capabilities of the build process, while build listeners are used to help hook into the build process to add extra error tracking functionality.

Integration: As Ant is extensible and open, it can be integrated with any editor or development environment easily.

Apache Tapestry Interview Questions and Answers

Ques. 3): Why Is Ant A Fantastic Build Tool?

Answer:

Ant is a fantastic build tool for the following reasons:

  • Ant is a cross-platform, user-friendly, extensible, and scalable Java-based build tool.
  • Ant can be utilised in a small personal project as well as a large, multi-team software development effort.
  • Ant syntax is simple to grasp.
  • The XML format was utilised in the Ant syntax.
  • We only need to provide our task in the build.xml file.
  • Ant is simple to use.
  • On large Make-based software projects, a full-time makefile engineer is common; Ant eliminates that need.

Apache Ambari interview Questions & Answers

Ques. 4):  Explain Ant Functionality?

Answer :

Ant is an open source project available under the Apache license. Therefore, its source code can be downloaded and modified.

Additionally, Ant uses XML build files which make its development easy.

Cross Platform: The use of XML along with Java makes Ant the perfect solution for developing programs designed to run or be built across a range of different operating systems.

Extensible: New tasks are used to extend the capabilities of the build process, while build listeners are used to help hook into the build process to add extra error tracking functionality.

As Ant is extensible and open, it can be integrated with any editor or development environment easily.

Apache Hive Interview Questions & Answers

 Ques. 5):  How Do You Make An Ant User Interactive?

Answer:

The org.apache.tools.ant.input package provides this support.

User input is implemented using the InputHandler interface. The program creates an InputRequest object, which is passed to the InputHandler to perform user input. Invalid user input is rejected.

The InputHandler interface has a single method, handleInput(InputRequest request). It throws an org.apache.tools.ant.BuildException if the input is invalid.

Apache Spark Interview Questions & Answers

Ques. 6):  Explain with the help of Ant and a small example?

Answer: Before we begin using Ant, we must be certain of the project name, the .java files, and, most crucially, the location of the .class files.

For example, we want to build the HelloWorld program with Ant. The Java source files are placed in the Dirhelloworld subdirectory, and the .class files in the Helloworldclassfiles subdirectory.

1. Write the build file, named build.xml. The script is as follows:

<project name="HelloWorld" default="compiler" basedir=".">

<target name="compiler">

<mkdir dir="Helloworldclassfiles"/>

<javac srcdir="Dirhelloworld" destdir="Helloworldclassfiles"/>

</target>

</project>

2. Now run the ant script to perform the compilation:

C :\> ant

Buildfile: build.xml

and see the results in the extra files and directory created:

c:\>dir Dirhelloworld

c:\>dir Helloworldclassfiles

All the .java files are in Dirhelloworld directory and all the corresponding .class are in Helloworldclassfiles directory.

Apache NiFi Interview Questions & Answers

Ques. 7):  Explain How To Use Runtime In Ant?

Answer :

There is no need to use Runtime in Ant, because Ant has a Runtime counterpart named ExecTask. ExecTask is in the package org.apache.tools.ant.taskdefs. The task is created from code in a customised Ant task. The code snippet is as follows:

ExecTask execTask = (ExecTask) project.createTask("exec");

 

Ques. 8): How do I write conditional statements in Ant?

Answer:

There are many ways to solve the problem.

Since a target's if/unless attributes depend on whether some property is defined, you can use <condition> to define new properties that in turn depend on your existing Ant property values. This makes your Ant script very flexible, but a little hard to read.

Ant-contrib has <if> and <switch> tasks for you to use.

Ant-contrib also has <propertyregex>, which can express very complicated decisions. A sketch with the built-in <condition> task follows.
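A minimal sketch using the built-in <condition> and <os> tasks (the property, target, and echo text are illustrative):

[code lang="xml"]<condition property="is.windows">
<os family="windows"/>
</condition>

<target name="win.only" if="is.windows">
<echo>Running on Windows</echo>
</target>[/code]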

 

Ques. 9):  How many different ways are there to set properties in a Build Ant file?

Answer:

There are six different ways to set properties:

By supplying both the name and value attributes, e.g. <property name="src.dir" value="src"/>.

By supplying both the name and refid attributes.

By setting the file attribute to the filename of the property file to load.

By setting the url attribute to the URL from which the properties should be loaded.

By setting the resource attribute to the resource name of the property file to load.

By setting a prefix with the environment attribute.

All of the above can be combined in our build files.

However, only one should be used at any given moment.

 

Ques. 10):  How Can We Make A Jar With Ant?

Answer:

To make a jar of the classes, use a jar target. This target first creates the directory in which the jar will be kept. The <jar> task then builds the jar; we pass it two attributes: destfile, the destination jar file, and basedir, the base directory containing all of our class files. To add a manifest to the jar, we nest a <manifest> element whose <attribute> entries each take a name and a value, as in the sketch below.
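A minimal sketch (the directory, jar, and class names are illustrative):

[code lang="xml"]<target name="jar" depends="compile">
<mkdir dir="dist"/>
<jar destfile="dist/app.jar" basedir="Helloworldclassfiles">
<manifest>
<attribute name="Main-Class" value="com.example.HelloWorld"/>
</manifest>
</jar>
</target>[/code]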

 

Ques. 11): What Exactly Is Ivy?

Answer:

Ivy is a well-known dependency manager, primarily concerned with flexibility and simplicity.

Ivy 2.1.0 is the most recent version.

The following are some of the highlights of the 2.1.0 release:

  • Improved Maven2 compatibility, including various bug fixes and more complete POM support
  • Several bug fixes and improvements, as listed in Jira and the release notes
  • Additional options for the Ivy Ant tasks and the command line
  • Configuration intersections and configuration groups

 

Ques. 12): What method does ant use to read properties? How do I set up my property management system?

Answer:

Ant sets properties sequentially: once a property is set, later definitions with the same name cannot replace the earlier one. This is the polar opposite of Java setters. It allows us to preset all properties in one place and override only the ones that are needed. Let me give you an example: you need a password for a task but don't want to share it with anyone on your team, let alone outside engineers.

Store your password in your ${user.home}/prj.properties

pswd=yourrealpassword

In your include directory master prj.properties

pswd=password

In your build-common.xml read properties files in this order

The commandline will prevail, if you use it: ant -Dpswd=newpassword

${user.home}/prj.properties (personal)

yourprojectdir/prj.properties (project team wise)

your_master_include_directory/prj.properties (universal)

[code lang="java"]<cvsnttask password="${pswd}" … />[/code]

 

Ques. 13): How can I use Ant to execute a command from the command line? How can I capture the output of a Perl script?

Answer:

Use the exec Ant task.

Don't forget that Ant is a Java program. That is why Ant is so useful, powerful, and adaptable. If you want Ant to run Unix commands and capture their results, you must think in Unix terms; the same applies on MS-Windows. Ant just helps you automate the procedure.

 

Ques. 14): How do I copy files without an extension?

Answer:

If files are in the directory:

[code lang=”xml”]<include name="a,b,c"/>[/code]

If files are in the directory or subdirectories:

[code lang=”xml”]<include name="**/a,**/b,**/c"/>[/code]

If you want all files without extension are in the directory or subdirectories:

[code lang="xml"]<exclude name="**/*.*"/>[/code]

 

Ques. 15): How can I troubleshoot my Ant script?

Answer:

There are a variety of options:

Echo the values you are unsure about; you'll quickly figure out what the issue is. This is the Ant equivalent of the classic printf() in C or System.out.println() in Java.

In your <script> task or custom Ant task, use project.log("msg"). Run Ant with -verbose or -debug to learn more about what it's doing and where it's doing it. You may grow tired of it quickly, however, because it gives you a lot of output.

 

Ques. 16): Why did I get warnings like this in Ant?

Answer:

compile:

[javac] Warning: commons-logging.properties modified in the future.

[javac] Warning: dao\\DAO.java modified in the future.

[javac] Warning: dao\\DBDao2.java modified in the future.

[javac] Warning: dao\\HibernateBase.java modified in the future.

These warnings are caused by a system time problem. Possible causes include:

You altered the system's clock.

I encountered the same issue before: when I checked out files from CVS on Windows and transferred them to a Unix machine, I received a large number of these warnings because of the clock difference between the systems.

You'll have the same trouble transferring files from Australia, China, or India to the United States. I've done it before and encountered the issue.

 

Ques. 17): How can I dynamically add pieces to an existing path?

Answer:

Yes, this is possible. You must, however, create a custom Ant task: obtain the path, add to or modify it, and then use it. What I do is define a path reference, lib.classpath, and then use my own task to add to or modify lib.classpath.

 

Ques. 18): How can I reorganize my jar/war/ear/zip file's directory structure? Is it necessary to unarchive them first?

Answer:

No, you are not required to unarchive them first. You don't need to unzip files out of the old archive to put them into your destination jar/ear/war files.

To extract files from an old archive to a separate directory in your new archive, utilise zipfileset in your jar/war/ear task.

You can also use zipfileset in your jar/war/ear operation to send files from a local directory to a new archive location.

See the follow example:

[code lang="xml"]<jar destfile="${dest}/my.jar">

<zipfileset src="old_archive.zip" includes="**/*.properties" prefix="dir_in_new_archive/prop"/>

<zipfileset dir="curr_dir/abc" prefix="new_dir_in_archive/xyz"/>

</jar>[/code]

 

Ques. 19): How to exclude multi directories in copy or delete task?

Answer:

Here is an example.

[code lang="xml"]<copy todir="${to.dir}">

<fileset dir="${from.dir}" >

<exclude name="dirname1" />

<exclude name="dirname2" />

<exclude name="abc/whatever/dirname3" />

<exclude name="**/dirname4" />

</fileset>

</copy>[/code]

  

Ques. 20): How do I get started to use ant? Can you give me a “Hello World” ant script?

Answer:

Download the most recent version of Ant from Apache and unzip it somewhere on your machine.

Install J2SDK 1.4 or above.

Set JAVA_HOME and ANT_HOME to the directories where you installed them, respectively.

Put %JAVA_HOME%/bin;%ANT_HOME%/bin on your Path. Use ${JAVA_HOME}/bin:${ANT_HOME}/bin on UNIX. (Yes, you can use forward slashes on Windows.)

Write a “Hello world” build.xml

[code lang="xml"]

<project name="hello" default="say.hello" basedir="." >

<property name="hello.msg" value="Hello, World!" />

<target name="say.hello" >

<echo>${hello.msg}</echo>

</target>

</project>[/code]

Type ant in the directory where your build.xml is located.




Monday, 3 January 2022

Top 20 Apache Tomcat Interview Questions and Answers

  

       Tomcat is a Java servlet container and web server developed by the Apache Software Foundation's Jakarta project. Client browsers send requests to a web server, which answers with web pages. Web servers can generate dynamic content based on the user's requests, and Tomcat excels at this because it supports both the Java Servlet and JavaServer Pages (JSP) technologies. Tomcat can also be used as a web server for a variety of applications whenever a free servlet and JSP engine is required. It can run on its own or alongside standard web servers like Apache httpd, which delivers static pages while Tomcat handles dynamic servlet and JSP requests.

    Apache Tomcat is an open-source implementation of the Java Servlet, JavaServer Pages, Java Expression Language, and Java WebSocket specifications. Many firms are hiring DevOps engineers, Apache Tomcat administrators, Linux Apache Tomcat specialists, and Hadoop developers at varying levels of experience. Apache is the most popular web server, and you must be familiar with it if you plan to work as a middleware/system/web administrator. Apache HTTP Server is free and open-source and runs on both Windows and Linux.

 Apache Kafka Interview Questions and Answers

Ques. 1): Who is in charge of Tomcat?

Answer:

The Apache Software Foundation is the correct answer. The Apache Software Foundation is a non-profit organisation that oversees several Open Source projects.

The Apache Software Foundation's Java-based projects are referred to as Jakarta.

Tomcat is an Apache Jakarta project that manages server-side Java (in the form of Servlets and JSPs). Tomcat is the "reference" implementation of the Servlet and JSP specifications, which means that anything that runs in Tomcat should run in any compliant Servlet / JSP container.

 Apache Tapestry Interview Questions and Answers

Ques. 2): Difference between apache and apache-tomcat server?

Answer: 

Apache: Apache is mostly used to serve static content, but there are numerous add-on modules (some of which are included with Apache) that allow it to modify the content and serve dynamic content written in Perl, PHP, Python, Ruby, and other languages.

Apache is an HTTP server that serves HTTP requests.

Tomcat is a servlet/JSP container developed by Apache. It's written in the Java programming language. Although it can provide static information, its primary function is to host servlets and JSPs.

JSP files (which are comparable to PHP and older ASP files) are converted into Java code (an HttpServlet), which is then compiled into .class files and run by the Java virtual machine on the server.

Apache Tomcat is used to deploy your Java Servlets and JSPs. So in your Java project, you can build your WAR (short for Web ARchive) file, and just drop it in the deploy directory in Tomcat.

Although it is possible to get Tomcat to run Perl scripts and the like, you wouldn’t use Tomcat unless most of your content was Java.

Tomcat is a Servlet and JSP Server serving Java technologies

 Apache Ambari interview Questions & Answers

Ques. 3):  What exactly is Coyote?

Answer:

Coyote is a Tomcat Connector component that acts as a web server and supports the HTTP 1.1 protocol. This enables Catalina, which is ostensibly a Java Servlet or JSP container, to additionally serve local files as HTTP documents.

Coyote monitors a specific TCP port for incoming connections to the server and transmits the request to the Tomcat Engine, which processes the request and returns a response to the requesting client.

Coyote is Tomcat's HTTP connector, which offers an interface for browsers to connect to.

 Apache Hive Interview Questions & Answers

Ques. 4): What is a servlet container?

Answer:

A servlet container is a web server component that communicates with Java servlets. The servlet container is in charge of managing servlet lifecycles, mapping URLs to specific servlets, and ensuring that the URL requester has the appropriate access privileges.

Requests to servlets, JavaServer Pages (JSP) files, and other types of files containing server-side code are handled by the servlet container. The Web container generates servlet instances, loads and unloads servlets, creates and manages request and response objects, and handles other servlet-related operations.

The web component contract of the Java EE architecture is implemented by the servlet container, which defines a runtime environment for web components that includes security, concurrency, lifecycle management, transaction, deployment, and other services.
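To make the container's role concrete, below is a minimal servlet sketch of the kind a container such as Tomcat manages; the class name and URL pattern are illustrative only, and the javax.servlet package shown is the one used by Tomcat 9 and earlier (Tomcat 10 onwards uses jakarta.servlet).

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The container loads this class, maps the URL pattern to it,
// and invokes doGet() for every matching HTTP GET request.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<h1>Hello from the servlet container</h1>");
    }
}

Note that the application never instantiates HelloServlet itself; creating, pooling, and destroying the instance is entirely the container's job.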

Apache Spark Interview Questions & Answers 

Ques. 5): How Do I Change The Default Home Page Loaded By Tomcat?

Answer :

We can easily override the default home page by adding a welcome-file-list to the application's WEB-INF/web.xml (under $TOMCAT_HOME/webapps) or by editing the container-wide $TOMCAT_HOME/conf/web.xml.

In $TOMCAT_HOME/conf/web.xml, it may look like this:

<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>

When the request URI refers to a directory, the default servlet looks for a "welcome file" within that directory in the following order: index.html, index.htm, and index.jsp.

Apache NiFi Interview Questions & Answers 

Ques. 6): What is the difference between the Apache and Nginx web servers?

Answer:

Both are classified as Web Servers, but there are a few key differences. Nginx is an event-driven web server, whereas Apache is a process-driven web server.

Nginx has a reputation for being faster than Apache.

Whereas Nginx does not support OpenVMS or IBMi, Apache supports a wide range of operating systems.

Nginx is still catching up to Apache in terms of module interoperability with backend application servers.

Nginx is a lightweight web server that is rapidly gaining market share.

 Apache Ant Interview Questions and Answers

Ques. 7): How Do You Create Multiple Virtual Hosts?

Answer :

If you want Tomcat to accept requests for different hosts, e.g. www.myhostname.com, then you must:

Create ${catalina.home}/www/appBase, ${catalina.home}/www/deploy, and ${catalina.home}/conf/Catalina/www.myhostname.com

Add a host entry in the server.xml file

Create the following context file: conf/Catalina/www.myhostname.com/ROOT.xml

Add any parameters specific to this host's webapp to this context file

Put your WAR file in ${catalina.home}/www/deploy

When Tomcat starts, it finds the host entry, then looks for any context files and will start any apps with a context (a programmatic sketch follows).
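For illustration only, below is a rough programmatic sketch of the same host entry using the embedded Tomcat API; the host name and appBase mirror the steps above, and in a standard installation the server.xml entry described here remains the way to do it.

import org.apache.catalina.core.StandardHost;
import org.apache.catalina.startup.Tomcat;

public class VirtualHostExample {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        tomcat.getConnector();

        // Roughly equivalent to a <Host> element in server.xml:
        StandardHost vhost = new StandardHost();
        vhost.setName("www.myhostname.com");
        vhost.setAppBase("www/appBase");
        tomcat.getEngine().addChild(vhost);

        tomcat.start();
        tomcat.getServer().await();
    }
}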

 

Ques. 8): In Apache Tomcat, what is Catalina?

Answer:

Once Jasper has completed the compilation, it turns JSP into a servlet, which Catalina can then manage. Catalina is a servlet container for Tomcat. It also implements all of the Java server page and servlet specs. Catalina is a Java engine embedded into Tomcat that provides an efficient environment for servlets to execute in.

 

Ques. 9): What exactly do you mean by Tomcat's default port, and can it be used with SSL?

Answer:

Tomcat uses port 8080 as its default port. You can change it by editing the server.xml file in the conf folder of the Tomcat install directory: adjust the Connector element's port attribute (port="8080") to the desired value and restart Tomcat for the change to take effect (a programmatic sketch follows the SSL steps below).

Tomcat can use SSL, but it will require some configuration. You must complete the following tasks:

Generate a keystore

Then add a connector in server.xml

Restart Tomcat
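As an illustrative aside, the same Connector port concept can be seen programmatically with the embedded Tomcat API; this is only a sketch and not a replacement for editing server.xml on a standalone installation.

import org.apache.catalina.LifecycleException;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedTomcat {
    public static void main(String[] args) throws LifecycleException {
        Tomcat tomcat = new Tomcat();
        // The programmatic counterpart of <Connector port="..."/> in conf/server.xml:
        tomcat.setPort(9090);
        tomcat.getConnector(); // create the default HTTP connector
        tomcat.start();
        tomcat.getServer().await();
    }
}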

 

Ques. 10): What is a mod_evasive module, and what does it do?

Answer:

Mod_evasive is a third-party module that accomplishes one simple task really well. It detects when your site is under a Denial of Service (DoS) attack and mitigates the harm that the attack causes. When a single client makes repeated requests in a short period of time, mod_evasive recognises this and refuses further requests from that client. The block lasts only a short time, but it is reissued the next time a request is detected from that same host.

 

Ques. 11): Explain Directory Structure Of Tomcat?

Answer :

The Tomcat directory structure is as follows:

bin - contains startup, shutdown, and other scripts (*.sh for UNIX and *.bat for Windows systems), as well as some JAR files.

conf - server configuration files (including server.xml) and related DTDs. The most important file here is server.xml, the main configuration file for the container.

lib - contains JARs used by the container, including the Servlet and JSP application programming interfaces (APIs).

logs - log and output files.

webapps - deployed web applications reside here.

work - temporary working directories for web applications, mostly used during JSP compilation, where a JSP is converted to a Java servlet.

temp - directory used by the JVM for temporary files.

 

Ques. 12): Explain How Running Tomcat As A Windows Service Provides Benefits?

Answer :

Running Tomcat as a Windows service provides benefits such as:

Automatic startup: crucial in environments where you may want to remotely restart a system after maintenance.

Server startup without an active user login: Tomcat is often run on blade servers that may not even have an active monitor attached to them. Windows services can be started without an active user.

Security: running Tomcat as a Windows service lets you run it under a special system account, which is protected from the rest of the user accounts.

 

Ques. 13): How Do Servlet Life Cycles Work?

Answer:

The life cycle of a typical Tomcat servlet is as follows:

Through one of its connectors, Tomcat receives a request from a client.

Tomcat routes the request to the appropriate servlet for processing.

Tomcat checks that the servlet class has been loaded. If it isn't, Tomcat loads the servlet class and creates a servlet instance; the servlet's bytecode is executed by the JVM.

Tomcat initialises the servlet by invoking its init method. The servlet can include code that inspects Tomcat configuration files and takes appropriate action, as well as declares any resources it might need.

Once the servlet has been initialised, Tomcat can call the servlet's service method to process the request.

Tomcat and the servlet can communicate through the use of listener classes during the servlet's lifecycle, which track the servlet for a variety of state changes.

To remove the servlet, Tomcat calls the servlet's destroy method.
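A minimal sketch of these lifecycle hooks is shown below, assuming the javax.servlet API of Tomcat 9 and earlier; the log messages are illustrative.

import java.io.IOException;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifecycleServlet extends HttpServlet {
    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // Called once by Tomcat after the class is loaded and instantiated.
        log("init: acquire resources and read init parameters here");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // service() dispatches each request to doGet()/doPost().
        resp.getWriter().println("handled a request");
    }

    @Override
    public void destroy() {
        // Called once by Tomcat before the servlet is removed.
        log("destroy: release resources here");
    }
}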

 

Ques. 14): In Tomcat, what is the difference between a host and a context?

Answer:

In Tomcat, a host is a component. It is a network-name association for the server. A context, on the other hand, is an element that represents a web application running on a particular virtual host. A web application is based on a Web Application Archive (WAR) file or a corresponding directory containing the unpacked content described in the application's deployment descriptor.

 

Ques. 15): What Is The Distinction Between A Webserver And An Application Server?

Answer:

The main distinction between a web server and an application server is that a web server can only execute web applications, such as servlets and JSPs, and has just one container, the web container, which is used to interpret and execute web applications. An application server can run enterprise applications, i.e. servlets, JSPs, and EJBs.

It has two containers:

Web container (for interpreting/executing servlets and JSPs)

EJB container (for executing EJBs).

It can also perform operations such as load balancing and transaction demarcation.

 

Ques. 16): Apart from Apache Tomcat, what are the different kinds of Web Servers?

Answer:

There are many web servers as mentioned below:

LiteSpeed Web Server

GWS Web Server

Microsoft IIS Web Server

Nginx Web Server

Jigsaw Web Server

Sun Java System Web Server

Lighttpd Web Server

 

Ques. 17): How to limit upload size?

Answer:

Suppose I have a web application that allows users to upload files such as Word documents, PDFs, and so on. How do I limit file uploads by users?

You can make use of the LimitRequestBody directive to limit upload file size.

<Directory "usr/local/apache2/uploads">

LimitRequestBody 9000

</Directory>

The value assigned to LimitRequestBody tells Apache to accept and store file uploads of up to 9000 bytes. You can adjust the value based on the requirement.

 

Ques. 18): Explain how to use WAR files to deploy a web application.

Answer:

JSPs, servlets, and their associated files are placed under Tomcat's web applications directory in the appropriate subdirectories. You can combine all of the files of an application into a single compressed file with the extension .war. A web application can be run by placing the WAR file in the webapps directory. When Tomcat starts up, it extracts the contents of the WAR file and places them in the proper webapps subdirectories.

 

Ques. 19): How can an Apache Service be stopped by its control script?

Answer:

The Apache service is controlled using a script called apachectl.

So, to stop the service, we run one of the commands below.

# apachectl stop [for Debian/Ubuntu-based systems]

# /etc/init.d/httpd stop [for Red Hat-based systems]

 

Ques. 20): What is the purpose of the Listen property in Apache Tomcat?

Answer:

The Listen directive is very important for Apache and its administrators.

If the server has numerous IPs, we must explicitly indicate the IP and port in the Listen directive if we want Apache to listen on only one of them.

For example: Listen 10.10.10.20:80

 

 

Top 20 Apache Kafka Interview Questions and Answers

 

Apache Kafka is a free and open-source streaming platform. Kafka began as a messaging queue at LinkedIn, but it has since grown into much more. It's a flexible tool for working with data streams that may be used in a wide range of situations. Because Kafka is a distributed system, it can scale up and down as needed. All that's left to do now is expand the cluster with new Kafka nodes (servers).

Kafka can process a large volume of data in a short period of time. It also has low latency, allowing for real-time data processing. Although Apache Kafka is written in Scala and Java, it can be used from a wide range of programming languages.

 Apache Hive Interview Questions & Answers

Ques. 1): What exactly do you mean when you say "confluent kafka"? What are the benefits?

Answer:

Confluent is an Apache Kafka-based data streaming platform that can do more than just publish and subscribe. It can also store and process data within the stream. Confluent Kafka is a more extensive version of Apache Kafka. It improves Kafka's integration capabilities by adding tools for optimising and maintaining Kafka clusters, as well as methods for ensuring the security of the streams. Because of the Confluent Platform, Kafka is simple to set up and use. Confluent's software is available in three flavours:

A free, open-source streaming platform that makes working with real-time data streams a breeze;

A premium cloud-based version with more administration, operations, and monitoring features;

An enterprise-grade version with more administration, operations, and monitoring tools.

Following are the advantages of Confluent Kafka:

  • It features practically all of Kafka's characteristics, as well as a few extras.
  • It greatly simplifies the administrative operations procedures.
  • It relieves data managers of the burden of thinking about data relaying.

 Apache Ambari interview Questions & Answers

Ques. 2): What are some of Kafka's characteristics?

Answer:

The following are some of Kafka's most notable characteristics:-

  • Kafka is a fault-tolerant messaging system with a high throughput.
  • Kafka has a built-in partitioning system, the Topic.
  • Kafka also comes with a replication mechanism.
  • Kafka is a distributed messaging system that can manage massive volumes of data and transfer messages from one sender to another.
  • The messages can also be saved to storage and replicated across the cluster using Kafka.
  • Kafka works with Zookeeper for synchronisation and collaboration with other services.
  • Kafka provides excellent support for Apache Spark.

 Apache Tapestry Interview Questions and Answers

Ques. 3): What are some of the real-world usages of Apache Kafka?

Answer:

The following are some examples of Apache Kafka's real-world applications:

Message Broker: Because Apache Kafka has a high throughput value, it can handle a large number of similar sorts of messages or data. Apache Kafka can be used as a publish-subscribe messaging system that makes it simple to read and publish data.

Website activity tracking: Apache Kafka can check whether data is successfully delivered and received by websites. Apache Kafka is capable of handling the huge volumes of data generated by websites for each page as well as for user actions.

Operational metrics: to keep track of metrics connected to certain technologies, such as security logs, we can utilise Apache Kafka to monitor operational data.

Data logging: Apache Kafka provides data replication between nodes functionality that can be used to restore data on failed nodes. It can also be used to collect data from various logs and make it available to consumers.

Stream Processing with Kafka: Apache Kafka can also handle streaming data, the data that is read from one topic, processed, and then written to another. Users and applications will have access to a new topic containing the processed data.

 Apache NiFi Interview Questions & Answers

Ques. 4): What are some of Kafka's disadvantages?

Answer:

The following are some of Kafka's drawbacks:

  • Kafka's performance suffers when messages need to be modified; it works well when the message does not need to be updated.
  • Kafka does not support wildcard topic selection. It's crucial to use the exact topic name.
  • When dealing with large messages, brokers and consumers degrade Kafka's performance by compressing and decompressing the messages. This affects Kafka's performance and throughput.
  • Kafka does not support certain message paradigms, such as point-to-point queues and request/reply.
  • Kafka lacks a comprehensive set of monitoring tools.

 Apache Spark Interview Questions & Answers

Ques. 5): What are the use cases of Kafka monitoring?

Answer:

The following are some examples of Kafka monitoring use cases:

  • Monitor system resource usage: track the use of system resources such as memory, CPU, and disc over time.
  • Monitor threads and JVM usage: Kafka relies on the Java garbage collector to free up memory, so ensuring that it runs frequently keeps the Kafka cluster active.
  • Keep an eye on the broker, controller, and replication statistics so that partition and replica statuses can be adjusted as needed.
  • Identifying which applications are producing excessive demand, and pinpointing performance bottlenecks, may help in quickly resolving performance issues.

 

Ques. 6): What is the difference between Kafka and Flume?

Answer:

Flume's main application is ingesting data into Hadoop. Flume is integrated with Hadoop's monitoring system, file formats, file system, and tools such as Morphlines. Flume is the best option when working with non-relational data sources or streaming a huge file into Hadoop.

Kafka's main use case is as a distributed publish-subscribe messaging system. Kafka was not created with Hadoop in mind, so using it to gather and analyse data for Hadoop is significantly more difficult than using Flume.

Kafka is the better choice when a highly reliable and scalable enterprise messaging system is required.

 

Ques. 7): Explain the terms "leader" and "follower."

Answer:

In Kafka, each partition has one server that acts as a Leader and one or more servers that operate as Followers. The Leader is in charge of all read and write requests for the partition, while the Followers are responsible for passively replicating the leader. In the case that the Leader fails, one of the Followers will assume leadership. The server's load is balanced as a result of this.
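As a hedged illustration, the current leader and followers of each partition can be inspected from Java with the AdminClient; the topic name "orders" and the broker address below are assumptions for this sketch.

import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ShowLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Set.of("orders"))
                    .all().get().get("orders");
            // Each partition reports its current leader and its replica set.
            desc.partitions().forEach(p ->
                System.out.printf("partition %d: leader=%s replicas=%s%n",
                        p.partition(), p.leader(), p.replicas()));
        }
    }
}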

 

Ques. 8): What are the traditional methods of message transfer? How is Kafka better from them?

Answer:

The classic techniques of message transmission are as follows:

Message Queuing:

The message queuing pattern employs a point-to-point approach. A message in the queue will be discarded once it has been consumed, similar to how a message in the Post Office Protocol is removed from the server once it has been delivered. These queues allow for asynchronous messaging.

If a network difficulty prevents a message from being delivered, such as when a consumer is unavailable, the message will be queued until it is transmitted. As a result, messages aren't always delivered in the same order. Instead, they are distributed on a first-come, first-served basis, which in some cases can improve efficiency.

Publisher-Subscriber Model:

The publish-subscribe pattern entails publishers producing ("publishing") messages in multiple categories and subscribers consuming published messages from the various categories to which they are subscribed. Unlike point-to-point messaging, a message is only removed once it has been consumed by all category subscribers.

Kafka combines both of the above approaches in a single consumer abstraction: the consumer group. The advantages of adopting Kafka over standard message transfer mechanisms are as follows:

Scalable: data is partitioned across a cluster of machines, which increases storage capacity.

Faster: a single Kafka broker can handle megabytes of reads and writes per second, allowing it to serve thousands of clients.

Durable and fault-tolerant: the data is kept persistent and tolerant of hardware failures by replicating it within the cluster.

  

Ques. 9): What is a Replication Tool in Kafka? Explain how to use some of Kafka's replication tools.

Answer:

The Kafka Replication Tool is used to define the replica management process at a high level. Some of the replication tools available are as follows:

Preferred Replica Leader Election Tool: this tool distributes partitions to many brokers in a cluster, each copy of a partition being known as a replica. The preferred replica is the one that should normally act as the leader. The brokers generally distribute the leader role fairly across the cluster for the various partitions, but an imbalance can develop over time due to failures, planned shutdowns, and other circumstances. In these cases, this tool can be used to restore the balance by reassigning the preferred replicas, and hence the leaders.

Topics tool: The Kafka topics tool is in charge of all administration operations relating to topics, including:

  • Listing and describing topics.
  • Creating topics.
  • Modifying topics.
  • Adding partitions to a topic.
  • Deleting topics.

Tool to reassign partitions: The replicas assigned to a partition can be changed with this tool. This refers to adding or removing followers from a partition.

StateChangeLogMerger tool: The StateChangeLogMerger tool collects data from brokers in a cluster, formats it into a central log, and aids in the troubleshooting of state change issues. Sometimes there are issues with the election of a leader for a particular partition. This tool can be used to figure out what's causing the issue.

Change topic configuration tool: used to create new configuration options, modify existing configuration options, and delete configuration options.

 

Ques. 10): Explain the four core APIs that Kafka's architecture uses.

Answer:

Following are the four core APIs that Kafka uses:

Producer API:

The Producer API in Kafka allows an application to publish a stream of records to one or more Kafka topics.

Consumer API:

The Kafka Consumer API allows an application to subscribe to one or more Kafka topics. It also allows the programme to handle streams of records generated in connection with such topics.

Streams API: The Kafka Streams API allows an application to process data in Kafka using a stream processing architecture. This API allows an application to take input streams from one or more topics, process them with streams operations, and then generate output streams to send to one or more topics. In this way, the Streams API allows you to turn input streams into output streams.

Connect API:

The Kafka Connector API connects Kafka topics to applications. This opens up possibilities for constructing and managing the operations of producers and consumers, as well as establishing reusable links between these solutions. A connector, for example, may capture all database updates and ensure that they are made available in a Kafka topic.
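As a brief illustration of the first of these APIs, here is a minimal Producer sketch in Java; the topic name "events" and the broker address are assumptions.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the "events" topic.
            producer.send(new ProducerRecord<>("events", "key-1", "hello kafka"));
        }
    }
}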

  

Ques. 11): Is it possible to utilise Kafka without Zookeeper?

Answer:

As of version 2.8, Kafka can be used without ZooKeeper, running in the self-managed quorum mode later known as KRaft. When Kafka 2.8.0 was released in April 2021, we all had the opportunity to try it without ZooKeeper. However, this early-access mode was not yet ready for production and was missing a few crucial features.

In prior versions, it was not feasible to connect directly to the Kafka broker without ZooKeeper, because the brokers cannot serve client requests when ZooKeeper is down.

 

Ques. 12): Explain Kafka's concept of leader and follower.

Answer:

Each partition in Kafka has one server acting as a Leader and one or more servers acting as Followers. The Leader is in control of the partition's read and write requests, while the Followers are in charge of passively replicating the leader. If the Leader is unable to lead, one of the Followers will take over. As a result, the server's load is balanced.

 

Ques. 13): In Kafka, what is the function of partitions?

Answer:

From the standpoint of the Kafka broker, partitions allow a single topic's data to be spread across many servers. This lets a topic hold more data than a single server could. If you have three brokers and need to store 10 TB of data in a topic, one option is to create a topic with only one partition and store the entire 10 TB on one broker. Another option is to create a topic with three partitions and distribute the 10 TB of data across all the brokers. From the consumer's perspective, a partition is a unit of parallelism.
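A minimal sketch of creating such a three-partition topic with the Java AdminClient follows; the topic name "big-topic" is an assumption, and the replication factor of 3 presumes three available brokers.

import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Three partitions spread the topic across up to three brokers;
            // replication factor 3 keeps a copy of each partition on three brokers.
            admin.createTopics(Set.of(new NewTopic("big-topic", 3, (short) 3)))
                 .all().get();
        }
    }
}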

 

Ques. 14): In Kafka, what do you mean by geo-replication?

Answer:

Geo-replication is a feature in Kafka that allows you to copy messages from one cluster to a number of other data centres or cloud locations. You can use geo-replication to replicate all of the files and store them all over the world if necessary. Using Kafka's MirrorMaker Tool, we can achieve geo-replication. We can ensure data backup without fail by employing the geo-replication strategy.

 

Ques. 15): Is Apache Kafka a distributed streaming platform? What can you do with it?

Answer:

Yes. Apache Kafka is a distributed streaming platform. A streaming platform provides three critical capabilities:

  • We can easily push records using a distributed streaming infrastructure.
  • It has a large storage capacity and allows us to store a large number of records without difficulty.
  • It assists us in processing records as they arrive.

Kafka technology allows us to do the following:

  • We may create a real-time stream of data pipelines using Apache Kafka to send data between two systems.
  • We could also create a real-time streaming platform that reacts to data.

 

Ques. 16): What is Apache Kafka Cluster used for?

Answer:

Apache Kafka Cluster is a messaging system that is used to overcome the challenges of gathering and processing enormous amounts of data. The following are the most important advantages of Apache Kafka Cluster:

We can track web activities using Apache Kafka Cluster by storing/sending events for real-time processes.

We may use this to both alert and report on operational metrics.

We can also use Apache Kafka Cluster to transform data into a common format.

It enables the processing of streaming data to the subjects in real time.

Due to its outstanding characteristics, it is increasingly chosen over some of the most popular alternatives, such as ActiveMQ, RabbitMQ, and AWS messaging services.

 

Ques. 17): What is the purpose of the Streams API?

Answer:

Streams API is an API that allows an application to function as a stream processor, ingesting an input stream from one or more topics and providing an output stream to one or more output topics, as well as effectively changing the input streams to output streams.
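A minimal Kafka Streams sketch of this input-to-output transformation is shown below; the topic names and broker address are illustrative assumptions.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from an input topic, transform each value, write to an output topic.
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(v -> v.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}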

 

Ques. 18): In Kafka, what do you mean by graceful shutdown?

Answer:

Any broker shutdown or failure will be detected automatically by the Kafka cluster. In this case, new leaders will be elected for the partitions previously led by that broker. This can occur as a result of a server failure, or even when the server is deliberately shut down for maintenance or configuration changes. When a server is shut down on purpose, Kafka provides a graceful mechanism for stopping it rather than killing it.

When a server is turned off, the following happens:

Kafka ensures that all of its logs are synced onto disc so that it does not have to perform log recovery when it is restarted. Since log recovery takes time, this speeds up intentional restarts.

Prior to shutting down, all partitions for which the server is the leader will be moved to the replicas. The leadership transfer will be faster as a result, and the period each partition is inaccessible will be decreased to a few milliseconds.

  

Ques. 19): In Kafka, what do the terms BufferExhaustedException and OutOfMemoryException mean?

Answer:

A BufferExhaustedException is thrown when the producer cannot allocate memory for a record because the buffer is full. If the producer is in non-blocking mode and the pace of production over an extended period exceeds the rate at which data is drained from the buffer, the allocated buffer will be exhausted and the exception will be thrown.

An OutOfMemoryException may occur if the messages are large or if messages arrive faster than the rate of downstream processing. As a result, the message queue becomes overburdened, consuming RAM.
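For illustration, the producer settings that govern this buffer can be tuned as sketched below; the values shown are assumptions, not recommendations.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BufferTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Total memory the producer may use to buffer unsent records (default is 32 MB).
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64L * 1024 * 1024);
        // How long send() may block waiting for buffer space before giving up.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30000L);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}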

 

Ques. 20): How will you change the retention time in Kafka at runtime?

Answer:

A topic's retention time can be configured in Kafka. A topic's default retention time is seven days. We can set the retention time while creating a new topic. When a topic is created, the broker property log.retention.hours is used to set the retention time. When configurations for a currently running topic need to be modified, kafka-topics.sh or kafka-configs.sh must be used.

The right command depends on the Kafka version in use.

The command to use up to 0.8.2 is kafka-topics.sh --alter.

Use kafka-configs.sh --alter starting with version 0.9.0.
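Alternatively, on brokers running Kafka 2.3 or later, the retention can be changed at runtime from Java with the AdminClient, as sketched below; the topic name "my-topic" is an assumption.

import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ChangeRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // Set retention.ms to three days (the value is in milliseconds).
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "259200000"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, Set.of(op))).all().get();
        }
    }
}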