
April 15, 2022

Top 20 Apache Storm Interview Questions and Answers

     

Apache Storm is a free and open-source distributed stream processing platform written in Clojure. The project was created by Nathan Marz and the BackType team, and it was open-sourced after Twitter acquired BackType. Storm makes it simple to reliably process unbounded streams of data, doing real-time processing where Hadoop does batch processing. Storm is simple to use and can be used with a variety of programming languages.




Ques. 1): Where would you put Apache Storm to good use?

Answer:

Storm is used for:

  • Stream processing − Apache Storm is used to process streams of data in real time and update many databases. The data must be processed at a rate that keeps up with the input.
  • Distributed RPC − Apache Storm can parallelize an intense query so that it is computed in real time.
  • Continuous computation − Data streams are processed in real time, and Storm serves the results to clients. This may require processing each message as it arrives, or in small batches over a short period of time. Streaming trending topics from Twitter into web browsers is an example of continuous computation.
  • Real-time analytics − Apache Storm evaluates and responds to data as it arrives from multiple data sources in real time.

 



Ques. 2): What exactly is Apache Storm? What are the Storm components?

Answer:

Apache Storm is an open-source, distributed, real-time computation system for processing big data analytics in real time. Unlike Hadoop's batch processing, Apache Storm supports real-time processing and can be used with any programming language.

Apache Storm contains the following components:

Nimbus: It plays a role similar to Hadoop's JobTracker. It distributes code across the cluster, uploads computations for execution, assigns work to machines, and monitors and reallocates workers as needed.

Supervisor: It runs on each worker node and starts or stops worker processes based on the work Nimbus has assigned to that machine, interacting with Nimbus through Zookeeper and acting on the signals it receives.

Zookeeper: It serves as the coordination and communication intermediary between Nimbus and the Supervisors; the cluster's state is kept in it.

 



Ques. 3): How many distinct layers does Storm's codebase have?

Answer:

Storm's codebase is divided into three layers.

First: Storm was designed from the ground up to be compatible with multiple languages. Nimbus is a Thrift service, and topologies are defined as Thrift structures. Thanks to the use of Thrift, Storm can be used from any language.

Second: Storm specifies all of its interfaces as Java interfaces. So, despite the fact that Storm's implementation contains a lot of Clojure, all usage must go through the Java API. This means that Storm's whole feature set is always accessible via Java.

Third: Storm’s implementation is largely in Clojure. Line-wise, Storm is about half Java code, half Clojure code. But Clojure is much more expressive, so in reality, the great majority of the implementation logic is in Clojure.




Ques. 4): What are Storm Topologies and How Do They Work?

Answer:

A Storm topology packages the logic for a real-time application. A topology is analogous to a MapReduce job; a key distinction is that a MapReduce job eventually finishes, whereas a topology runs forever (or until you kill it, of course). A topology is a graph of spouts and bolts connected by stream groupings.




Ques. 5): What do you mean when you say "nodes"?

Answer:

There are two types of nodes: the Master Node and the Worker Nodes. The Master Node runs the Nimbus daemon, which assigns work to machines and monitors their progress. Each Worker Node runs the Supervisor daemon, which starts and stops worker processes on that machine according to the work Nimbus has assigned to it.




Ques. 6): Explain how Apache Storm processes a message completely.

Answer:

Storm requests a tuple from the Spout by calling the nextTuple method on the Spout. To emit a tuple to one of its output streams, the Spout uses the SpoutOutputCollector provided in the open method. The Spout attaches a "message id" to each tuple it emits, which is used to identify the tuple later.

The tuple is then sent to the consuming bolts, and Storm takes charge of tracking the tree of messages that is generated. Once Storm is certain that a tuple has been fully processed, it calls the ack method on the originating Spout task, passing in the message id that the Spout provided to Storm.
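As a rough sketch of the spout-side bookkeeping (plain Java with no Storm dependency; the class and method names here are invented for illustration), you can picture a map from message ids to pending tuples, which an ack clears and a failure replays from:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of how a spout tracks in-flight tuples by message id.
// In real Storm the framework delivers these callbacks via
// ISpout.ack and ISpout.fail; this class only mimics the idea.
public class PendingTracker {
    private final Map<Long, String> pending = new HashMap<>();
    private long nextId = 0;

    // emit a tuple, remembering it under a fresh message id
    public long emit(String tuple) {
        long id = nextId++;
        pending.put(id, tuple);
        return id;
    }

    // called once the whole tuple tree has been fully processed
    public void ack(long id) {
        pending.remove(id);
    }

    // on failure the spout can look up the original tuple and replay it
    public String fail(long id) {
        return pending.get(id);
    }

    public int inFlight() {
        return pending.size();
    }

    public static void main(String[] args) {
        PendingTracker tracker = new PendingTracker();
        long a = tracker.emit("tuple-a");
        long b = tracker.emit("tuple-b");
        tracker.ack(a);                         // fully processed
        System.out.println(tracker.inFlight()); // 1 tuple still pending
        System.out.println(tracker.fail(b));    // tuple-b is available for replay
    }
}
```

The essential point the sketch captures is that nothing is forgotten until Storm explicitly acks the message id.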




Ques. 7): When my topology is being set up, why am I getting a NotSerializableException/IllegalStateException?

Answer:

As part of the Storm lifecycle, the topology is instantiated, serialised to byte format, and stored in ZooKeeper before it is run.

Serialization fails at this stage if a spout or bolt in the topology has an initialised unserializable field. If an unserializable field is required, initialise it in the prepare method of the bolt or spout, which is called after the topology is delivered to the worker.
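The fix can be sketched in plain Java (no Storm dependency; `CountBolt` and its fields are invented for illustration): mark the problem field `transient` so serialization skips it, then initialise it late, the way a real bolt would in prepare():

```java
import java.io.*;

// Sketch of the NotSerializableException fix: the transient field is
// skipped during serialization and initialised after deserialization,
// mirroring late initialisation in a bolt's prepare() method.
public class CountBolt implements Serializable {
    private final String name;            // serializable state
    private transient Object connection;  // stand-in for an unserializable resource

    public CountBolt(String name) { this.name = name; }

    // in a real bolt this late initialisation lives in prepare()
    public void prepare() { connection = new Object(); }

    public String getName() { return name; }
    public boolean isReady() { return connection != null; }

    // round-trip through Java serialization, as Storm does before
    // shipping the topology to the workers
    public static CountBolt roundTrip(CountBolt bolt) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(bolt);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (CountBolt) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException("serialization round-trip failed", e);
        }
    }

    public static void main(String[] args) {
        CountBolt copy = roundTrip(new CountBolt("counter")); // succeeds: transient field skipped
        copy.prepare();                                       // late init on the "worker"
        System.out.println(copy.getName() + " " + copy.isReady());
    }
}
```

Had `connection` not been transient and held a genuinely unserializable object, `writeObject` would have thrown NotSerializableException at submit time.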




Ques. 8): In Storm, how do you kill a topology?

Answer:

storm kill topology-name [-w wait-time-secs]

This kills the topology named topology-name. Storm first deactivates the topology's spouts for the duration of the topology's message timeout, allowing all messages currently being processed to finish. Storm then shuts down the workers and cleans up their state. You can override the length of the pause between deactivation and shutdown with the -w flag.




Ques. 9): What is the best way to write integration tests in Java for an Apache Storm topology?

Answer:

For integration testing, you can use LocalCluster. For ideas, have a look at some of Storm's own integration tests. FeederSpout and FixedTupleSpout are the tools to use. Using the utilities in the Testing class, a topology in which all spouts implement the CompletableSpout interface can be run to completion. Storm tests can also opt to "simulate time," which means the Storm topology stays idle until LocalCluster.advanceClusterTime is called. This allows you to make assertions, for example, between bolt emits.




Ques. 10): What's the difference between Apache Kafka and Apache Storm, and why should you care?

Answer:

Apache Kafka: Apache Kafka is a distributed and scalable messaging system that can manage massive amounts of data while allowing messages to flow from one end-point to another. Kafka is designed to let a single cluster serve as the central data backbone of a large organization. It can be expanded elastically and transparently without downtime. Data streams are partitioned and spread over a cluster of machines to support streams larger than any single machine can handle, as well as clusters of coordinated consumers.

Apache Storm, on the other hand, is a real-time message processing system that allows users to update and manipulate data in real time. Storm pulls the data from Kafka and performs the required processing. It makes it simple to reliably process unbounded streams of data, doing real-time processing the way Hadoop does batch processing. Storm is easy to use and works with any programming language.




Ques. 11): When do you invoke the cleanup procedure?

Answer:

The cleanup method is called when a Bolt is shut down, and it should release any resources that were opened. There is no guarantee that this method will be called on the cluster: for example, if the machine the task is running on crashes, there is no way to invoke it.

The cleanup method is intended for when you run topologies in local mode (where a Storm cluster is simulated) and want to be able to run and kill many topologies without suffering resource leaks.

 

Ques. 12): What are the common configurations in Apache Storm?

Answer:

Config.TOPOLOGY_WORKERS : This sets the number of worker processes to use to execute the topology.

Config.TOPOLOGY_ACKER_EXECUTORS : This sets the number of executors that will track tuple trees and detect when a spout tuple has been fully processed. If this variable is not set, or is set to null, Storm will set the number of acker executors equal to the number of workers configured for the topology.

Config.TOPOLOGY_MAX_SPOUT_PENDING : This sets the maximum number of spout tuples that can be pending on a single spout task at once.

Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS : This is the maximum amount of time a spout tuple has to be fully processed before it is considered failed.

Config.TOPOLOGY_SERIALIZATIONS : You can register more serializers with Storm using this config so that you can use custom types within tuples.

 

Ques. 13):  What are the advantages of using Apache Storm?

Answer:

Using Apache Storm for real-time processing has a number of advantages:

First: Storm is a breeze to use. Its sensible default configurations make it simple to deploy and use.

Second: it is extremely fast; a benchmark clocked it at over a million tuples processed per second per node.

Third: Apache Storm is scalable: it is simple to deploy across a large number of machines, and it can be used with any programming language.

Fourth: it can automatically detect failures and restart the workers.

Fifth: one of the most significant benefits of Apache Storm is that it is reliable. It guarantees that each unit of data is processed at least once.

 

Ques. 14): In Storm, tell us about the stream groups that are built-in?

Answer:

Storm has eight built-in stream groupings. These are:

1.      Shuffle Grouping: Tuples are distributed randomly across the bolt's tasks so that each task receives roughly the same number of tuples.

2.      Fields Grouping: The stream is partitioned by the fields specified in the grouping, so tuples with equal values for those fields always go to the same task.

3.      Partial Key Grouping: Like Fields Grouping, the stream is partitioned by the fields specified in the grouping, but the load is balanced between two downstream tasks, which gives better resource utilization when the incoming data is skewed.

4.      All Grouping: The stream is replicated across all the bolt's tasks, so this grouping should be used with care.

5.      Global Grouping: The entire stream goes to a single one of the bolt's tasks (the task with the lowest id).

6.      None Grouping: You don't care how the stream is grouped; currently this behaves the same as Shuffle Grouping.

7.      Direct Grouping: A special kind of grouping in which the producer of a tuple decides which task of the consumer receives it.

8.      Local or Shuffle Grouping: If the target bolt has one or more tasks in the same worker process, tuples are shuffled to just those in-process tasks; otherwise this acts like a normal Shuffle Grouping.
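Two of these groupings are easy to picture in plain Java (a simplified model for illustration, not Storm's actual routing code): fields grouping hashes the grouping field so equal values always reach the same task, while shuffle grouping picks a task at random.

```java
import java.util.Random;

// Toy illustration of two built-in groupings. Real Storm does the
// routing internally; the hashing and task counts here are simplified.
public class GroupingDemo {
    // Fields grouping: hash the grouping field, so equal values
    // always land on the same consumer task.
    public static int fieldsGrouping(String fieldValue, int numTasks) {
        return Math.floorMod(fieldValue.hashCode(), numTasks);
    }

    // Shuffle grouping: pick a task at random for an even spread.
    public static int shuffleGrouping(Random rng, int numTasks) {
        return rng.nextInt(numTasks);
    }

    public static void main(String[] args) {
        int tasks = 4;
        // same field value -> same task, every time
        System.out.println(fieldsGrouping("user-42", tasks) == fieldsGrouping("user-42", tasks));
        Random rng = new Random(7);
        System.out.println(shuffleGrouping(rng, tasks)); // some task in [0, 4)
    }
}
```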

 

Ques. 15): What role does real-time analytics play?

Answer:

Real-time analytics is critical, and the need for it is growing rapidly. Applications that use it can deliver quick answers backed by real-time insights. It covers a wide range of industries, including retail, telecommunications, and finance. In the banking industry, many frauds are reported, fraudulent transactions being among the most common. Such frauds occur often, and real-time analytics can help detect and identify them. It also has a place on social media sites such as Twitter, where the most popular topics are surfaced to users. Real-time analytics plays a part in attracting visitors and generating revenue.

 

Ques. 16): Why is SSL not included in Apache?

Answer:

SSL is not included in Apache for some significant legal reasons. Some governments restrict the import, export, or use of the encryption technology that SSL requires for data transport. If SSL were included in Apache, the server could no longer be distributed freely because of these legal issues. In addition, some of the SSL technology used to talk to current clients was patented by RSA Data Security, which did not allow its use without a license.

 

Ques. 17): What is the purpose of the ServerType directive in the Apache server?

Answer: The ServerType directive in the Apache server determines whether Apache should keep everything in one process or spawn child processes. The ServerType directive is no longer available in Apache 2.0, hence it is not found there. It is, however, available in Apache 1.3 for backward compatibility with UNIX versions of Apache.

 

Ques. 18): Worker processes, executors, and tasks: what forms a running topology?

Answer:

Storm differentiates between the three major entities that are utilised to run a topology in a Storm cluster:

·         A worker process executes a subset of a topology. A worker process belongs to a specific topology and may run one or more executors for one or more of the topology's components (spouts or bolts). Within a Storm cluster, a running topology consists of many such processes running on many machines.

·         An executor is a thread spawned by a worker process. It may run one or more tasks for the same component (spout or bolt).

·         A task performs the actual data processing — each spout or bolt that you implement in your code executes as some number of tasks across the cluster. The number of tasks for a component is always the same throughout the lifetime of a topology, but the number of executors (threads) for a component can change over time.
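The hierarchy can be sketched with a little arithmetic (illustrative numbers, not Storm defaults): a fixed set of tasks is divided among however many executor threads currently exist, so rebalancing changes the threads, never the task count.

```java
// Rough sketch of the worker/executor/task relationship: a fixed
// number of tasks for a component is spread as evenly as possible
// over a (changeable) number of executor threads.
public class ParallelismDemo {
    // how many tasks executor i runs when `tasks` tasks are split
    // over `executors` executors as evenly as possible
    public static int tasksForExecutor(int tasks, int executors, int i) {
        int base = tasks / executors;
        int remainder = tasks % executors;
        return base + (i < remainder ? 1 : 0);
    }

    public static void main(String[] args) {
        int tasks = 6, executors = 4; // task count is fixed for the topology's lifetime
        int total = 0;
        for (int i = 0; i < executors; i++) {
            total += tasksForExecutor(tasks, executors, i);
        }
        System.out.println(total);    // all 6 tasks are assigned
        // rebalancing to 2 executors changes the threads, not the task count
        System.out.println(tasksForExecutor(tasks, 2, 0)); // 3 tasks each
    }
}
```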

 

Ques. 19): When a Nimbus or Supervisor daemon dies, what happens?

Answer:

The Nimbus and Supervisor daemons are designed to be fail-fast (the process self-destructs whenever an unexpected situation occurs) and stateless (all state is kept in Zookeeper or on disk).

The Nimbus and Supervisor daemons must be run under supervision using a tool like daemontools or monit. That way, if the Nimbus or Supervisor daemons die, they restart as if nothing had happened.

Most notably, the death of Nimbus or the Supervisors has no effect on worker processes. In Hadoop, by contrast, if the JobTracker dies, all running jobs are lost.

 

Ques. 20): What are some general guidelines you can provide me for customising Storm+Trident?

Answer:

  • Make the number of workers a multiple of the number of machines, parallelism a multiple of the number of workers, and the number of Kafka partitions a multiple of the spout parallelism.

  • Use one worker per machine per topology.

  • Begin with fewer, larger aggregators, one per machine with workers.

  • Make use of the isolation scheduler.

  • Use one acker per worker — version 0.9 does this by default, but earlier versions do not.

  • Enable GC logging; if everything is in order, you should see only a few major GCs.

  • Set the trident batch millis to around 50% of your average end-to-end latency.

  • Start with a small max spout pending — one for Trident, or the number of executors for Storm — and gradually increase it until the flow stops changing. You will probably end up near 2*(throughput in recs/sec)*(end-to-end latency) (roughly twice the capacity suggested by Little's law).
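That last rule of thumb is simple arithmetic. As a hedged sketch (the throughput and latency figures below are made up for illustration), the value you converge on tends to sit near twice Little's law:

```java
// Estimate of where a tuned max-spout-pending value tends to land:
// about 2 x throughput x end-to-end latency (twice Little's law).
public class SpoutPendingEstimate {
    public static long estimate(double recsPerSec, double endToEndLatencySec) {
        return Math.round(2 * recsPerSec * endToEndLatencySec);
    }

    public static void main(String[] args) {
        // e.g. 5,000 recs/sec with 200 ms end-to-end latency
        System.out.println(estimate(5000, 0.2)); // 2000
    }
}
```

You would still tune the real setting empirically; the formula only tells you what ballpark to expect.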




January 05, 2022

Top 20 Apache Struts 2 Interview Questions and Answers

  

Struts 2 is a Java enterprise application framework for building web applications, maintained by the Apache Software Foundation. It was first released in 2006. It is written in the Java programming language and is cross-platform. It is based on the MVC architecture, a software design pattern for structuring applications. Struts 2 includes features such as simplified testability, Ajax support, thread safety, and template support, among others.


Ques. 1): What exactly is Struts2?

Answer:

Apache Struts2 is an open-source Java web application framework. Struts2 is based on the OpenSymphony WebWork framework. It is a significant improvement over Struts1, being more flexible, easier to use, and more extensible. Action, Interceptors, and Result pages are the three main components of Struts2.

Struts2 offers a variety of options for creating Action classes and configuring them using struts.xml or annotations. We can create our own interceptors for common tasks. Struts2 includes a large number of tags and makes use of the OGNL expression language. We can create our own type converters, and result pages can be rendered with JSPs or FreeMarker templates.


Ques. 2): What are some of Struts2's features?

Answer:

Here are some of the fantastic features that can persuade you to use Struts2.

POJO forms and POJO actions − Struts2 has done away with the Action Forms that were an integral part of the Struts framework. With Struts2, you can use any POJO to receive the form input. Similarly, you can now see any POJO as an Action class.

Tag support − Struts2 has improved the form tags and the new tags allow the developers to write less code.

AJAX support − Struts2 recognises the takeover by Web 2.0 technologies and has integrated AJAX support into the product with AJAX tags that function very much like the standard Struts2 tags.

Easy Integration − Integration with other frameworks like Spring, Tiles and SiteMesh is now easier with a variety of integration available with Struts2.

Template Support − Support for generating views using templates.

Plugin Support − The core Struts2 behaviour can be enhanced and augmented by the use of plugins. A number of plugins are available for Struts2.


Ques. 3): What's the difference between Struts 1 and Struts 2?

Answer: 

In Struts 1, the action class is not a POJO: it must extend an abstract base class. An ActionServlet is used as the front controller, only JSP is available for the view component, and the configuration file is placed in the WEB-INF directory. It uses the RequestProcessor class when processing requests, and actions and models are kept separate.

In Struts 2, the action class is a POJO, so there is no need to extend any class or implement any interface. The view component can be JSP, FreeMarker, and so on. The front controller in Struts 2 is the StrutsPrepareAndExecuteFilter. The configuration file must be named struts.xml and placed inside the classes directory. It uses the concept of interceptors while processing requests, and action and model are combined within the action class.


Ques. 4): In Struts2, What Is The Use Of Struts.properties?

Answer:

This configuration file allows you to override the framework's default behaviour. In fact, all of the properties in the struts.properties configuration file can be defined in the web.xml using the init-param, as well as in the struts.xml configuration file using the constant tag. However, if you prefer to keep things separate and more struts specific, you can create this file in the WEB-INF/classes folder. The default values configured in default.properties, which is included in the struts2-core-x.y.z.jar distribution, will be overridden by the values configured in this file.


Ques. 5): Explain The Life Cycle Of A Request In A Struts2 Application?

Answer:

Following is the life cycle of a request in a Struts2 application −

  • User sends a request to the server, requesting some resource (i.e., pages).
  • The FilterDispatcher looks at the request and determines the appropriate Action.
  • Configured interceptor functionalities are applied, such as validation and file upload.
  • The selected action is executed to perform the requested operation.
  • Configured interceptors are again applied to do any post-processing, if required.
  • Finally, the view prepares the result and returns it to the user.


Ques. 6): What are the inbuilt themes provided by Struts 2?

Answer:

There are 3 different inbuilt themes:

Simple theme: A minimal theme with very little content. For example, the text field tag renders the HTML tag without a label, validation, error reporting or any other formatting or functionality.

XHTML theme: The default theme used by Struts 2, providing all the basics of the simple theme. It adds several other features, such as a standard two-column table layout, HTML labels for each field, and validation and error reporting.

Css_xhtml theme: This theme provides all the basics of the simple theme and adds several other features, such as a standard two-column CSS-based layout using div tags for the HTML Struts tags, with labels for each of them placed according to the CSS stylesheet.


 Ques. 7): What is internationalization and how does it work?

Answer:

Localization refers to the process of planning and implementing products and services so that they can easily be adapted to specific local languages and cultures, whereas internationalization is the practice of enabling localization.


Ques. 8): What is the difference between an interceptor and a filter?

Answer:

Interceptors are part of Struts 2. They run for all requests that pass through the framework's front controller (a servlet filter) and can be configured to run additional interceptors for a specific action's execution. Interceptor methods can be configured to execute or not execute using the excludeMethods and includeMethods parameters.

Filters are defined by the Servlet specification. A filter executes on every request whose URL matches its pattern, and its method calls are not configurable.


Ques. 9): Explain struts 2's XML-based validation.

Answer:

XML-based validation in Struts 2 provides more validation options, such as email validation, integer range validation, form field validation, expression validation, regex validation, required validation, string length validation, and required string validation, among others. In Struts 2, the XML file must be named 'actionclass'-validation.xml.


Ques. 10): How Does Validation in Struts 2 Work?

Answer:

When the user clicks the submit button, Struts 2 will run the validate method, and if any of the if statements inside the method are true, Struts 2 will call the addFieldError method. If any errors have been added, Struts 2 will not proceed to invoke the execute method; instead, the framework will return "input" as the result of calling the action.

When validation fails and Struts 2 returns input, the view file is redisplayed by the Struts 2 framework. Because we utilised Struts 2 form tags, the error messages will appear directly above the completed form.

These are the error messages we specified in the call to the addFieldError function. The addFieldError method takes two arguments. The first is the form field name to which the error applies and the second is the error message to display above that form field.
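The flow described above can be mimicked in plain Java (no Struts classes; `LoginAction`, `run`, and the field names are invented, with the method names merely mirroring ActionSupport's): validate first, collect field errors, and only run the action when the error map is empty.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy re-creation of the validate/addFieldError gate: when validation
// adds errors, the "input" result is returned and execute() is skipped.
public class LoginAction {
    private final Map<String, String> fieldErrors = new LinkedHashMap<>();
    private String userName;

    public void setUserName(String userName) { this.userName = userName; }

    // records an error message against a form field
    public void addFieldError(String field, String message) {
        fieldErrors.put(field, message);
    }

    public boolean hasErrors() { return !fieldErrors.isEmpty(); }

    public void validate() {
        if (userName == null || userName.trim().isEmpty()) {
            addFieldError("userName", "User name is required");
        }
    }

    // mimics the framework: the action body only runs when validation passes
    public String run() {
        validate();
        if (hasErrors()) {
            return "input";   // redisplay the form with error messages
        }
        return "success";     // normal execute() outcome
    }

    public static void main(String[] args) {
        System.out.println(new LoginAction().run()); // input: empty user name
        LoginAction ok = new LoginAction();
        ok.setUserName("alice");
        System.out.println(ok.run());                // success
    }
}
```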

 

Ques. 11): What Types Of Validations Are Available In Xml Based Validation In Struts2?

Answer:

Following is the list of various types of field level and non-field level validation available in Struts2 −

  • date validator
  • double validator
  • email validator
  • expression validator
  • int validator
  • regex validator
  • required validator
  • requiredstring validator
  • stringlength validator
  • url validator

 

Ques. 12):  How Does Struts 2's Interceptor Work?

Answer:

The actual action will be executed by calling invocation.invoke() from the interceptor. So, depending on your needs, you can do some pre-processing and some post-processing around that call.

The framework starts the process by calling the invoke() method on the ActionInvocation object. Each time invoke() is called, ActionInvocation consults its state and executes the next available interceptor. Once all of the configured interceptors have been invoked, the invoke() method causes the action itself to be executed.
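The invoke() hand-off can be re-created in a few lines of plain Java (the interfaces below are simplified stand-ins for the Struts2 ones, not the real API): each interceptor does its pre-processing, calls invoke() to pass control down the chain, then does its post-processing once the action has run.

```java
import java.util.Iterator;
import java.util.List;

// Minimal model of the invocation.invoke() pattern used by
// Struts2 interceptors (names simplified for illustration).
public class InvocationDemo {
    interface Interceptor {
        String intercept(ActionInvocation invocation);
    }

    static class ActionInvocation {
        private final Iterator<Interceptor> remaining;
        private final StringBuilder trace;

        ActionInvocation(List<Interceptor> interceptors, StringBuilder trace) {
            this.remaining = interceptors.iterator();
            this.trace = trace;
        }

        // consults its state and runs the next interceptor, or the action
        String invoke() {
            if (remaining.hasNext()) {
                return remaining.next().intercept(this);
            }
            trace.append("action;");
            return "success";
        }
    }

    static Interceptor logging = invocation -> {
        invocation.trace.append("pre;");     // pre-processing
        String result = invocation.invoke(); // hand control down the chain
        invocation.trace.append("post;");    // post-processing
        return result;
    };

    public static void main(String[] args) {
        StringBuilder trace = new StringBuilder();
        ActionInvocation inv = new ActionInvocation(List.of(logging), trace);
        System.out.println(inv.invoke() + " " + trace); // success pre;action;post;
    }
}
```

The trace shows why interceptors nest like an onion: everything before invoke() runs on the way in, everything after it runs on the way out.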

 

Ques. 13): What Is Value Stack?

Answer :

The value stack is a set of several objects which keeps the following objects in the provided order −

Temporary Objects − There are various temporary objects which are created during execution of a page. For example the current iteration value for a collection being looped over in a JSP tag.

The Model Object − If you are using model objects in your struts application, the current model object is placed before the action on the value stack.

The Action Object − This will be the current action object which is being executed.

Named Objects − These objects include #application, #session, #request, #attr and #parameters and refer to the corresponding servlet scopes.

 

Ques. 14): What Is The Difference Between Valuestack And OGNL?

Answer:

ValueStack is the storage space where Struts2 stores application data for processing client requests. The information is saved in ActionContext objects that use ThreadLocal to store values that are unique to each request thread.

OGNL (Object-Graph Navigation Language) is a powerful Expression Language for manipulating data stored on the ValueStack. Both interceptors and result pages can use OGNL to access data stored on the ValueStack.
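The ThreadLocal technique behind ActionContext can be shown in isolation (plain Java; `RequestContext` and its methods are invented for illustration, not the Struts2 API): each thread sees its own value, so concurrent requests never clash.

```java
// Sketch of per-request-thread storage via ThreadLocal, the mechanism
// ActionContext uses to keep each request's data separate.
public class RequestContext {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void set(String user) { CURRENT_USER.set(user); }
    public static String get() { return CURRENT_USER.get(); }
    public static void clear() { CURRENT_USER.remove(); }

    public static void main(String[] args) throws InterruptedException {
        set("alice");                       // value on the main thread
        Thread other = new Thread(() -> {
            set("bob");                     // independent value on this thread
            System.out.println("worker sees " + get());
        });
        other.start();
        other.join();
        System.out.println("main still sees " + get()); // alice, untouched
        clear();
    }
}
```

Clearing the value at the end of a request matters in real servers, because worker threads are pooled and reused.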

 

Ques. 15): What Is The Struts-default Package And How Does It Help?

Answer:

Struts-default is an abstract package that defines all of the Struts2 interceptors as well as the most commonly used interceptor stack. It is best to extend this package when configuring our application package, so that we do not have to configure the interceptors from scratch. This is provided to assist developers by making the job of configuring interceptors and result pages in our application much easier.

 

Ques. 16): What Is The Purpose Of @after Annotation?

Answer :

The @After annotation marks an action method that needs to be called after the main action method and the result have been executed. Its return value is ignored.

public class Employee extends ActionSupport{
   @After
   public void isValid() throws ValidationException {
      // validate model object, throw exception if failed
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

 

Ques. 17): What Is The Purpose Of @before Annotation?

Answer :

The @Before annotation marks an action method that needs to be called before the main action method is executed. Its return value is ignored.

public class Employee extends ActionSupport{
   @Before
   public void isAuthorized() throws AuthenticationException {
      // authorize request, throw exception if failed
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

 

Ques. 18): What Is The Difference Between Using An Action Interface And Using An Actionsupport Class For Our Action Classes, And Which Would You Prefer?

Answer:

To develop our action classes, we can use the Action interface. This interface only has one function, execute(), which we must implement. The main advantage of utilising this interface is that it includes some constants that can be used on result pages, such as SUCCESS, ERROR, NONE, INPUT, and LOGIN.

The ActionSupport class implements the Action interface by default, as well as interfaces for validation and i18n support. Action, Validateable, ValidationAware, TextProvider, and LocaleProvider are all implemented by the ActionSupport class. To implement field-level validation logic in our action classes, we can override the validate() method of the ActionSupport class.

Depending on the requirements, we can use any of the approaches to create struts 2 action classes, my favorite is ActionSupport class because it helps in writing validation and i18n logic easily in action classes.

 

Ques. 19): How Do We Get Servlet Api Requests, Responses, Httpsessions, and Other Objects Into Action Classes?

Answer:

Servlet API components such as Request, Response, and Session are not directly accessible through Struts2 action classes. However, in some action classes, such as checking the HTTP method or adding cookies in the response, these accesses are required.

As a result, the Struts2 API exposes a number of *Aware interfaces through which we can access these objects. Struts2 API injects Servlet API components into action classes using dependency injection. SessionAware, ApplicationAware, ServletRequestAware, and ServletResponseAware are some of the most essential Aware interfaces.

 

Ques. 20): Is Struts2 Interceptors And Action Thread Safe?

Answer:

Struts2 Action classes are thread safe because a new object is instantiated for every request that is processed.

Struts2 interceptors are not thread safe, because a single interceptor instance is shared across all the requests it handles. We must therefore implement them carefully to avoid any problems with shared mutable data.
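The difference can be demonstrated in plain Java (toy classes for illustration, not Struts2 code): a fresh instance per request keeps mutable state private to that request, while a shared instance leaks state across requests.

```java
// Why per-request instantiation is thread safe while a shared
// singleton with mutable state is not.
public class ThreadSafetyDemo {
    // Struts2-style action: a fresh instance per request, so its
    // instance state belongs to one request only.
    static class CounterAction {
        private int count = 0;
        int execute() { return ++count; }
    }

    public static void main(String[] args) {
        // two "requests", two instances: no shared mutable state
        System.out.println(new CounterAction().execute()); // 1
        System.out.println(new CounterAction().execute()); // 1 again
        // an interceptor-style shared instance accumulates state instead
        CounterAction shared = new CounterAction();
        shared.execute();
        System.out.println(shared.execute());              // 2: state leaked across "requests"
    }
}
```

This is why interceptors must keep their state in local variables or per-request storage rather than in instance fields.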