
January 04, 2022

Top 20 Apache Cassandra Interview Questions and Answers

  

                Apache Cassandra is one of the most popular NoSQL distributed database management systems. It is an open-source database designed to store and manage enormous amounts of data without failure. Written in Java and originally built at Facebook, Cassandra offers flexible schemas and is highly scalable for Big Data workloads, with no single point of failure. Cassandra combines a column-oriented model with a key–value store, which makes it one of the most widely used NoSQL databases. In Cassandra, the keyspace is the application's outermost data container, and it holds the tables (column families).

 Apache Camel Interview Questions and Answers

Ques. 1): What is the purpose of Cassandra and why should you utilise it?

Answer:

Cassandra was created to handle large data workloads across several nodes with no single point of failure. A number of factors make Cassandra a good choice:

  • It is fault-tolerant and reliable.
  • It scales from gigabytes to petabytes.
  • It is a column-oriented database.
  • There is no single point of failure.
  • No separate caching layer is required.
  • Its schema design is flexible.
  • It offers flexible data storage, simple data distribution, and fast writes.
  • Writes are atomic, isolated, and durable at the row level, with tunable consistency (Cassandra does not provide full multi-row ACID transactions).
  • It supports cloud and multi-data-centre deployments.
  • It compresses data.

 Apache Ant Interview Questions and Answers

Ques. 2): What are Cassandra's applications?

Answer:

When it comes to app development and data management, Cassandra has become the go-to solution for many businesses. Because it is straightforward to operate, even young start-ups choose it.

Cassandra excels at ingesting data from many sources at a high rate, which makes it a natural fit for Internet of Things applications. It is also used in product and retail apps, messaging, social media analytics, and recommendation engines.

 Apache Tomcat Interview Questions and Answers

Ques. 3): What are the advantages of utilising Cassandra?

Answer:

  • Apache Cassandra provides near real-time performance, which makes the work of developers, administrators, data analysts, and software engineers much easier.
  • Cassandra is built on a peer-to-peer architecture rather than a master–slave design, so there is no single point of failure.
  • It offers great flexibility: nodes can be added to any Cassandra cluster in any data centre, and any client can send a request to any server.
  • Cassandra scales elastically and can be scaled up or down as needed without restarting the cluster, while sustaining high read and write throughput.
  • Cassandra is also known for its data replication across nodes, which lets users store data in multiple locations and recover it from another replica if one node fails. Users can configure the number of replicas (the replication factor).
  • It performs admirably on large datasets, making it the NoSQL database of choice for many businesses.
  • It operates on a column-oriented structure, which speeds up and simplifies slicing; data access and retrieval are more efficient with a column-based data model.
  • Furthermore, Apache Cassandra has a schema-free/schema-optional data model, so you do not have to define all of the columns that your application requires up front.

 Apache Kafka Interview Questions and Answers

Ques. 4): In Cassandra, explain the idea of adjustable consistency.

Answer:

Cassandra's tunable consistency is a feature that makes it popular among developers, analysts, and big data architects. Consistency means that all replicas have up-to-date and synchronised data rows. Tunable consistency allows users to choose the consistency level that best suits their needs. Cassandra supports two consistency models: eventual consistency and strong consistency.

The former ensures consistency when no new updates are made to a data item, i.e., all accesses eventually return the most recently modified value. Replica convergence is a term used to describe systems that achieve eventual consistency.

For strong consistency, Cassandra supports the following condition:

R + W > N, where:

N – Number of replicas

W – Number of nodes that need to agree for a successful write

R – Number of nodes that need to agree for a successful read
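To make the condition concrete: with a replication factor of N = 3, reading and writing at QUORUM means W = 2 and R = 2, so R + W = 4 > 3 and strong consistency is achieved. Below is a minimal, hedged sketch of requesting QUORUM from client code, assuming the DataStax Java driver 4.x; the keyspace and table names are made up.

[code lang="java"]
import com.datastax.oss.driver.api.core.ConsistencyLevel;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class QuorumReadSketch {
    public static void main(String[] args) {
        // Connects to 127.0.0.1:9042 by default; the datacenter name is an assumption.
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("datacenter1")
                .build()) {
            SimpleStatement stmt = SimpleStatement
                    .newInstance("SELECT * FROM demo_ks.users WHERE id = 1")
                    .setConsistencyLevel(ConsistencyLevel.QUORUM); // R = 2 when RF = 3
            session.execute(stmt);
        }
    }
}
[/code]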

 Apache Tapestry Interview Questions and Answers

Ques. 5): What is Cassandra's data storage method?

Answer:

All data is stored as bytes.

When you specify a validator, Cassandra ensures that the bytes are encoded correctly.

A comparator then orders the columns based on the ordering defined by that encoding.

Composites are simply byte arrays with a specific encoding: each component stores a two-byte length, the byte-encoded component, and a termination bit.

Apache Ambari interview Questions & Answers

Ques. 6): What is the definition of memtable?

Answer:

A memtable is a place where data is written and temporarily stored. Data is first appended to the commit log and then written to the memtable.

The memtable is part of Cassandra's storage engine. Each column family has its own memtable, and data in a memtable is organised by key and retrieved using that key. When a memtable fills up, its contents are flushed to disk as an SSTable and the memtable is cleared.

 Apache Hive Interview Questions & Answers

Ques. 7): Explain the Bloom Filter concept.

Answer:

A bloom filter is an off-heap (off the Java heap, in native memory) data structure associated with an SSTable. It checks whether the SSTable is likely to hold any data for a requested key before any disk I/O is performed.

 Apache Spark Interview Questions & Answers

Ques. 8): What are the functions of the shell commands "Capture" and "Consistency"?

Answer:

Cassandra has a number of Cqlsh shell commands. The command "Capture" saves the result of a command to a file, whereas the command "Consistency" shows the current consistency level or sets a new one.

 Apache NiFi Interview Questions & Answers

Ques. 9): What is the purpose of the read repair request?

Answer:

When the coordinator node sends a read request, it checks whether any of the replicas hold outdated data. Stale replicas are repaired in the background by replacing their data with the latest version. Read repair keeps data current and ensures that the requested row is consistent across all replicas.

 

Ques. 10): How does Cassandra write?

Answer:

Cassandra performs a write in two steps: it first appends the write to the commit log on disk and then writes it to an in-memory structure called the memtable. Once both steps succeed, the write is complete. Memtables are later flushed to disk as SSTables (sorted string tables), which store the data in table structure. This write path is why Cassandra is especially efficient at writes.

 

Ques. 11): What are the best Cassandra monitor tools?

Answer:

Although Cassandra has built-in fault-tolerance mechanisms, it still needs to be monitored for optimal results. The following tools are commonly used to monitor Cassandra databases:

  • SolarWinds Server & Application Monitor
  • Instana
  • Instaclustr
  • AppDynamics
  • Dynatrace
  • ManageEngine Applications Manager

 

Ques. 12): What is Cassandra- CQL collections?

Answer:

CQL collections let you store multiple values in a single column. Cassandra supports the following collection types (a short sketch follows below).

List: used when the order of elements needs to be maintained and a value may be stored more than once (holds an ordered list that allows duplicates).

SET: used to store a group of unique elements, which are returned in sorted order (does not hold repeating elements).

MAP: a data type used to store key–value pairs of elements.
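Below is a hedged sketch of a table that uses all three collection types, assuming the DataStax Java driver 4.x and an existing keyspace; the keyspace, table, and column names are made up.

[code lang="java"]
import com.datastax.oss.driver.api.core.CqlSession;

public class CollectionsSketch {
    public static void main(String[] args) {
        // Connects to 127.0.0.1:9042 by default; the datacenter name is an assumption.
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("datacenter1")
                .build()) {
            session.execute(
                "CREATE TABLE IF NOT EXISTS demo_ks.users ("
                + " id int PRIMARY KEY,"
                + " emails set<text>,"           // unique values, returned in sorted order
                + " phones list<text>,"          // ordered values, duplicates allowed
                + " settings map<text, text>)"); // key-value pairs
        }
    }
}
[/code]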

 

Ques. 13): What is Super Column in Cassandra?

Answer:

A super column in Cassandra is a special element that holds a collection of related columns. Super columns are key–value pairs whose values are columns: a sorted array of columns. In use they follow the hierarchy keyspace > column family > super column > column data structure (in JSON).

Like row keys, super column data entries carry no independent values of their own; they are used to group other columns. Note that super column keys appearing in different rows do not necessarily match.

 

Ques. 14): Describe the CAP Theorem.

Answer:

With a strong need to scale systems when new resources are required, the CAP theorem is critical to a scaling strategy's success and is a useful way to reason about scaling in distributed systems. The Consistency, Availability, and Partition Tolerance (CAP) theorem states that a distributed system such as Cassandra can guarantee only two of these three properties at the same time.

One of them has to be sacrificed. Consistency guarantees that the client receives the most recent write; availability guarantees a sensible response within a reasonable time; and partition tolerance means the system continues to operate even if network partitions occur. Since partitions must in practice be tolerated, the two realistic choices are AP and CP.

 

Ques. 15): What is the difference between Column and Super Column?

Answer:

Both elements work on the principle of tuples having name and value. However, the former’s value is a string, while the value of the latter is a map of columns with different data types.

Unlike Columns, Super Columns do not contain the third component of timestamp.

 

Ques. 16): What exactly is a Column Family?

Answer:

A column family, as the name implies, is a structure that can contain any number of rows. Each row holds a set of key–value pairs, where the key is the column name and the value is the column data. It is analogous to a hashmap in Java or a dictionary in Python (a conceptual sketch follows below). Remember that the columns in a row are not confined to a predefined list; the column family is extremely flexible, with one row having 100 columns and another having only two.
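To make the hashmap analogy concrete, here is a purely conceptual sketch; the row keys and column names are made up, and this is not how Cassandra stores data internally.

[code lang="java"]
import java.util.HashMap;
import java.util.Map;

public class ColumnFamilyAnalogy {
    public static void main(String[] args) {
        // Row key -> (column name -> column value); rows need not share the same columns.
        Map<String, Map<String, String>> columnFamily = new HashMap<>();
        columnFamily.put("row-1", Map.of("name", "Alice", "email", "alice@example.com"));
        columnFamily.put("row-2", Map.of("name", "Bob"));

        System.out.println(columnFamily.get("row-1").get("email")); // alice@example.com
    }
}
[/code]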

 

Ques. 17): Define the management tools in Cassandra.

Answer:

DataStax OpsCenter: an Internet-based management and monitoring solution for Cassandra clusters and DataStax. It is free to download, and an additional enterprise edition of OpsCenter is also available.

SPM primarily administers Cassandra metrics and various OS and JVM metrics. Besides Cassandra, SPM also monitors Hadoop, Spark, Solr, Storm, ZooKeeper, and other Big Data platforms. The main features of SPM include correlation of events and metrics, distributed transaction tracing, creating real-time graphs with zooming, anomaly detection, and heartbeat alerting.

 

Ques. 18): In Cassandra, explain the distinctions between a node, a cluster, and a data centre.

Answer:

Cassandra is made up of several parts. A node is a single machine running Cassandra, while a cluster is a collection of nodes that together hold the complete data set. Data centres are essential when serving customers in different parts of the world: the nodes of a cluster can be divided into multiple data centres.

 

Ques. 19): What is the purpose of the Bloom Filter in Cassandra?

Answer:

A bloom filter is a space-efficient data structure for testing whether an element belongs to a set. In other words, it is used to check whether an SSTable might contain data for a specific row, and it saves I/O when Cassandra performs a key lookup.
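To illustrate the general idea (this is not Cassandra's internal implementation), here is a hedged sketch that assumes Google Guava's BloomFilter is on the classpath; the keys and sizing numbers are illustrative.

[code lang="java"]
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class BloomFilterSketch {
    public static void main(String[] args) {
        // Expect roughly 10,000 keys with a 1% false-positive rate.
        BloomFilter<String> filter =
                BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 10_000, 0.01);

        filter.put("row-key-42");

        // mightContain can return false positives but never false negatives,
        // which is why an SSTable can safely be skipped when the answer is false.
        System.out.println(filter.mightContain("row-key-42")); // true
        System.out.println(filter.mightContain("row-key-99")); // almost certainly false
    }
}
[/code]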

 

Ques. 20): What exactly is SSTable? What makes it unique among relational tables?

Answer:

SSTable stands for 'Sorted String Table'. It is a key Cassandra data file to which memtables are regularly flushed. SSTables exist for each Cassandra table and are kept on disk. Because they are immutable, SSTables do not allow data to be inserted or removed once they have been written. For each SSTable, Cassandra creates three companion files: a partition index, a partition summary, and a bloom filter.

 

Top 20 Apache Ant Interview Questions and Answers

 

                        According to James Duncan Davidson, ANT stands for "Another Neat Tool." Ants are tiny, but they can carry a lot of weight, and the same is true of Apache Ant. Apache Ant is a command-line tool and Java library for automating software build processes. It arose from the Apache Tomcat project in the early 2000s and was built as a replacement for Unix's Make build tool because of a number of problems with make. Ant drives the processes or instructions described in build files as targets and extension points that depend on one another. Ant's best-known application is building Java applications, and it comes with a number of built-in tasks for compiling, assembling, testing, and running Java programs.

                        Ant can also be used to create non-Java applications, such as those written in C or C++. Ant can be used to pilot any process that can be specified in terms of targets and tasks in general. For developers looking for Apache Ant Interview questions and answers, we've compiled a list of possible Apache Ant Interview questions and answers.

Apache Tomcat Interview Questions and Answers

Ques: 1): What does the acronym ANT stand for?

Answer:

According to James Duncan Davidson, ANT is an acronym for "Another Neat Tool." Ants are little but strong, and Apache Ant's job is similar. Apache Ant is a Java library and command-line program for automating software build processes. It was developed in the early 2000s as part of the Apache Tomcat project, as a replacement for Unix's Make build tool, which had a number of shortcomings for the project. It drives the processes or instructions provided as targets and extension points in build files.

Apache Kafka Interview Questions and Answers

Ques. 2): What Are The Ant Concepts?

Answer:

Ant is a Java-based build tool. Its main characteristics are:

Open: Ant is an open-source project available under the Apache license, so its source code can be downloaded and modified. Additionally, Ant uses XML build files, which makes it easy to work with.

Cross Platform: The use of XML together with Java makes Ant the perfect solution for building programs that must run or be built across a range of different operating systems.

Extensible: New tasks can be written to extend the capabilities of the build process, and build listeners can hook into the build to add extra error-tracking functionality.

Integration: Because Ant is extensible and open, it can be integrated easily with any editor or development environment.

Apache Tapestry Interview Questions and Answers

Ques. 3): Why Is Ant A Fantastic Build Tool?

Answer:

Ant is a fantastic build tool for the following reasons:

  • Ant is a cross-platform, user-friendly, extensible, and scalable Java-based build tool.
  • Ant can be used on a small personal project as well as on a large, multi-team software development effort.
  • Ant syntax is simple to grasp.
  • The Ant syntax uses the XML format.
  • We only need to describe our tasks in the build.xml file.
  • Ant is simple to use.
  • On large Make-based software projects, a full-time makefile engineer is common; Ant eliminates that need.

Apache Ambari interview Questions & Answers

Ques. 4):  Explain Ant Functionality?

Answer :

Open: Ant is an open-source project available under the Apache license, so its source code can be downloaded and modified. Additionally, Ant uses XML build files, which makes it easy to work with.

Cross Platform: The use of XML together with Java makes Ant the perfect solution for building programs that must run or be built across a range of different operating systems.

Extensible: New tasks can be written to extend the capabilities of the build process, and build listeners can hook into the build to add extra error-tracking functionality.

Integration: Because Ant is extensible and open, it can be integrated easily with any editor or development environment.

Apache Hive Interview Questions & Answers

 Ques. 5):  How Do You Make An Ant User Interactive?

Answer:

User interaction is handled by the org.apache.tools.ant.input package.

User input is implemented through the InputHandler interface. The program creates an InputRequest object, which is passed to an InputHandler to perform the user input; invalid input is rejected.

handleInput(InputRequest request) is the only method on the InputHandler interface. It throws an org.apache.tools.ant.BuildException if the input is invalid.

Apache Spark Interview Questions & Answers

Ques. 6):  Explain the use of Ant with a small example.

Answer: Before we begin using Ant, we must be certain of the project name, the .java files, and, most crucially, the location of the .class files.

For example, we want to build the HelloWorld program with Ant. The Java source files are placed in the Dirhelloworld subdirectory, and the .class files should go in the Helloworldclassfiles subdirectory.

1. The build file by name build.xml is to be written. The script is as follows

<project name="HelloWorld" default="compiler" basedir=".">

<target name="compiler">

<mkdir dir="Helloworldclassfiles"/>

<javac srcdir="Dirhelloworld" destdir="Helloworldclassfiles"/>

</target>

</project>

2. Now run the ant script to perform the compilation:

C:\> ant

Buildfile: build.xml

and see the results in the extra files and directory created:

c:\>dir Dirhelloworld

c:\>dir Helloworldclassfiles

All the .java files are in the Dirhelloworld directory, and all the corresponding .class files are in the Helloworldclassfiles directory.

Apache NiFi Interview Questions & Answers

Ques. 7):  Explain How To Use Runtime In Ant?

Answer :

There is no need to use Runtime in Ant, because Ant has a Runtime counterpart named ExecTask. ExecTask lives in the package org.apache.tools.ant.taskdefs. In a custom Ant task, the task is created with the following code snippet:

ExecTask execTask = (ExecTask) project.createTask("exec");

 

Ques. 8):  How to do conditional statement in ant?

Answer:

There are several ways to solve this.

Since a target's if/unless attributes depend on whether some property is defined, you can use the <condition> task to define new properties that in turn depend on your existing Ant property values. This makes your Ant script very flexible, but a little harder to read.

Ant-contrib provides <if> and <switch> tasks you can use.

Ant-contrib also has <propertyregex>, which can express very complicated decisions.

 

Ques. 9):  How many different ways are there to set properties in a Build Ant file?

Answer:

There are six different ways to set properties:

Both the name and value attributes must be provided.

For example: <property name="src.dir" value="src"/> (the value shown is illustrative).

Both the name and the refid property must be provided.

Setting the filename of the property file to load as the file attribute.

Setting the url property to the url from which the properties should be loaded.

Setting the resource attribute to the property file's resource name to load.

Setting a prefix for the environment attribute.

All of the above can be combined in our build files.

However, only one should be used at any given moment.

 

Ques. 10):  How Can We Make A Jar With Ant?

Answer:

To make a jar of classes, use a jar target. This target requires creating the directory in which the jar will be kept. The jar tag completes the jar; we pass it two attributes: the first names the destination jar file and the second names the base directory that contains all of our class files. A manifest is also needed: either point the jar task's manifest attribute at a manifest file, or use a nested manifest element whose attribute entries each take a name and a value.

 

Ques. 11): What Exactly Is Ivy?

Answer:

Ivy is a well-known dependency manager. Ivy is primarily concerned with flexibility and simplicity.

Ivy 2.1.0 is the most recent version.

Highlights of the 2.1.0 release include:

  • Enhanced Maven2 compatibility, with several bug fixes and more complete POM support.
  • Several bug fixes and improvements, as listed in Jira and the release notes.
  • Additional options for the Ivy Ant tasks and the command line.
  • Configuration intersections and configuration groups.

 

Ques. 12): What method does ant use to read properties? How do I set up my property management system?

Answer:

Ant sets properties sequentially, so once a property is set, later properties with the same name cannot replace the earlier one. This is the opposite of Java setters. It allows us to preset all properties in one location and only override the ones that are needed. For example, suppose a task needs a password that you don't want to share with anyone on your team or with outside engineers.

Store your password in your ${user.home}/prj.properties

pswd=yourrealpassword

In your include directory master prj.properties

pswd=password

In your build-common.xml read properties files in this order

The commandline will prevail, if you use it: ant -Dpswd=newpassword

${user.home}/prj.properties (personal)

yourprojectdir/prj.properties (project team wise)

your_master_include_directory/prj.properties (universal)

[code lang="java"]<cvsnttask password="${pswd}" … />[/code]

 

Ques. 13): How can I use ant to perform a command from the command line? How can I get the outcome of a perl script running?

Answer:

Use the <exec> Ant task.

Remember that Ant is a Java program; that is what makes it so useful, powerful, and adaptable. To run a Unix command (such as a Perl script) and capture its result, run it through <exec> on Unix; the same approach works on MS-Windows. Ant simply helps you automate the procedure.

 

Ques. 14): How to copy files without an extension?

Answer:

If files are in the directory:

[code lang=”xml”]<include name="a,b,c"/>[/code]

If files are in the directory or subdirectories:

[code lang=”xml”]<include name="**/a,**/b,**/c"/>[/code]

If you want all files without extension are in the directory or subdirectories:

[code lang=”xml”]<exclude name="**/*.*"/>[/code]

 

Ques. 15): How can I troubleshoot my Ant script?

Answer:

There are a variety of options.

Add an <echo> in the places where you are unsure; you will quickly figure out what the problem is, much like the classic printf() in C or System.out.println() in Java.

In a <script> block or a custom Ant task, use project.log("msg"). You can also run Ant with -verbose or -debug to learn more about what it is doing and where, although you may tire of it quickly because it produces a lot of output.

 

Ques. 16): Why did I get such warnings in Ant?

Answer:

compile:

[javac] Warning: commons-logging.properties modified in the future.

[javac] Warning: dao\\DAO.java modified in the future.

[javac] Warning: dao\\DBDao2.java modified in the future.

[javac] Warning: dao\\HibernateBase.java modified in the future.

Possible causes of the system time problem include:

You altered the system's clock.

I encountered the same issue before: when I checked files out of CVS on Windows and transferred them to a Unix machine, I received a large number of these warnings because of the time difference.

You will run into the same trouble when transferring files across time zones, for example from Australia, China, or India to the United States; I have done it before and hit the issue.

 

Ques. 17): How can I dynamically add pieces to an existing path?

Answer:

Yes, this is possible. You must, however, create a custom Ant task, obtain the path, add to or modify it, and then use it. What I do is define a path reference to lib.classpath and then use my own task to add to or modify lib.classpath.

 

Ques. 18): How can I reorganize my jar/war/ear/zip file's directory structure? Is it necessary for me to unarchive them first?

Answer:

No, you do not need to unarchive them first. You do not need to unzip the files from the old archive to put them into your destination jar/ear/war file.

To extract files from an old archive into a different directory in your new archive, use zipfileset in your jar/war/ear task.

You can also use zipfileset in the jar/war/ear task to place files from a local directory into a new location in the archive.

See the follow example:

[code lang=”java”] <jar destfile="${dest}/my.jar">

<zipfileset src="old_archive.zip" includes="**/*.properties" prefix="dir_in_new_archive/prop"/>

<zipfileset dir="curr_dir/abc" prefix="new_dir_in_archive/xyz"/>

</jar>[/code]

 

Ques. 19): How to exclude multi directories in copy or delete task?

Answer:

Here is an example.

[code lang=”xml”]<copy todir="${to.dir}" >

<fileset dir="${from.dir}" >

<exclude name="dirname1" />

<exclude name="dirname2" />

<exclude name="abc/whatever/dirname3" />

<exclude name="**/dirname4" />

</fileset>

</copy>[/code]

  

Ques. 20): How do I get started to use ant? Can you give me a “Hello World” ant script?

Answer:

Download the most recent version of ant from Apache; unzip it somewhere on your machine.

Install j2sdk 1.4 or above.

Set JAVA_HOME and ANT_HOME to the directories where you installed them, respectively.

Put %JAVA_HOME%/bin;%ANT_HOME%/bin on your Path. Use ${JAVA_HOME}/bin:${ANT_HOME}/bin on UNIX. Yes, you can use forward slash on windows.

Write a “Hello world” build.xml

[code lang=”xml”]

<project name="hello" default="say.hello" basedir="." >

<property name="hello.msg" value="Hello, World!" />

<target name="say.hello" >

<echo>${hello.msg}</echo>

</target>

</project>[/code]

Type ant in the directory where your build.xml is located.




January 03, 2022

Top 20 Apache Tomcat Interview Questions and Answers

  

       Tomcat is a Java Servlet container and web server developed by the Apache Software Foundation's Jakarta project. Client browsers send requests to a web server, which responds with web pages, and web servers can generate dynamic content based on the user's requests. Tomcat excels at this because it supports both Java Servlet and JavaServer Pages (JSP) technologies. Tomcat can be used as a web server for a variety of applications, and it also provides a free servlet and JSP engine. It can run on its own or alongside conventional web servers such as Apache httpd, which deliver static pages while Tomcat handles dynamic servlet and JSP requests.

    Apache Tomcat is an open-source implementation of the Java Servlet, JavaServer Pages, Java Expression Language, and Java WebSocket specifications. Many firms hire DevOps engineers, Apache Tomcat administrators, Linux Apache Tomcat specialists, and Hadoop developers at varying levels of experience. Apache is the most popular web server, and you must be familiar with it if you plan to work as a middleware/system/web administrator. Apache HTTP Server is a free and open-source web server that runs on Windows and Linux.

 Apache Kafka Interview Questions and Answers

Ques. 1): Who is in charge of Tomcat?

Answer:

The Apache Software Foundation is the correct answer. The Apache Software Foundation is a non-profit organisation that oversees several Open Source projects.

The Apache Software Foundation's Java-based projects are referred to as Jakarta.

Tomcat is an Apache Jakarta project that manages server-side Java (in the form of Servlets and JSPs). Tomcat is the "reference" implementation of the Servlet and JSP specifications, which means that anything that runs in Tomcat should run in any compliant Servlet / JSP container.

 Apache Tapestry Interview Questions and Answers

Ques. 2): Difference between apache and apache-tomcat server?

Answer: 

Apache: Apache is mostly used to serve static content, but there are numerous add-on modules (some of which are included with Apache) that allow it to modify the content and serve dynamic content written in Perl, PHP, Python, Ruby, and other languages.

Apache is an HTTP server that serves HTTP requests.

Tomcat is a servlet/JSP container developed by Apache. It's written in the Java programming language. Although it can provide static information, its primary function is to host servlets and JSPs.

JSP files (which are comparable to PHP and older ASP files) are converted into Java code (an HttpServlet), which is then compiled into .class files and run by the Java virtual machine on the server.

Apache Tomcat is used to deploy your Java Servlets and JSPs. So in your Java project, you can build your WAR (short for Web ARchive) file, and just drop it in the deploy directory in Tomcat.

Although it is possible to get Tomcat to run Perl scripts and the like, you wouldn’t use Tomcat unless most of your content was Java.

Tomcat is a Servlet and JSP Server serving Java technologies

 Apache Ambari interview Questions & Answers

Ques. 3):  What exactly is Coyote?

Answer:

Coyote is a Tomcat Connector component that acts as a web server and supports the HTTP 1.1 protocol. This enables Catalina, which is ostensibly a Java Servlet or JSP container, to additionally serve local files as HTTP documents.

Coyote monitors a specific TCP port for incoming connections to the server and transmits the request to the Tomcat Engine, which processes the request and returns a response to the requesting client.

Coyote is Tomcat's HTTP connector, which offers an interface for browsers to connect to.

 Apache Hive Interview Questions & Answers

Ques. 4): What is a servlet container?

Answer:

A servlet container is a web server component that communicates with Java servlets. The servlet container is in charge of managing servlet lifecycles, mapping URLs to specific servlets, and ensuring that the URL requester has the appropriate access privileges.

Requests to servlets, JavaServer Pages (JSP) files, and other types of files containing server-side code are handled by the servlet container. The Web container generates servlet instances, loads and unloads servlets, creates and manages request and response objects, and handles other servlet-related operations.

The web component contract of the Java EE architecture is implemented by the servlet container, which defines a runtime environment for web components that includes security, concurrency, lifecycle management, transaction, deployment, and other services.

Apache Spark Interview Questions & Answers 

Ques. 5): How Do I Can Change The Default Home Page Loaded By Tomcat?

Answer :

We can easily override the home page by adding a welcome-file-list to the application's $TOMCAT_HOME/webapps/<app>/WEB-INF/web.xml file or by editing the container-wide $TOMCAT_HOME/conf/web.xml.

In $TOMCAT_HOME/conf/web.xml, it may look like this:

<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>

If the request URI refers to a directory, the default servlet looks for a "welcome file" within that directory in the following order: index.html, index.htm, and index.jsp.

Apache NiFi Interview Questions & Answers 

Ques. 6): what is a difference between Apache and Nginx web server?

Answer:

Both are classified as Web Servers, but there are a few key differences. Nginx is an event-driven web server, whereas Apache is a process-driven web server.

Nginx has a reputation for being faster than Apache.

Whereas Nginx does not support OpenVMS or IBMi, Apache supports a wide range of operating systems.

Nginx is still catching up to Apache in terms of module interoperability with backend application servers.

Nginx is a lightweight web server that is rapidly gaining market share.

 Apache Ant Interview Questions and Answers

Ques. 7): How Do You Create Multiple Virtual Hosts?

Answer :

If you want Tomcat to accept requests for different hosts, e.g. www.myhostname.com, then you must:

Create ${catalina.home}/www/appBase, ${catalina.home}/www/deploy, and ${catalina.home}/conf/Catalina/www.myhostname.com

Add a host entry in the server.xml file

Create the following file under conf/Catalina/www.myhostname.com/ROOT.xml

Add any parameters specific to this host's webapp to this context file

Put your war file in ${catalina.home}/www/deploy

When tomcat starts, it finds the host entry, then looks for any context files and will start any apps with a context.

 

Ques. 8): In Apache Tomcat, what is Catalina?

Answer:

Once Jasper has completed the compilation, it turns JSP into a servlet, which Catalina can then manage. Catalina is a servlet container for Tomcat. It also implements all of the Java server page and servlet specs. Catalina is a Java engine embedded into Tomcat that provides an efficient environment for servlets to execute in.

 

Ques. 9): What exactly do you mean by Tomcat's default port, and can it be used with SSL?

Answer:

Tomcat uses port 8080 as its default port. You can change it by editing the server.xml file in the conf folder of the Tomcat install directory: adjust the Connector element's port attribute (port="8080") to the desired value and restart Tomcat for the change to take effect.

Tomcat can use SSL, but it will require some configuration. You must complete the following tasks:

Generate a keystore

Then add a connector in server.xml

Restart Tomcat

 

Ques. 10): What is a mod_evasive module, and what does it do?

Answer:

Mod_evasive is a third-party module that accomplishes one simple task really well. It identifies when your site is under attack by a Denial of Service (DoS) attack and mitigates the harm that the attack causes. When a single client makes repeated requests in a short period of time, mod evasive recognises this and refuses additional requests from that client. The ban can last for a very short time because it is simply reissued the following time a request is discovered from that same host.

 

Ques. 11): Explain Directory Structure Of Tomcat?

Answer :

Directory structure of Tomcat are:

bin - contains startup, shutdown, and other scripts (*.sh for UNIX and *.bat for Windows systems) and some jar files.

conf - server configuration files (including server.xml) and related DTDs. The most important file here is server.xml, the main configuration file for the container.

lib - contains JARs used by the container and by the Servlet and JSP application programming interfaces (APIs).

logs - log and output files.

webapps - deployed web applications reside here.

work - temporary working directories for web applications, mostly used during JSP compilation, when a JSP is converted to a Java servlet.

temp - directory used by the JVM for temporary files.

 

Ques. 12): Explain How Running Tomcat As A Windows Service Provides Benefits?

Answer :

Running Tomcat as a Windows service provides benefits such as:

Automatic startup: crucial for environments where you may want to remotely restart a system after maintenance.

Server startup without an active user login: Tomcat often runs on blade servers that may not even have an active monitor attached to them. Windows services can be started without an active user.

Security: running Tomcat as a Windows service lets you run it under a special system account, which is protected from the rest of the user accounts.

 

Ques. 13): How Do Servlet Life Cycles Work?

Answer:

The life cycle of a typical Tomcat servlet is as follows (a minimal code sketch follows below):

Tomcat receives a request from a client through one of its connectors.

Tomcat processes the request and routes it to the appropriate servlet.

Once the request has been routed, Tomcat checks whether the servlet class has been loaded. If it has not, Tomcat loads the servlet class into the JVM and creates a servlet instance.

Tomcat starts the servlet by invoking its init method. The servlet can inspect Tomcat configuration, take appropriate action, and declare any resources it might need.

Once the servlet has been initialised, Tomcat calls the servlet's service method to process the request.

During the servlet's lifecycle, Tomcat and the servlet can coordinate or communicate through listener classes, which track the servlet for a variety of state changes.

To remove the servlet, Tomcat calls the servlet's destroy method.
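The sketch below shows the lifecycle hooks Tomcat calls; the class name and URL pattern are made up, and it assumes the javax.servlet API used by Tomcat 9 and earlier (Tomcat 10+ uses the jakarta.servlet packages instead).

[code lang="java"]
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class LifecycleDemoServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Called once by Tomcat after the servlet instance is created.
        getServletContext().log("init: acquire resources here");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Called (via service) for each GET request routed to this servlet.
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from Tomcat");
    }

    @Override
    public void destroy() {
        // Called once when Tomcat unloads the servlet.
        getServletContext().log("destroy: release resources here");
    }
}
[/code]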

 

Ques. 14): In Tomcat, what is the difference between a host and a context?

Answer:

In Tomcat, the host is a component. It's a network name association for the server. On the other hand, context is an element that indicates a web application that is running on a certain virtual host. Web applications are built on top of a Web Application Archive (WAR) file or a corresponding directory that contains all of the unpacked content indicated in the servlet description.

 

Ques. 15): What Is The Distinction Between A Webserver And An Application Server?

Answer:

The main distinction between a web server and an application server is that a web server can only execute web applications (servlets and JSPs) and has just one container, the web container, which interprets and executes web applications. An application server can run enterprise applications (servlets, JSPs, and EJBs);

it has two containers:

Web container (for interpreting/executing servlets and JSPs)

EJB container (for executing EJBs).

It can also perform operations such as load balancing, transaction demarcation, etc.

 

Ques. 16): Apart from Apache Tomcat, what are the different kinds of Web Servers?

Answer:

There are many web servers as mentioned below:

LiteSpeed Web Server

GWS Web Server

Microsoft IIS Web Server

Nginx Web Server

Jigsaw Web Server

Sun Java System Web Server

Lighttpd Web Server

 

Ques. 17): How to limit upload size?

Answer:

I have a web application that allows users to upload files such as word documents, pdf and so on.  How do I limit file upload by users?

You can make use of the LimitRequestBody directive to limit upload file size.

<Directory "/usr/local/apache2/uploads">

LimitRequestBody 9000

</Directory>

The value assigned to LimitRequestBody allows Apache to accept and store file uploads of up to 9000 bytes. You can adjust the value based on your requirements.

 

Ques. 18): Explain how to use WAR files to deploy a web application.

Answer:

JSPs, servlets, and their associated files are placed under Tomcat's webapps directory in the appropriate subdirectories. You can combine all of the files of a web application into a single compressed file with the extension .war. A web application can be run by placing the WAR file in the webapps directory; when the server starts up, it extracts the contents of the WAR file into the proper webapps subdirectories.

 

Ques. 19): How can an Apache Service be stopped by its control script?

Answer:

The Apache Service is controlled using a script called the apachectl.

So, to stop the service, we need to run the below-mentioned commands.

#apachectl stop [for Ubuntu based system]

# /etc/init.d/httpd stop [for Red Hat based system]

 

Ques. 20): What is the purpose of the Listen property in Apache Tomcat?

Answer:

The Listen directive is very important for Apache and its administrators. If the server has multiple IP addresses and we want Apache to listen on only one of them, we must explicitly specify the IP and port in the Listen directive.

For example: Listen 10.10.10.20:80 (the port shown is illustrative) makes Apache listen only on that address and port.

 

 

Top 20 Apache Kafka Interview Questions and Answers

 

Apache Kafka is a free and open-source streaming platform. Kafka began as a messaging queue at LinkedIn, but it has since grown into much more. It is a flexible tool for working with data streams that can be used in a wide range of situations. Because Kafka is a distributed system, it can scale up and down as needed; to scale out, you simply add new Kafka nodes (servers) to the cluster.

Kafka can process a large volume of data in a short time. It also has low latency, allowing for real-time data processing. Although Apache Kafka is written in Scala and Java, it can be used from a wide range of programming languages.


Apache Hive Interview Questions & Answers


Ques. 1): What exactly do you mean when you say "confluent kafka"? What are the benefits?

Answer:

Confluent is an Apache Kafka-based data streaming platform that can do more than publish and subscribe: it can also store and process data within the stream. Confluent Kafka is a more extensive distribution of Apache Kafka. It improves Kafka's integration capabilities by adding tools for optimising and maintaining Kafka clusters, as well as methods for ensuring the security of the streams. The Confluent Platform makes Kafka simple to set up and use. Confluent's software is available in three flavours:

A free, open-source streaming platform that makes working with real-time data streams easy;

A paid, cloud-based version with additional administration, operations, and monitoring features;

An enterprise-grade version with further administration, operations, and monitoring tools.

Following are the advantages of Confluent Kafka :

  • It features practically all of Kafka's characteristics, as well as a few extras.
  • It greatly simplifies the administrative operations procedures.
  • It relieves data managers of the burden of thinking about data relaying.


Apache Ambari interview Questions & Answers


Ques. 2): What are some of Kafka's characteristics?

Answer:

The following are some of Kafka's most notable characteristics:-

  • Kafka is a fault-tolerant messaging system with a high throughput.
  • A topic is a built-in partitioning system in Kafka.
  • Kafka also comes with a replication mechanism.
  • Kafka is a distributed messaging system that can manage massive volumes of data and transfer messages from one sender to another.
  • The messages can also be saved to storage and replicated across the cluster using Kafka.
  • Kafka works with Zookeeper for synchronisation and collaboration with other services.
  • Kafka provides excellent support for Apache Spark.


Apache Tapestry Interview Questions and Answers


Ques. 3): What are some of the real-world usages of Apache Kafka?

Answer:

The following are some examples of Apache Kafka's real-world applications:

Message broker: Because Apache Kafka has a high throughput, it can handle a large number of messages of similar types. Apache Kafka can be used as a publish-subscribe messaging system that makes it simple to read and publish data.

Website activity tracking: Apache Kafka can verify that data about page views and user actions is successfully delivered and received, and it can handle the huge volumes of data websites generate for each page and user action.

Operational metrics: Apache Kafka can be used to monitor operational data and metrics related to particular technologies, such as security logs.

Data logging: Apache Kafka replicates data between nodes, which can be used to restore data on failed nodes. It can also collect data from various logs and make it available to consumers.

Stream processing with Kafka: Apache Kafka can also handle streaming data: data that is read from one topic, processed, and then written to another. Users and applications then have access to a new topic containing the processed data.


Apache NiFi Interview Questions & Answers


Ques. 4): What are some of Kafka's disadvantages?

Answer:

The following are some of Kafka's drawbacks:

  • Kafka's performance suffers when messages need to be modified; Kafka works best when messages are immutable.
  • Kafka does not support wildcard topic selection; you must use the exact topic name.
  • When dealing with large messages, brokers and consumers degrade Kafka's performance by compressing and decompressing them, which affects throughput.
  • Kafka does not support certain message paradigms, such as point-to-point queues and request/reply.
  • Kafka lacks a comprehensive set of monitoring tools.


Apache Spark Interview Questions & Answers


Ques. 5): What are the use cases of Kafka monitoring?

Answer:

The following are some examples of Kafka monitoring use cases:

  • Monitor the use of system resources: track the usage of system resources such as memory, CPU, and disk over time.
  • Monitor threads and JVM usage: Kafka relies on the Java garbage collector to free memory, so garbage-collection activity should be watched to keep the Kafka cluster healthy.
  • Keep an eye on broker, controller, and replication statistics so that partition and replica states can be adjusted as needed.
  • Identify which applications are producing excessive demand; spotting performance bottlenecks helps resolve performance issues quickly.

 

Ques. 6): What is the difference between Kafka and Flume?

Answer:

Flume's main use case is ingesting data into Hadoop. It is tightly integrated with Hadoop's monitoring system, file formats, file system, and tools such as Morphlines. Flume is the best option when working with non-relational data sources or when streaming large files into Hadoop.

Kafka's main use case is as a distributed publish-subscribe messaging system. Kafka was not created specifically for Hadoop, so using it to gather and analyse data for Hadoop is considerably more involved than using Flume.

Kafka is the right choice when a highly reliable and scalable enterprise messaging system is required, for example to feed data into a platform such as Hadoop.

 

Ques. 7): Explain the terms "leader" and "follower."

Answer:

In Kafka, each partition has one server that acts as a Leader and one or more servers that operate as Followers. The Leader is in charge of all read and write requests for the partition, while the Followers are responsible for passively replicating the leader. In the case that the Leader fails, one of the Followers will assume leadership. The server's load is balanced as a result of this.

 

Ques. 8): What are the traditional methods of message transfer? How is Kafka better from them?

Answer:

The classic techniques of message transmission are as follows:

Message queuing:

The message-queuing pattern uses a point-to-point approach. A message in the queue is discarded once it has been consumed, much as a message in the Post Office Protocol is removed from the server once it has been delivered. These queues allow asynchronous messaging.

If a network problem prevents a message from being delivered, for example when a consumer is unavailable, the message is queued until it can be transmitted. As a result, messages are not always delivered in the same order; they are instead distributed on a first-come, first-served basis, which in some cases can improve efficiency.

Publish-subscribe model:

The publish-subscribe pattern involves publishers producing ("publishing") messages in multiple categories and subscribers consuming published messages from the categories to which they subscribe. Unlike point-to-point messaging, a message is only removed once it has been consumed by all category subscribers.

Kafka generalises both approaches through a single consumer abstraction, the consumer group (see the sketch after this list). The advantages of adopting Kafka over standard message transfer mechanisms are as follows:

Scalable: Data is partitioned and streamed across a cluster of machines, which increases storage capacity.

Faster: A single Kafka broker can handle megabytes of reads and writes per second, allowing it to serve thousands of clients.

Durability and fault tolerance: the data is kept persistent and tolerant of hardware failures by replicating it across the cluster.
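The sketch below shows a consumer joining a consumer group, assuming the Java kafka-clients library; the broker address, group id, and topic name are made-up values.

[code lang="java"]
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerGroupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group"); // consumers sharing this id split the partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
[/code]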

  

Ques. 9): What is a Replication Tool in Kafka? Explain how to use some of Kafka's replication tools.

Answer:

The Kafka replication tools are used to manage replicas at a high level. Some of the available replication tools are:

Preferred Replica Leader Election Tool: partitions are distributed to multiple brokers in a cluster, and each copy is known as a replica; the replica that should lead is called the preferred replica. The brokers generally distribute the leader role fairly across the cluster, but failures, planned shutdowns, and other circumstances can create an imbalance over time. This tool can be used to restore the balance in such cases by reassigning the preferred replicas, and hence the leaders.

Topics tool: The Kafka topics tool is in charge of all administration operations relating to topics, including:

  • Listing and describing topics.
  • Creating topics.
  • Modifying topics.
  • Adding partitions to a topic.
  • Deleting topics.

Tool to reassign partitions: The replicas assigned to a partition can be changed with this tool. This refers to adding or removing followers from a partition.

StateChangeLogMerger tool: The StateChangeLogMerger tool collects data from brokers in a cluster, formats it into a central log, and aids in the troubleshooting of state change issues. Sometimes there are issues with the election of a leader for a particular partition. This tool can be used to figure out what's causing the issue.

Change topic configuration tool: used to create new configuration choices, modify current configuration options, and delete configuration options.

 

Ques. 10):  Explain the four core API architecture that Kafka uses.

Answer:

Following are the four core APIs that Kafka uses:

Producer API:

The Producer API in Kafka allows an application to publish a stream of records to one or more Kafka topics.

Consumer API:

The Kafka Consumer API allows an application to subscribe to one or more Kafka topics. It also allows the programme to handle streams of records generated in connection with such topics.

Streams API: The Kafka Streams API allows an application to process data in Kafka using a stream processing architecture. This API allows an application to take input streams from one or more topics, process them with streams operations, and then generate output streams to send to one or more topics. In this way, the Streams API allows you to turn input streams into output streams.

Connect API:

The Kafka Connector API connects Kafka topics to applications. This opens up possibilities for constructing and managing the operations of producers and consumers, as well as establishing reusable links between these solutions. A connector, for example, may capture all database updates and ensure that they are made available in a Kafka topic.
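For illustration, here is a minimal sketch of the Producer API, assuming the Java kafka-clients library; the broker address and the topic name are made-up values.

[code lang="java"]
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Publish a single record to a topic, then flush and close the producer.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key-1", "hello kafka"));
            producer.flush();
        }
    }
}
[/code]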

  

Ques. 11): Is it possible to utilise Kafka without Zookeeper?

Answer:

As of version 2.8, Kafka can now be utilised without ZooKeeper. When Kafka 2.8.0 was released in April 2021, we all had the opportunity to check it out without ZooKeeper. This version, however, is not yet ready for production and is missing a few crucial features.

It was not feasible to connect directly to the Kafka broker without using Zookeeper in prior versions. This is because the Zookeeper is unable to fulfil client requests when it is down.

 

Ques. 12): Explain Kafka's concept of leader and follower.

Answer:

Each partition in Kafka has one server acting as a Leader and one or more servers acting as Followers. The Leader is in control of the partition's read and write requests, while the Followers are in charge of passively replicating the leader. If the Leader is unable to lead, one of the Followers will take over. As a result, the server's load is balanced.

 

Ques. 13): In Kafka, what is the function of partitions?

Answer:

From the standpoint of the Kafka broker, partitions allow a single topic to be spread across many servers, so a topic can hold more data than would fit on a single server. If you have three brokers and need to store 10 TB of data in a topic, you could create a topic with only one partition and store the entire 10 TB on one broker, or create a topic with three partitions and distribute the 10 TB across all the brokers. From the consumer's perspective, a partition is a unit of parallelism (see the sketch below for creating a partitioned topic).
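The sketch below creates a three-partition topic with the AdminClient, assuming the Java kafka-clients library; the broker address, topic name, and replication factor are made-up values.

[code lang="java"]
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Three partitions spread the topic across brokers; replication factor 3 keeps copies.
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
[/code]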

 

Ques. 14): In Kafka, what do you mean by geo-replication?

Answer:

Geo-replication is a feature in Kafka that allows you to copy messages from one cluster to a number of other data centres or cloud locations. You can use geo-replication to replicate all of the files and store them all over the world if necessary. Using Kafka's MirrorMaker Tool, we can achieve geo-replication. We can ensure data backup without fail by employing the geo-replication strategy.

 

Ques. 15): Is Apache Kafka a platform for distributed streaming? What are you going to do with it?

Answer:

Yes. Apache Kafka is a platform for distributed streaming data. Three critical capabilities are included in a streaming platform:

  • We can easily push records using a distributed streaming infrastructure.
  • It has a large storage capacity and allows us to store a large number of records without difficulty.
  • It helps us process records as they arrive.

Kafka allows us to do the following:

  • We can create real-time data pipelines with Apache Kafka to move data between two systems.
  • We can also build a real-time streaming platform that reacts to the data.

 

Ques. 16): What is Apache Kafka Cluster used for?

Answer:

Apache Kafka Cluster is a messaging system that is used to overcome the challenges of gathering and processing enormous amounts of data. The following are the most important advantages of Apache Kafka Cluster:

We can track web activities using Apache Kafka Cluster by storing/sending events for real-time processes.

We may use this to both alert and report on operational metrics.

We can also use Apache Kafka Cluster to transform data into a common format.

It enables the processing of streaming data to the subjects in real time.

Thanks to these outstanding characteristics, it is often chosen over popular alternatives such as ActiveMQ, RabbitMQ, and AWS messaging services.

 

Ques. 17): What is the purpose of the Streams API?

Answer:

Streams API is an API that allows an application to function as a stream processor, ingesting an input stream from one or more topics and providing an output stream to one or more output topics, as well as effectively changing the input streams to output streams.

 

Ques. 18): In Kafka, what do you mean by graceful shutdown?

Answer:

Any broker shutdown or failure is detected automatically by the Kafka cluster. In that case, new leaders are elected for the partitions previously handled by that machine. This can occur because of a server failure, or when the server is shut down intentionally for maintenance or configuration changes. When a server is shut down on purpose, Kafka provides a graceful way of stopping it rather than killing it.

When a server is shut down gracefully, the following happens:

Kafka ensures that all of its logs are synced to disk, so no log recovery is needed when it is restarted. This speeds up intentional restarts, since log recovery takes time.

Before shutting down, leadership of all partitions for which the server is the leader is transferred to other replicas. The leadership transfer is therefore faster, and the period during which each partition is unavailable is reduced to a few milliseconds.

  

Ques. 19): In Kafka, what do the terms BufferExhaustedException and OutOfMemoryException mean?

Answer:

A BufferExhaustedException is thrown when the producer cannot allocate memory for a record because the buffer is full. If the producer is in non-blocking mode and, over an extended period, the rate of production exceeds the rate at which data is drained from the buffer, the available buffer is exhausted and the exception is thrown.

An OutOfMemoryException may occur if the consumer fetches large messages or if messages arrive faster than the downstream processing rate; the in-memory message queue then becomes overloaded and consumes the available RAM.

 

Ques. 20): How will you change the retention time in Kafka at runtime?

Answer:

A topic's retention time can be configured in Kafka. A topic's default retention time is seven days. While creating a new subject, we can set the retention time. When a topic is generated, the broker's property log.retention.hours are used to set the retention time. When configurations for a currently operating topic need to be modified, kafka-topic.sh must be used.

The right command depends on the Kafka version in use.

The command to use up to 0.8.2 is kafka-topics.sh --alter.

Use kafka-configs.sh --alter starting with version 0.9.0.
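For example, on a recent release the retention can be changed at runtime as follows; the topic name my-topic and the broker address are placeholders, and older releases take --zookeeper instead of --bootstrap-server.

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=604800000    # 7 days in milliseconds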

 


 

November 23, 2021

Top 20 AWS CloudWatch Interview Questions & Answers

  

Ques: 1). What Is Amazon CloudWatch and How Does It Work?

Answer:

CloudWatch is an AWS monitoring service that keeps track of your cloud resources and the applications you run on them. CloudWatch may be used to gather and track metrics, monitor log files, and generate alarms. EC2 instances, DynamoDB tables, and RDS DB instances may all be monitored with CloudWatch.

Amazon CloudWatch is a management tool for system architects, administrators, and developers, and it is part of the Amazon Web Services family.

 

AWS RedShift Interview Questions and Answers


Ques: 2). What's the difference between CloudTrail and CloudWatch, and how do I use them?

Answer:

CloudWatch monitors the health and performance of AWS services and resources and reports on them. CloudTrail, on the other hand, records the API calls and account activity that take place in your AWS environment.


AWS Lambda Interview Questions & Answers


Ques: 3). What platforms are compatible with CloudWatch Logs Agent?

Answer:

The CloudWatch logs agent is compatible with a wide range of operating systems and platforms. The following is a list of similar items:

  • CentOS
  • Amazon Linux
  • Ubuntu
  • Red Hat Enterprise Linux
  • Windows


AWS Cloud Support Engineer Interview Question & Answers


Ques: 4). What Are Amazon CloudWatch Logs?

Answer:

Using your existing system, application, and custom log files, Amazon CloudWatch Logs allows you to monitor and troubleshoot your systems and applications. With CloudWatch Logs you can monitor your logs in near real time for specific phrases, values, or patterns. For example, you could set an alarm on the number of errors in your system logs or look at graphs of web request latency from your application logs. The original log data can then be viewed to determine the source of the problem. You don't have to worry about filling up hard discs, because log data can be stored and accessed indefinitely in highly durable, low-cost storage.
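As an illustrative sketch, a metric filter can turn error lines in a log group into a metric that an alarm can watch; the log group, filter, and namespace names below are placeholders.

aws logs put-metric-filter \
  --log-group-name my-app-logs \
  --filter-name error-count \
  --filter-pattern "ERROR" \
  --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1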


AWS Solution Architect Interview Questions & Answers


Ques: 5). What CloudWatch Access Management Policies Can I Implement?

Answer:

You can select which CloudWatch actions a user in your AWS Account can execute using CloudWatch's integration with AWS IAM. IAM cannot be used to restrict access to CloudWatch data for individual resources. You can't grant a person access to CloudWatch data for just one group of instances or a single LoadBalancer, for example. Permissions provided by IAM apply to all cloud resources used by CloudWatch. Furthermore, the Amazon CloudWatch command line tools do not support IAM roles.


AWS DevOps Cloud Interview Questions & Answers


Ques: 6). What is a CloudWatch Alarm, and how does it work?

Answer:

CloudWatch Alarms let you watch CloudWatch metrics and receive notifications when a metric moves outside the levels (high or low thresholds) you designate. Each metric can have several alarms, each with its own set of actions.

A CloudWatch alarm is always in one of three states: OK, ALARM, or INSUFFICIENT_DATA. When the metric is inside the permissible range you have defined, the alarm is in the OK state. When it crosses the threshold, it enters the ALARM state. When the data needed to make a decision is missing or incomplete, the alarm enters the INSUFFICIENT_DATA state.
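A hedged example of creating such an alarm with the AWS CLI; the alarm name, instance ID, threshold, and SNS topic ARN are placeholders.

aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-alerts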


AWS(Amazon Web Services) Interview Questions & Answers


Ques: 7). What Is The Average Metric Retention Period?

Answer:

The following is how CloudWatch stores metric data:

  • Data points with a period of less than 60 seconds (high-resolution custom metrics) are available for 3 hours.
  • Data points with a period of 60 seconds (1 minute) are available for 15 days.
  • Data points with a period of 300 seconds (5 minutes) are available for 63 days.
  • Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).

Data points published with a shorter period are aggregated together for long-term storage.


AWS Database Interview Questions & Answers


Ques: 8). When should I use a custom metric instead of sending a log to Cloudwatch Logs?

Answer:

Custom metrics, CloudWatch Logs, or both can be used to keep track of your data. If your data, such as OS process or performance measurements, is not already produced in log format, custom metrics may be the better fit; you can publish them from your own application or script, or use one provided by an AWS partner. CloudWatch Logs can be used to store the raw log data and supplementary information as well as to extract specific metrics from it.
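A minimal sketch of publishing a custom metric with the AWS CLI; the namespace, metric name, unit, and value are placeholders.

aws cloudwatch put-metric-data \
  --namespace MyApp \
  --metric-name PageLoadTime \
  --unit Milliseconds \
  --value 123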


ActiveMQ Interview Questions & Answers


Ques: 9). Is There Anything I Can Do With My CloudWatch Logs?

Answer:

CloudWatch Logs can monitor and store logs to help you understand and operate your systems and applications better. No code modifications are necessary when using CloudWatch Logs with your logs because your existing log data is used for monitoring.

 

Ques: 10). What is Amazon CloudWatch Synthetics, and how does it work?

Answer:

You may use Amazon CloudWatch Synthetics to create canaries, which are programmable scripts that run on a schedule, to monitor your endpoints and APIs. Canaries follow the same paths as customers and do the same actions, allowing you to validate your client experience even when there is no customer activity on your apps. Using canaries, you can notice problems before your customers do.

Synthetic monitoring is a technique for assessing a website or online service's availability, performance, and functionality by mimicking visitor queries.

 

Ques: 11). Is it possible to use regular expressions with log data?

Answer:

Regular expressions are not supported by CloudWatch Metric Filters. Consider using Amazon Kinesis and connecting the stream to a regular expression processing engine to handle your log data with regular expressions.

 

Ques: 12). What are canaries in Amazon CloudWatch Synthetics?

Answer:

Canaries are scripts written in Node.js or Python. They are created as Lambda functions in your account that use Node.js or Python as a framework. Canaries support both the HTTP and HTTPS protocols.

 

Ques: 13). How Do I Get My Log Data Back?

Answer:

Any of your log data can be retrieved with the CloudWatch Logs console or the CloudWatch Logs CLI. Log events are retrieved by the log group, log stream, and time range they belong to.
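A hedged CLI example of fetching log events; the log group name, log stream name, and epoch-millisecond timestamps are placeholders.

aws logs get-log-events \
  --log-group-name my-app-logs \
  --log-stream-name instance-1/app.log \
  --start-time 1640995200000 \
  --end-time 1641081600000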

 

Ques: 14). What Are the Different Thresholds I Can Use to Set a CloudWatch Alarm?

Answer:

When you create an alarm, you first select the CloudWatch metric it will track. You then choose an evaluation period and a statistic to evaluate. To create a threshold, set a target value and choose whether the alarm is triggered when the value is greater than, equal to, or less than that value.

 

Ques: 15). What is Amazon CloudWatch ServiceLens, and how does it work?

Answer:

Amazon CloudWatch ServiceLens is a tool that lets you visualise and analyse the health, performance, and availability of your applications in a single place. Amazon CloudWatch ServiceLens is supported in all public AWS Regions that offer AWS X-Ray.

 

Ques: 16). What are CloudWatch Metric Streams, and how can I use them?

Answer:

CloudWatch Metric Streams is a feature that lets you continuously stream CloudWatch metrics to a destination of your choice with very little setup and administration. It is a fully managed solution, so there is no code to write and no infrastructure to maintain. With a few clicks, users can set up a metric stream to destinations such as Amazon Simple Storage Service (S3). Users can also send the metrics to a variety of third-party service providers to keep their operational dashboards up to date.
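A hedged sketch of creating a metric stream from the CLI, assuming a Kinesis Data Firehose delivery stream and IAM role already exist; all names and ARNs below are placeholders.

aws cloudwatch put-metric-stream \
  --name my-metric-stream \
  --firehose-arn arn:aws:firehose:us-east-1:111122223333:deliverystream/my-delivery-stream \
  --role-arn arn:aws:iam::111122223333:role/my-metric-stream-role \
  --output-format json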

 

Ques: 17). What Can Amazon CloudWatch Metrics Tell Me?

Answer:

CloudWatch lets you monitor AWS cloud resources as well as the AWS services you use. Metrics are provided automatically for services and products such as EC2 instances, EBS volumes, ELBs, Auto Scaling groups, EMR job flows, RDS DB instances, DynamoDB tables, ElastiCache clusters, Redshift clusters, OpsWorks stacks, Route 53 health checks, SNS topics, SQS queues, SWF workflows, and Storage Gateways. You can also view custom metrics generated by your own applications and services.
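To see which metrics are available in a given namespace, they can be listed from the CLI; AWS/EC2 and CPUUtilization are just example values.

aws cloudwatch list-metrics --namespace AWS/EC2
aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization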

 

Ques: 18). How do I send CloudWatch metrics to Grafana?

Answer:

1. Install Grafana : Follow the steps to Install Grafana.

2. Go to AWS -> IAM -> Policies.

3. Add below JSON in policy -> Create Policy:

{

   "Version": "2021-10-23", -- current Date

   "Statement": [

       {

           "Sid": "AllowReadingMetricsFromCloudWatch",

           "Effect": "Allow",

           "Action": [

               "cloudwatch:ListMetrics",

               "cloudwatch:GetMetricStatistics",

               "cloudwatch:GetMetricData"

           ],

           "Resource": "*"

       },

       {

           "Sid": "AllowReadingTagsInstancesRegionsFromEC2",

           "Effect": "Allow",

           "Action": [

               "ec2:DescribeTags",

               "ec2:DescribeInstances",

               "ec2:DescribeRegions"

           ],

           "Resource": "*"

       }

   ]

}

4. IAM -> Roles -> Create Role -> Select AWS Service / EC2

5. Attach Permission policies

6. IAM -> Users -> Add User -> Attach existing policies -> copy the Access Key ID and Secret Access Key.

7. EC2 -> Instances-> Select Grafana Server and click on Actions -> Instance Settings -> Attach/Replace IAM Role -> Attach your Grafana IAM Role to the instance.

8. Log in to your Grafana server terminal as the root user and add the Access Key ID and Secret Access Key to a credentials file:

# vim /usr/share/grafana/.credentials

aws_access_key_id = 000000000000

aws_secret_access_key = 0000000000

region = us-west-2


# chmod 0644 /usr/share/grafana/.credentials

9. Grafana -> Navigate to Data Sources -> Select CloudWatch Type

10. Create Dashboard -> Select Graph -> Select Panel Title -> edit and provide namespace.


Ques: 19). Is it possible to use IAM roles with the CloudWatch logs agent?

Answer:

Yes. The CloudWatch Logs agent supports IAM and can authenticate with either access keys or IAM roles.

AWS Key Management Service (AWS KMS) is a managed service that integrates with a number of other AWS services. You can use it to create, store, and control the encryption keys used to encrypt your data.

 

Ques: 20). How does AWS CloudWatch handle authentication and access control?

Answer:

Use IAM users or roles to control who can authenticate (sign in).

To manage access control, use dashboard permissions, IAM identity-based policies, and service-linked roles. Permissions policies define who has access to what, and when. Two kinds of policies can be used:

  • Identity-based policies, which are attached to IAM users, groups, or roles
  • Resource-based policies, which are attached to a resource

You cannot use CloudWatch Amazon Resource Names (ARNs) in an IAM policy because CloudWatch does not have them. When writing a policy to control access to CloudWatch actions, use * (an asterisk) as the resource.
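As an illustrative identity-based policy (the action list is just an example), note the * resource described above:

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "cloudwatch:DescribeAlarms",
               "cloudwatch:GetMetricData",
               "cloudwatch:PutMetricAlarm"
           ],
           "Resource": "*"
       }
   ]
}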