April 28, 2022

Top 20 Apache Drill Interview Questions and Answers

 

Apache Drill is an open-source software framework that enables interactive analysis of huge datasets by data-intensive distributed applications. Drill is the open-source version of Google's Dremel technology, which Google offers as the infrastructure behind its BigQuery service. It supports a range of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Google Cloud Storage, Swift, NAS, and local files. Data from multiple datastores can be combined in a single query; for example, you can join a user-profile collection in MongoDB with a directory of Hadoop event logs.


Apache Kafka Interview Questions and Answers


Ques. 1): What is Apache Drill, and how does it work?

Answer:

Apache Drill is an open-source, schema-free SQL engine used to process massive data sets and the semi-structured data created by new-age big data applications. A great feature is Drill's plug-and-play integration with existing Hive and HBase installations. Apache Drill was inspired by Google's Dremel. It lets us get to data analysis faster, without worrying about the schema creation, loading, or other maintenance that used to be required in an RDBMS, and it makes it easy to examine multi-structured data.

Apache Drill is a schema-free SQL Query Engine for Hadoop, NoSQL, and Cloud Storage that allows us to explore, visualise, and query various datasets without needing to use ETL or other methods to fix them to a schema.

Apache Drill can also directly analyse multi-structured and nested data in non-relational data stores, without any data restrictions.

Apache Drill, the first distributed SQL query engine of its kind, includes a schema-free JSON document model similar to those used by:

  • Elasticsearch
  • MongoDB
  • other NoSQL databases

Apache Drill is very useful for professionals already working with SQL databases and BI tools such as Pentaho, Tableau, and QlikView.

Apache Drill also supports:

  • RESTful APIs,
  • ANSI SQL, and
  • JDBC/ODBC drivers
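Beyond JDBC/ODBC, Drill exposes a REST interface on its web port (8047 by default): SQL is POSTed as JSON to the /query.json endpoint. A minimal sketch of building such a request with the standard library (the localhost URL is an assumption for illustration; `cp.`employee.json`` is Drill's bundled sample dataset):

```python
import json
import urllib.request

# Drill's REST endpoint on the default web port; adjust the host for your cluster.
DRILL_URL = "http://localhost:8047/query.json"

def build_drill_request(sql: str) -> urllib.request.Request:
    """Build (but do not send) a Drill REST query request."""
    # Drill expects a JSON body of the form {"queryType": "SQL", "query": "..."}.
    payload = json.dumps({"queryType": "SQL", "query": sql}).encode("utf-8")
    return urllib.request.Request(
        DRILL_URL, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_drill_request("SELECT * FROM cp.`employee.json` LIMIT 5")
# To actually run the query against a live drillbit:
# with urllib.request.urlopen(req) as resp:
#     rows = json.loads(resp.read())["rows"]
```

The same payload shape works from any HTTP client, which is what makes Drill easy to wire into dashboards and visualisation tools.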


Apache Camel Interview Questions and Answers


Ques. 2): Is Drill a Good Replacement for Hive?

Answer:

Hive is a batch processing framework that is best suited for processes that take a long time to complete. Drill outperforms Hive when it comes to data exploration and business intelligence.

Drill is also not exclusive to Hadoop. It can, for example, query NoSQL databases (such as MongoDB and HBase) and cloud storage (e.g., Amazon S3, Google Cloud Storage, Azure Blob Storage, Swift).

Both Hive and Drill are used to query enormous datasets; Hive is best suited to batch processing of long-running jobs, whereas Drill is more advanced and offers a better interactive experience. Drill is also not limited to Hadoop: it can access and process data from other sources as well.


Apache Struts 2 Interview Questions and Answers


Ques. 3): What are the differences between Apache Drill and Druid?

Answer:

The primary distinction is that Druid pre-aggregates metrics to give low latency queries and minimal storage use.

You can't save information about individual events while using Druid to analyse event data.

Drill is a generic abstraction over a variety of NoSQL data stores. Because the values in these data stores are not pre-aggregated and are stored individually, they can be used for purposes other than storing aggregated metrics.

Drill does not provide the low latency queries required to create dynamic reporting dashboards.


Apache Spark Interview Questions and Answers


Ques. 4): What does Tajo have in common with Apache Drill?

Answer:

Tajo resembles Drill in appearance. They do, however, have a lot of differences. Their origins and eventual purposes are the most significant contrasts. Drill is based on Google's Dremel, whereas Tajo is based on the combination of MR and parallel RDBMS. Tajo's goal is a relational and distributed data warehousing system, whereas Drill's goal is a distributed system for interactive analysis of large-scale datasets.

As far as I'm aware, Drill has the following characteristics:

  • Drill is a Google Dremel clone project.
  • Its primary goal is to do aggregate queries using a full table scan.
  • Its main goal is to handle queries quickly.
  • It employs a hierarchical data model.

Tajo, on the other hand, has the following features:

  • Tajo combines the benefits of MapReduce and Parallel databases.
  • It primarily targets complex data warehouse queries and has its own distributed query evaluation approach.
  • Its major goal is scalable processing by exploiting the advantages of MapReduce and Parallel databases.
  • We expect that sophisticated query optimization techniques, intermediate data streaming, and online aggregation will significantly reduce query response time.
  • It utilizes a relational data model. We feel that the relational data model is sufficient for modelling the vast majority of real-world applications.
  • Tajo is expected to be linked with existing BI and OLAP software.


Apache Hive Interview Questions and Answers


Ques. 5): What are the benefits of using Apache Drill?

Answer:

Some of the most compelling reasons to use Apache Drill are listed below.

  • Simply untar Apache Drill and run it in local mode to get started; it requires no infrastructure installation or schema design.
  • Running SQL queries does not necessitate the use of a schema.
  • We can query semi-structured and complex data in real time with Drill.
  • The SQL:2003 syntax standard is supported by Apache Drill.
  • Drill can be readily linked with BI products like QlikView, Tableau, and MicroStrategy to give analytical capabilities.
  • We can use Drill to conduct an interactive query that will access the Hive and HBase tables.
  • Drill supports multiple data stores such as local file systems, distributed file systems, Hadoop HDFS, Amazon S3, Hive tables, HBase tables, and so on.
  • Apache Drill scales easily from a single system up to 1,000 nodes.


Apache Tomcat Interview Questions and Answers 


Ques. 6): What Are the Great Features of Apache Drill?

Answer:

Apache Drill's notable features include:

  • Schema-free JSON document model similar to MongoDB and Elasticsearch
  • Code reusability
  • Easy to use and developer friendly
  • High-performance Java-based API
  • Memory management system
  • Industry-standard APIs such as ANSI SQL, ODBC/JDBC, and RESTful APIs

Drill achieves its performance through:

  • Distributed query optimization and execution
  • Columnar execution
  • Optimistic execution
  • Pipelined execution
  • Runtime compilation and code generation
  • Vectorization


Apache Ambari interview Questions and Answers


Ques. 7): What are some of the things we can do with the Apache Web interface?

Answer:

The tasks that we can conduct through the Apache Drill Web interface are listed below.

  • SQL queries can be run from the Query tab.
  • Running queries can be stopped and restarted.
  • Executed queries can be reviewed through their query profiles.
  • The configured storage plugins can be viewed in the Storage tab.
  • Logs and stats can be viewed in the Logs tab.


Apache Tapestry Interview Questions and Answers


Ques. 8): What is Apache Drill's performance like? Does the number of lines in a query result affect its performance?

Answer:

We used Drill through its REST server, connected to a D3 visualisation, to query IoT data; the query commands (select and join) suffered from a lot of slowness, which went away when we switched to Spark SQL.

Drill is handy in that it can query most data sources, but it should be tested carefully before being used in production. (If you need something faster, you can likely find a quicker query engine.) For development and testing, though, it has been quite useful.


Apache Ant Interview Questions and Answers


Ques. 9): What Data Storage Plugins does Apache Drill support?

Answer:

The following is a list of Data Storage Plugins that Apache Drill supports.

  • File System Data Source Storage Plugin
  • HBase Data Source Storage Plugin
  • Hive Data Source Storage Plugin
  • MongoDB Data Source Storage Plugin
  • RDBMS Data Source Storage Plugin
  • Amazon S3 Data Source Storage Plugin
  • Kafka Data Source Storage Plugin
  • Azure Blob Data Source Storage Plugin
  • HTTP Data Source Storage Plugin
  • Elastic Search Data Source Storage Plugin


Apache Cassandra Interview Questions and Answers


Ques. 10): What's the difference between Apache Solr and Apache Drill, and how do you use them?

Answer:

The distinction between Apache Solr and Apache Drill is comparable to that between a spoon and a knife. In other words, despite the fact that they deal with comparable issues, they are fundamentally different instruments.

To put it plainly... Apache Solr is a search platform, while Apache Drill is a platform for interactive data analysis (not restricted to just Hadoop). Before performing searches with Solr, you must parse and index the data into the system. For Drill, the data is stored in its raw form (i.e., unprocessed) on a distributed system (e.g., Hadoop), and the Drill application instances (i.e., drillbits) will process it in parallel.


Apache NiFi Interview Questions and Answers


Ques. 11): What is the recommended performance tuning approach for Apache Drill?

Answer:

To tune Apache Drill's performance, a user must first understand the data, the query plan, and the data source. Once these areas are understood, the user can apply the performance tuning techniques below to improve query performance.

  • Change the query planning options if necessary.
  • Change the broadcast join options as needed.
  • Switch the aggregate between one and two phases.
  • The hash-based memory-constrained operators can be enabled or disabled.
  • We can activate query queuing based on your needs.
  • Take command of the parallelization.
  • Use partitions to organise your data.
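Most of the knobs above are toggled per session with Drill's `ALTER SESSION SET` statement. A sketch of how the list maps to option statements; the option names shown are typical Drill planner/exec options, but verify them against your Drill version before relying on them:

```python
def alter_session(option: str, value) -> str:
    """Render a Drill ALTER SESSION statement; option names are backtick-quoted."""
    if isinstance(value, bool):
        value = str(value).lower()  # Drill expects lowercase true/false
    return f"ALTER SESSION SET `{option}` = {value}"

# One statement per tuning step from the list above (option names assumed).
tuning = [
    alter_session("planner.enable_broadcast_join", False),  # broadcast join options
    alter_session("planner.enable_multiphase_agg", True),   # 1- vs 2-phase aggregate
    alter_session("planner.enable_hashagg", False),         # hash-based operators
    alter_session("exec.queue.enable", True),               # query queuing
    alter_session("planner.width.max_per_node", 4),         # control parallelization
]

print(tuning[0])
# ALTER SESSION SET `planner.enable_broadcast_join` = false
```

Using `ALTER SESSION` rather than `ALTER SYSTEM` keeps the experiment scoped to your connection instead of changing cluster-wide defaults.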


Apache Storm Interview Questions and Answers


Ques. 12): What should you do if an Apache Drill query takes a long time to deliver a result?

Answer:

Check the following points if a query from Apache Drill is taking too long to deliver a result.

  • Check the query's profile to determine whether the query is progressing; the last update and last change times indicate progress.
  • Look at where Apache Drill is spending its time and streamline that part of the plan.
  • Check whether partition pruning and projection pushdown operations are taking place.

 

Ques. 13): I'm using Apache Drill with one drillbit to query approximately 20 GB of data, and each query takes several minutes to complete. Is this normal?

Answer:

The performance of a single-drillbit setup is determined by the Java memory configuration and the resources available on the machine where the query runs. A WHERE clause also demands more work from the query engine, because it must identify the matching rows, which is why such queries are slower.

You can also alter the JVM parameters in the Drill configuration to devote more resources to your queries, which should produce faster results.

 

Ques. 14): How does Apache Drill compare to Apache Phoenix with Hbase in terms of performance?

Answer:

Because Drill is a distributed query engine, this is a fascinating question. In contrast, Phoenix implements RDBMS semantics in order to compete with other RDBMS. That isn't to suggest that Drill won't support inserts and other features... But, because they don't do the same thing right now, comparing their performance isn't really apples-to-apples.

Drill can query HBase and even push query parameters down into the database. Additionally, there is presently a branch of Drill that can query data stored in Phoenix.

Drill can query numerous data sources simultaneously, so if you choose to use Phoenix, you could logically use both to satisfy your business needs.

 

Ques. 15): Is Apache Drill 1.5 ready for usage in production?

Answer:

Drill is one of the most mature SQL-on-Hadoop solutions in general. As with all of the SQL-on-Hadoop solutions, it may or may not be the best fit for your use case. I mention that solely because I've heard of some extremely far-fetched use cases for Drill that aren't a good fit.

Drill will serve you well in your production environment if you wish to run SQL queries without "requiring" ETL first.

Any tool that supports the ODBC and JDBC connections can easily access it as well.

 

Ques. 16): Why doesn't Apache Drill get the same amount of attention as other SQL-on-Hadoop tools?

Answer:

Part of my job is keeping track of SQL-on-Hadoop tools and advising enterprise customers on which ones would be ideal for them. Several SQL-on-Hadoop solutions have large user bases. Presto has been used by major Internet firms (Netflix, Airbnb) as well as a number of large corporations; it is largely sponsored by Facebook and Teradata (my employer). The Cloudera distribution makes Impala widely available. Phoenix and Kylin also make a lot of appearances and have plenty of popularity. Spark SQL is the go-to for new projects these days, until it doesn't work or a flaw is discovered. Hive is the hard-to-beat incumbent. Adoption is crucial.

 

Ques. 17): Is it possible to utilise Apache Drill + MongoDB in the same way that RDBMS is used?

Answer:

To begin, you must understand the point of NoSQL. To be honest, a million or even ten million users is not, on its own, a large enough number to decide between NoSQL and an RDBMS.

However, as you stated, the size of your dataset will only grow. You can begin using MongoDB, keeping in mind the scalability element.

Now consider Apache Drill.

Google's Dremel was the inspiration for Apache Drill. It performs well when you select only the columns you need to retrieve, and it can join multiple data sources together (e.g., a join over Hive and MongoDB, a join over an RDBMS and MongoDB, and so on).

Also, pure MongoDB or MongoDB + Apache Drill are both viable options.

MongoDB

Stick to native MongoDB if your application architecture is entirely based on MongoDB. You get access to all of MongoDB's features, with the MongoDB Java driver, Python driver, REST API, and other options available. Yes, learning MongoDB-specific concepts will take more time, but the native query interface is flexible and lets you do a great deal.

MongoDB + Apache Drill

You can choose this option if you can accomplish your goal with JPA or SQL queries and you are more familiar with RDBMS queries.

Additional benefit: in the future you can use Drill to query across additional data sources such as Hive/HDFS or an RDBMS in addition to MongoDB.

 

Ques. 18): What is an example of a real-time use of Apache Drill? What makes Drill superior to Hive?

Answer:

Hive is a batch processing framework that is best suited for processes that take a long time to complete. Drill outperforms Hive when it comes to data exploration and business intelligence.

Drill is also not exclusive to Hadoop. It can, for example, query NoSQL databases (such as MongoDB and HBase) and cloud storage (eg, Amazon S3, Google Cloud Storage, Azure Blob Storage, Swift).

 

Ques. 19): Is Cloudera Impala similar to the Apache Drill incubator project?

Answer:

It's difficult to make a fair comparison because both initiatives are still in the early stages. We still have a lot of work to do because the Apache Drill project was only started a few months ago. That said, I believe it is critical to discuss some of the Apache Drill project's techniques and goals, which are critical to comprehend when comparing the two:

  • Apache Drill is a community-driven product run under the Apache foundation, with all the benefits and guarantees it entails.
  • Apache Drill committers are scattered across many different companies.
  • Apache Drill is a NoHadoop (not just Hadoop) project with the goal of providing distributed query capabilities across a variety of large data systems, including MongoDB, Cassandra, Riak, and Splunk.
  • By supporting all major Hadoop distributions, including Apache, Hortonworks, Cloudera, and MapR, Apache Drill avoids vendor lock-in.
  • Apache Drill allows you to do queries on hierarchical data.
  • JSON and other schemaless data are supported by Apache Drill.
  • The Apache Drill architecture is built to make third-party and custom integrations as simple as possible by clearly specifying interfaces for query languages, query optimizers, storage engines, user-defined functions, user-defined nested data functions, and so on.

Clearly, the Apache Drill project has a lot to offer and many qualities. These are only achievable because of the enormous amount of effort and interest that a large number of firms have begun to contribute to the project, which in turn is possible because of the power of the Apache umbrella.

 

Ques. 20): Why is MapR mentioning Apache Drill so much?

Answer:


Drill is a new and interesting low-latency SQL-on-Hadoop solution with more functionality than the other options available. MapR developed it within the Apache Foundation so that, like Hive, it is a genuinely community-shared open source project, which makes it more likely to gain wide adoption.

Drill is MapR's baby, so they're right to be proud of it - it's the most exciting thing to happen to SQL-on-Hadoop in years. They're also discussing it since it addresses real-world problems and advances the field.

Consider Drill to be what Impala could have been if it had more functionality and was part of the Apache Foundation.

 

 

 


April 25, 2022

Top 20 AWS VPC Interview Questions and Answers

 

VPC (Virtual Private Cloud) is one of the AWS services that is gaining traction in the tech employment market these days. Knowing the fundamentals of VPC might provide job seekers who want to work for Amazon Web Services an advantage. It is our responsibility to prepare you for this. As a result, we've compiled a list of the finest AWS VPC interview questions that frequently appear in AWS interviews. Before we get into that, let's go over some of the fundamentals of this technology that a newbie should be aware of while taking AWS training.

As most of you are aware, Amazon Web Services (AWS) is an Amazon subsidiary that offers cloud computing services based on user demand, with users paying a subscription fee. Amazon offers a variety of services that allow you to effortlessly integrate your local resources with the cloud. AWS S3 (Simple Storage Service), for example, offers object storage through several web service interfaces such as REST, SOAP, and BitTorrent. Knowing how to respond to common AWS interview questions can give you an advantage over other candidates vying for a spot on an AWS team.


AWS(Amazon Web Services) Interview Questions and Answers

AWS Cloud Interview Questions and Answers


Ques. 1): Is there a limit to how many VPCs, VPNs, Subnets, and Gateways I can create?

Answer:

Yes, these resources have default limits. In a single region, you can create only five VPCs by default; to raise that limit, you must raise the internet gateway limit as well.

VPNs, Elastic IP addresses, NAT gateways, and internet gateways all have a default limit of five per region. The maximum number of subnets per VPC is 200.

Furthermore, there is a default limit of 50 customer gateways per region.


AWS RedShift Interview Questions and Answers


Ques. 2): What Is It That Sets AWS VPC Apart From Other Private Clouds?

Answer:

The following two qualities distinguish AWS VPC from other cloud computing services:

When you need a private network in the cloud, it eliminates the need to set up and manage physical data centres, hardware, and virtual private networks.

AWS VPC is highly resistant to security and privacy threats thanks to its comprehensive security measures.


AWS Cloud Practitioner Essentials Questions and Answers


Ques. 3): What exactly is the meaning of the phrase "VPC"?

Answer:

VPC stands for Virtual Private Cloud: a private network space within the Amazon cloud where you can deploy AWS resources. It is the networking layer for Amazon EC2, which we have already discussed. Each virtual network you create in the cloud is logically isolated from the other virtual networks in the cloud.

Although the layout of a VPC is similar to that of a typical network in a data centre, a VPC benefits from AWS's scalable infrastructure. Another significant benefit of a VPC is that it is completely customizable: you can create subnets, set up route tables, configure network gateways, set up network access control lists, choose an IP address range, and much more.
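Choosing an address range and carving it into subnets is ordinary CIDR arithmetic, which can be sketched with Python's standard `ipaddress` module. The 10.0.0.0/16 address space below is an arbitrary illustrative choice:

```python
import ipaddress

# An example VPC address space (an RFC 1918 range); carve it into /24 subnets,
# e.g. one per availability zone or per public/private tier. Values are illustrative.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))   # 256 possible /24 subnets fit in a /16
print(subnets[0])     # 10.0.0.0/24  (e.g., a public subnet)
print(subnets[1])     # 10.0.1.0/24  (e.g., a private subnet)
```

The same arithmetic explains the VPC sizing trade-off: a larger prefix for the VPC leaves room for more (or bigger) subnets later.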


AWS EC2 Interview Questions and Answers


Ques. 4): What is a Network Address Translation (NAT) Device?

Answer:

In your VPC, a NAT device allows instances in a private subnet to send outbound IPv4 traffic to the internet or other AWS services while preventing unsolicited inbound traffic from the internet. When traffic goes out to the internet, the source IP address is replaced with the NAT device's address; when the response comes back, the device translates the address back to the instance's private IP address. AWS offers two kinds of NAT device: the NAT instance and the NAT gateway. NAT instances are launched from Linux AMIs, and NAT is not supported for IPv6 traffic.
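The translation described above can be sketched as a toy model: outbound packets get the NAT's public address and a tracked port, replies are mapped back through the translation table, and anything without a table entry is dropped. All addresses and ports below are illustrative:

```python
class Nat:
    """A toy model of the address translation a NAT device performs."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.table = {}          # public_port -> (private_ip, private_port)
        self.next_port = 49152   # start of the ephemeral port range

    def outbound(self, private_ip: str, private_port: int):
        """Rewrite an outbound packet's source to the NAT's public address."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return (self.public_ip, public_port)

    def inbound(self, public_port: int):
        """Translate a reply back to the private instance, or drop it.

        Unsolicited traffic (no table entry) returns None, i.e. dropped,
        which is why instances behind a NAT accept no inbound connections.
        """
        return self.table.get(public_port)

nat = Nat("203.0.113.7")                 # documentation-range public IP
src = nat.outbound("10.0.1.25", 40000)   # instance in a private subnet
print(src)               # ('203.0.113.7', 49152)
print(nat.inbound(49152))  # ('10.0.1.25', 40000) -- reply allowed back in
print(nat.inbound(55555))  # None -- unsolicited, dropped
```

A real NAT gateway also tracks protocol and destination, but the one-way asymmetry shown here is the essential behaviour.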


AWS Lambda Interview Questions and Answers


Ques. 5): What Are My Vpc's Connectivity Options?

Answer:

You can link your VPC to the following resources:

  • The Internet (via an Internet gateway)
  • Your corporate data centre, using a hardware VPN connection (via the virtual private gateway)
  • Both the Internet and your corporate data centre (utilizing both an Internet gateway and a virtual private gateway)
  • Other AWS services (via the Internet gateway, NAT, virtual private gateway, or VPC endpoints)
  • Other VPCs (via VPC peering connections)


AWS Cloud Security Interview Questions and Answers


Ques. 6): Is it possible to use Amazon VPC with Amazon Ec2 Reserved Instances?

Answer:

Yes. When you buy Reserved Instances, you can reserve an instance in Amazon VPC. AWS does not distinguish between instances running on Amazon VPC and normal Amazon EC2 when calculating your charge. AWS optimises which instances are charged at the reduced Reserved Instance rate, ensuring you pay the least amount possible. However, your instance reservation will be specific to Amazon VPC; for more information, visit the Reserved Instances page.


AWS Simple Storage Service (S3) Interview Questions and Answers


Ques. 7): Is it possible for Amazon Ec2 instances within a Vpc to communicate with Amazon Ec2 instances outside of the Vpc?

Answer:

Yes, it is correct. If an Internet gateway has been configured, Amazon VPC traffic destined for Amazon EC2 instances outside the VPC passes through the Internet gateway and then enters the public AWS network to reach the EC2 instance. If no Internet gateway has been configured, or if the instance is in a subnet configured to route through the virtual private gateway, the traffic traverses the VPN connection, egresses from your datacenter, and then re-enters the public AWS network.


AWS Fargate Interview Questions and Answers


Ques. 8): What is ELB (Elastic Load Balancing) and how does it effect Virtual Private Cloud?

Answer:

ELB is a load-balancing service for AWS deployments, as the name implies. A load balancer spreads the work a single computer would otherwise do across multiple computers, allowing it to be completed faster. In the same way, ELB distributes incoming application traffic across multiple targets, such as EC2 instances.

Three types of ELB ensure scalability, availability, and security for fault-tolerant applications: the Classic Load Balancer, the Network Load Balancer, and the Application Load Balancer. Network and Application Load Balancers can route traffic to targets within VPCs and can therefore be used together with a VPC.


AWS SageMaker Interview Questions and Answers


Ques. 9): What Are The Amazon Vpc Components?

Answer:

Amazon VPC comprises a variety of objects that will be familiar to customers with existing networks:

  • A Virtual Private Cloud (VPC): A logically isolated virtual network in the AWS cloud. You define a VPC’s IP address space from a range you select.
  • Subnet: A segment of a VPC’s IP address range where you can place groups of isolated resources.
  • Internet Gateway: The Amazon VPC side of a connection to the public Internet.
  • NAT Gateway: A highly available, managed Network Address Translation (NAT) service for your resources in a private subnet to access the Internet.
  • Hardware VPN Connection: A hardware-based VPN connection between your Amazon VPC and your datacenter, home network, or co-location facility.
  • Virtual Private Gateway: The Amazon VPC side of a VPN connection.
  • Customer Gateway: Your side of a VPN connection.
  • Router: Routers interconnect subnets and direct traffic between Internet gateways, virtual private gateways, NAT gateways, and subnets.
  • Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs.
  • VPC Endpoint for S3: Enables Amazon S3 access from within your VPC without using an Internet gateway or NAT, and allows you to control the access using VPC endpoint policies.
  • Egress-only Internet Gateway: A stateful gateway to provide egress-only access for IPv6 traffic from the VPC to the Internet.


AWS DynamoDB Interview Questions and Answers


Ques. 10): In a VPC, what IP address range can be used?

Answer:

For the primary CIDR block, you can use any IPv4 address range, including RFC 1918 ranges or publicly routable IP ranges. Certain restrictions apply to secondary CIDR blocks. Publicly routable IP blocks can be reached only via the Virtual Private Gateway, not via the Internet gateway, and AWS does not advertise customer-owned IP address blocks on the Internet. To assign an Amazon-provided IPv6 CIDR block to a VPC, call the relevant API or use the AWS Management Console.
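Whether a candidate CIDR falls inside the RFC 1918 private ranges can be checked directly with the stdlib `ipaddress` module. The check below tests the three RFC 1918 blocks explicitly (rather than using `is_private`, which also covers other reserved ranges); the example CIDRs are illustrative:

```python
import ipaddress

# The three RFC 1918 private address blocks.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """True if the given IPv4 CIDR lies entirely within an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)

print(is_rfc1918("10.0.0.0/16"))      # True  -- a typical VPC CIDR
print(is_rfc1918("172.31.0.0/16"))    # True  -- the range AWS uses for default VPCs
print(is_rfc1918("198.51.100.0/24"))  # False -- publicly routable (still allowed,
                                      #          but reachable only via the VGW)
```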


AWS Cloudwatch interview Questions and Answers


Ques. 11): What Is The Difference Between A Vpc's Security Groups And Network Acls?

Answer:

A VPC's security groups define which communication is permitted to and from an Amazon EC2 instance. Network ACLs assess traffic entering and exiting a network at the subnet level. Allow and Deny rules can be set using network ACLs. Traffic between instances in the same subnet is not filtered by network ACLs. Furthermore, network ACLs filter in a stateless manner, whereas security groups filter in a stateful manner.


AWS Elastic Block Store (EBS) Interview Questions and Answers


Ques. 12): I want to use a default VPC with my existing EC2 account. Is that possible?

Answer:

Yes, but we can only enable an existing account for a default VPC if that account has no EC2-Classic resources in that region. All non-VPC deployed Elastic Load Balancers, Amazon RDS, Amazon ElastiCache, and Amazon Redshift resources in that region must also be terminated. All future resource launches, including instances created via Auto Scaling, will be placed in your default VPC after your account has been configured for a default VPC. Contact AWS Support to get your existing account set up with a default VPC. To see if you're eligible for a default VPC, we'll look at your request as well as your existing AWS services and EC2-Classic presence.


AWS Amplify Interview Questions and Answers


Ques. 13): What Is The Best Way To Tell If My Account Is Set To Use A Default Vpc?

Answer:

The Amazon EC2 console shows you which platforms you can use to launch instances in the selected region, as well as whether you have a default VPC there. In the navigation bar, make sure the region you'll be using is selected. Look under "Account Attributes" on the Amazon EC2 console dashboard for "Supported Platforms." If both EC2-Classic and EC2-VPC are present, you can start instances on either platform. You can only launch instances into EC2-VPC if there is only one value, EC2-VPC. If your account is configured to use a default VPC, your default VPC ID will be presented under "Account Attributes". You can also use the EC2 DescribeAccountAttributes API or CLI to describe your supported platforms.


AWS Secrets Manager Interview Questions and Answers


Ques. 14): How to build a custom VPC?

Answer:

In order to build a custom VPC, the following steps must be followed:

  • Create a Virtual Private Cloud
  • Then create Subnets
  • Further create an Internet Gateway
  • Attach this new Gateway to your VPC
  • Create a new Route Table
  • Add the gateway as a route to the new route table
  • Add a subnet to the route table’s subnet association
  • Create a web server for public subnet and a database server for the private subnet
  • Create a new security group for the NAT
  • Add HTTP and HTTPS inbound rules that let in traffic from the private subnet's IP range
  • Create a NAT for public subnet
  • Create an elastic IP
  • Associate this IP to the NAT
  • Disable destination/source checks for the NAT
  • Add NAT to the initial VPC route table as a route.
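The ordering of the checklist matters: a gateway must exist before it can be attached, and be attached before a route can point at it. A sketch of the first half of the steps as standard `aws ec2` CLI invocations, composed as strings; the CIDRs and resource IDs (vpc-..., igw-..., rtb-...) are placeholders you would read from each command's output:

```python
# The first steps of the checklist above as AWS CLI calls (IDs are placeholders).
steps = [
    "aws ec2 create-vpc --cidr-block 10.0.0.0/16",
    "aws ec2 create-subnet --vpc-id vpc-111 --cidr-block 10.0.1.0/24",
    "aws ec2 create-internet-gateway",
    "aws ec2 attach-internet-gateway --internet-gateway-id igw-222 --vpc-id vpc-111",
    "aws ec2 create-route-table --vpc-id vpc-111",
    "aws ec2 create-route --route-table-id rtb-333 "
    "--destination-cidr-block 0.0.0.0/0 --gateway-id igw-222",
    "aws ec2 associate-route-table --route-table-id rtb-333 --subnet-id subnet-444",
]

# Sanity checks on ordering: the gateway is created before it is attached,
# and attached before any route points at it.
created = steps.index("aws ec2 create-internet-gateway")
attached = next(i for i, s in enumerate(steps) if "attach-internet-gateway" in s)
routed = next(i for i, s in enumerate(steps) if "create-route " in s)
assert created < attached < routed
```

The NAT, Elastic IP, and security-group steps follow the same pattern with `create-nat-gateway`, `allocate-address`, and `create-security-group`.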


Top 20 AWS Django Interview Questions and Answers


Ques. 15): When it comes to filtering, what's the difference between stateful and stateless?

Answer:

Stateful filtering keeps track of the origin of a request and can automatically allow the response back to the originating machine. For example, a stateful filter that permits inbound traffic to TCP port 80 on a web server will allow the return traffic, on a higher-numbered port (e.g., destination TCP port 63912), to pass back through the filter between the client and the web server. The filtering device maintains a state table that tracks the origin and destination port numbers and IP addresses. Only one rule is required on the filtering device: allow inbound traffic to the web server on TCP port 80.

Stateless filtering, on the other hand, looks only at the source or destination IP address and the destination port, regardless of whether the traffic is a new request or a response to one. In the example above, the filtering device would need two rules: one to allow traffic inbound to the web server on TCP port 80, and another to allow traffic outbound from the web server (TCP port range 49152 through 65535).
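The difference can be sketched as a toy model: the stateful filter records each flow and automatically admits the matching return traffic, while the stateless filter must match an explicit rule in each direction. Ports and addresses below are illustrative:

```python
class StatefulFilter:
    """Tracks flows; return traffic matching a tracked flow is admitted."""

    def __init__(self):
        self.state = set()  # tracked (client, client_port, server, server_port)

    def outbound_request(self, client, cport, server, sport):
        self.state.add((client, cport, server, sport))

    def allow_inbound(self, server, sport, client, cport):
        # A reply is allowed iff it matches a tracked flow -- one rule total.
        return (client, cport, server, sport) in self.state

class StatelessFilter:
    """Evaluates each packet in isolation against explicit per-direction rules."""

    def __init__(self, rules):
        self.rules = rules  # list of (direction, (low_port, high_port))

    def allow(self, direction, port):
        return any(d == direction and lo <= port <= hi
                   for d, (lo, hi) in self.rules)

stateful = StatefulFilter()
stateful.outbound_request("10.0.1.5", 63912, "198.51.100.8", 80)
print(stateful.allow_inbound("198.51.100.8", 80, "10.0.1.5", 63912))  # True
print(stateful.allow_inbound("198.51.100.8", 80, "10.0.1.5", 40000))  # False

# The stateless filter needs two rules: inbound port 80, outbound ephemeral range.
stateless = StatelessFilter([("in", (80, 80)), ("out", (49152, 65535))])
print(stateless.allow("in", 80))      # True
print(stateless.allow("out", 63912))  # True
print(stateless.allow("out", 40000))  # False -- outside the outbound rule's range
```

This is why security groups (stateful) need only an inbound port-80 rule for a web server, while network ACLs (stateless) need the outbound ephemeral-port rule as well.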


AWS Cloud Support Engineer Interview Question and Answers


Ques. 16): What is Classiclink, exactly?

Answer:

Amazon Virtual Private Cloud (VPC) ClassicLink allows EC2 instances on the EC2-Classic platform to communicate with VPC instances via private IP addresses. To use ClassicLink, you must first enable it for a VPC in your account and then associate a security group from that VPC with an EC2-Classic instance. All of that VPC security group's rules will then apply to communication between EC2-Classic instances and VPC instances.


AWS Solution Architect Interview Questions and Answers


Ques. 17): What is the best way to link a VPC to my corporate datacenter?

Answer:

By establishing a hardware VPN connection between your existing network and Amazon VPC, you can communicate with Amazon EC2 instances within a VPC as if they were on your local network. On Amazon EC2 instances in a VPC accessible via a hardware VPN connection, AWS does not execute network address translation (NAT).


AWS Glue Interview Questions and Answers


Ques. 18): How do I specify the Availability Zone in which my Amazon EC2 instances will be launched?

Answer:

When you create an Amazon EC2 instance, you must provide the subnet on which the instance will run. The instance will be deployed in the Availability Zone that corresponds to the subnet given.


AWS Aurora Interview Questions and Answers


Ques. 19): Why can't I ping the router that joins my subnets, or my default gateway?

Answer:

Ping (ICMP Echo Request and Echo Reply) requests to your VPC's router are not supported. Pinging between Amazon EC2 instances within a VPC is possible if your operating system's firewalls, VPC security groups, and network ACLs allow it.


AWS DevOps Cloud Interview Questions and Answers


Ques. 20): Is It Possible To Control And Manage Amazon Vpc Using The AWS Management Console?

Answer:

Yes, it is correct. VPCs, subnets, route tables, Internet gateways, and IPSec VPN connections can all be managed through the AWS Management Console. You can also construct a VPC with the help of a simple wizard.

AWS RDS Interview Questions and Answers