April 19, 2022

Top 20 AWS Elastic Block Store (EBS) Interview Questions and Answers

  

Amazon Elastic Block Store (EBS) is a block storage solution for long-term data storage. It provides highly available, block-level storage volumes for use with Amazon EC2 instances. Three volume types are available - General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic - which differ in performance characteristics and price. Because EBS volumes are off-instance storage, they persist independently of how long any instance runs.


AWS RedShift Interview Questions and Answers

AWS AppSync Interview Questions and Answers


Ques. 1): What is Elastic Block Store, and how does it work?

Answer:

Amazon Elastic Block Store (EBS) is a high-performance, easy-to-use block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale. It provides block-level storage volumes for EC2 instances; because these volumes live off-instance, they persist independently of the life of any instance.


AWS Cloud Practitioner Essentials Questions and Answers

AWS Cloud9 Interview Questions and Answers


Ques. 2): What are the advantages of using Amazon EBS?

Answer:

The main benefits of Amazon EBS are as follows:

Reliable and secure storage - Each EBS volume is automatically replicated within its Availability Zone to protect against component failure.

Secure - Amazon's flexible access control policies let you decide exactly who may access which EBS volumes. Access control combined with encryption provides a strong defense-in-depth security strategy for your data.

Higher performance - Amazon EBS uses SSD technology to deliver consistent application I/O performance.

Simple data backup - Point-in-time snapshots of Amazon EBS volumes make data backup straightforward.


AWS EC2 Interview Questions and Answers

Amazon Athena Interview Questions and Answers


Ques. 3): What is EBS Block Express, and how does it work?

Answer:

EBS Block Express is the next version of Amazon EBS storage server architecture, designed to provide the highest levels of performance for block storage at cloud scale with sub-millisecond latency. Block Express accomplishes this by communicating with Nitro System-based EC2 instances via Scalable Reliable Datagrams (SRD), a high-performance, low-latency network protocol. This is the same high-performance, low-latency network interface used in Elastic Fabric Adapter (EFA) for High Performance Computing (HPC) and Machine Learning (ML) applications for inter-instance communication. Block Express also provides modular software and hardware building blocks that can be built in a variety of ways, allowing us to design and deliver greater performance and new features more quickly.


AWS Lambda Interview Questions and Answers

Amazon OpenSearch Interview Questions and Answers


Ques. 4): What are the various types of EBS volumes?

Answer:

There are five types of EBS volumes available as below:

General Purpose SSD (gp2): This SSD (Solid State Drive) volume is what EC2 uses by default as the root volume of your instance. SSDs are many times faster than HDDs (Hard Disk Drives) for small input/output operations, and gp2 offers a good balance of price and performance (measured in IOPS - Input/Output Operations Per Second).

Provisioned IOPS SSD (io1): This is the most expensive and highest-performing EBS volume type. It is designed for I/O-intensive applications such as large relational or NoSQL databases.

Throughput Optimized HDD (st1): These are low-cost magnetic storage volumes whose performance is measured in terms of throughput.

Cold HDD (sc1): These are even less expensive magnetic storage options than Throughput Optimized. They are intended for large, sequential cold workloads, such as those found on a file server.

Magnetic (standard): These are older generation magnetic drives that are best suited for workloads with infrequent data access.
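
As a minimal sketch of how two of these volume types might be created programmatically, here is an example using the AWS SDK for JavaScript v2 (`aws-sdk`); the region, Availability Zone, sizes, and IOPS are placeholder values:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function createVolumes() {
    // General Purpose SSD: only AZ, size, and type are required.
    const gp2 = await ec2.createVolume({
        AvailabilityZone: 'us-east-1a',
        Size: 100,              // GiB
        VolumeType: 'gp2'
    }).promise();

    // Provisioned IOPS SSD: the Iops rate must be specified explicitly.
    const io1 = await ec2.createVolume({
        AvailabilityZone: 'us-east-1a',
        Size: 500,              // GiB
        VolumeType: 'io1',
        Iops: 5000
    }).promise();

    console.log(gp2.VolumeId, io1.VolumeId);
}

createVolumes().catch(console.error);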


AWS Simple Storage Service (S3) Interview Questions and Answers

AWS Cloud Security Interview Questions and Answers


Ques. 5): When would I want to use FSR (Fast Snapshot Restore)?

Answer:

If you are concerned about data access latency when restoring data from a snapshot to a volume and wish to avoid the initial performance hit during initialization, you should enable FSR on your snapshots. Virtual desktop infrastructure (VDI), backup and restore, test/dev volume copies, and booting from custom AMIs are all examples of use cases for FSR. When you enable FSR on a snapshot, you get improved and more predictable performance whenever you restore data from it.
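
A hedged sketch of enabling FSR with the AWS SDK for JavaScript v2; the snapshot ID and Availability Zone are placeholders, and in practice you would poll until the state reaches 'enabled':

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function enableFsr(snapshotId) {
    // Enable FSR in the AZs where restored volumes will be created.
    await ec2.enableFastSnapshotRestores({
        AvailabilityZones: ['us-east-1a'],
        SourceSnapshotIds: [snapshotId]
    }).promise();

    // Check the current FSR state for this snapshot.
    const status = await ec2.describeFastSnapshotRestores({
        Filters: [{ Name: 'snapshot-id', Values: [snapshotId] }]
    }).promise();
    console.log(status.FastSnapshotRestores);
}

enableFsr('snap-0123456789abcdef0').catch(console.error);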


AWS Fargate Interview Questions and Answers

AWS DynamoDB Interview Questions and Answers


Ques. 6): What are the different kinds of EBS Volumes?

Answer:

General Purpose EBS (SSD): This volume type is appropriate for small and medium workloads, such as root disk EC2 volumes, small and medium database workloads, and workloads that access logs regularly. By default, SSDs support 3 IOPS/GB, which means that a 1 GB volume will provide 3 IOPS and a 10 GB volume will provide 30 IOPS. The storage size of one volume ranges from 1 GB to 1 TB, at a cost of $0.10 per GB per month.

Provisioned IOPS (SSD): This volume type is best for transactional workloads that require a lot of I/O, as well as large relational, EMR, and Hadoop workloads. Provisioned IOPS SSDs support 30 IOPS/GB by default, so a 10 GB volume will provide 300 IOPS. The storage size of one volume ranges from 10 GB to 1 TB, at a cost of $0.125 per GB per month of provisioned storage plus $0.10 per provisioned IOPS per month.

EBS Magnetic volumes: Previously known as standard volumes, this volume type is suited for workloads that access data infrequently, such as data backups for recovery and log storage. The storage size of one volume ranges from 10 GB to 1 TB, at a cost of $0.05 per GB per month of provisioned storage plus $0.05 per million I/O requests.

In summary, there are 3 types of EBS volume:

·      EBS General Purpose (SSD) volumes suit small and medium workloads, such as root disk EC2 volumes.

·      Provisioned IOPS (SSD) volumes are ideal for the most I/O-heavy and large workloads, such as Hadoop.

·      EBS Magnetic volumes, also known as standard volumes, are appropriate for tasks such as data backups and log storage.


AWS SageMaker Interview Questions and Answers

AWS Django Interview Questions and Answers


Ques. 7): How can I change an existing EBS volume's capacity, performance, or type?

Answer:

It's simple to change the volume configuration. Using a single CLI call, API call, or a few console clicks, you can expand capacity, optimise performance, or change your volume type with Elastic Volumes. See the Elastic Volumes documentation for more information on Elastic Volumes.
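
As an illustrative sketch, an Elastic Volumes change with the AWS SDK for JavaScript v2 might look like this (volume ID and target values are placeholders); note that after the volume modification completes, the file system still has to be extended from inside the OS:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function resizeVolume(volumeId) {
    // Grow the volume and change its type in place; it stays attached and in use.
    await ec2.modifyVolume({
        VolumeId: volumeId,
        Size: 200,             // new size in GiB; must be >= the current size
        VolumeType: 'io1',
        Iops: 6000
    }).promise();

    // Track progress of the modification.
    const mods = await ec2.describeVolumesModifications({
        VolumeIds: [volumeId]
    }).promise();
    console.log(mods.VolumesModifications);
}

resizeVolume('vol-0123456789abcdef0').catch(console.error);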


AWS Cloudwatch interview Questions and Answers

AWS Glue Interview Questions and Answers


Ques. 8): What is the Amazon Web Services (AWS) Key Management Service (KMS)?

Answer:

AWS KMS is a managed service that allows you to easily produce and maintain the encryption keys that are used to encrypt your data. AWS Key Management Service works with other AWS services like Amazon EBS, Amazon S3, and Amazon Redshift to make it simple to encrypt your data with encryption keys you control. AWS Key Management Service and AWS CloudTrail are connected to provide you with logs of all key usage to help you satisfy your regulatory and compliance requirements.
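
For example, an EBS volume can be encrypted with a customer managed KMS key at creation time. A minimal sketch with the AWS SDK for JavaScript v2; the region, AZ, and key ARN are placeholders:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Create an EBS volume encrypted with a customer managed KMS key.
ec2.createVolume({
    AvailabilityZone: 'us-east-1a',
    Size: 50,
    VolumeType: 'gp2',
    Encrypted: true,
    KmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555'
}).promise()
  .then(vol => console.log(vol.VolumeId))
  .catch(console.error);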


AWS Amplify Interview Questions and Answers

AWS VPC Interview Questions and Answers


Ques. 9): How can we change the default root EBS volume size in CloudFormation?

Answer:

Use the BlockDeviceMappings property on the instance, for example:

 "BlockDeviceMappings": [

          {

            "DeviceName": "/dev/xvda",

            "Ebs": {

              "VolumeType": "io1",

              "Iops": "300",

              "DeleteOnTermination": "false",

              "VolumeSize": "30"

            }

          }

        ],

 

AWS Cloud Interview Questions and Answers Part - 1 

AWS CloudFormation Interview Questions and Answers


Ques. 10): What happens if the 'deleteOnTermination' flag isn't set on all of my linked instances?

Answer:

The volume's deleteOnTermination behaviour is determined by the configuration of the last attached instance that is terminated. To ensure predictable delete-on-termination behaviour, enable or disable 'deleteOnTermination' on all instances to which the volume is attached.

Enable 'deleteOnTermination' for all instances to which the volume is attached if you want the volume to be deleted when the attached instances are terminated. Disable 'deleteOnTermination' for all attached instances if you want to keep the volume after the attached instances have been terminated. See the Multi-Attach technical documentation for further information.


AWS Cloud Interview Questions and Answers Part - 2

AWS GuardDuty Questions and Answers


Ques. 11): How to Set Up Amazon EBS?

Answer:

Use the following steps to set up Amazon EBS (a code sketch follows the steps):

STEP 1 - Create Amazon EBS volume.

STEP 2 - (Optional) Create the EBS volume from a snapshot to restore saved data.

STEP 3 - Attach EBS Volume to an Instance.

STEP 4 - Detach a volume from Instance.
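
A minimal sketch of these steps with the AWS SDK for JavaScript v2; the instance ID, snapshot ID, AZ, and device name are placeholders:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function setupEbs(instanceId, snapshotId) {
    // STEPS 1-2: create a volume (from a snapshot here; omit SnapshotId for an empty volume).
    const vol = await ec2.createVolume({
        AvailabilityZone: 'us-east-1a',   // must match the instance's AZ
        SnapshotId: snapshotId,
        VolumeType: 'gp2'
    }).promise();

    // Wait until the new volume is ready to attach.
    await ec2.waitFor('volumeAvailable', { VolumeIds: [vol.VolumeId] }).promise();

    // STEP 3: attach the volume to the instance.
    await ec2.attachVolume({
        Device: '/dev/sdf',
        InstanceId: instanceId,
        VolumeId: vol.VolumeId
    }).promise();

    // STEP 4: later, detach it again.
    // await ec2.detachVolume({ VolumeId: vol.VolumeId }).promise();
    return vol.VolumeId;
}

setupEbs('i-0123456789abcdef0', 'snap-0123456789abcdef0').catch(console.error);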


AWS Secrets Manager Interview Questions and Answers

AWS Control Tower Interview Questions and Answers


Ques. 12): Is it necessary to unmount volumes before taking a snapshot?

Answer:

No; snapshots can be taken in real time while the volume is mounted and in use. However, snapshots only capture data that has already been written to your Amazon EBS volume, which may exclude data that has been cached locally by your application or operating system. To ensure consistent snapshots of volumes attached to an instance, we recommend detaching the volume cleanly, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the instance to take a clean snapshot.
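
A hedged sketch of taking a snapshot with the AWS SDK for JavaScript v2; the volume ID is a placeholder, and for application-level consistency you would first flush or freeze writes (or detach the volume) as described above:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Take a point-in-time snapshot of a volume; this works while it is mounted and in use.
ec2.createSnapshot({
    VolumeId: 'vol-0123456789abcdef0',
    Description: 'Nightly backup'
}).promise()
  .then(snap => console.log(snap.SnapshotId))
  .catch(console.error);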


AWS Cloud Support Engineer Interview Question and Answers

AWS Lake Formation Interview Questions and Answers


Ques. 13): Does the read and write I/O size of my application affect the rate of IOPS I get from my Provisioned IOPS SSD (io2 and io1) volumes?

Answer:

It certainly does. The IOPS rate you get when you provision IOPS for io2 or io1 volumes depends on the I/O size of your application's reads and writes. Provisioned IOPS volumes use a base I/O size of 16 KB. So a volume provisioned with 40,000 IOPS will deliver up to 40,000 IOPS at a 16 KB I/O size; if you increase the I/O size to 32 KB, the volume can deliver up to 20,000 IOPS, and so on.


AWS Solution Architect Interview Questions and Answers

AWS Data Pipeline Interview Questions and Answers


Ques. 14): How do I transfer files from one EBS to another?

Answer:

To copy files from one EBS volume to another, we need to attach both volumes to an instance (or stage the contents on a third storage option if the volumes can't be attached to instances at the same time). Follow these steps:

·      Start a temporary instance. Use a larger instance size for higher I/O bandwidth.

·      Attach both EBS volumes to the instance and mount them as, say, /vol1 and /vol2.

·      Copy the files from /vol1 to /vol2.

·      Unmount the volumes, detach the EBS volumes, and terminate the temporary instance.


AWS Aurora Interview Questions and Answers

Amazon CloudSearch Interview Questions and Answers 


Ques. 15): Does the size of the read and write I/O in my application affect the rate of throughput I obtain from my HDD-backed volumes?

Answer:

Yes. The throughput rate you get depends on the read and write I/O sizes of your application. HDD-backed volumes process reads and writes in 1 MB I/O units. Sequential I/Os are merged and processed as 1 MB units even when the actual I/O size is smaller, while non-sequential I/Os are each counted as 1 MB units. A transactional workload with small, random I/Os, such as a database, will therefore not perform well on HDD-backed volumes, whereas sequential workloads with large I/O sizes will sustain the advertised st1 and sc1 performance for longer.


AWS DevOps Cloud Interview Questions and Answers

AWS Transit Gateway Interview Questions and Answers


Ques. 16): What is the maximum storage capacity of an EBS device?

Answer:

At the moment, EBS supports a maximum volume size of 16 TiB. This means that you can create an EBS volume of up to 16 TiB, but whether the OS recognises all of that capacity depends on the OS's own design characteristics and on how the volume is partitioned.


AWS(Amazon Web Services) Interview Questions and Answers

Amazon Detective Interview Questions and Answers


Ques. 17): When an EBS volume fails, how do you make it available with no downtime and link it to an EC2 instance?

Answer:

You can use a load balancer and Auto Scaling to keep the application available with no downtime: if the EC2 instance goes down, Auto Scaling launches a new instance, and a startup shell script can contain the commands to attach and mount the EBS volume. We can also take frequent backups, so that if an EBS volume fails we can replace it with a volume restored from the most recent backup or snapshot.


AWS Database Interview Questions and Answers

Amazon EMR Interview Questions and Answers


Ques. 18): When an Amazon EC2 instance is terminated, what happens to my data?

Answer:

Data stored on an Amazon EBS volume, unlike data stored on a local instance store (which persists just as long as the instance is alive), can persist regardless of the instance's life. As a result, we suggest that you only use the local instance storage for transient data. We recommend using Amazon EBS volumes or backing up data to Amazon S3 for data that requires a higher level of durability. If you're using an Amazon EBS volume as a root partition, make sure the Delete on termination flag is set to "No" if you want the Amazon EBS volume to survive the instance's life.


AWS ActiveMQ Interview Questions and Answers


Ques. 19): What can I expect from Amazon EBS volumes in terms of performance?

Answer:

Amazon EBS offers seven volume types: Provisioned IOPS SSD (io2 Block Express, io2, and io1), General Purpose SSD (gp3 and gp2), Throughput Optimized HDD (st1), and Cold HDD (sc1). These volume types differ in performance and cost, allowing you to tailor storage performance and cost to your applications' requirements. Typical latency between EC2 instances and EBS is in the single-digit milliseconds. See the EBS product details page for more information about performance.

 

Ques. 20): What's the difference between io2 Block Express and io2?

Answer:

io2 volumes provide high-performance block storage for all EC2 instances. For applications that demand even more performance, attach io2 volumes to R5b instance types, which run on Block Express and deliver 4x the performance of io2 on other instances. A single io2 volume on Block Express can reach up to 64 TiB capacity, 256,000 IOPS, and 4,000 MB/s throughput with sub-millisecond average I/O latency.

 

 

 

Top 20 AWS Aurora Interview Questions and Answers

 

AWS Aurora is an Amazon cloud-based managed database service. It is one of the most extensively used services for storing and processing low-latency, transactional data. Aurora combines the benefits of open-source databases such as MySQL and PostgreSQL with enterprise-level reliability and scalability. For data availability it uses a clustered architecture, replicating data across AWS Availability Zones. It is much faster than native MySQL and PostgreSQL databases, requires little server maintenance, and scales up to 64 terabytes of database size for enterprise use.


AWS RedShift Interview Questions and Answers

AWS AppSync Interview Questions and Answers


Ques. 1): What is Amazon Aurora and how does it work?

Answer:

AWS Aurora is a cloud-based relational database that combines the performance and availability of typical enterprise databases with the ease of use and low cost of open source databases. It's five times faster than a typical MySQL database, and three times faster than a standard PostgreSQL database.

AWS Aurora helps provide commercial databases with security, availability, and dependability. It is fully managed by AWS Relational Database Service (RDS), which automates time-consuming administration activities including hardware provisioning, database setup, patching, and backups. It's a fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance and provides high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones, among other features (AZs).


AWS Cloud Practitioner Interview Questions and Answers

AWS Cloud9 Interview Questions and Answers


Ques. 2): What are Amazon Aurora DB clusters, and what do they do?

Answer:

An Amazon Aurora DB cluster is made up of one or more database instances and a cluster volume that stores the data for those databases.

An Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones and contains a copy of the DB cluster data in each. There are two sorts of database instances in an Aurora DB cluster:

Primary DB instance: Supports read and write operations and handles all data modifications to the cluster volume. There is only one primary DB instance in each Aurora DB cluster.

Aurora Replica: A replica connects to the same storage volume as the primary DB instance and supports only read operations. Each Aurora DB cluster can contain up to 15 Aurora Replicas in addition to the primary DB instance. If the primary DB instance becomes unavailable, Aurora automatically fails over to an Aurora Replica; the failover priority of the replicas can be configured. Aurora Replicas can also offload read workloads from the primary DB instance.
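
A minimal sketch of this structure with the AWS SDK for JavaScript v2: the cluster is created first and owns the storage volume, then instances are added to it (the first becomes the primary/writer, additional ones become Aurora Replicas). All identifiers, the instance class, and the credentials are placeholders:

const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

async function createAurora() {
    // The cluster owns the shared storage volume.
    await rds.createDBCluster({
        DBClusterIdentifier: 'demo-aurora-cluster',
        Engine: 'aurora-mysql',
        MasterUsername: 'admin',
        MasterUserPassword: 'change-me-please'   // placeholder; use Secrets Manager in practice
    }).promise();

    // Instances are added to the cluster; the first one becomes the primary.
    await rds.createDBInstance({
        DBInstanceIdentifier: 'demo-aurora-instance-1',
        DBClusterIdentifier: 'demo-aurora-cluster',
        DBInstanceClass: 'db.r5.large',
        Engine: 'aurora-mysql'
    }).promise();
}

createAurora().catch(console.error);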


AWS EC2 Interview Questions and Answers

Amazon Athena Interview Questions and Answers


Ques. 3): What are the benefits of using Aurora?

Answer:

The following are some of Aurora's benefits:

Enterprise-level security: Because Aurora is an Amazon service, you may be confident in its security and use the IAM capabilities.

Enterprise-level availability: It is ensured by multiple replications of database instances across several zones.

Enterprise-level scalability: With Aurora serverless, you can configure your database to scale up and down dynamically in response to application demand.

Enterprise-level performance: commercial-grade database performance with the simplicity and cost-effectiveness of open-source databases.

Enterprise-level compatibility: Aurora is interoperable with MySQL and PostgreSQL. If your present application is built on MySQL or PostgreSQL, you can move it over or use Amazon RDS to convert your database to the Aurora engine.

AWS Management Console: Amazon Management Console is easy to use with click and drag features to quickly set-up your Aurora Cluster.

Maintenance: Aurora requires almost zero server maintenance, while running up to 5 times faster than MySQL and 3 times faster than PostgreSQL.


AWS Lambda Interview Questions and Answers

AWS Cloud Security Interview Questions and Answers


Ques. 4): What are Aurora's advantages?

Answer:

The following are some of the advantages of AWS Aurora:

High Performance and Scalability – We can quickly scale up and down our database deployment from a smaller to a larger instance.

Fully Managed - Because Amazon Relational Database Service (RDS) manages Aurora, we don't have to bother about database management activities like hardware provisioning, software patching, setup, configuration, or backups.

Highly Secure - Aurora is very secure, with various levels of security for your database.

Support for Database Migrations to the Cloud - Aurora is utilised as an attractive target for database migrations to the cloud.

MySQL and PostgreSQL Compatible - Aurora is entirely compatible with existing MySQL and PostgreSQL open source databases, and support for new releases is added on a regular basis.

High Availability and Durability - Aurora's high availability and durability make it simple to recover from physical storage failures.


AWS Simple Storage Service (S3) Interview Questions and Answers

AWS DynamoDB Interview Questions and Answers


Ques. 5): How Aurora Works?

Answer:

An Aurora DB cluster is made up of a primary DB instance and Aurora Replicas, plus a cluster volume that holds the data for those DB instances. The Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones, the better to support global applications; the DB cluster data is duplicated in each zone.

The primary DB instance performs all read and write operations against the cluster volume. Each Aurora cluster has exactly one primary DB instance.

An Aurora Replica is a copy of the primary DB instance whose sole purpose is to serve data, i.e. read operations only. To provide high availability across zones, a primary DB instance can have up to 15 replicas. If the primary DB instance becomes unavailable, Aurora fails over to a replica, and you can set the failover priority of the replicas. Replicas also help reduce the read workload on the primary database.

Aurora can have a multi-master cluster as well. All DB instances in a multi-master setup will be able to read and write data. In AWS language, these are known as reader and writer DB instances, and we can call this a multi-master replication.

You can also set up Amazon S3 to keep a backup of your database. Even in the worst-case scenario, where the entire cluster is down, your database remains safe.

You can utilise Aurora Serverless to automatically start scaling and shutting down the database to fit application demand for an unpredictable workload.


AWS Fargate Interview Questions and Answers

AWS Elastic Block Store (EBS) Interview Questions and Answers


Ques. 6): What are the advantages of using Amazon RDS with MySQL?

Answer:

Amazon RDS for MySQL has the following advantages:

Easy, managed deployments - Quickly launch and connect to a production-ready MySQL database in minutes.

High availability and read replicas - Used to ensure that our MySQL databases are available and durable.

Fast, dependable storage - Two SSD-backed storage options for MySQL databases.

Monitoring and metrics - Amazon RDS Enhanced Monitoring gives you access to more than 50 CPU, RAM, file system, and disk I/O metrics.

Backup and recovery - Used to ensure that our MySQL database instance can be recovered.

Isolation and security - Used to ensure that our MySQL databases are kept secure.


AWS SageMaker Interview Questions and Answers

AWS Django Interview Questions and Answers


Ques. 7): What is the relationship between Aurora and Amazon RDS Engines?

Answer:

The following points will demonstrate how Amazon RDS' standard engines, such as MySQL and PostgreSQL, interact with Aurora:

When creating new database servers with Amazon RDS, you can select Aurora as a database engine.

If you're acquainted with Amazon RDS, Aurora should be simple to set up. You can utilise the Amazon RDS administration console to set up Aurora clusters, as well as the CLI commands and API to perform database maintenance activities like backup, recovery, and repair.

Aurora's automatic clustering, replication, and other administration operations apply to the entire cluster of database servers, not just a single instance, allowing you to manage large MySQL and PostgreSQL deployments efficiently and at low cost.

Data from Amazon RDS for MySQL and PostgreSQL can be replicated or imported into Aurora using snapshots. Another feature is push-button migration, which may be used to migrate your Amazon RDS MySQL and PostgreSQL databases to Aurora.


AWS Cloudwatch interview Questions and Answers

AWS Glue Interview Questions and Answers


Ques. 8): What are Endpoints and How Do I Use Them?

Answer:

When a user connects to an Aurora cluster, an endpoint is a combination of host name and port.

Endpoints are divided into four categories:

Cluster Endpoint: Used to connect to the current primary DB instance; it serves write operations.

Custom Endpoint: Used to represent a set of DB instances selected by the user.

Reader Endpoint: A read-only endpoint used to connect to the Aurora Replicas.

Instance Endpoint: Used to connect to a specific DB instance, for example to diagnose capacity or performance problems on that instance.
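
As an illustrative sketch, a custom endpoint covering a chosen subset of readers can be created with the AWS SDK for JavaScript v2; the cluster, endpoint, and instance identifiers are placeholders:

const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

// Create a custom endpoint that load-balances across selected reader instances.
rds.createDBClusterEndpoint({
    DBClusterIdentifier: 'demo-aurora-cluster',
    DBClusterEndpointIdentifier: 'analytics-readers',
    EndpointType: 'READER',
    StaticMembers: ['demo-aurora-instance-2', 'demo-aurora-instance-3']
}).promise()
  .then(ep => console.log(ep.Endpoint))
  .catch(console.error);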


AWS Amplify Interview Questions and Answers

AWS VPC Interview Questions and Answers


Ques. 9): How can we associate an IAM Role with an Aurora cluster using CloudFormation?

Answer:

One approach is to emit the required CLI command as a stack output and run it after stack creation:

PostRunCommand:
  Description: You must run this awscli command after the stack is created and may also need to reboot the cluster/instance.
  Value: !Join [" ", [
    "aws rds add-role-to-db-cluster --db-cluster-identifier",
    !Ref AuroraSandboxCluster,
    "--role-arn",
    !GetAtt AuroraS3Role.Arn,
    "--profile",
    !FindInMap [ AccountNameMap, !Ref AccountNamespace, profile ]
  ]]
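
More recent CloudFormation releases also expose an AssociatedRoles property on AWS::RDS::DBCluster, which lets the role be attached declaratively rather than via a post-run command. A minimal sketch, assuming an aurora-mysql cluster and the same AuroraS3Role resource as above (other required cluster properties omitted):

AuroraSandboxCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-mysql
    AssociatedRoles:
      - RoleArn: !GetAtt AuroraS3Role.Arn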


AWS Cloud Interview Questions and Answers Part - 1

Amazon OpenSearch Interview Questions and Answers


Ques. 10): What are AWS Aurora's limitations?

Answer:

Aurora only supports MySQL 5.6.10 compatibility, so if you rely on newer MySQL features or run an older version, you cannot use them yet. Amazon will add new MySQL functionality to Aurora in the future, but you will have to wait.

Because Aurora currently only supports InnoDB, you won't be able to use MyISAM tables.

With Aurora, you cannot choose an RDS instance class smaller than r3.large.


AWS Cloud Interview Questions and Answers Part - 2

AWS CloudFormation Interview Questions and Answers


Ques. 11): Is it possible for my application to fail over to the cross-region replica from my current primary?

Answer:

Yes, you can use the Amazon RDS console to promote your cross-region replica to the new primary. The promotion process for logical (binlog) replication takes a few minutes, depending on your workload. When you start the promotion process, the cross-region replication will halt.

You may promote a secondary region to take full read/write workloads in under a minute with Amazon Aurora Global Database.


AWS Secrets Manager Interview Questions and Answers

AWS GuardDuty Questions and Answers


Ques. 12): What is Amazon RDS for MySQL and how does it work?

Answer:

AWS RDS for MySQL manages time-consuming database management activities including backups, software patching, monitoring, scaling, and replication, allowing you to focus on application development.

It is compatible with MySQL Community Edition versions.


AWS Cloud Support Engineer Interview Question and Answers

AWS Control Tower Interview Questions and Answers


Ques. 13): What does it mean to be "MySQL compatible"?

Answer:

Amazon Aurora is plug-and-play compatible with existing MySQL open-source databases, and support for new releases is added on a regular basis. This means that using conventional import/export tools or snapshots, you can quickly move MySQL databases to and from Aurora. It also means that the majority of the code, apps, drivers, and utilities you already use with MySQL databases can be used with Aurora with little or no modification. When comparing Aurora with MySQL, keep in mind that the Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7, which use the InnoDB storage engine. This makes switching applications between the two engines a breeze. Amazon Aurora does not support certain MySQL capabilities, such as the MyISAM storage engine.


AWS Solution Architect Interview Questions and Answers

AWS Lake Formation Interview Questions and Answers


Ques. 14): How can I switch from MySQL to Amazon Aurora and back?

Answer:

There are various options available to you. To export data from MySQL and to import data into Amazon Aurora, use the standard mysqldump and mysqlimport utilities, respectively. You can also use the AWS Management Console to migrate an Amazon RDS for MySQL DB Snapshot to Amazon Aurora via Amazon RDS's DB Snapshot migration feature. Most customers complete their migration in under an hour, though the time varies with the type and size of the data set.


AWS DevOps Cloud Interview Questions and Answers

AWS Data Pipeline Interview Questions and Answers


Ques. 15): What does it mean to have "five times the performance of MySQL"?

Answer:

By tightly integrating the database engine with an SSD-based virtualized storage layer purpose-built for database workloads, Amazon Aurora improves MySQL performance by lowering writes to the storage system, avoiding lock contention, and eliminating delays caused by database process threads. Amazon Aurora offers over 500,000 SELECTs/sec and 100,000 UPDATEs/sec, five times faster than MySQL running the same benchmark on the same hardware, according to our tests with SysBench on r3.8xlarge instances.


AWS(Amazon Web Services) Interview Questions and Answers

Amazon CloudSearch Interview Questions and Answers 


Ques. 16): What are the best practises for optimising my database workload for Amazon Aurora PostgreSQL-Compatible Edition?

Answer:

Amazon Aurora is designed to be PostgreSQL compatible, allowing existing PostgreSQL applications and tools to run without needing to be modified. However, Amazon Aurora outperforms PostgreSQL in the domain of highly concurrent workloads. We recommend building your applications to support a large number of concurrent queries and transactions in order to maximise your workload's throughput on Amazon Aurora.


AWS Database Interview Questions and Answers

AWS Transit Gateway Interview Questions and Answers


Ques. 17): What are the options for scaling the compute resources associated with my Amazon Aurora DB Instance?

Answer:

By selecting the desired DB Instance and clicking the Modify button in the AWS Management Console, you can scale the compute resources allocated to your DB Instance. Changing the DB Instance class modifies memory and CPU resources.

When you make modifications to your DB Instance class, they will be applied during the maintenance window you specify. You can also utilise the "Apply Immediately" flag to have your scaling requests applied right away. Both of these approaches will have a short-term impact on availability while the scaling operation is carried out. Remember that any other pending system modifications will be applied as well.
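
A hedged sketch of the same scaling operation with the AWS SDK for JavaScript v2 instead of the console; the instance identifier and target class are placeholders:

const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

// Scale an Aurora instance to a larger class. With ApplyImmediately the change
// happens right away instead of waiting for the maintenance window.
rds.modifyDBInstance({
    DBInstanceIdentifier: 'demo-aurora-instance-1',
    DBInstanceClass: 'db.r5.2xlarge',
    ApplyImmediately: true
}).promise()
  .then(res => console.log(res.DBInstance.PendingModifiedValues))
  .catch(console.error);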


AWS ActiveMQ Interview Questions and Answers

Amazon Detective Interview Questions and Answers


Ques. 18): What is my plan of action if my database fails?

Answer:

Amazon Aurora keeps six copies of your data across three Availability Zones (AZs) and will attempt to recover your database in a healthy AZ without losing any data. If your data is unavailable within Amazon Aurora storage, you can restore from a DB Snapshot or perform a point-in-time restore to a fresh instance. For a point-in-time restore, the latest restorable time can be up to five minutes in the past.
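
A minimal sketch of a point-in-time restore with the AWS SDK for JavaScript v2; the cluster identifiers are placeholders, and the restore creates a brand new cluster to which DB instances must then be added:

const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

// Restore the cluster to the latest restorable time as a new cluster.
rds.restoreDBClusterToPointInTime({
    SourceDBClusterIdentifier: 'demo-aurora-cluster',
    DBClusterIdentifier: 'demo-aurora-cluster-restored',
    RestoreType: 'full-copy',
    UseLatestRestorableTime: true
}).promise()
  .then(res => console.log(res.DBCluster.Status))
  .catch(console.error);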


Amazon EMR Interview Questions and Answers

Amazon OpenSearch Interview Questions and Answers

  

Ques. 19): Is it possible for me to share my snapshots with another AWS account?

Answer:

Yes. Aurora allows you to take snapshots of your databases, which you may then use to restore them later. You can share a snapshot with another AWS account, and the receiving account's owner can use it to restore a database containing your data. You may even make your snapshots public, allowing anyone to restore a database containing your (public) data. You can use this capability to exchange data between different AWS accounts for different settings (production, dev/test, staging, etc.), as well as keep backups of all your data in a separate account in case your main AWS account is ever compromised.
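
As a sketch of how sharing might be done with the AWS SDK for JavaScript v2, you can modify the snapshot's 'restore' attribute; the snapshot identifier and target account ID are placeholders (use 'all' to make the snapshot public):

const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

// Grant another account permission to restore this cluster snapshot.
rds.modifyDBClusterSnapshotAttribute({
    DBClusterSnapshotIdentifier: 'demo-cluster-snapshot',
    AttributeName: 'restore',
    ValuesToAdd: ['210987654321']      // target AWS account ID (placeholder)
}).promise()
  .then(res => console.log(res.DBClusterSnapshotAttributesResult))
  .catch(console.error);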

 

Ques. 20): How does Amazon Aurora improve the fault tolerance of my database in the event of a disc failure?

Answer:

Amazon Aurora divides your database volume into 10 GB segments and distributes them across many disks. Each 10 GB segment of your database volume is replicated six times, across three AZs. Amazon Aurora is built to transparently handle the loss of up to two copies of data without affecting database write availability, and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and repaired automatically.

 

 


April 18, 2022

Top 20 AWS Secrets Manager Interview Questions and Answers

 

 

AWS Secrets Manager is like a safe deposit box where you store valuables that you don't want exposed publicly, such as critical papers and jewellery, and to which only you have access. In technical terms, AWS Secrets Manager manages API keys, secret keys, client keys, tokens, and DB credentials, among other things.

 AWS RedShift Interview Questions and Answers

Ques. 1): What is AWS Secrets Manager, and how does it work?

Answer:

AWS Secrets Manager is a secret management solution that aids in the security of your applications, services, and IT resources. This service makes it simple to rotate, manage, and retrieve database credentials, API keys, and other secrets at any time during their lifetime. You can safeguard and manage secrets used to access resources in the AWS Cloud, on third-party services, and on-premises with Secrets Manager.

 AWS Cloud Practitioner Interview Questions and Answers

Ques. 2): What are the benefits of using AWS Secrets Manager?

Answer:

Without the upfront investment and ongoing maintenance costs of running your own infrastructure, AWS Secrets Manager protects access to your apps, services, and IT resources.

Secrets Manager is a secure and scalable means of storing and managing secrets for IT managers. Secrets Manager allows security administrators to monitor and cycle secrets without affecting applications, allowing them to meet regulatory and compliance requirements. Secrets Manager can be retrieved programmatically by developers that want to replace hardcoded secrets in their apps.

  AWS EC2 Interview Questions and Answers

Ques. 3): What is the significance of a Secrets Manager?

Answer:

There are two scenarios:

In server-side programs, it can seem simple to manage environment-specific secret values, because you control the servers and can set environment-specific configuration on each. However, keeping those values in code or in a repository makes them directly visible to anyone with access to that code in the production environment, which is discouraged; and if we keep them only on the servers, we risk losing them.

A client-side application is the other scenario: it is essentially static code in static files, so storing secret values in it is not secure at all.

In both situations, the Secrets Manager proves to be a lifesaver. Server-side code can use AWS credentials to manage and retrieve secret values from Secrets Manager. Client-side code needs an STS token integration that issues temporary AWS credentials valid only for the Secrets Manager service.

  AWS Lambda Interview Questions and Answers

Ques. 4): What am I able to accomplish with AWS Secrets Manager?

Answer: 

AWS Secrets Manager gives you centralised storage, retrieval, access control, rotation, auditing, and monitoring of secrets.

You can encrypt secrets at rest to limit the chances of sensitive data being viewed by unauthorised individuals. To retrieve secrets, simply replace plain text secrets in your applications with code that uses the Secrets Manager APIs to pull in those secrets programmatically. To govern which users and applications have access to these secrets, you utilise AWS Identity and Access Management (IAM) policies. You can rotate passwords for supported database types hosted on AWS on a schedule or on demand, with no danger of affecting applications. By changing sample Lambda functions, you can expand this feature to rotate other secrets, such as passwords for Oracle databases stored on Amazon EC2 or OAuth refresh tokens. Secrets Manager interacts with AWS CloudTrail, Amazon CloudWatch, and Amazon Simple Notification Service, allowing you to audit and monitor secrets (Amazon SNS).

  AWS Simple Storage Service (S3) Interview Questions and Answers

Ques. 5): What are the advantages of using Secret Manager?

Answer:

Rotate secrets safely (you can set expiry and rotate values whenever needed).

Control access with fine-grained policies (for example, a policy that allows developers to retrieve secret values).

Store and audit secrets centrally (it provides an audit trail of which account used which secret).

Pay as you go (billed by the number of secrets stored and the number of API calls made to retrieve them).

Easily replicate secrets across multiple regions (cross-region access is allowed).

  AWS Fargate Interview Questions and Answers

Ques. 6): In AWS Secrets Manager, what secrets can I manage?

Answer:

Database credentials, on-premises resource credentials, SaaS application credentials, third-party API keys, and Secure Shell (SSH) keys are among the secrets you can manage. You may save a JSON document in Secrets Manager, which allows you to manage any text blurb that is 64 KB or smaller.

  AWS SageMaker Interview Questions and Answers

Ques. 7): With AWS Secrets Manager, what secrets can I rotate?

Answer:

For Amazon Relational Database Service (RDS), Amazon DocumentDB, and Amazon Redshift, you can rotate credentials directly. By changing sample AWS Lambda methods accessible in the Secrets Manager documentation, you can extend Secrets Manager to rotate other secrets, such as Oracle database credentials housed on EC2 or OAuth refresh tokens.
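
A hedged sketch of turning on rotation with the AWS SDK for JavaScript v2; the secret name and rotation Lambda ARN are placeholders:

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });

// Enable automatic rotation every 30 days, driven by a rotation Lambda function.
secretsManager.rotateSecret({
    SecretId: 'prod/app/db-credentials',
    RotationLambdaARN: 'arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation',
    RotationRules: { AutomaticallyAfterDays: 30 }
}).promise()
  .then(res => console.log(res.VersionId))
  .catch(console.error);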

  AWS Cloudwatch interview Questions and Answers

Ques. 8): How will these secrets be used in my application?

Answer:

To begin, create an AWS Identity and Access Management (IAM) policy that allows your app to access specified secrets. Then, in the source code of the application, you may replace plain text secrets with code that allows you to get these secrets programmatically using the Secrets Manager APIs. Please visit the AWS Secrets Manager User Guide for further information and examples.

  AWS Amplify Interview Questions and Answers

Ques. 9): What is require to access secrets manager?

Answer:

AWS credentials (a combination of access key and secret key)

AWS SDK (server-side SDK or client-side SDK)

 AWS Cloud Interview Questions and Answers Part - 1

Ques. 10): What is the best way to get started with AWS Secrets Manager?

Answer:

To get started with AWS Secrets Manager, follow these steps:

1.       Find out what your secrets are and where they're used in your apps.

2.       Using your AWS credentials, log in to the AWS Management Console and go to the Secrets Manager console.

3.       Upload the secret you discovered using the Secrets Manager console. You can also upload a secret using the AWS SDK or AWS CLI (once per secret). You can also use a script to upload a large number of secrets.

4.       Follow the instructions on the console to set up automatic rotation if your secret hasn't been used yet. Before establishing automatic rotation, do steps (5) and (6) if applications are using your secret.

5.       If other users or applications need to retrieve the secret, write an IAM policy to grant permissions to the secret.

6.     Update your applications to retrieve secrets from Secrets Manager.
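
A minimal sketch of steps (3) and (6) with the AWS SDK for JavaScript v2; the secret name and values are placeholders:

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });

async function storeAndFetch() {
    // Step 3: upload a secret (a JSON document of name/value pairs).
    await secretsManager.createSecret({
        Name: 'dev/app/db-credentials',
        SecretString: JSON.stringify({ username: 'appuser', password: 'change-me' })
    }).promise();

    // Step 6: applications retrieve it programmatically instead of hardcoding it.
    const res = await secretsManager.getSecretValue({
        SecretId: 'dev/app/db-credentials'
    }).promise();
    console.log(JSON.parse(res.SecretString).username);
}

storeAndFetch().catch(console.error);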

 AWS Cloud Interview Questions and Answers Part - 2

Ques. 11): What is the difference between Secrets Manager and Parameter Store?

Answer:

Secrets Manager: It allows you to name and store a single string or binary value of up to 64kbytes. KMS is used to encrypt the full string, with either a default or customer-specified KMS key. The string is usually a JSON object, which the AWS Console will parse and display as individual name-value pairs for you to inspect or change. You'll have to parse the secret yourself if you use the CLI or a programme to access it.

Parameter Store: Individual values are stored using a hierarchical key in Parameter Store (like many others, I omit the "Systems Manager" part of its name). You can obtain individual keys, such as /database/username and /database/password, or all keys that begin with /database. Simple strings, comma-separated lists (which you must parse), and encrypted strings are all possible values (which also support default and custom KMS keys). You can choose whether or not to decrypt encrypted values while retrieving data.

  AWS Cloud Support Engineer Interview Question and Answers

Ques. 12): How does AWS Secrets Manager handle database credential rotation while keeping apps running smoothly?

Answer:

AWS Secrets Manager allows you to set a schedule for database credential rotation, so you can follow security best practices and rotate your database credentials safely. When Secrets Manager starts a rotation, it uses the superuser database credentials you provided to create a clone of the rotating user with the same privileges but a different password. Secrets Manager then updates the secret, so that applications retrieving the database credentials receive the clone user's information. The AWS Secrets Manager Rotation Guide can help you learn more about rotation.

  AWS Solution Architect Interview Questions and Answers

Ques. 13): Is it true that changing database credentials has an influence on open connections?

Answer:

No. When a connection is established, authentication takes place. The open database connection is not re-authenticated when AWS Secrets Manager rotates a database credential.

  AWS DevOps Cloud Interview Questions and Answers

Ques. 14): When AWS Secrets Manager rotates a database credential, how do I know?

Answer:

When AWS Secrets Manager rotates a secret, you can set up Amazon CloudWatch Events to receive a signal. You can also use the Secrets Manager console or APIs to see when a secret was last rotated.

  AWS(Amazon Web Services) Interview Questions and Answers

Ques. 15): What methods does AWS Secrets Manager use to keep my secrets safe?

Answer:

AWS Secrets Manager encrypts secrets at rest using encryption keys that you own and manage in AWS Key Management Service (KMS). AWS Identity and Access Management (IAM) policies can be used to restrict access to the secret. When you retrieve a secret, Secrets Manager decrypts it and sends it to your local environment securely over TLS. By default, Secrets Manager does not write or cache the secret to persistent storage.

  AWS Database Interview Questions and Answers

Ques. 16): In AWS Secrets Manager, who may use and manage secrets?

Answer:

To regulate the access permissions of users and applications to retrieve or manage specific secrets, you can use AWS Identity and Access Management (IAM) policies. You can, for example, set up a policy that only allows developers to access secrets used in the development environment. Visit AWS Secrets Manager Authentication and Access Control for additional information.
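
For example, a minimal sketch of an IAM policy that only allows retrieving secrets under a hypothetical dev/ path (the account ID and naming convention are assumptions):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dev/*"
    }
  ]
}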

  AWS ActiveMQ Interview Questions and Answers

Ques. 17): AWS Secrets Manager encrypts my secrets in what way?

Answer:

AWS Secrets Manager encrypts your secrets using envelope encryption with the AES-256 algorithm, via AWS Key Management Service (KMS).

You can specify the AWS KMS keys used to encrypt secrets when you first use Secrets Manager; if you don't provide a KMS key, Secrets Manager creates AWS KMS default keys for your account. When a secret is stored, Secrets Manager requests a plaintext and an encrypted data key from KMS, uses the plaintext data key in memory to encrypt the secret, and then stores the encrypted secret together with the encrypted data key. When a secret is retrieved, Secrets Manager decrypts the data key (using the AWS KMS default keys) and uses the plaintext data key to decrypt the secret. The data key is never written to disk in plaintext, and Secrets Manager never writes the plaintext secret to persistent storage.

 

Ques. 18): How will AWS Secrets Manager be invoiced and billed to me?

Answer:

There is no minimum price with Secrets Manager; you simply pay for what you use. To start utilising the service, there are no set-up fees or commitments. Your credit card will be automatically charged for the month's usage at the end of the month. Each month, you will be charged for the amount of secrets you store and API requests you make to the service.

Visit AWS Secrets Manager pricing for the most up-to-date pricing information.

 

Ques. 19): Is there a free trial available?

Answer:

Yes, you can use the AWS Secrets Manager 30-day free trial to try Secrets Manager for free. Over the course of the 30-day trial, you can rotate, manage, and retrieve secrets. The free trial begins when you store your first secret.

 

Ques. 20): How do I use Lambda's Secrets Manager?

Answer:

The AWS docs provide a library file for Secrets Manager: AWS Secrets Manager JavaScript (SDK V2) Code Examples. I constructed a wrapper class, SecretsManager, based on this reference; here is the code.

1. Make a SecretsManager.js file that uses aws-sdk to access AWS resources:

'use strict'
const AWS = require('aws-sdk');

class SecretsManager {

    /**
     * Uses AWS Secrets Manager to retrieve a secret.
     */
    static async getSecret(secretName, region) {
        const config = { region: region };
        let secretsManager = new AWS.SecretsManager(config);
        try {
            let secretValue = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
            if ('SecretString' in secretValue) {
                // The secret was stored as a string (e.g. a JSON document).
                return secretValue.SecretString;
            } else {
                // The secret was stored as binary: decode it from base64.
                // (Buffer.from replaces the deprecated new Buffer() constructor.)
                return Buffer.from(secretValue.SecretBinary, 'base64').toString('ascii');
            }
        } catch (err) {
            // Possible error codes include: DecryptionFailureException (Secrets Manager
            // can't decrypt the protected secret text using the provided KMS key),
            // InternalServiceErrorException, InvalidParameterException,
            // InvalidRequestException, and ResourceNotFoundException.
            // Deal with the exception here, and/or rethrow at your discretion.
            throw err;
        }
    }
}

module.exports = SecretsManager;

2. Create an index.js file in your Lambda package that uses the SecretsManager.js class to retrieve a secret value:

/**
 * index.js
 */
const SecretsManager = require('./SecretsManager.js');

exports.handler = async (event) => {
    var secretName = '<SecretName>';
    var region = '<Region>';
    var apiValue = await SecretsManager.getSecret(secretName, region);
    console.log(apiValue);
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

3. Go to console.aws.amazon.com/secretsmanager to create a Secrets Manager entry.

4. That's it. Make a zip file with this code and upload it to Lambda.