
April 19, 2022

Top 20 AWS Elastic Block Store (EBS) Interview Questions and Answers

  

            Amazon Elastic Block Store (EBS) is a block storage service for long-term data storage, providing highly available, block-level storage volumes for use with Amazon EC2 instances. General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic are among the volume types available; they differ in performance, attributes, and pricing. EBS volumes are off-instance storage that persists independently of how long an instance runs.




Ques. 1): What is Elastic Block Store, and how does it work?

Answer:

Amazon Elastic Block Store (EBS) is a high-performance, easy-to-use block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale. EBS provides block-level storage volumes for EC2 instances, and these volumes are off-instance storage that persists independently of the life of an instance.




Ques. 2): What are the advantages of using Amazon EBS?

Answer:

The benefits of Amazon EBS are as follows:

Reliable and secure storage - Each EBS volume is automatically replicated within its Availability Zone to protect against component failure.

Secure - Amazon's flexible access control policies let you decide who can access which EBS volumes. Access control combined with encryption provides a strong defense-in-depth security strategy for your data.

Higher performance - Amazon EBS takes advantage of SSD technology to deliver consistent application I/O performance.

Simple data backup - Point-in-time snapshots of Amazon EBS volumes provide straightforward data backup.




Ques. 3): What is EBS Block Express, and how does it work?

Answer:

EBS Block Express is the next version of Amazon EBS storage server architecture, designed to provide the highest levels of performance for block storage at cloud scale with sub-millisecond latency. Block Express accomplishes this by communicating with Nitro System-based EC2 instances via Scalable Reliable Datagrams (SRD), a high-performance, low-latency network protocol. This is the same high-performance, low-latency network interface used in Elastic Fabric Adapter (EFA) for High Performance Computing (HPC) and Machine Learning (ML) applications for inter-instance communication. Block Express also provides modular software and hardware building blocks that can be built in a variety of ways, allowing us to design and deliver greater performance and new features more quickly.




Ques. 4): What are the various types of EBS volumes?

Answer:

There are five types of EBS volumes available, as described below:

General Purpose SSD (gp2): SSD (Solid State Drive) volumes are what EC2 uses by default as the root volume of your instance. SSDs are many times faster than HDDs (Hard Disk Drives) for small input/output operations, and gp2 offers a good balance of price and performance (measured in IOPS - Input/Output Operations per Second).

Provisioned IOPS SSD (io1): This is the most expensive and highest-performance EBS volume type. These volumes are designed for I/O-intensive applications, such as large relational or NoSQL databases.

Throughput Optimized HDD (st1): These are low-cost magnetic storage volumes whose performance is measured in terms of throughput.

Cold HDD (sc1): These are even less expensive magnetic storage options than Throughput Optimized. They are intended for large, sequential cold workloads, such as those found on a file server.

Magnetic (standard): These are older generation magnetic drives that are best suited for workloads with infrequent data access.




Ques. 5): When would I want to use FSR (Fast Snapshot Restore)?

Answer:

If you are concerned about data access latency when restoring data from a snapshot to a volume and wish to prevent the first performance hit during initialization, you should enable FSR on snapshots. Virtual desktop infrastructure (VDI), backup and restore, test/dev volume copies, and booting from custom AMIs are all examples of use cases for FSR. When you enable FSR on your snapshot, you'll get better and more predictable results anytime you need to restore data from it.
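
As a rough illustration, FSR can also be enabled programmatically. The following is a minimal boto3 (Python SDK) sketch; the snapshot ID, Region, and Availability Zone are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable Fast Snapshot Restore for one snapshot in one Availability Zone
# (snap-0123456789abcdef0 and us-east-1a are hypothetical values).
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)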




Ques. 6): What are the different kinds of EBS Volumes?

Answer:

There are three types of EBS volume:

General Purpose EBS (SSD): This volume type is appropriate for small and medium workloads, such as root disk EC2 volumes, small and medium database workloads, and workloads that access logs regularly. By default, these SSDs support 3 IOPS/GB, which means that a 1 GB volume provides 3 IOPS and a 10 GB volume provides 30 IOPS. Storage size ranges from 1 GB to 1 TB per volume, at $0.10 per GB per month.

Provisioned IOPS (SSD): This volume type is best for I/O-intensive transactional workloads as well as large relational, EMR, and Hadoop workloads. Provisioned IOPS SSDs support 30 IOPS/GB by default, so a 10 GB volume can provide 300 IOPS. Storage size ranges from 10 GB to 1 TB per volume, at $0.125 per GB per month of provisioned storage plus $0.10 per provisioned IOPS per month.

EBS Magnetic Volumes: Previously known as standard volumes. This volume type is suited for workloads with infrequent data access, such as data backups for recovery and log storage. Storage size ranges from 10 GB to 1 TB per volume, at $0.05 per GB per month of provisioned storage and $0.05 per million I/O requests.




Ques. 7): How can I change an existing EBS volume's capacity, performance, or type?

Answer:

It's simple to change a volume's configuration. With Elastic Volumes, you can expand capacity, tune performance, or change the volume type using a single CLI call, a single API call, or a few console clicks. See the Elastic Volumes documentation for more information.
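
For example, a single ModifyVolume call via boto3 might look like this sketch (the volume ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Grow the volume to 200 GiB, switch it to gp3, and raise provisioned IOPS,
# all in one ModifyVolume call (vol-0123456789abcdef0 is hypothetical).
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    Size=200,
    VolumeType="gp3",
    Iops=4000,
)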




Ques. 8): What is the Amazon Web Services (AWS) Key Management Service (KMS)?

Answer:

AWS KMS is a managed service that allows you to easily create and manage the encryption keys used to encrypt your data. AWS KMS integrates with other AWS services such as Amazon EBS, Amazon S3, and Amazon Redshift to make it simple to encrypt your data with encryption keys that you control. AWS KMS is also integrated with AWS CloudTrail to provide logs of all key usage, helping you satisfy regulatory and compliance requirements.




Ques. 9): How can we change the default root EBS volume size in CloudFormation?

Answer:

Use the BlockDeviceMappings property of the instance resource:

"BlockDeviceMappings": [
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "VolumeType": "io1",
      "Iops": 300,
      "DeleteOnTermination": false,
      "VolumeSize": 30
    }
  }
]

 



Ques. 10): What happens if the 'deleteOnTermination' flag isn't set consistently on all of my attached instances?

Answer:

The volume's deleteOnTermination behaviour is determined by the configuration of the last attached instance to be terminated. To ensure predictable delete-on-termination behaviour, enable or disable 'deleteOnTermination' consistently for all instances to which the volume is attached.

Enable 'deleteOnTermination' for all attached instances if you want the volume to be deleted when those instances are terminated. Disable 'deleteOnTermination' for all attached instances if you want to keep the volume after the attached instances have been terminated. See the Multi-Attach technical documentation for further information.
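
As a sketch, the flag can be set per instance with a ModifyInstanceAttribute call via boto3; the instance ID and device name below are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Disable DeleteOnTermination for the volume attached at /dev/sdf
# (i-0123456789abcdef0 and /dev/sdf are hypothetical values).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdf", "Ebs": {"DeleteOnTermination": False}}
    ],
)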




Ques. 11): How to Set Up Amazon EBS?

Answer:

Use the following steps for setting up Amazon EBS:

STEP 1 - Create Amazon EBS volume.

STEP 2 - Restore the EBS volume from a snapshot (optional).

STEP 3 - Attach EBS Volume to an Instance.

STEP 4 - Detach a volume from Instance.
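
A minimal boto3 sketch of these steps (the Availability Zone, snapshot, volume, and instance IDs are placeholders; waiters for volume state are omitted for brevity):

import boto3

ec2 = boto3.client("ec2")

# STEP 1/2 - Create a volume, optionally restored from a snapshot.
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10,
    VolumeType="gp3",
    # SnapshotId="snap-0123456789abcdef0",  # uncomment to restore from a snapshot
)

# STEP 3 - Attach the volume to an instance (IDs are hypothetical).
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)

# STEP 4 - Detach the volume when it is no longer needed.
ec2.detach_volume(VolumeId=vol["VolumeId"])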




Ques. 12): Is it necessary to unmount volumes before taking a snapshot?

Answer:

No, snapshots can be taken in real time while the volume is mounted and in use. However, snapshots only capture data that has already been written to your Amazon EBS volume, so any data cached locally by your application or OS may be missed. To ensure consistent snapshots of volumes attached to an instance, we recommend cleanly unmounting the volume, issuing the snapshot command, and then remounting the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the instance to take a clean snapshot.
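
For instance, a point-in-time snapshot can be requested with a single call; this boto3 sketch uses a placeholder volume ID:

import boto3

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of a volume
# (vol-0123456789abcdef0 is a hypothetical ID).
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Consistent snapshot taken after unmounting",
)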




Ques. 13): Does the read and write I/O size of my application affect the rate of IOPS I get from my Provisioned IOPS SSD (io2 and io1) volumes?

Answer:

It certainly does. The IOPS rate you obtain from provisioned io2 or io1 volumes depends on the I/O size of your application's reads and writes. Provisioned IOPS volumes use a base I/O size of 16 KB. So, if you provisioned a volume with 40,000 IOPS and your application issues 16 KB I/Os, it will achieve the full 40,000 IOPS. If you increase the I/O size to 32 KB, you can reach at most 20,000 IOPS, and so on, because the volume's maximum throughput stays the same.




Ques. 14): How do I transfer files from one EBS to another?

Answer:

To copy files from one EBS volume to another, we need to attach both volumes to an instance (and we can stage the contents on a third storage option if the volumes are not attached to instances). Follow these steps:

·      Start a temporary instance. Use a larger instance size for higher I/O bandwidth.

·      Attach both EBS volumes to the instance and mount them as, say, /vol1 and /vol2.

·      Copy the files from /vol1 to /vol2.

·      Unmount the volumes, detach the EBS volumes, and terminate the temporary instance.
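
The attach step can be scripted with the SDK; this boto3 sketch (all IDs and device names are hypothetical) attaches both volumes to the temporary instance, after which the mount and copy happen on the instance itself:

import boto3

ec2 = boto3.client("ec2")

# Attach both EBS volumes to the temporary instance
# (instance ID, volume IDs, and device names are hypothetical).
for volume_id, device in [
    ("vol-0aaaaaaaaaaaaaaaa", "/dev/sdf"),  # will be mounted as /vol1
    ("vol-0bbbbbbbbbbbbbbbb", "/dev/sdg"),  # will be mounted as /vol2
]:
    ec2.attach_volume(
        VolumeId=volume_id,
        InstanceId="i-0123456789abcdef0",
        Device=device,
    )

# On the instance: mount the devices as /vol1 and /vol2, copy the files,
# then unmount before detaching the volumes here with detach_volume().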




Ques. 15): Does the size of the read and write I/O in my application affect the rate of throughput I obtain from my HDD-backed volumes?

Answer:

Yes. The throughput rate you get depends on your application's read and write I/O sizes. HDD-backed volumes process reads and writes in 1 MB I/O units: sequential I/Os are merged and processed as 1 MB units, and each non-sequential I/O is also counted as 1 MB even when the actual I/O is smaller. A transactional workload with small, random I/Os, such as a database, will therefore not perform well on HDD-backed volumes, whereas sequential, large I/Os will sustain the advertised st1 and sc1 performance for longer.




Ques. 16): What is the maximum storage capacity of an EBS device?

Answer:

Most EBS volume types currently support a maximum volume size of 16 TiB (io2 Block Express volumes go up to 64 TiB). This means you can create an EBS volume of up to that capacity, but whether the OS recognises all of it depends on the OS's own design characteristics and on how the volume is partitioned.




Ques. 17): When an EBS volume fails, how do you make it available with no downtime and link it to an EC2 instance?

Answer:

You can use a load balancer and Auto Scaling to keep the service available with no downtime: if the EC2 instance goes down, Auto Scaling launches a new instance, and a startup shell script can contain the commands to attach and mount the EBS volume. We can also take frequent backups and, if an EBS volume fails, replace it with a volume restored from the most recent backup or snapshot.




Ques. 18): When an Amazon EC2 instance is terminated, what happens to my data?

Answer:

Data stored on an Amazon EBS volume, unlike data stored on a local instance store (which persists just as long as the instance is alive), can persist regardless of the instance's life. As a result, we suggest that you only use the local instance storage for transient data. We recommend using Amazon EBS volumes or backing up data to Amazon S3 for data that requires a higher level of durability. If you're using an Amazon EBS volume as a root partition, make sure the Delete on termination flag is set to "No" if you want the Amazon EBS volume to survive the instance's life.




Ques. 19): What can I expect from Amazon EBS volumes in terms of performance?

Answer:

Amazon EBS offers seven volume types: Provisioned IOPS SSD (io2 Block Express, io2, and io1), General Purpose SSD (gp3 and gp2), Throughput Optimized HDD (st1), and Cold HDD (sc1). These volume types differ in performance and cost, allowing you to match storage performance and cost to your applications' requirements. Typical latency between EC2 instances and EBS is in the single-digit milliseconds. See the EBS product details page for more information about performance.

 

Ques. 20): What's the difference between io2 Block Express and io2?

Answer:

For all EC2 instances, io2 volumes provide high-performance block storage. Attaching io2 volumes to R5b instance types, which operate on Block Express and provide 4x the performance of io2, is recommended for applications that demand even more performance. With sub-millisecond average IO latency, you can achieve up to 64 TiB capacity, 256,000 IOPS, and 4,000 MB/s throughput from a single io2 volume.

 

 

 

April 16, 2022

Top 20 AWS Simple Storage Service (S3) Interview Questions and Answers

  

Ques. 1): What is Amazon S3?

Answer:

Amazon S3 is a storage service that offers industry-leading scalability. We can use S3 to store and retrieve any amount of data at any time, from anywhere on the internet.

We can store a virtually unlimited number of objects, with individual objects ranging in size from 0 bytes to 5 terabytes, and we can manage everything through the AWS Management Console, a simple and intuitive web interface. Amazon S3 is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs.
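
As a bare-bones illustration, storing and retrieving an object with boto3 might look like this sketch (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Store an object ("my-example-bucket" and "notes/hello.txt" are hypothetical).
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt", Body=b"Hello, S3!")

# Retrieve the same object and read its contents.
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read())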


Ques. 2): When it comes to storage, what's the difference between object and block storage?

Answer:

With block-level storage, data on a raw physical storage device is divided into individual blocks and managed by a file system. The file system is responsible for allocating space for files and data kept on the underlying device, and for providing access whenever the operating system needs to read data.

An object storage system like S3 instead provides a flat address space for data storage. This straightforward approach avoids some of block storage's OS-related complications and makes any amount of storage capacity easy to use.

When you upload files to S3, you can include up to 2 KB of metadata with each object. The metadata is made up of key-value pairs that record details such as permissions, while object key prefixes make objects appear to sit in a folder hierarchy within a bucket.
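
For example, user-defined metadata can be attached at upload time; this boto3 sketch uses placeholder bucket, key, and metadata names:

import boto3

s3 = boto3.client("s3")

# Upload an object with user-defined metadata (all names are hypothetical).
# S3 stores these entries as x-amz-meta-* headers alongside the object.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/q1.csv",
    Body=b"col1,col2\n1,2\n",
    Metadata={"department": "finance", "owner": "analytics-team"},
)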


Ques. 3): What is Amazon Web Services (AWS) Backup?

Answer:

  • We can store backups on the AWS Cloud using the AWS Backup service.
  • It merely serves as a conduit or proxy for storing backups in the cloud.
  • AWS Backup can back up a variety of AWS products, including EBS volumes (used by EC2 instances or virtual machines).
  • RDS databases, DynamoDB, and even Amazon Elastic File System, or EFS, may all be backed up.
  • To do so, you'll need to construct a backup plan that includes scheduling, retention, and the ability to tag the recovery points that are saved as backups.
  • AWS Backup has a scheduling feature that is related to the recovery point goal. The RPO, or recovery point objective, is a disaster recovery phrase that expresses the greatest amount of data loss that may be tolerated in terms of time. Within the backup plan, we have backup retention rules and lifecycle rules for changing the storage class of items that are backed up.
  • To store the recovery points, we'll need a backup vault.
  • We can either pick objects based on their AWS resource ID or specify AWS Resources to be assigned to backup plans.
  • Using the consolidated AWS backup console, we can keep track of backup operations.
  • We can also perform backups on demand. We don't have to wait for the schedule, and we can restore data from backups taken before.


Ques. 4): What is S3 Versioning?

Answer:

Versioning is a feature that S3 buckets support. Versioning is turned on for the bucket as a whole. Versioning allows you to track the numerous changes made to a file over time. When versioning is enabled, each file receives a unique Version ID each time it is uploaded. Consider the following scenario: a bucket contains a file, and a user uploads a fresh updated copy of the same file to the bucket; both files have their own Version ID and timestamps from when they were uploaded. So, if one has to go back in time to an earlier version of a file, versioning makes it simple. Please keep in mind that versioning can be costly in a variety of situations.

Also, while S3's versioning may appear to be similar to Version Control System (VCS), it is not. Please utilise Git, SVN, or any similar software if your developers require a VCS solution.
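
Versioning is enabled per bucket; a minimal boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Turn on versioning for the whole bucket ("my-example-bucket" is hypothetical).
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Each subsequent upload of the same key now receives its own Version ID.
resp = s3.put_object(Bucket="my-example-bucket", Key="doc.txt", Body=b"v2")
print(resp["VersionId"])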


Ques. 5): What is a resource-based bucket policy, and how does it work?

Answer:

•        A resource-based bucket policy lets us allow another AWS account to upload objects to our bucket, and a conditional statement in the policy can require that full control permissions be granted to a specific account identified by its ID (for example, by requiring the bucket-owner-full-control ACL on uploads).

•        A resource-based ACL cannot be combined with an IAM policy for this, because ACLs do not support conditional statements.
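
A sketch of such a resource-based policy, granting another account PutObject while requiring the bucket-owner-full-control ACL (the account ID and bucket name are placeholders):

import json
import boto3

s3 = boto3.client("s3")

# Allow a second account to upload, but only with bucket-owner-full-control
# ("111122223333" and "my-example-bucket" are hypothetical values).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*",
        "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
    }],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))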


Ques. 6): What are the prerequisites for using AWS SDK S3 with a Spring Boot app?

Answer:

We'll need a few things to use the AWS SDK:

1.      AWS Account: We'll need an account with Amazon Web Services. If you don't already have one, go ahead and create one.

2.      AWS Security Credentials: These are the access keys that allow us to call AWS API actions programmatically. We can obtain these credentials in one of two ways: from the access keys section of the Security Credentials page using AWS root account credentials, or from the IAM console using IAM user credentials. Refer to "How to generate access key and secret key to access Amazon S3".

3.      AWS Region to store S3 object: We must choose an AWS region (or regions) to store our Amazon S3 data. Remember that the cost of S3 storage varies by region. Visit the official documents for more information.

4.      AWS S3 Bucket: We will need S3 Bucket to store the objects/files.


Ques. 7): What Scripting Options Are Available When Mounting a File System to Amazon S3?

Answer:

There are a number of different ways to set up Amazon S3 as a local drive on Linux-based systems, including on EC2 instances with Amazon S3 mounted.

1. S3FS-FUSE: A free, open-source FUSE plugin that supports major Linux and macOS systems. S3FS also caches files locally to improve performance. With this plugin, the Amazon S3 bucket appears as a disk on your machine.

2. ObjectiveFS: A commercial FUSE plugin that supports Amazon S3 and Google Cloud Storage backends. It claims to provide a full POSIX-compliant file system interface, avoiding the need for appends to rewrite entire files, and to offer efficiency comparable to a local drive.

3. RioFS: A lightweight utility written in the C programming language. RioFS is similar to S3FS but has a few limitations: it does not support appending to files, it does not fully support a POSIX-compliant file system interface, and it cannot rename files.


Ques. 8): What is Amazon S3 Replication, and how does it work?

Answer:

AWS S3 Replication facilitates asynchronous object copying between AWS S3 buckets.

It is an elastic, low-cost, fully managed feature for copying objects between buckets that provides the flexibility and controls we need to meet data sovereignty and other business requirements.


Ques. 9): What Are The AWS S3 Storage Classes?

Answer:

Some of the storage classes offered in S3 are as follows:

  • S3 Standard: The default storage class, used when no other storage class is specified during upload.
  • S3 Standard-Infrequent Access (Standard-IA): For data that is accessed less frequently but must still be retrieved quickly and without delay.
  • S3 Reduced Redundancy Storage (RRS): Stores noncritical, easily reproducible data at lower levels of redundancy, and lower cost, than S3 Standard.
  • S3 Glacier: Designed for low-cost data archiving and backup.


Ques. 10): What is CloudFront in Amazon S3?

Answer:

CloudFront is a Content Delivery Network that can pull data from an Amazon S3 bucket and distribute it across many data centers.

It delivers data through a worldwide network of edge locations; user requests are routed to the nearest edge location, resulting in low latency, low network traffic, and fast access to data.


Ques. 11): Using the GUI, how will you upload files to S3?

Answer:

•        S3 buckets are cloud storage resources in AWS.

•        Log in to the S3 Management Console. It shows a list of buckets.

•        If the bucket is empty, we can start by creating folders to organise our files.

•        Create a folder. The screen shows a four-column table whose column headers are Name, Last modified, Size, and Storage class.

•        For encryption, use the bucket settings, leave the defaults, and click Save.

•        As a result, we have created a folder in our S3 bucket.

•        Within the Overview tab, we may create a subordinate folder.

•        Using the Upload dialogue and Select files, we can upload files.

•        We can upload files by dragging and dropping them from elsewhere on the screen, or by choosing Add files.

•        Once files are selected, the total number of files and their combined size appear at the top, which helps estimate how long the upload will take given your Internet connection speed.

•        We can see the Target path, which shows that the files will be uploaded to our bucket's Projects folder.

•        If we forgot to add files, we can simply click Add more files.

•        If we don't want to upload something, we can click the x to remove it.

•        Both Read and Write permissions can be granted, and other AWS accounts can be given rights to the uploaded objects.

•        We can modify the encryption by selecting a file, opening the Actions menu, and choosing Change encryption.

 

Ques. 12): Can you explain the S3 Block Public Access feature?

Answer:

•     New buckets, access points, and objects do not allow public access by default. To provide public access, users can change bucket policies, access point policies, or object permissions.

•     We get a list of buckets in the S3 Management Console.

•     For instance, select an existing bucket.

•     Go to the Permissions tab.

•     Overview, Properties, Permissions, and Management are the four tabs on the page. The Overview tab is now active. We have a number of alternatives when we click to the Permissions tab. Block public access, Access Control List, Bucket Policy, and CORS configuration are the four choices available.

•     Select Block public access. Click Edit, select Block all public access, and save the change.

•     This blocks public access to buckets and objects granted through any access control list, as well as access granted through new public bucket policies.
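
The same settings can be applied programmatically; a boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Block all four categories of public access for the bucket
# ("my-example-bucket" is hypothetical).
s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)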

 

 

Ques. 13): What is S3 Bucket Encryption, and how does it work?

Answer:

•        Encryption ensures data security, and you may set default encryption on an S3 bucket, ensuring that all things uploaded to S3 are encrypted.

•        We may also check individual objects within a bucket to see whether they are encrypted separately.

•        Open the bucket settings in the S3 Management Console by clicking on it.

•        The corresponding page opens when you click the bucket.

•        Overview, Properties, Permissions, and Management are the four tabs on the page.

•        The Overview tab is now active. It has the following options: Upload, Create folder, Download, and Actions.

•        Select the Properties tab from the drop-down menu.

•        Encryption is disabled by default.

•        Select that panel by clicking on it. It is currently set to None.

•        With Amazon-managed keys, we may use AES-256 (Advanced Encryption Standard, 256-bit) server-side encryption. We can also use the Key Management Service (AWS-KMS managed keys), which lets us select the encryption keys ourselves.
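
Default bucket encryption can also be set outside the console. A minimal boto3 sketch using SSE-S3 (AES-256); the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Enable default AES-256 (SSE-S3) encryption for every new object
# ("my-example-bucket" is hypothetical; swap in "aws:kms" plus a
# KMSMasterKeyID to use AWS KMS-managed keys instead).
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)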

 

Ques. 14): What is S3 Lifecycle Management, and how would you construct a rule?

Answer:

•        An S3 Lifecycle configuration is a set of rules that specify how Amazon S3 handles a collection of objects.

•        Lifecycle management settings determine what happens to S3 objects over time. For example, you might choose to transition an object to the S3 Standard-IA storage class 30 days after creating it, or archive it to the S3 Glacier storage class one year after creating it.

•        Make a Rule:

•        When we open the S3 Management Console, we can see that S3 is divided into three portions. The toolbar is the first part. The navigation pane is the second part. In the navigation pane, the Buckets option is selected. A content pane is the third component.

•        Any S3 bucket pane can be opened. Overview, Properties, Permissions, and Management are among the tabs.

•        For the bucket's contents, we can actually establish lifecycle management settings. When a user selects the Management tab, Lifecycle appears as an option. If a user clicks on the bucket, for example, the associated page opens. He chooses the Management option. Lifecycle, Replication, Analytics, Metrics, and Inventory are among the tabs.

•        Navigate to the Lifecycle tab. It has the Add lifecycle rule, Edit, Delete, and Actions buttons. The user selects Add lifecycle rule from the drop-down menu. The dialogue box for the Lifecycle rule appears. A page named "Name and Scope" is now available.

•        Add a lifecycle Rule1.

•        Within an S3 bucket, users can activate versioning. We can choose whether this lifecycle rule applies to the current or earlier version of files or objects. Select the most recent option. Then select Add Transition from the drop-down menu.

•        There is a drop-down list box labelled Object creation, as well as a text box labelled Days after creation.

•        The Transition to Standard-IA after, Transition to Intelligent-Tiering after, Transition to One Zone-IA after, and Transition to Glacier after options are available in the Object creation drop-down list box.

•        Select Glacier if the user requires objects to be archived rather than removed or deleted. We can then specify that 365 days (one year) after an object is created, it is automatically archived to Glacier.

•        Now, Glacier is a cheaper storage mechanism over the long term.

•        Click on next and save finally to create Rule 1.
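
The same rule, archiving objects to Glacier 365 days after creation, can be expressed in a boto3 sketch; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Rule1: transition objects to Glacier 365 days after creation
# ("my-example-bucket" is hypothetical).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "Rule1",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
        }]
    },
)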

  

Ques. 15): How Do I Control Who Has Access To An S3 Bucket?

Answer:

Some of the most frequent methods for controlling access to an S3 bucket are as follows:

S3 Access Points: We can create application-specific access points and use them to manage access to shared S3 datasets.

S3 Bucket Policies: We can set up access policies to control who has access to S3 resources. Permissions that only apply to objects within a bucket can also be configured at the bucket level.

Access Control List (ACL): We can use ACL to manage access to S3 resources and objects within a bucket.

IAM: To govern access to S3 resources, we can utilise AWS Identity and Access Management (IAM) Groups, Roles, and Users.

 

Ques. 16): What is an object URL, and how does one create one?

Answer:

In AWS parlance, any file uploaded to S3 is referred to as an 'object.' Each object stored in an S3 bucket is assigned a unique URL. This URL is simply the object's address, and if the object is public it can be used to access the object over the internet. The object URL is made up of 'https://', the bucket name, 's3-' followed by the Region API name, '.amazonaws.com/', the file name with extension, and finally '?versionId=' with the version ID. As an example, consider the following.

https://bucket1.s3-eu-west-1.amazonaws.com/test1.txt?versionId=BdfDasp.WSEVgRTg46DF_7MnbVcxZ_4AfB

Please note that if this bucket were in the Virginia Region, the URL would not contain the Region API name and would appear as follows:

https://bucket1.s3.amazonaws.com/test.txt?versionId=BdfDasp.WSEVgRTg46DF_7MnbVcxZ_4AfB
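
For objects that are not public, a time-limited presigned URL serves the same purpose; a boto3 sketch reusing the bucket and key names from the example above:

import boto3

s3 = boto3.client("s3")

# Generate a URL that grants read access to a private object for one hour
# ("bucket1" and "test1.txt" follow the example above).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket1", "Key": "test1.txt"},
    ExpiresIn=3600,
)
print(url)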

 

Ques. 17): What Is AWS S3's Object Lock Feature?

Answer:

AWS S3's object lock functionality allows users to save data in WORM (write-once, read-many) format.

For a limited time or indefinitely, the user can prevent the data from being erased or rewritten in this way. Organizations use the AWS S3 object lock capability to comply with WORM storage regulatory requirements.

 

Ques. 18): What Are the Retention Methods for Object Locks?

Answer:

The two main object retention choices are as follows:

Retention Time: A user can provide a retention period (days, months, or years) for their object in the S3 bucket using this technique. No one can overwrite or remove the protected object during this time.

Legal Hold: This option does not specify a duration for the object lock; it remains active until a user explicitly removes it.
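
Both options can be set via the API. This boto3 sketch (the bucket and key names are placeholders, and Object Lock must have been enabled when the bucket was created) sets a default 30-day retention period and places a legal hold on one object:

import boto3

s3 = boto3.client("s3")

# Default retention: protect new objects for 30 days in COMPLIANCE mode
# ("my-locked-bucket" is hypothetical and must have Object Lock enabled).
s3.put_object_lock_configuration(
    Bucket="my-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Legal hold: stays on until explicitly removed, regardless of duration.
s3.put_object_legal_hold(
    Bucket="my-locked-bucket",
    Key="contracts/agreement.pdf",
    LegalHold={"Status": "ON"},
)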

 

Ques. 19): How would you upgrade or downgrade a system with near-zero downtime?

Answer:

The following steps can help us upgrade or downgrade a system having near-zero downtime:

Step 1: Enter the EC2 console

Step 2: Navigate to the AMI operating system

Step 3: Use the recent instance type to open an instance

Step 4: Install updates and applications

Step 5: Check the instance if it’s working or not

Step 6: If the new instance is working, replace the old instance with the new one

Step 7: Once the new instance is in place, the system has been upgraded or downgraded with near-zero downtime.

 

Ques. 20): In S3, what is Static Website Hosting?

Answer:

A static website consists of documents written in HTML, CSS, or JavaScript kept in an AWS S3 bucket, with the bucket acting as a web server. Other AWS options are available for hosting dynamic websites.

To host a static website, upload an HTML page to an AWS S3 bucket. The 'Static Website Hosting' option is easily found in the bucket properties: select the Enable option and specify the index file that was uploaded to S3. To keep things simple, the index document should be uploaded to the root of the S3 bucket.
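
Enabling static website hosting can also be done programmatically; a boto3 sketch with placeholder bucket and file names:

import boto3

s3 = boto3.client("s3")

# Upload the index page to the root of the bucket
# ("my-site-bucket" and the page contents are hypothetical).
s3.put_object(
    Bucket="my-site-bucket",
    Key="index.html",
    Body=b"<html><body>Hello</body></html>",
    ContentType="text/html",
)

# Enable static website hosting, pointing at the uploaded index document.
s3.put_bucket_website(
    Bucket="my-site-bucket",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)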