AWS Fargate Interview Questions and Answers
Ques. 1): What is AWS Fargate, and how does it work?
Answer:
AWS Fargate is a serverless container compute engine that works with Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate lets you focus on developing your applications: you simply specify the resources your application needs, and Fargate provisions compute resources in a highly secure and isolated environment.
Ques. 2): What are containers, exactly?
Answer:
Shipping software can be difficult because it must run in a variety of environments; to overcome this, developers utilise a technology known as containers. Each container packages the whole runtime environment: the programme itself plus any dependencies, libraries, frameworks, and files required to run it.
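As a hedged illustration, a minimal Dockerfile for a hypothetical Python web app shows how a container image bundles the application with its runtime and dependencies (file and app names here are assumptions, not from the source):

```dockerfile
# Base image provides the runtime environment
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# Command the container runs on start
CMD ["python", "app.py"]
```

The resulting image carries everything the programme needs, so it runs the same way on any host with a container runtime.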
Ques. 3): What are the advantages of AWS Fargate?
Answer:
· Application deployment and management: You only interact with and pay for your containers, avoiding the operational overhead of scaling, patching, securing, and managing servers. Fargate ensures that the infrastructure supporting your containers is always patched.
· Secure isolation by design: Each ECS task or EKS pod runs in its own kernel runtime environment and does not share CPU, memory, storage, or network resources with other tasks and pods.
· No cluster management: With Fargate you only need to think about containers and can concentrate on building and running your application; the Fargate service manages all infrastructure needs.
Ques. 4): How do you run an application on AWS Fargate?
Answer:
• Create a container image: Build an image that is appropriate for your application. A container image is a read-only template that you can build with Docker, save in a registry, and then execute on a cluster.
• Select an orchestration service, such as Amazon ECS or Amazon EKS.
• Create a cluster: a cluster lets you keep track of all the containers and resources you've assigned to your application.
• Launch with the AWS Fargate option: when you run your containers with the Fargate launch type, Fargate provisions and manages the underlying cluster infrastructure for you.
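A minimal sketch of an ECS task definition registered for the Fargate launch type might look like the following (the family name, role ARN, and image URI are hypothetical). Note the `FARGATE` compatibility, `awsvpc` network mode, and task-level CPU/memory, all of which Fargate requires:

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```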
Ques. 5): Is Fargate less expensive than EC2?
Answer:
Not really; you could likely achieve a lower price by running your programme directly on t3 instances rather than using a Fargate container.
Fargate's strength is that it makes container adoption simple. Previously, you had to operate your own container service (which resided on EC2 instances you paid for) before you could run a container with ECS/EKS. With Fargate you can just stand up the container; the rest of the container service fades into the background.
With Fargate you are really paying for ease:
• create your Dockerfile
• push it to ECR - Elastic Container Registry
• and consume it with a container
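The steps above can be sketched with Docker and the AWS CLI (`<account-id>` and `<region>` are placeholders, and the repository name `my-app` is an assumption):

```shell
# Build the image from your Dockerfile
docker build -t my-app .

# Authenticate Docker to your ECR registry
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

# Tag and push the image to ECR
docker tag my-app:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-app:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/my-app:latest
```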
My hypothesis is that the pricing is higher because the infrastructure required to keep a container service running in a highly available and secure manner costs more than simply assigning a t3 EC2 instance.
Ques. 6): AWS Fargate supports which use cases?
Answer:
AWS Fargate supports all of the major container use cases, such as microservices architecture apps, batch processing, machine learning applications, and migrating on-premises applications to the cloud.
Ques. 7): What are pros and cons of AWS Fargate?
Answer:
Pros:
· Less Complexity
· Better Security
· Lower Costs (Maybe)
Cons:
· Less Customization
· Higher Costs (Maybe)
· Region Availability
Ques. 8): What's the difference between Amazon EC2 and Amazon Fargate, and how do you use them?
Answer:
With the EC2 launch type, billing is based on the price of the underlying EC2 instances. This lets you reduce costs by using billing models such as Spot Instances, Reserved Instances, and so forth. However, it is your responsibility to ensure that containers are densely packed onto the instances in order to get the most out of them; otherwise, money is wasted. With the Fargate launch type, billing is based on the CPU and memory your tasks request, charged per second. You only pay for the tasks you run; there is no need to pay for unused EC2 instances.
Ques. 9): What's the best way to put a pod on a Fargate worker node?
Answer:
You must define a Fargate Profile, which is essentially a list of namespaces. One can be created in the AWS Console by going to EKS > your cluster > Compute > Add Fargate Profile.
Once the Fargate Profile has been created, when you create a pod whose namespace matches one of the namespaces in the profile, a Fargate worker node will be provisioned and the pod deployed on that worker.
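As a sketch, the same Fargate Profile can also be declared in an `eksctl` cluster config file (the cluster name, region, and namespace are hypothetical):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-east-1

fargateProfiles:
  - name: fp-default
    selectors:
      # Pods created in this namespace are scheduled onto Fargate
      - namespace: my-app
```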
Ques. 10): What are the advantages of using AWS Fargate to run airflow?
Answer:
We can run Airflow's core components on AWS Fargate without having to create and manage servers. Fargate ensures that the infrastructure supporting your containers is always patched and up to date. Fargate lets you match compute resources to the requirements of your Airflow jobs and automatically adds capacity when your cluster becomes busier, without paying for idle capacity. Each Fargate task runs in its own virtual-machine-isolated environment, ensuring that concurrent tasks do not compete for resources.
Ques. 11): On Fargate, how can I upgrade Kubernetes?
Answer:
Once the control plane has been upgraded to the newer version, any newly created Fargate pods will run on worker nodes whose AMI matches the version the control plane is running.
There is no automated way to upgrade existing pods; they must be deleted and recreated. To accomplish this without causing downtime for the app, use Kubernetes Deployments, which can perform rolling updates.
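A hedged sketch of such a Deployment (names, namespace, and image are assumptions): deleting its pods, or triggering a rollout, recreates them one batch at a time on Fargate nodes running the new version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app   # must match a namespace in the Fargate Profile
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep most replicas serving during the update
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```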
Ques. 12): So, what is a Node Group worker?
Answer:
These are “normal, regular, vanilla, plain ‘ol” Kubernetes worker nodes based on EC2 instances. The Kubernetes Scheduler will cram as many pods onto these VMs as it can until CPU, memory, or network interface resources are exhausted. When you create a Node Group you select:
· Name
· Node IAM Role
· AMI Type (Linux x64, x64+GPU, or ARM)
· Instance Type (e.g., t3.medium)
· Disk Size (for the node’s local storage)
You pay for these EC2 instances regardless of their utilization.
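The selections above map onto an `eksctl` managed node group definition, sketched here with assumed values (eksctl creates the Node IAM Role by default unless you override it):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-east-1

managedNodeGroups:
  - name: ng-workers        # Name
    amiFamily: AmazonLinux2 # AMI Type
    instanceType: t3.medium # Instance Type
    volumeSize: 20          # Disk Size, in GiB
    desiredCapacity: 2
```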
Ques. 13): What's the difference between AWS Fargate and Elastic Beanstalk multi-container?
Answer:
Multi-container Elastic Beanstalk uses ECS under the hood to run the cluster, allowing it to operate as a Platform-as-a-Service (PaaS).
Fargate is fully managed, so you don't have to worry about the cluster or servers as you do with Elastic Beanstalk. You merely manage a Docker container, which makes it more of a SaaS-style model.
Ques. 14): Does autoscaling work?
Answer:
There are a few things to unpack here, as there is both node autoscaling and (horizontal) pod autoscaling.
· Does Fargate support Horizontal Pod Autoscaling? Yes. What about Node Groups? Also yes. Make sure that Metrics Server is enabled in either case.
· For node autoscaling with Fargate, simply adding another pod transparently gets you another Fargate-managed worker node. If you're using Node Groups, you'll need to set the following:
· Minimum number of nodes
· Maximum number of nodes
· Desired number of nodes – this is the initial number of nodes that the Node Group will create. As you add or delete pods, the number of nodes will scale within the guardrails set above.
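For the pod side, a Horizontal Pod Autoscaler needs only the Metrics Server mentioned above; a minimal sketch, with the target Deployment name assumed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```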
Ques. 15): How can you migrate AWS Lambda rails code in ruby to AWS Fargate?
Answer:
· Containerize your Rails code.
· Push the image to a repository (Docker Hub, ECR, etc.).
· Create an ECS task definition.
· Build an ECS cluster.
· Create a service to run your task.
Ques. 16): How can I stop the AWS Fargate container programmatically?
Answer:
It can be done via the console or the AWS CLI. You'll probably want to use the StopTask API. To do so, first list the tasks in your cluster (filtering them however you like) to capture the task ID, then stop that task.
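A sketch with the AWS CLI (the cluster and service names are placeholders): list the tasks, then stop one by its ARN.

```shell
# List running tasks in the cluster, optionally filtered by service
aws ecs list-tasks --cluster my-cluster --service-name my-service

# Stop a specific task using its ARN from the output above
aws ecs stop-task --cluster my-cluster --task <task-arn> \
  --reason "stopped programmatically"
```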
Ques. 17): Can I use my own AMI?
Answer:
For the Control Plane – No, this is controlled by AWS
For Fargate workers – No, this is controlled by AWS
For Node Group workers – Yes, but a complicated yes. When you create a Node Group via the AWS Console, the default behavior is to not use Launch Templates so you only have the 3 AMI options (linux x64, linux x64+GPU, linux ARM). If you opt to use Launch Templates you can select your own full list of AMIs. Launch Templates will also allow you to leverage spot instances, IAM instance profiles, tenancy and a dozen other configurations. If you use Launch Templates then you’ll need to maintain them going forward.
Ques. 18): What is Azure's version of AWS Fargate?
Answer:
Unfortunately, there isn't anything comparable. That's one of the Azure services I really miss, but I'm sure something comparable will appear in the not-too-distant future.
Azure Container Instances (ACI) is Azure's serverless container service (kind of), however it lacks Fargate's features.
First and foremost, ACI does not scale automatically, and there is no built-in load balancer. With Fargate, you don't have to define the instance type or the number of instances.
Second, Fargate is more tightly linked with the AWS environment than ACI, which is more of a stand-alone service.
I still prefer ACI for simple container deployments, but it's not ideal for clusters with varying loads.
Ques. 19): Does Amazon Fargate, like Lambda, have a slow start?
Answer:
No. A Fargate Task requires time to provision and start, but once it's up and running, the allotted CPU is dedicated to the Task's containers.
As long as a loop keeps the developer's code alive, it will continue to run in the Fargate container. This differs from Lambda, where code executes and exits; the Lambda system must then restart the loop, wait for input, and either reuse a warm Lambda container or start from scratch with a new one.
Ques. 20): What is Google Cloud's answer to AWS Fargate?
Answer:
Cloud Run is the closest thing to Fargate in terms of use and potential, but it's a little more limited. Cloud Run supports any Docker container that listens on port 8080 and responds to a request within 15 minutes, so web apps are its principal application. It can handle up to 80 concurrent requests per instance. You always get 1 vCPU and can choose up to 2 GB of RAM. You pay for the amount of time that an instance is active.