Exam Preparation - AWS Solutions Architect Associate (SAA-C02)

This is the consolidated notes section of the YouTube channel https://www.youtube.com/techpechu . All we ask is that you share our channel with your peers and subscribe for more videos and notes. If you wish to donate or send any token of love ❤️💰, drop a mail to [email protected] for details.

Section 1


  • Region vs Availability Zone vs Edge Locations
    • Region
      • Region - A physical location in the world consisting of two or more Availability Zones
    • AZ
      • AZ - One or more data centres; distinct locations within an AWS Region that are engineered to be isolated from failures.
    • Edge Location
      • Edge Location - Endpoints used in AWS for caching content. Used in products like CloudFront.

  • VPC stands for ?

    Virtual Private Cloud

  • Region vs AZ vs Edge Locations - which has the highest count?

    # of Edge Locations > # of Availability Zones > # of Regions

  • An AWS VPC is a component of which group of AWS services?

    Network Services

  • Network zone

    Network zones are isolated units with their own set of physical infrastructure and service IP addresses from a unique IP subnet. If one IP address from a network zone becomes unavailable, due to network disruptions or IP address blocking by certain client networks, your client applications can retry using the healthy static IP address from the other isolated network zone.

  • Shared Responsibility between AWS and Customer

    AWS is responsible for security "of" the cloud (physical infrastructure, hardware, facilities, and managed-service software); the customer is responsible for security "in" the cloud (their data, IAM configuration, OS patching on EC2, encryption, and network settings).


  • What is S3?

    S3 is object-based storage, i.e., you can't install an OS on it

  • File Size Limit

    Files can be 0 Bytes to 5 TB

  • Max file size that you can upload in a single PUT is?

    5 GB

  • Multipart upload?

    Use multipart upload for objects larger than 100 MB; it is required for objects larger than 5 GB
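
    The part-size arithmetic behind multipart upload can be sketched in Python. Only the 5 MiB minimum part size and 10,000-part maximum are actual S3 limits; the 100 MiB preferred part size here is an assumption:

```python
import math

# S3 multipart upload limits (per AWS documentation):
MIN_PART_SIZE = 5 * 1024**2   # 5 MiB minimum part size (except the last part)
MAX_PARTS = 10_000            # at most 10,000 parts per upload

def part_size_for(object_size, preferred=100 * 1024**2):
    """Pick a part size that keeps the upload within MAX_PARTS parts."""
    size = max(preferred, MIN_PART_SIZE)
    while math.ceil(object_size / size) > MAX_PARTS:
        size *= 2  # double the part size until the object fits
    return size

# The S3 object maximum is 5 TiB:
five_tib = 5 * 1024**4
size = part_size_for(five_tib)
parts = math.ceil(five_tib / size)  # 6554 parts of 800 MiB each
```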

  • Storage Limit

    Unlimited Storage

  • What is durability of S3 bucket?

    99.999999999% (11 9's)

  • What is S3 select?

    S3 Select is an Amazon S3 feature that makes it easy to retrieve specific data from the contents of an object using simple SQL expressions without having to retrieve the entire object.
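
    With credentials configured, an S3 Select call takes parameters shaped like the following sketch (the bucket, key, and query are hypothetical):

```python
# Hypothetical bucket, key, and query: S3 scans the CSV server-side and
# returns only the matching rows/columns, not the whole object.
select_params = {
    "Bucket": "example-bucket",
    "Key": "data/records.csv",
    "ExpressionType": "SQL",
    "Expression": "SELECT s.name, s.city FROM S3Object s WHERE s.age > '40'",
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"CSV": {}},
}
# With credentials configured this would be passed to
# boto3.client("s3").select_object_content(**select_params)
```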

  • Files are stored in Buckets
  • Bucket names must be globally unique.
  • Which status code on a successful upload?

    A 200 status code is returned on a successful upload

  • Object Key Structure
    • Key - Name of the object
    • Value - Data made of sequence of bytes
    • Version ID - Used for versioning
    • Metadata - Information about the data being stored
  • Consistency Models
    • Puts (adding New object)

      Read after Write consistency - Immediately available

    • Overwrite and Delete

      Eventual Consistency - Take some time to Reflect

  • Storage Classes(Sorted cost wise)
    • S3 Standard
      • 99.99 % availability
      • 99.999999999% durability (11 9s)
      • Designed to sustain loss of 2 facilities concurrently.
    • S3- IA( Infrequent Accessed)
      • For data that is accessed infrequently but requires rapid access when needed.
      • You will be charged a retrieval fee
    • S3 Intelligent Tiering
      • Automatically moves data to the most cost-effective tier
    • S3 One Zone - IA
      • Low cost option for infrequently accessed data
      • Runs in a single AZ only
    • S3 Reduced Redundancy Storage (RRS)

      Deprecated; for a single-AZ option use One Zone-IA instead

    • S3 Glacier
      • Secure, durable and low cost
      • Retrieval time configurable from minutes to hours
    • S3 Glacier Deep Archive
      • Low cost storage class
      • Retrieval time of 12 hours
  • Versioning
    • Stores all versions of the objects
    • Versioning captures all writes, including deletes (deleting an object adds a delete marker)
    • Once versioning is enabled it can't be disabled, only suspended.
    • MFA Delete can act as an additional layer of security.
  • Life Cycle Management
    • What it is?
      • Automates moving your object between different storage tiers
    • Can be used with versioning (both current and previous versions)
  • 3 Ways to share S3 buckets across accounts
    1. Using Bucket Policies & IAM (applies across the entire bucket) - programmatic access only.
    2. Using Bucket ACLs and IAM (individual objects) - programmatic access only.
    3. Cross-Account IAM Roles - programmatic and console access.
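
    As a sketch of option 1, a resource-based bucket policy granting another account programmatic read access might look like this (the account ID and bucket name are hypothetical):

```python
import json

# Hypothetical account ID and bucket name. Attached to the bucket, this
# grants account 111122223333 programmatic read access.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",    # bucket-level (ListBucket)
            "arn:aws:s3:::example-bucket/*",  # object-level (GetObject)
        ],
    }],
}
policy_json = json.dumps(bucket_policy)
# With credentials configured:
# boto3.client("s3").put_bucket_policy(Bucket="example-bucket", Policy=policy_json)
```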
  • Cross Region Replication
    • To replicate objects to buckets in other Regions
    • You can also replicate within the same Region
    • Must know
      • Versioning must be enabled for both source and Destination buckets
      • Files in existing buckets are not replicated automatically
      • Delete markers are not replicated
      • Deleting individual versions will not be replicated.
      • All further uploads will be auto replicated
  • S3 Transfer Acceleration

    To accelerate uploading of files with the help of AWS edge locations.

  • Encryption @ S3
    • Encryption at Transit
      • Achieved by SSL/TLS
    • Encryption at Rest
      • S3 Managed Keys- SSE S3
      • AWS Key Management Service Managed Keys - SSE-KMS
      • Server Side Encryption with Customer provided Keys- SSE-C
  • Server-side encryption types
    • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

      When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. For more information, see Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3).

    • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)

      Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a CMK that provides added protection against unauthorized access of your objects in Amazon S3.

      SSE-KMS also provides you with an audit trail that shows when your CMK was used and by whom. Additionally, you can create and manage customer managed CMKs or use AWS managed CMKs that are unique to you, your service, and your Region.

    • Server-Side Encryption with Customer-Provided Keys (SSE-C)

      With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your objects.

    • S3 Default Encryption

      Enables encryption at rest; all new objects written to the bucket are encrypted by default.

  • Retrieve S3 Glacier Archive
    • Expedited — Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals are typically made available within 1–5 minutes. Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. For more information, see Provisioned Capacity.
    • Standard — Standard retrievals allow you to access any of your archives within several hours. Standard retrievals typically complete within 3–5 hours. This is the default option for retrieval requests that do not specify the retrieval option.
    • Bulk — Bulk retrievals are S3 Glacier’s lowest-cost retrieval option, which you can use to retrieve large amounts, even petabytes, of data inexpensively in a day. Bulk retrievals typically complete within 5–12 hours.
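
    A restore request selects one of these tiers; a minimal sketch (the bucket, key, and day count are hypothetical):

```python
# Hypothetical restore request: make a Glacier-class object readable for
# 7 days using the Expedited tier ("Standard" and "Bulk" are the others).
restore_request = {
    "Days": 7,
    "GlacierJobParameters": {"Tier": "Expedited"},
}
# With credentials configured:
# boto3.client("s3").restore_object(
#     Bucket="example-bucket", Key="archive.zip", RestoreRequest=restore_request)
```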
  • S3 Object Lifecycle Management

    An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

    • Transition actions

      Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

      There are costs associated with the lifecycle transition requests.

    • Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf.
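
    A lifecycle configuration combining both action types can be sketched like this (the prefix and day counts are hypothetical):

```python
# Hypothetical rule: transition logs/ objects to Standard-IA after 30 days,
# to Glacier after 365 days, and expire (delete) them after ~7 years.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 365, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 2555},
    }],
}
# With credentials configured:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```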
  • Expedited retrievals: On-Demand vs Provisioned

    There are two types of Expedited retrievals: On-Demand and Provisioned. On-Demand requests are fulfilled when the retrieval can be completed within 1 – 5 minutes. Provisioned requests ensure that retrieval capacity for Expedited retrievals is available when you need it.

  • What is S3 object Lock?

    Once S3 Object Lock is enabled, you can prevent objects from being deleted or overwritten for a fixed duration (or indefinitely)

  • What are S3 Glacier Archives?

    Data is stored in Amazon S3 Glacier in "archives." An archive can be comprised of any data such as photos, videos, or documents. You can upload a single file as an archive or aggregate multiple files into a TAR or ZIP file and upload as one archive.

    A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon S3 Glacier. Each archive is assigned a unique archive ID at the time of creation, and the content of the archive is immutable, meaning that after an archive is created it cannot be updated.

  • S3 Glacier vaults

    Amazon S3 Glacier uses "vaults" as containers to store archives.

    An AWS account can have a maximum of 1,000 Glacier vaults per region

  • Uploading the data directly to Glacier through the Amazon Glacier Console
    • You cannot upload objects to Amazon Glacier directly through the Management Console. To upload data, such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests, by using either the REST API directly or by using the AWS SDKs.
  • Retrieving an object

    • Retrieve an entire object—A single GET operation can return you the entire object stored in Amazon S3.
    • Retrieve object in parts—Using the Range HTTP header in a GET request, you can retrieve a specific range of bytes in an object stored in Amazon S3.
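
    The Range-header arithmetic for fetching an object in parts can be sketched like so (the 10 MiB object size and 4 MiB chunk size are hypothetical):

```python
# Split an object into Range header values for parallel byte-range GETs.
def range_headers(object_size, chunk):
    headers = []
    for start in range(0, object_size, chunk):
        end = min(start + chunk, object_size) - 1  # Range ends are inclusive
        headers.append(f"bytes={start}-{end}")
    return headers

# A 10 MiB object fetched in 4 MiB chunks needs three GETs:
ranges = range_headers(10 * 1024**2, 4 * 1024**2)
# Each value would be sent as a "Range: bytes=..." header on a GET request.
```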

The lifecycle expiration costs depend on when you choose to expire objects.

There is no Data Transfer charge for data transferred within an Amazon S3 Region via a COPY request. Data transferred via a COPY request between AWS Regions is charged at rates specified in the pricing section of the Amazon S3 detail page. There is no Data Transfer charge for data transferred between Amazon EC2 and Amazon S3 within the same region.
Amazon S3 does not support Amazon MQ as a destination to publish events.
Storage classes are object specific not bucket based, Same Bucket can have one folder in Standard and other in Glacier


  • What is Cloudfront?

    Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

  • Edge Location
    • Location where content will be cached.
    • You can also write files to edge locations; they are not just read-only
    • This is separate from AWS Regions/AZs
  • Origin
    • Origin of all files that the CDN will distribute.
    • It can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53
  • Distribution
    • Collection of Edge Locations
  • Web Distribution
    • Used for Websites
  • RTMP
    • Real-Time Messaging Protocol
    • Used for Media Streaming

  • TTL
    • Time to Live - how long to cache
  • Invalidate cache
    • Deletes cached content before it expires.
    • You will be charged for invalidation requests.
  • What is the accuracy of the geolocation database used to determine the location of an IP?

    AWS states the IP-to-country geolocation database is approximately 99.8% accurate.

  • Is there a limit to the number of invalidation requests I can make?

    You can have invalidation requests for up to 3,000 files in progress at one time (the same object, different objects, or any combination).

    If you use wildcard paths, up to 15 wildcard invalidation paths can be in progress at a time.
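
    The shape of an invalidation request, sketched with a hypothetical distribution ID, paths, and caller reference:

```python
# Hypothetical distribution ID, paths, and caller reference. Quantity must
# match the number of paths; "/images/*" counts as one wildcard path.
invalidation = {
    "DistributionId": "EDFDVBD6EXAMPLE",
    "InvalidationBatch": {
        "Paths": {
            "Quantity": 2,
            "Items": ["/index.html", "/images/*"],
        },
        "CallerReference": "deploy-001",  # must be unique per request
    },
}
# With credentials configured:
# boto3.client("cloudfront").create_invalidation(**invalidation)
```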

  • What is websockets?
    • WebSocket is a real-time communication protocol that provides bidirectional communication between a client and a server over a long-held TCP connection.
    • By using a persistent open connection, the client and the server can send real-time data to each other without the client having to frequently reinitiate connections checking for new data to exchange.
    • WebSocket connections are often used in chat applications, collaboration platforms, multiplayer games, and financial trading platforms.
  • What is the maximum size of a file that can be delivered through Amazon CloudFront?

    20 GB

  • When to use signed url's and when to use signed cookies?
    • Use signed URLs for the following cases:

      - You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions.

      - You want to restrict access to individual files, for example, an installation download for your application.

      - Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.

    • Use signed cookies for the following cases:

      - You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.

      - You don't want to change your current URLs.
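
    CloudFront actually signs URLs with an RSA key pair; purely as a conceptual sketch, the resource + expiry + signature pattern can be illustrated with an HMAC (the key and URL below are hypothetical, and this is not CloudFront's real algorithm):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-signing-key"  # hypothetical key; CloudFront uses an RSA key pair

def sign_url(url, ttl_seconds, now=None):
    """Append an expiry timestamp and a signature covering URL + expiry."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    payload = f"{url}?Expires={expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{url}?{urlencode({'Expires': expires, 'Signature': sig})}"

def is_valid(signed_url, now=None):
    """Reject the URL if it has expired or the signature doesn't match."""
    base, _, query = signed_url.partition("?")
    params = dict(p.split("=") for p in query.split("&"))
    if int(params["Expires"]) < (now if now is not None else time.time()):
        return False  # link has expired
    payload = f"{base}?Expires={params['Expires']}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"])

link = sign_url("https://example.cloudfront.net/video.mp4", ttl_seconds=300, now=1000)
```

    The same idea extends to signed cookies: the expiry and signature travel in cookies instead of the query string, so many URLs can be covered without changing them.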

  • Price classes

    CloudFront edge locations are grouped into geographic regions, and we've grouped regions into price classes. The default price class includes all regions. Another price class includes most regions (the United States; Canada; Europe; Hong Kong, Philippines, South Korea, Taiwan, and Singapore; Japan; India; South Africa; and Middle East regions) but excludes the most expensive regions. A third price class includes only the least expensive regions (the United States, Canada, and Europe regions).

  • Access Logs

    You can configure CloudFront to create log files that contain detailed information about every user request that CloudFront receives. These access logs are available for both web and RTMP distributions. If you enable logging, you can also specify the Amazon S3 bucket that you want CloudFront to save files in.


  • What is IAM?
    • AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.
    • Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
  • Users vs Groups vs Policies vs Roles
    • Users
      • A user is a unique identity recognised by AWS services and applications. Similar to a login user in an operating system like Windows or UNIX
    • Groups
      • A group is a collection of IAM users. Manage group membership as a simple list.
      • A user can be added to multiple groups
    • Roles
      • Roles - Roles are assigned to AWS Resources like EC2 , Lambda
      • Roles are easy to manage and secure, i.e., you don't need to store access keys/passwords
      • Roles can be assigned to an EC2 instance once it's created via console/CLI
      • Roles are universal, can be used in any region
      • IAM Roles can be attached or detached from instances at any time, regardless of whether the instance is started or stopped.
    • Policies
      • Policies - JSON documents that grant permissions to users, groups, or roles
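
    For illustration, a minimal identity-based policy document (the read-only S3 actions chosen are an arbitrary example):

```python
import json

# Attached to a user, group, or role, this allows read-only S3 access;
# everything not explicitly allowed is implicitly denied.
read_only_s3 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": "*",
    }],
}
policy_doc = json.dumps(read_only_s3, indent=2)
```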
  • What is Identity federation?

    Using identity federation external identities are granted secure access to resources in your AWS account without having to create IAM users.

  • IAM is universal
  • What is Root Account?

    The account created when first setting up AWS. It has complete admin access.

  • Access Users have by default when created
    • They have No Permissions
    • Access Key ID and Secret Access Keys are assigned

  • Access key & Session Key vs Password
    • Access keys and session keys are for programmatic access like the CLI
    • Password is for AWS console access
  • Can I view the password, secret key & access key again?

    You can't; they can be viewed only once, at creation time.

  • MFA

    Multi Factor Authentication

  • What is IAM DB Authentication?

    You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

  • FYI - The default login username depends on the AMI (e.g., ec2-user for Amazon Linux, ubuntu for Ubuntu AMIs)
  • Administrate public share setting
    • No public sharing - Users cannot send view links to anyone outside the organization.
    • All managed users can share publicly - All users can send view links to anyone outside the organization.
    • Only Power users can share publicly - Only Power users can send view links to people outside the organization.
While connecting to EC2 from a Windows machine, the error "No supported authentication methods available" means the wrong username was used.


SCPs affect only principals that are managed by accounts that are part of the organization. SCPs don't affect resource-based policies directly. They also don't affect users or roles from accounts outside the organization. For example, consider an Amazon S3 bucket that's owned by account A in an organization. The bucket policy (a resource-based policy) grants access to users from accounts outside the organization. Account A has an SCP attached. That SCP doesn't apply to those outside users. It applies only to users that are managed by account A in the organization.

AWS Organization

  • What is AWS Organisation?

    AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.

  • Best Practices
    • Always enable MFA on Root account
    • Use Strong and Complex password for Root account
    • Paying account to be used only for billing, don't deploy any resources into billing account.

  • What is organisation?

    An organization is a collection of AWS accounts that you can organize into a hierarchy and manage centrally.

  • What is an organisational unit (OU)?

    You group all related AWS accounts into one OU. This lets you apply the same rules to all accounts in the OU.

  • AWS Organizations vs Control Tower

    Control Tower enables you to create a multi-account setup automatically with predefined blueprints and security rules.

    If you want to create a multi-account setup on your own with custom requirements, go with AWS Organizations.

  • SCP- Service Control Policies

    Service Control Policies - Enable/disable AWS services on an OU or individual accounts. E.g., the Finance team doesn't need to use EC2 servers.
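
    Matching the example above, an SCP that blocks all EC2 actions for the accounts it is attached to might look like this sketch (the Sid is arbitrary):

```python
import json

# Deny always wins: even an administrator in an affected account cannot
# use EC2 while this SCP is attached to the account or its OU.
deny_ec2_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllEC2",
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
    }],
}
scp_json = json.dumps(deny_ec2_scp)
```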

  • Consolidated billing benefits

    Consolidated billing has the following benefits:

    • One bill – You get one bill for multiple accounts.
    • Easy tracking – You can track the charges across multiple accounts and download the combined cost and usage data.
    • Combined usage – You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts. For more information, see Volume discounts.
    • No extra fee – Consolidated billing is offered at no additional cost.
  • When to create multiple AWS accounts?
    • Does the business require administrative isolation between workloads?

      Administrative isolation by account provides the most straightforward approach for granting independent administrative groups different levels of administrative control over AWS resources based on the workload, development lifecycle, business unit (BU), or data sensitivity.

    • Does the business require limited visibility and discoverability of workloads?

      Accounts provide a natural boundary for visibility and discoverability. Workloads cannot be accessed or viewed unless an administrator of the account enables access to users managed in another account.

    • Does the business require isolation to minimize blast radius?

      Blast-radius isolation by account provides a mechanism for limiting the impact of a critical event such as a security breach, an AWS Region or Availability Zone becoming unavailable, account suspensions, etc. Separate accounts help define boundaries and provide natural blast-radius isolation.

    • Does the business require strong isolation of recovery and/or auditing data?

      Businesses that are required to control access and visibility to auditing data due to regulatory requirements can isolate their recovery data and/or auditing data in an account separate from where they run their workloads (e.g., writing CloudTrail logs to a different account).

AWS Resource Access Manager

  • What is Resource Access Manager?

    AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization.

    You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.

Athena vs Macie

  • What is Athena?
    • Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
    • It is Serverless
  • What is Macie?
    • Uses artificial intelligence to analyze data in S3 and helps identify personal information (PII).
    • Includes Dashboard, Alerting and Reporting
    • Great for PCI compliance and ID theft prevention
    • Can analyze Cloudtrail logs for any suspicious activity.


AWS Snowball

  • What is Snowball?

    Used to move large amounts of data from your local environment to AWS

  • Snowball can Import/Export to S3
  • What is Snowmobile?
    • Exabyte-scale data transfer service used to move extremely large amounts of data to AWS.
    • It is not suitable for smaller transfers, like 80 TB (use Snowball for those).
    • You can transfer up to 100 PB per Snowmobile.
  • What is snowball edge?

    AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.

    Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier.

    Snowball Edge devices have three options for device configurations – storage optimized, compute optimized, and with GPU.

AWS Direct Connect

  • What is AWS Direct connect

    It is primarily used to establish a dedicated network connection from your premises network to AWS. It is not suitable for one-time data transfer tasks.

  • What does it do?
    • AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path.
  • Encryption in Transit

    AWS Direct Connect does not encrypt your traffic that is in transit. To encrypt the data in transit that traverses AWS Direct Connect, you must use the transit encryption options for that service

Storage Gateway

  • What is Storage Gateway?

    AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.

  • Use case for Storage Gateway?

    Storage Gateway supports three key hybrid cloud use cases –

    (1) Move backups and archives to the cloud

    (2) Reduce on-premises storage with cloud-backed file shares, and

    (3) Provide on-premises applications low latency access to data stored in AWS.

  • File Gateway
    • Enables you to store and retrieve objects in Amazon S3 using file protocols such as Network File System (NFS) and Server Message Block (SMB). Objects written through File Gateway can be directly accessed in S3.
  • Volume Gateway
    • Stored Volumes
      • Snapshots are created and stored in S3
      • The entire dataset is stored on-site and is asynchronously backed up to S3
    • Cached Volumes
      • The entire dataset is stored in S3 and the most frequently accessed data is cached on-site
  • Tape Gateway

    The Tape Gateway provides your backup application with an iSCSI virtual tape library (VTL) interface, consisting of a virtual media changer, virtual tape drives, and virtual tapes. Virtual tapes are stored in Amazon S3 and can be archived to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.

Route Origin Authorization (ROA)

  • What is Route Origin Authorization?

    A Route Origin Authorization (ROA) is a document that you can create through your Regional Internet Registry (RIR), such as the American Registry for Internet Numbers (ARIN) or Réseaux IP Européens Network Coordination Centre (RIPE). It contains the address range, the ASNs that are allowed to advertise the address range, and an expiration date.

Section 2 - Storage, Server, Monitoring & Security


  • What is Ec2
    • EC2 is a web service that provides resizable compute capacity in the cloud.
  • Price Class
    • On Demand
      • Pay a fixed rate by hour / seconds with no commitment
    • Reserved
      • Contract terms of 1 or 3 Years
      • Significant discount of hourly charge and with capacity reservation.
    • Spot
      • Enables you to bid.
      • You can have greater savings
      • You can do this if the application has flexible start and end times
      • If you terminate the instance yourself, you are billed for the full hour; if AWS terminates it, you are not charged for the partial hour.
    • Dedicated Host
      • Dedicated physical machine
  • EC2 instance type - Mnemonic
    F - For FPGA
    I - For IOPS
    G - For Graphics
    H - High Disk Throughput
    T - Cheap General Purpose (think T2 Micro)
    D - For Density
    R - RAM
    M - Main choice for General purpose apps
    A - ARM Based Workloads
    P - Graphics
    C - Compute
    X - Extreme Memory
    Z - Extreme Memory and CPU
    U - Bare Metal
  • Termination Protection
    • To avoid accidental termination you must turn it on, it is turned off by default.
  • EBS with EC2
    • Once an EC2 instance is terminated, the root EBS volume is deleted by default
    • You can choose to keep the root EBS volume when terminating the EC2 instance
    • EBS root volume can be encrypted. You can also encrypt with 3rd party like bitlocker for windows AMI
    • Additional volumes can also be encrypted.
  • Security Group
    • Multiple EC2 instances can be mapped to a single security group
    • Multiple security groups can be attached to a single EC2 instance.
  • Instance Metadata
    • What is Instance Metadata?
      • Provides info on instances such as IP, instance ID
    • How to get meta-data & user-data

      curl http://169.254.169.254/latest/meta-data/ (user data: curl http://169.254.169.254/latest/user-data/)
  • Which EC2 feature allows you to utilize SR-IOV?

    Enhanced Networking

  • What is the underlying Hypervisor for EC2?

    1. Xen 2. Nitro

  • Where in the AWS Global Infrastructure are EC2 instance provisioned?

    Availability Zones

  • Instance states

    Start : Instance boots and is running

    Stop : Instance shuts down and can be restarted

    Terminate : Instance shuts down and is deleted

  • Ec2 RI Utilisation %

    To calculate utilisation of Reserved Instances; a low value is an opportunity to increase utilisation.

    Calculated as reserved instance hours used / total reserved instance hours purchased

  • Ec2 RI Coverage %

    Helps determine whether you need to purchase more Reserved Instances.

    Calculated as Reserved Instance hours used / (total EC2 On-Demand + Reserved Instance hours)
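
    The two formulas can be checked with a quick worked example (the hour counts below are hypothetical):

```python
def ri_utilization(ri_hours_used, ri_hours_purchased):
    """Fraction of purchased Reserved Instance hours actually used."""
    return ri_hours_used / ri_hours_purchased

def ri_coverage(ri_hours_used, on_demand_hours):
    """Fraction of total instance hours covered by Reserved Instances."""
    return ri_hours_used / (ri_hours_used + on_demand_hours)

util = ri_utilization(600, 744)  # 600 of 744 purchased RI hours used
cov = ri_coverage(600, 200)      # 600 RI hours vs 200 On-Demand hours -> 0.75
```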

  • Classic link to connect Classic instances to VPC

    ClassicLink allows you to link EC2-Classic instances to a VPC in your account, within the same Region. If you associate the VPC security groups with a EC2-Classic instance, this enables communication between your EC2-Classic instance and instances in your VPC using private IPv4 addresses. ClassicLink removes the need to make use of public IPv4 addresses or Elastic IP addresses to enable communication between instances in these platforms

  • Run command

    You can use Run Command from the console to configure instances without having to login to each instance.

    AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.

  • Scheduled reserved instance

    Instances are reserved for recurring time windows (e.g., daily, weekly, or monthly schedules).

  • AWS Server Migration

    AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.


  • ENI - Elastic Network Interface
    • Virtual Network card in a VPC
    • For Basic Networking
    • When you need a separate management network from your production network, at low cost
  • ENA - Enhanced Networking
    • Speeds from 10 Gbps up to 100 Gbps
    • Use when you need highly reliable throughput
    • Amazon EC2 provides enhanced networking capabilities through the Elastic Network Adapter (ENA). To use enhanced networking, you must install the required ENA module and enable ENA support.
  • EFA - Elastic Fabric Adapter
    • When High Performance Computing (HPC) is required
    • Also used for Machine Learning applications
    • An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud.
    • EFAs are not supported on Windows instances; if attached, an EFA acts as a normal ENA without the EFA capabilities
  • Ways to attach a network interface to an EC2 instance
    • When it's running (hot attach)
    • When it's stopped (warm attach)
    • When the instance is being launched (cold attach)


  • To Know
    • Supports Network File System version 4 (NFSv4)
    • Pay only for the storage you use.
    • Can scale up to petabytes
    • Support 1000s of concurrent NFS connections
    • Data is stored across multiple AZ within a region
    • Read after write Consistency

EFS vs Amazon FSx for Windows vs Amazon FSx for Lustre

  • EFS

    Suited for highly distributed, highly resilient storage for Linux instances / Linux-based applications.

  • EFS Storage classes
    • Standard Storage
    • EFS - IA ( Infrequent Access)
  • EFS Lifecycle Management on your file system

    Automatically and transparently moves files from the Standard storage class to IA

  • Amazon FSx for Windows

    Best suited for Windows-based applications such as SharePoint and other Microsoft products

  • Amazon FSx for Lustre
    • For tasks like big data workloads that need high-performance compute.
    • FSx for Lustre can read from and write data directly to S3
  • When to use EFS?

    Amazon EFS is designed to provide performance for a broad spectrum of workloads and applications, including Big Data and analytics, media processing workflows, content management, web serving, and home directories.

  • Single EFS can be mapped to how many max Ec2 servers?

    Can be mapped to up to 1,000 servers

  • “General Purpose” vs “Max I/O” performance modes?
    General Purpose: the default mode
    Max I/O: for when the same file system is shared by many instances simultaneously (tens to hundreds of EC2 instances); it scales to higher aggregate throughput at the cost of slightly higher per-operation latency.
  • Ports to be allowed for EFS to Ec2 connection

    Port 22 (SSH) on the EC2 instance, and port 2049 (NFS) on the EFS mount target
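
    For reference, an EC2 instance mounts EFS over NFSv4.1 at the file system's DNS name. A small sketch building that name and the mount command (the file system ID, region, and mount point are hypothetical):

    ```python
    def efs_dns_name(file_system_id: str, region: str) -> str:
        # EFS file systems resolve at <fs-id>.efs.<region>.amazonaws.com
        return f"{file_system_id}.efs.{region}.amazonaws.com"

    def efs_mount_command(file_system_id: str, region: str, mount_point: str = "/mnt/efs") -> str:
        # Mount over NFSv4.1, which uses TCP port 2049 (the port allowed above)
        dns = efs_dns_name(file_system_id, region)
        return f"sudo mount -t nfs4 -o nfsvers=4.1 {dns}:/ {mount_point}"

    # Example with made-up values
    print(efs_mount_command("fs-12345678", "us-east-1"))
    ```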

  • How to encrypt unencrypted EFS volume at rest?

    You can only enable encryption at rest at the time of EFS creation.

    To encrypt an existing unencrypted file system, create a new encrypted EFS and copy the data into it.

  • Bursting Throughput vs Provisioned Throughput.

    There are two throughput modes to choose from for your file system, Bursting Throughput and Provisioned Throughput.

    With Bursting Throughput mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows.

    With Provisioned Throughput mode, you can instantly provision the throughput of your file system (in MiB/s) independent of the amount of data stored.

You can create up to 1,000 file systems per region.
Files smaller than 128 KiB are not eligible for Lifecycle Management and will always be stored on EFS Standard.
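
The lifecycle rule above can be expressed as a small eligibility check. This is a sketch, not an AWS API; the 30-day value stands in for one of the selectable lifecycle policies:

```python
KIB = 1024

def eligible_for_ia(size_bytes: int, days_since_access: int, policy_days: int = 30) -> bool:
    """Files smaller than 128 KiB always stay on EFS Standard; larger files
    move to EFS-IA once they have not been accessed for the policy period."""
    if size_bytes < 128 * KIB:
        return False
    return days_since_access >= policy_days

assert not eligible_for_ia(64 * KIB, 90)    # too small: always stays on Standard
assert eligible_for_ia(1024 * KIB, 45)      # 1 MiB, untouched for 45 days -> IA
assert not eligible_for_ia(1024 * KIB, 10)  # accessed recently -> stays on Standard
```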

EFS vs EBS vs S3

  • EFS

    Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.

  • EBS

    Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.

  • S3

    Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

Placement Groups

  • What is?
    • A way of controlling how EC2 instances are placed on underlying hardware
    • The placement group name must be unique within your AWS account
  • Types
    • Clustered

      A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region.

      Preferred for applications that need low network latency

    • Spread
      • Spread - Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other.
      • Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks.
      • Spread placement groups provide access to distinct racks, and are therefore suitable for mixing instance types or launching instances over time.
      • A spread placement group can span multiple Availability Zones in the same Region.
      • You can have a maximum of seven running instances per Availability Zone per group.
    • Partitioned
      • Partitioned - multiple instances grouped into partitions. Each partition runs on separate hardware with its own network and power, so partitions are isolated from each other's failures
      • Partition placement groups can be used to deploy large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct racks.
      • When you launch instances into a partition placement group, Amazon EC2 tries to distribute the instances evenly across the number of partitions that you specify.
      • You can also launch instances into a specific partition to have more control over where the instances are placed.
  • Can you merge placement groups ?
    • You can't merge placement groups
  • Does all Ec2 instance supports Placement groups?
    • Not all EC2 instance types can be launched in a placement group.
  • Can existing instance can be moved to placement groups?
    • An existing instance can be moved into a placement group, but it must be in the stopped state while being moved
    • Moving an instance into a placement group can only be done via the CLI or an AWS SDK; it can't be done via the console yet.
  • What type of instances can be launched in Placement groups?
    1. Compute Optimized
    1. GPU
    1. Memory Optimized
    1. Storage Optimized
  • Max running instance per AZ in spread placement groups?
    • Spread placement groups have a specific limitation that you can only have a maximum of 7 running instances per Availability Zone

  • Both spread and partition placement groups can span multiple AZs in the same Region
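
The 7-instances-per-AZ limit for spread groups can be sketched as a simple pre-launch check (the AZ names are examples, and this is an illustration of the rule, not an AWS API):

```python
from collections import Counter

MAX_SPREAD_INSTANCES_PER_AZ = 7  # hard limit for spread placement groups

def can_launch_in_spread_group(existing_instance_azs, target_az):
    # A spread placement group allows at most 7 running instances per AZ,
    # but the group itself may span multiple AZs in the same Region.
    per_az = Counter(existing_instance_azs)
    return per_az[target_az] < MAX_SPREAD_INSTANCES_PER_AZ

group = ["us-east-1a"] * 7 + ["us-east-1b"] * 3
assert not can_launch_in_spread_group(group, "us-east-1a")  # 1a is at the limit
assert can_launch_in_spread_group(group, "us-east-1b")      # 1b still has room
```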

Instance Store

  • Also known as ephemeral storage, i.e., if the underlying host fails, all data is lost
  • Rebooting the instance does not cause data loss
  • The root volume is deleted when the instance is terminated
  • You can't attach an instance store volume after the EC2 instance is launched
  • You can't transfer an instance store volume from one instance to another; its data is lost when the instance stops or terminates
  • Data is lost if the instance type is switched
  • Instance store fails on below conditions
    • The underlying disk drive fails
    • The instance stops
    • The instance terminates
  • When to use Instance store?

    Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

Data will be lost even when the instance stops and restarts

AWS Shield

  • Used for DDOS protection.


  • What is WAF?
    • AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
    • These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
  • What type of attacks WAF can stop?

    AWS WAF helps protect your website from common attack techniques like

    SQL injection and Cross-Site Scripting (XSS).

    In addition, you can create rules that can block attacks from specific user-agents, bad bots, or content scrapers.

  • Can you protect content which is not hosted on AWS?

    Yes: serve the content through CloudFront and attach WAF to the CloudFront distribution
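
A toy sketch of how WAF-style rules evaluate (allow, block, or count): the conditions and requests below are invented for illustration; real WAF rules are configured through the AWS console or API, not in Python:

```python
def evaluate_waf_rules(request, rules):
    """Each rule is (condition, action) with action "allow", "block", or "count".
    The first matching allow/block rule decides; "count" rules only record a
    match and keep evaluating. The default action here is "allow"."""
    counted = 0  # count-rule matches are recorded as a metric, nothing more
    for condition, action in rules:
        if condition(request):
            if action == "count":
                counted += 1
                continue
            return action
    return "allow"

rules = [
    (lambda r: r["ip"] in {"203.0.113.9"}, "block"),           # IP-match condition
    (lambda r: "sqlmap" in r["user_agent"].lower(), "block"),  # bad-bot user agent
]
assert evaluate_waf_rules({"ip": "203.0.113.9", "user_agent": "curl"}, rules) == "block"
assert evaluate_waf_rules({"ip": "198.51.100.1", "user_agent": "Mozilla"}, rules) == "allow"
```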

ECS - Elastic Container Services

  • What is ECS?

    Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service

  • How ECS Works?
  • Amazon ECS is integrated with familiar features like Elastic Load Balancing, EBS volumes, VPC, and IAM.
  • Docker is the only container platform supported by Amazon ECS at this time.
  • Features available in ECS

    After a cluster is up and running, you can define task definitions and services that specify which Docker container images to run across your clusters. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure.

    • In short
      • Containers and images
      • Task Definitions
      • Tasks and Scheduling
      • Cluster
      • Container Agent
  • Container instances must have external internet access to connect to the ECS service endpoint. Use a NAT gateway if the instances don't have direct internet access.
  • What is Amazon ECS interface VPC endpoints

    You can improve the security posture of your VPC by configuring Amazon ECS to use an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access Amazon ECS APIs by using private IP addresses.

  • Parameters that can be defined in task definitions
    • The Docker image to use with each container in your task
    • How much CPU and memory to use with each task or each container within a task
    • The launch type to use, which determines the infrastructure on which your tasks are hosted
    • The Docker networking mode to use for the containers in your task
    • The logging configuration to use for your tasks
    • Whether the task should continue to run if the container finishes or fails
    • The command the container should run when it is started
    • Any data volumes that should be used with the containers in the task
    • The IAM role that your tasks should use
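    The parameters above map onto fields of a task definition. A minimal sketch as a Python dict: the field names follow the ECS task definition structure, while every value (family, ARN, image, and so on) is made up for illustration:

    ```python
    # Hypothetical task definition illustrating the parameters listed above.
    task_definition = {
        "family": "web-app",                      # task definition name
        "requiresCompatibilities": ["FARGATE"],   # launch type
        "cpu": "256",                             # CPU for the whole task
        "memory": "512",                          # memory for the whole task
        "networkMode": "awsvpc",                  # Docker networking mode
        "taskRoleArn": "arn:aws:iam::123456789012:role/app-task-role",  # IAM role
        "containerDefinitions": [
            {
                "name": "web",
                "image": "nginx:latest",          # Docker image for this container
                "essential": True,                # stop the task if this container fails
                "command": ["nginx", "-g", "daemon off;"],
                "logConfiguration": {"logDriver": "awslogs"},
            }
        ],
        "volumes": [],                            # data volumes shared by containers
    }
    assert task_definition["networkMode"] == "awsvpc"
    ```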
  • What is a Service definition?

    • Which tasks should run within your service (ARN of the task definition)
    • How many instantiations of the task to run
    • Which load balancer to associate with your tasks
    • The IAM role the tasks should use to call resources
  • ECS Container agent

    You can pass user data as a parameter when launching ECS container instances; these values can be used while configuring Docker or running automated tasks


  • What is Fargate?

    AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)

  • Why to use Fargate?

    AWS Fargate enables you to focus on your applications.

    You define your application content, networking, storage, and scaling requirements. There is no provisioning, patching, cluster capacity management, or any infrastructure management required.

  • When to use Fargate?

    Choose AWS Fargate when you want to launch containers without having to provision or manage EC2 instances.

  • When not to use Fargate?
    • If you require greater control of your EC2 instances to support compliance and governance requirements or broader customization options, then use ECS or EKS without Fargate.
    • Use EC2 for GPU workloads, which are not supported on Fargate today.

AWS Code Commit


  • What is EBS
    • Elastic Block Store: a virtual hard disk for EC2
  • Snapshot of EBS
    • Snapshots are a point-in-time photo of the volume's current state
    • Snapshots are stored in S3
    • Snapshots are incremental: only the blocks that changed since the last snapshot are stored
    • Snapshots can be taken while the instance is running
    • When snapshotting the root volume, consider stopping the instance first for a consistent snapshot
    • You can't delete the snapshot of an EBS volume that is used as the root device of a registered AMI
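
    The incremental behaviour can be sketched with a toy block store in Python (block indices and contents are made up; real snapshots work at the storage-block level):

    ```python
    def incremental_snapshot(volume_blocks, previous_state):
        # Store only blocks that are new or changed since the last snapshot;
        # restoring still yields the full volume by layering the snapshots.
        return {idx: data for idx, data in volume_blocks.items()
                if previous_state.get(idx) != data}

    volume = {0: "boot", 1: "app-v1", 2: "logs"}
    snap1 = incremental_snapshot(volume, {})        # first snapshot: all blocks
    volume[1] = "app-v2"                            # one block changes
    snap2 = incremental_snapshot(volume, snap1)     # only the changed block is stored
    assert snap1 == {0: "boot", 1: "app-v1", 2: "logs"}
    assert snap2 == {1: "app-v2"}
    ```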

  • EBS volume size can be changed while the instance is running
  • An EBS volume must be in the same AZ as the EC2 instance it attaches to
  • How to move EBS mounted Ec2 from one Region/AZ to other ?
    • Take a snapshot of the EBS volume, then create an AMI from it.
    • Launch a new instance from the created AMI in the target Region/AZ
  • EBS Snapshots are backed up to S3 in what manner?

    Incrementally: only the blocks that have changed since the last snapshot are saved

  • Must known facts on EBS
    • When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to a failure of any single hardware component.
    • An EBS volume can only be attached to one EC2 instance at a time.
    • After you create a volume, you can attach it to any EC2 instance in the same Availability Zone
    • An EBS volume is off-instance storage that can persist independently from the life of an instance. You can specify not to terminate the EBS volume when you terminate the EC2 instance during instance creation.
    • EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
    • Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256)
    • EBS Volumes offer 99.999% SLA.
  • EBS Hard disk options

  • EBS Encryption
    • Amazon EBS encryption offers a straight-forward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure.
    • It uses AWS Key Management Service (AWS KMS) customer master keys (CMK) when creating encrypted volumes and snapshots.
  • What type of data are encrypted?
    • Data at rest inside the volume
    • All data moving between the volume and the instance
    • All snapshots created from the volume
    • All volumes created from those snapshots
  • You can enable encryption while copying an unencrypted snapshot.
  • Amazon EBS Elastic volume
    • With Amazon EBS Elastic Volumes, you can increase the volume size, change the volume type, or adjust the performance of your EBS volumes.
    • If your instance supports Elastic Volumes, you can do so without detaching the volume or restarting the instance. This enables you to continue using your application while the changes take effect.
  • EBS Volume status check

    Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. You can view the results of volume status checks to identify any impaired volumes and take any necessary actions.

    • Ok - If all checks pass, the status of the volume is ok
    • impaired - If a check fails, the status of the volume is impaired
    • warning - If the volume is severely degraded or the volume performance is well below expectations, then the status is warning
    • insufficient data - the checks may still be in progress on the volume
  • Copying EBS Snapshot
    • With Amazon EBS, you can create point-in-time snapshots of volumes, which we store for you in Amazon S3.
    • After you create a snapshot and it has finished copying to Amazon S3 (when the snapshot status is completed), you can copy it from one AWS Region to another, or within the same Region.
    • Amazon S3 server-side encryption (256-bit AES) protects a snapshot's data in transit during a copy operation. The snapshot copy receives an ID that is different from the ID of the original snapshot.
    You can't snapshot directly into another Region; first create the snapshot in region 1, then copy it to region 2
HDD volumes cannot be used as bootable volumes

Data Lifecycle Manager

  • What is Data Lifecycle Manager?

    Amazon Data Lifecycle Manager (Amazon DLM) is an automated procedure to back up the data stored on your Amazon EBS volumes. Use Amazon DLM to create lifecycle policies to automate snapshot management.

Security Group

  • Inbound and Outbound
    • All Inbound Traffic is blocked by default
    • All Outbound Traffic is allowed by default
  • Changes in SG take effect immediately
  • Security groups are STATEFUL, i.e., if you allow traffic in on an inbound rule, the response traffic is automatically allowed out, and vice versa
  • You can't block IPs using Security Group
  • You can set Allow Rules, but can't set any deny rules, as by default they deny all.
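
The stateful behaviour can be illustrated with a toy simulation (the rule and packet shapes below are invented for the sketch; they are not an AWS API):

```python
def sg_allows(inbound_rules, outbound_rules, packet):
    """Security groups are stateful: the response to allowed traffic is
    permitted automatically, so only the initiating direction is checked.
    A rule is a (protocol, port) tuple; there are only allow rules,
    because everything not allowed is denied by default."""
    if packet["is_response"]:
        return True  # return traffic of an allowed connection always passes
    rules = inbound_rules if packet["direction"] == "in" else outbound_rules
    return (packet["protocol"], packet["port"]) in rules

inbound = [("tcp", 443)]
# inbound HTTPS allowed by rule
assert sg_allows(inbound, [], {"direction": "in", "protocol": "tcp", "port": 443, "is_response": False})
# the response goes back out even with no outbound rule: stateful
assert sg_allows(inbound, [], {"direction": "out", "protocol": "tcp", "port": 443, "is_response": True})
# SSH has no allow rule, so it is denied by default
assert not sg_allows(inbound, [], {"direction": "in", "protocol": "tcp", "port": 22, "is_response": False})
```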


    • It's all about performance
    • What Cloudwatch is for?
      • Monitoring Performance
      • Monitors most AWS services, as well as your applications running on AWS
      • Can trigger notification with cloudwatch alarms
      • You can create Dashboard
    • Standard Monitoring Time

      5 Minute

    • Detailed Monitoring

      Monitoring at a detailed 1-minute frequency; the available metrics remain the same

    • Custom metrics that you can configure

      Memory utilization, disk swap utilization, disk space utilization, page file utilization, and log collection

    • Cloudwatch monitoring scripts

      Install CloudWatch monitoring scripts in the instances. Send custom metrics to CloudWatch which will trigger your Auto Scaling group to scale up.

    • To Trigger autoscaling group

      - Install CloudWatch monitoring scripts in the instances. Send custom metrics to CloudWatch which will trigger your Auto Scaling group to scale up.

      - Install the CloudWatch agent to the EC2 instances which will trigger your Auto Scaling group to scale up.

    • Amazon CloudWatch Alarms – Watch a single metric over a time period that you specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods.
    • Amazon CloudWatch Logs – Monitor, store, and access your log files from AWS CloudTrail or other sources.
    • Amazon CloudWatch Events – Match events and route them to one or more target functions or streams to make changes, capture state information, and take corrective action.
    • AWS CloudTrail Log Monitoring – Share log files between accounts, monitor CloudTrail log files in real time by sending them to CloudWatch Logs, write log processing applications in Java, and validate that your log files have not changed after delivery by CloudTrail.
    Auto Scaling is mostly triggered by CloudWatch alarms

    You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.
    CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
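
    The alarm behaviour described above (the state must be breached and maintained for a number of periods) can be sketched in Python; the threshold, period values, and metric data are made up:

    ```python
    def alarm_state(datapoints, threshold, evaluation_periods):
        # The alarm goes to ALARM only when the metric breaches the threshold
        # for the required number of consecutive periods; one spike is not enough.
        recent = datapoints[-evaluation_periods:]
        if len(recent) == evaluation_periods and all(v > threshold for v in recent):
            return "ALARM"
        return "OK"

    cpu = [40, 85, 60, 90, 92, 95]              # CPU utilisation per 5-minute period
    assert alarm_state(cpu, 80, 3) == "ALARM"   # last 3 periods all above 80%
    assert alarm_state(cpu[:3], 80, 3) == "OK"  # a single spike does not trigger
    ```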


    • With the CLI you can interact with AWS from anywhere in the world
    • You need to setup access in IAM



    • It's all about auditing
    • Records API calls within the AWS platform, i.e., any changes made, like a CCTV for your account
    • If you need to monitor API calls, it's CloudTrail.
    • Cloudtrail logs are encrypted by default
    • Log File Integrity Validation

      You can validate the integrity of the CloudTrail log files stored in your S3 bucket and detect whether they were deleted or modified after CloudTrail delivered them to your S3 bucket. You can use the log file integrity (LFI) validation as a part of your security and auditing discipline.

    CloudTrail is enabled for all Regions by default on creation; the logged events are stored in S3


    • Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights.
    • Provides interactive graphical data from your input. The input can be any of Excel / DB / your AWS service / Cloudwatch logs

    Access Logs

    • Access log to see who accessed your API and how the API been accessed.

    Snapshot Encryption

    • Snapshots of Encrypted volumes are encrypted automatically
    • Volumes restored from encrypted snapshots are encrypted automatically
    • Snapshots can be shared only if they are unencrypted
    • Root volumes can be encrypted upon instance creation.
    • How to encrypt the unencrypted root volume
      1. Create Snapshot of unencrypted root volume
      1. Copy the Snapshot and select encrypt option
      1. Create an AMI from the encrypted Snapshot
      1. Use the AMI to launch encrypted instance

    Elastic Beanstalk

    • What is Elastic Beanstalk?
      • AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud.
      • Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

    3rd Party certificate

    If you got your certificate from a third-party CA, import the certificate into ACM or upload it to the IAM certificate store.

Section 3 - Database 101

  • Types of Database
    1. Relational Database (SQL)
    1. Non-Relational Database ( No-SQL)
  • What is Relational Database?

    Like a table, it contains fixed rows and columns (a schema)


  • Relational Database service in AWS

    RDS Aurora

  • RDS supported engines
    • Microsoft SQL Server
    • MySQL
    • PostgreSQL
    • Oracle
    • Aurora
    • MariaDB
  • Is RDS serverless?

    No: it runs on virtual machines that you can't SSH into; maintenance of the servers is taken care of by AWS

  • Multi AZ vs Read Replica in RDS
    • Multi AZ
      • Used only for Disaster recovery
      • You can force failover from one AZ to other by rebooting RDS instance
    • Read Replicas
      • Read replicas are for performance.
      • Read replicas can be Multi-AZ and Multi Region.
      • Read replicas are supported by all engines other than Microsoft SQL Server
        • Supported: MySQL, PostgreSQL, MariaDB, Oracle, and Aurora

      • A read replica can be promoted to master, but that breaks the replication
      • Read replicas replicate data asynchronously
  • How many DB instances can I run with Amazon RDS?

    You can run up to 40 DB instances per account; of those, a maximum of 10 can be SQL Server or Oracle

  • Can I move my existing DB instances from inside VPC to outside VPC?

    No: you can't move an instance from inside a VPC to outside a VPC, for security reasons.

    You also can't create an instance outside a VPC from an AMI of an instance inside a VPC

  • When to consider Read replica?
    • To handle a database with a heavy read load.
    • Business scenarios where queries run against a replica database instead of directly on the production instance.
    • To serve read traffic while source DB is unavailable.
    • You may use a read replica for disaster recovery of the source DB instance.
  • RDS Backup options
    • Automated Backups

      You can recover data within a retention period of 1-35 days

      Snapshots are taken daily and transaction logs are stored

      All backups are stored in S3, with no separate charge for the S3 storage

      Supports point-in-time recovery, so you can restore to just before the change that caused an issue

    • Database Snapshots

      Taken manually

      Can be used even if original RDS is deleted

  • How Encryption @ rest handled by RDS?

    Handled using AWS KMS

  • While taking snapshot , how to handle queries without impacting performance?

    In a Multi-AZ deployment, the snapshot is taken from the standby instance, so queries on the primary continue without any performance impact.

  • What happens when the instance fails in Multi-AZ?

    The CNAME record is switched to the standby instance, which takes over as primary

  • RDS synchronously replicates data if it's in same region
You can't directly encrypt an unencrypted RDS instance: 1→ take a snapshot and copy it with encryption enabled 2→ create a new RDS instance from the encrypted snapshot

Use Multi-AZ RDS to avoid performance impact during the backup process: in Multi-AZ, data is synchronously replicated to the standby, and backups are taken from the standby


  • What is DynamoDB?
    • DynamoDB is a nonrelational Database(No-SQL)
    • DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
  • What are the consistency models in DynamoDB?
    • Eventually consistent reads (default)

      A read shortly after a write may return stale data; consistency across all copies is usually reached within a second

    • Strongly consistent reads

      Return the most up-to-date data, reflecting all writes that received a successful response before the read; can be requested per read
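
    A toy Python model of the two read modes (not the DynamoDB API, just an illustration of stale vs up-to-date reads):

    ```python
    import random

    class TinyTable:
        """One item replicated across storage nodes: writes land on the leader
        first and propagate later, so an eventually consistent read can return
        stale data while a strongly consistent read never does."""
        def __init__(self):
            self.leader = None
            self.replica = None  # lags behind until replication completes

        def put(self, value):
            self.leader = value         # acknowledged before replication finishes

        def replicate(self):
            self.replica = self.leader  # async propagation completing

        def get(self, consistent=False):
            if consistent:
                return self.leader      # strongly consistent: latest write
            return random.choice([self.leader, self.replica])  # may be stale

    t = TinyTable()
    t.put("v1"); t.replicate()
    t.put("v2")                         # replication of v2 not done yet
    assert t.get(consistent=True) == "v2"
    assert t.get() in ("v1", "v2")      # eventual read may see either value
    ```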

  • Where is DynamoDB data stored?

    Stored on SSDs

  • How DynamoDB scaling works?
  • Amazon DynamoDB Accelerator (DAX)
    • is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds
  • Global Tables: Multi-Region Replication with DynamoDB

    Amazon DynamoDB global tables provide a fully managed solution for deploying a multiregion, multi-master database, without having to build and maintain your own replication solution. With global tables you can specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these Regions and propagate ongoing data changes to all of them.

    For example, suppose that you have a large customer base spread across three geographic areas—the US East Coast, the US West Coast, and Western Europe. Customers can update their profile information using your application. To satisfy this use case, you need to create three identical DynamoDB tables named CustomerProfiles, in three different AWS Regions where the customers are located. These three tables would be entirely separate from each other. Changes to the data in one table would not be reflected in the other tables. Without a managed replication solution, you could write code to replicate data changes among these tables. However, doing this would be a time-consuming and labor-intensive effort.

    Instead of writing your own code, you could create a global table consisting of your three Region-specific CustomerProfiles tables. DynamoDB would then automatically replicate data changes among those tables so that changes to CustomerProfiles data in one Region would seamlessly propagate to the other Regions. In addition, if one of the AWS Regions were to become temporarily unavailable, your customers could still access the same CustomerProfiles data in the other Regions.

  • Dynamo DB streams

    A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

No need to configure Multi-AZ for DynamoDB, since it is highly available by default.
DynamoDB automatically scales, whereas in RDS you need to scale manually.
Data is spread across 3 geographically distinct data centres.

  • Athena

    Amazon Athena is an interactive query service that makes it easy to analyse data in Amazon S3, using standard SQL commands. It will work with a number of data formats including "JSON", "Apache Parquet", "Apache ORC" amongst others, but "XML" is not a format that is supported.


  • What is Aurora?
    • Amazon's relational database engine; it is MySQL- and PostgreSQL-compatible.
    • Aurora Serverless can be used for unpredictable workloads.
  • How many copies of data maintained?

    2 copies of your data are kept in each AZ, across a minimum of 3 AZs, i.e., 3 × 2 = 6 copies of your data

Aurora snapshots can be shared to other AWS accounts

  • Replica Types

    Aurora Replicas: 15

    MySQL Replicas: 5

    PostgreSQL Replicas: 1

  • Automated Backups are turned on by default
  • Aurora infra

    Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the host name and port that you specify point to an intermediate handler called an endpoint.

  • Types of Aurora Endpoint
    • Cluster Endpoint

      A cluster endpoint (or writer endpoint) for an Aurora DB cluster connects to the current primary DB instance for that DB cluster. This endpoint is the only one that can perform write operations such as DDL statements.

    • Custom Endpoint

      A custom endpoint for an Aurora cluster represents a set of DB instances that you choose. When you connect to the endpoint, Aurora performs load balancing and chooses one of the instances in the group to handle the connection.

    • Read Endpoint

      A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster.

    • Instance Endpoint

      An instance endpoint connects to a specific DB instance within an Aurora cluster. Each DB instance in a DB cluster has its own unique instance endpoint.


  • What is Elasticache?
    • Web service to boost the performance of an existing database
    • Caches high-volume, frequently fetched query results
    • Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.
  • How Elasticache works?
  • Redis and Memcached
    • Redis

      Redis stores data in cache and provides sub-millisecond latency

      Redis provides more features compared to Memcached

    • Memcached

      Memcached provides the simplest solution for sub-millisecond latency

  • Memcached vs Redis


  • What is Redshift?

    It is Amazon's data warehousing solution for Business Intelligence

    Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools.

    It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.

  • When to use?

    Data warehousing and Analytics

    It is primarily used for OLAP (Online Analytical Processing) systems.

  • What is Redshift spectrum?

    Amazon Redshift Spectrum is a feature of Amazon Redshift that enables you to run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required.

  • Managing Snapshots

    Amazon Redshift takes automatic, incremental snapshots of your data periodically and saves them to Amazon S3. Additionally, you can take manual snapshots of your data whenever you want.

  • What is Redshift enhanced VPC routing?
    • When you use Amazon Redshift enhanced VPC routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC.
    • If enhanced VPC routing is not enabled, Amazon Redshift routes traffic through the internet, including traffic to other services within the AWS network.


  • What is Elasticsearch?

    Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.

    AWS provides it as a managed service to bring up Elasticsearch instances and perform analysis. The entire infrastructure is managed by AWS, so there is no need to worry about managing instances

Amazon Athena vs Amazon EMR vs Amazon Redshift:
  • Athena: query service
  • Redshift: warehouse solution
  • EMR: run data processing frameworks like Hadoop, Spark, and Presto. Amazon EMR is flexible: you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements.

  • Amazon DynamoDB Accelerator

    Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.

  • DynamoDB accelerator flow
RDS backups
Elastic cache

Section 4 - DNS Route53

ELBs are pointed to by Alias records
Use Weighted Routing for Blue/Green Approach
DNS- Route 53

Section 5


  • What is VPC?

    Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.

  • What is Subnet?

    A range of IP addresses in your VPC.

  • What is Routetable?

    A set of rules, called routes, that are used to determine where network traffic is directed.

  • What is Internetgateway?

    A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.

  • VPC endpoint?

    An endpoint that lets AWS services in your account communicate with each other over private IPs, without needing internet access.

  • Types of VPC endpoint

    There are two types of VPC endpoints: interface endpoints and gateway endpoints. You should create the type of VPC endpoint required by the supported service. As a rule of thumb, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, which use VPC Gateway Endpoint.
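
    The rule of thumb above can be captured in a small helper (a sketch for illustration; the service names are examples):

    ```python
    GATEWAY_ENDPOINT_SERVICES = {"s3", "dynamodb"}  # the only gateway-endpoint services

    def vpc_endpoint_type(service):
        # Rule of thumb: S3 and DynamoDB use gateway endpoints; everything
        # else uses an interface endpoint (powered by AWS PrivateLink).
        return "gateway" if service.lower() in GATEWAY_ENDPOINT_SERVICES else "interface"

    assert vpc_endpoint_type("s3") == "gateway"
    assert vpc_endpoint_type("DynamoDB") == "gateway"
    assert vpc_endpoint_type("ecs") == "interface"
    ```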

  • What is Amazon VPC quotas?

    Number of VPC components that you can provision.

  • VPC consists of

    Internet gateways

    Route tables

    Network Access control lists


    Security groups

  • A subnet always belongs to a single Availability Zone
  • When you create a VPC
    • A default route table, network access control list, and security group are created
    • Subnets and an internet gateway are not created
  • What does Default VPC consist of
    • Your default VPC includes an internet gateway, and each default subnet is a public subnet.
    • Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address.
    • These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge.
  • EC2 instance at non default subnet

    By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute. These instances can communicate with each other, but can't access the internet.

  • How to enable internet for Ec2 instance in VPC
    1. You can use NAT
    1. By attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance.
  • What is AWS PrivateLink?
    • Enables you to privately connect your VPC to supported AWS services and to services hosted by other AWS accounts
    • Traffic between your VPC and the AWS service does not leave the AWS network.
  • VPC peering
    • VPC peering connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network.
    • VPC peering can be done between VPCs in your account or with a VPC in another AWS account
  • Amazon always reserves 5 IP addresses in each subnet
  • You can have only 1 internet gateway per VPC
  • In cases where your EC2 instance cannot be accessed from the Internet (or vice versa), you usually have to check two things

    - Does it have an EIP or public IP address?

    - Is the route table properly configured?
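On the subnet addressing point above: the five reserved addresses in every subnet are the network address, the next three (VPC router, DNS, reserved for future use), and the broadcast address. A quick check with Python's standard ipaddress module:

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """AWS reserves the first 4 and the last address of every subnet."""
    return ipaddress.ip_network(cidr).num_addresses - 5

def reserved(cidr: str):
    addrs = list(ipaddress.ip_network(cidr))
    # network, VPC router, DNS, reserved for future use, broadcast
    return [str(a) for a in (addrs[0], addrs[1], addrs[2], addrs[3], addrs[-1])]

# usable_hosts("10.0.0.0/24") -> 251, not the 254 of classic networking
```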

  • VPC endpoints
    • Enables you to privately connect your VPC to supported AWS services .
    • End to end traffic occur within Amazon network
    • Used to access AWS components such as S3 directly without need of internet access from EC2
    • Types of VPC endpoints
      1. Interface Endpoints
      1. Gateway endpoints, used for Amazon S3 and DynamoDB

  • You can connect your VPC to remote networks by using AWS Direct Connect, an IPsec VPN connection, AWS VPN CloudHub, or a third-party software VPN appliance.
  • VPC peering + VPC endpoint

    Above can help to connect two resources in different region without crossing through internet.

  • Performance on Site to site VPN

    AWS Site-to-Site VPN offers customizable tunnel options including inside tunnel IP address, pre-shared key, and Border Gateway Protocol Autonomous System Number (BGP ASN). In this way, you can set up multiple secure VPN tunnels to increase the bandwidth for your applications or for resiliency in case of downtime. In addition, equal-cost multi-path routing (ECMP) is available with AWS Site-to-Site VPN on AWS Transit Gateway to help increase the traffic bandwidth over multiple paths.

  • Classic link to connect Classic instances to VPC

    ClassicLink allows you to link EC2-Classic instances to a VPC in your account, within the same Region. If you associate the VPC security groups with an EC2-Classic instance, this enables communication between your EC2-Classic instance and instances in your VPC using private IPv4 addresses. ClassicLink removes the need to use public IPv4 addresses or Elastic IP addresses to enable communication between instances on these platforms.

ECMP is to be enabled on the client side, not on the virtual private gateway.
- Each subnet maps to a single Availability Zone.
- Every subnet that you create is automatically associated with the main route table for the VPC.
- If a subnet's traffic is routed to an Internet gateway, the subnet is known as a public subnet.
You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks.


  • What is VPN?

    AWS Site-to-Site VPN extends your data center or branch office to the cloud via IP Security (IPSec) tunnels, and supports connecting to both virtual private gateway and AWS Transit Gateway.

    You can optionally run Border Gateway Protocol (BGP) over the IPSec tunnel for a highly available solution.

  • Accelerated Site-to-Site VPN
    • When you connect an on-premises location to the AWS cloud, Accelerated Site-to-Site VPN will route your VPN traffic to the closest AWS edge location.
    • Accelerated VPN improves the performance of your Site-to-Site VPN connections by reducing the distance over which data is being shared on the internet and leveraging instead the reliability and performance of the AWS global fiber network.
    • Accelerated Site-to-Site VPN is ideal to connect business-critical locations with your global network, both on premises and in AWS.
    • VPN acceleration will incur additional charges from utilizing both AWS Site-to-Site VPN and AWS Global Accelerator.
  • Site-Site VPN Limitations
    • You can have up to fifty (50) customer gateways per AWS account per AWS Region.
    • You can have up to five (5) virtual gateways per AWS account per AWS Region.
    • You can have up to ten (10) Accelerated Site-to-Site VPN connections per AWS account per AWS Region.
    • You can have up to fifty (50) Site-to-Site VPN connections per AWS account per AWS Region.
    • You can have up to fifty (50) Site-to-Site VPN connections per virtual gateway.
    • You can advertise up to one hundred (100) routes to a Site-to-Site VPN connection from your customer gateway device.
    • Your Site-to-Site VPN connection can advertise up to one thousand (1000) routes to your customer gateway device.

  • What is VPN Cloudhub ?
    • If you have multiple AWS Site-to-Site VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub.
    • This enables your remote sites to communicate with each other, and not just with the VPC. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC.
    • This design is suitable if you have multiple branch offices and existing internet connections and would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
    • The sites must not have overlapping IP ranges.
  • How to connect Headquarters and other office location replacing WAN?

    → VGW- Virtual private Gateway , BGP -Border Gateway Protocol

    • VGW to connect multiple locations.
    • Each location to setup VPN connection from each customer gateway pointing to VGW
    • BGP peering to configure between customer gateway router and VGW using unique BGP ASN at each location.
    • VGW will receive prefixes from each location & re-advertise to other peers.

    Note: The BGP ASN should be unique at each location; if not, additional allow-in configuration is required

  • Increase Application Bandwidth
    • AWS Site-to-Site VPN offers customizable tunnel options including inside tunnel IP address, pre-shared key, and Border Gateway Protocol Autonomous System Number (BGP ASN). In this way, you can set up multiple secure VPN tunnels to increase the bandwidth for your applications or for resiliency in case of a down time.
    • Equal-cost multi-path routing (ECMP) is available with AWS Site-to-Site VPN on AWS Transit Gateway to help increase the traffic bandwidth over multiple paths.
  • VPN supports encryption in Transit
AWS Site-to-Site VPN supports NAT Traversal applications so that you can use private IP addresses on private networks behind routers with a single public IP address facing the internet.



  • What is NAT?

    You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

    You are charged for each hour a NAT gateway is available and for each gigabyte of data it processes.

  • When creating a NAT instance
    • Disable the Source/Destination check on the instance
    • The NAT instance must be in a public subnet
    • The amount of traffic a NAT instance supports depends on the size of the instance
    • You can use multiple subnets in different AZs with Auto Scaling groups for high availability
    • There must be a route from the private subnet to the NAT instance in order for it to work
    • Sits behind a Security Group
  • NAT Gateways
    • Redundant inside the AZ
    • Throughput can scale automatically
    • No need to patch
    • Not associated with security groups
    • Automatically assigned public IP address
    • Don't need to disable source/destination checks
    • Need to update the route table
    • Always design multiple NAT in multiple AZ for high availability
  • A NAT gateway / NAT instance is used to provide internet access to EC2 instances in private subnets
  • NAT with EC2

    The EC2 instance can connect to the public internet, whereas you can't connect to the instance from the public internet

  • Security group can't span VPC
    • NACL - Network Access Control List - must-know facts
      • VPC comes with default NACL
      • By default it allows all inbound and outbound traffic
      • Custom Network ACL denies all inbound and outbound traffic
      • Each subnet in a VPC must be associated with a network ACL. If none is explicitly associated, the subnet is associated with the default network ACL
      • You can block IP addresses using a NACL
      • Multiple subnets can be associated with a single NACL, but a subnet can be associated with only one NACL
      • If a subnet is associated with a new NACL, the previous association is removed
      • NACLs are stateless, so you must allow/deny explicitly on both inbound and outbound rules
      • A NACL contains a numbered list of rules, which are evaluated in order, starting with the lowest number
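The "evaluated in order, lowest rule number first, with an implicit final deny" behavior can be sketched as follows (simplified to port matching only; real NACL rules also match protocol, CIDR, and direction):

```python
def evaluate_nacl(rules, port):
    """rules: list of (rule_number, (port_lo, port_hi), action) tuples.

    Rules are evaluated in ascending rule-number order; the first match wins.
    The implicit final '*' rule denies anything unmatched.
    """
    for _num, (lo, hi), action in sorted(rules, key=lambda r: r[0]):
        if lo <= port <= hi:
            return action
    return "DENY"

rules = [
    (100, (80, 80), "ALLOW"),  # allow HTTP
    (90,  (80, 80), "DENY"),   # lower number: evaluated first, so it wins
]
# evaluate_nacl(rules, 80) -> "DENY"; port 22 hits the implicit deny
```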
    • VPC Flow logs
      • You can't enable flow logs for VPCs that are peered with yours unless the peered VPC is in your account.
      • You can tag flow logs.
      • Once a flow log is created, you can't change its configuration.
      • Not all IP traffic is monitored; exclusions include
        • DHCP traffic
        • Traffic to the instance metadata service
        • Traffic generated by a Windows instance for Amazon Windows license activation
        • Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, all traffic to that DNS server is logged.
      • can be created at the?

        VPC, subnet, and network interface levels.

    • What is a Bastion host?
      • A server that is used to securely administer EC2 instances (i.e., you connect to internal servers via the Bastion server)
      • You can't use a NAT gateway as a Bastion host
    • Best practice for implementing a Bastion host
      • The best way to implement a bastion host is to create a small EC2 instance which should only have a security group from a particular IP address for maximum security.
      • This will block any SSH Brute Force attacks on your bastion host. It is also recommended to use a small instance rather than a large one because this host will only act as a jump server to connect to other instances in your VPC and nothing else.
    • Direct Connect
      • A dedicated private network connection from your datacenter to AWS
      • Useful for high throughput workloads (ie.lots of network traffic)
      • Or if you need a stable and reliable secure connection
    • Global accelerators
      • Service in which you create accelerators to improve availability and performance of your application.
      • You are assigned 2 static IPs(you can also use your own)
      • You can control traffic using traffic dials. Done within the endpoint group.
    • By default, instances in new subnets in a custom VPC can communicate with each other across Availability Zones.
    • How many VPCs am I allowed in each AWS Region?

      Five (5) per Region by default; this is a soft limit that can be raised via a service quota increase.

    • Egress only gateway

      purpose of an "Egress-Only Internet Gateway" is to allow IPv6 based traffic within a VPC to access the Internet, whilst denying any Internet based resources the possibility of initiating a connection back into the VPC.

    • In a default VPC, all Amazon EC2 instances are assigned 2 IP addresses at launch. What are they?

      private and public IP

    • A VPN connection consists of which of the following components?

      The correct answers are "Customer Gateway" and "Virtual Private Gateway". When connecting a VPN between AWS and a third party site, the Customer Gateway is created within AWS, but it contains information about the third party site e.g. the external IP address and type of routing. The Virtual Private Gateway has the information regarding the AWS side of the VPN and connects a specified VPC to the VPN. "Direct Connect Gateway" and "Cross Connect" are both Direct Connect related terminology and have nothing to do with VPNs.

    • You may have only one internet gateway per VPC

    App Mesh

    • What is AppMesh?

      AWS App Mesh is a new technology that makes it easy to monitor, control, and debug the communications between services. App Mesh uses Envoy, an open source service mesh proxy which is deployed alongside your microservice containers. App Mesh is integrated with AWS services for monitoring and tracing, and it works with many popular third-party tools. App Mesh can be used with microservice containers managed by Amazon ECS, Amazon EKS, AWS Fargate, Kubernetes running on AWS, and services running on Amazon EC2.

    Link aggregation groups

    • What is LAG?

      A link aggregation group (LAG) is a logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.

      For higher throughput, a LAG can be used to aggregate multiple DX connections to give a maximum of 40 Gbps of bandwidth.
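Assuming the limits behind that 40 Gbps figure (up to 4 connections per LAG at 10 Gbps each; verify against current AWS Direct Connect quotas), the arithmetic is simply:

```python
MAX_CONNECTIONS_PER_LAG = 4  # assumed limit; check current AWS quotas
SPEED_GBPS = 10              # dedicated connection speed assumed here

def lag_bandwidth_gbps(n_connections: int) -> int:
    """All connections in a LAG must use the same bandwidth."""
    if n_connections > MAX_CONNECTIONS_PER_LAG:
        raise ValueError("a LAG supports at most 4 connections")
    return n_connections * SPEED_GBPS

# lag_bandwidth_gbps(4) -> 40
```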

Transit Gateway

A transit gateway is a transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.

Section 6 - High Availability

Load balancing

  • What is Load balancer?

    A physical or virtual device that distributes load across multiple servers.

  • What is ELB ( Elastic Load Balancer)?

    Webservice that distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.

  • Types of Load balancers that ELB offers
    1. Application Load balancer
    1. Network Load balancer
    1. Classic Load balancer
  • What is Application Load balancer and when to use?
    • Load balance at Layer 7 (Application Layer). HTTP and HTTPS traffic based
    • Load balancing can be done based on application logic such as IP address, HTTP headers, cookies, and geolocation
    • An Application Load Balancer must be deployed into at least two subnets.
  • What is Network Load balancer and when to use?
    • Operates at Layer 4 (Transport Layer)
    • Can load balance based on protocol and port (TCP, UDP, TLS)
    • Mainly used to load balance within your VPC
  • Cross Zone load balancing by Network Load balancer
    • Network Load Balancer can now distribute requests regardless of Availability Zone with the support of cross-zone load balancing.
    • This feature allows Network Load Balancer to route incoming requests to applications that are deployed across multiple Availability Zones.
    • Network Load Balancer relies on Domain Name System (DNS) to distribute requests from clients to the Load Balancer nodes deployed in multiple Availability Zones.
  • What is Classic Load balancer and when to use?

    Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

  • What is the SLA for Load balancers ?

    monthly availability of at least 99.99%

  • What is connection draining?

    To ensure that a Classic Load Balancer stops sending requests to instances that are de-registering or unhealthy while keeping the existing connections open, use connection draining. This enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy.

  • Access Logs

    Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.

    Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.

An ELB can balance traffic only within one region
SNI is not supported by the Classic Load Balancer; Application and Network Load Balancers support SNI

Trouble shooting ELB

  • A registered target is not in service

    If a target is taking longer than expected to enter the InService state, it might be failing health checks.

    Verify that your instance is failing health checks and then check for the following:

    A security group does not allow traffic: the security group associated with an instance must allow traffic from the load balancer using the health check port and health check protocol. You can add a rule to the instance security group to allow all traffic from the load balancer security group. Also, the security group for your load balancer must allow traffic to the instances.

    A network access control list (ACL) does not allow traffic: the network ACL associated with the subnets for your instances must allow inbound traffic on the health check port and outbound traffic on the ephemeral ports (1024-65535). The network ACL associated with the subnets for your load balancer nodes must allow inbound traffic on the ephemeral ports and outbound traffic on the health check and ephemeral ports.

    The ping path does not exist: create a target page for the health check and specify its path as the ping path.

    The connection times out: first, verify that you can connect to the target directly from within the network using the private IP address of the target and the health check protocol. If you can't connect, check whether the instance is over-utilized, and add more targets to your target group if it is too busy to respond. If you can connect, it is possible that the target page is not responding before the health check timeout period. Choose a simpler target page for the health check or adjust the health check settings.

    The target did not return a successful response code: by default, the success code is 200, but you can optionally specify additional success codes when you configure health checks. Confirm the success codes that the load balancer is expecting and that your application is configured to return these codes on success.

  • Client can't connect to internet facing Load balancer

    If the load balancer is not responding to requests, check for the following:

    • Your Internet-facing load balancer is attached to a private subnet: verify that you specified public subnets for your load balancer. A public subnet has a route to the Internet Gateway for your virtual private cloud (VPC).
    • A security group or network ACL does not allow traffic: the security group for the load balancer and any network ACLs for the load balancer subnets must allow inbound traffic from the clients and outbound traffic to the clients on the listener ports.

Elastic Beanstalk

  • What is Elastic Beanstalk?

    You just upload your application code, and Elastic Beanstalk takes care of everything from EC2 instances to Auto Scaling, so you don't need prior knowledge of infrastructure management.

  • What type of application supported by Elastic Beanstalk?

    AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, and is ideal for web applications.

CloudFormation vs Elastic Beanstalk
→ CloudFormation is for individual resources, like EC2
→ Elastic Beanstalk works at the application level, i.e., it covers all dependent resources of an application

Code Deploy

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during application deployment, and handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.

Code Commit

You mainly use CodeCommit as a managed source-control service that hosts private Git repositories. You can store anything from code to binaries and work seamlessly with your existing Git-based tools. CodeCommit integrates with CodePipeline and CodeDeploy to streamline your development and release process.


  • What is Autoscaling?

    Helps you optimise the performance of your applications while lowering infrastructure costs by easily and safely scaling multiple AWS resources.

    Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases.

    For example, the following Auto Scaling group has a minimum size of one instance, a desired capacity of two instances, and a maximum size of four instances. The scaling policies that you define adjust the number of instances, within your minimum and maximum number of instances, based on the criteria that you specify.

  • Component of Autoscaling

    Groups - Resource details which are to be grouped

    Config Template - Resource config details, eg: Ec2 AMI ID,security key etc.

    Scaling options - How to scale

  • What are Autoscaling options available?

    Manual scaling

    Scale based on a schedule - done using a scheduled plan

    Scale based on demand

    Scale to maintain a minimum resource count

  • What is Predictive Scaling?

    AWS looks at the historic data of traffic and autoscales.

  • How does Auto Scaling decide which instance to terminate?

    By default, it picks the Availability Zone with the most instances, then the instance with the oldest launch configuration, then the instance closest to the next billing hour.

  • Auto Scaling cooldown - must-know facts
    • It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
    • Its default value is 300 seconds.
    • It is a configurable setting for your Auto Scaling group.
  • You can use SQS Queue length for Autoscaling Ec2 instance.
  • ASG diagram
  • Step scaling

    With step scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process as well as define how your scalable target should be scaled when a threshold is in breach for a specified number of evaluation periods. Step scaling policies increase or decrease the current capacity of a scalable target based on a set of scaling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm breach. After a scaling activity is started, the policy continues to respond to additional alarms, even while a scaling activity is in progress. Therefore, all alarms that are breached are evaluated by Application Auto Scaling as it receives the alarm messages.

    When you configure dynamic scaling, you must define how to scale in response to changing demand. For example, you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 50 percent when the load on the application changes. This gives you extra capacity to handle traffic spikes without maintaining an excessive amount of idle resources. You can configure your Auto Scaling group to scale automatically to meet this need. The policy type determines how the scaling action is performed.

  • Target tracking scaling -

    Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.
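Conceptually, target tracking computes a new capacity proportional to how far the metric is from the target. A simplified sketch of that proportion (the real service also smooths the metric and respects cooldowns and min/max bounds):

```python
import math

def target_tracking_capacity(current_capacity, metric_value, target_value):
    """New capacity ~ current * (metric / target), clamped to at least 1."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))

# 2 instances running at 75% CPU with a 50% target -> scale out to 3
```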

  • Step scaling

    - Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.
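Step adjustments map the size of the alarm breach to a capacity change. A minimal sketch with hypothetical step boundaries (bounds are relative to the alarm threshold, as in the real service):

```python
def step_adjustment(metric, threshold, steps):
    """steps: list of (lower_bound, upper_bound, adjustment) tuples.

    Bounds are relative to the threshold; an upper bound of None means
    infinity. Returns the capacity change to apply.
    """
    breach = metric - threshold
    if breach < 0:
        return 0  # alarm not in breach, no scaling action
    for lower, upper, adjustment in steps:
        if breach >= lower and (upper is None or breach < upper):
            return adjustment
    return 0

# Hypothetical policy: threshold 50% CPU; a breach of 0-10 adds 1 instance,
# a breach of 10 or more adds 3 instances
steps = [(0, 10, 1), (10, None, 3)]
```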

  • Simple Scaling

    Simple scaling - Increase or decrease the current capacity of the group based on a single scaling adjustment.

  • Lifecycle hooks

    Use this to hold the termination of instance, so you can perform actions like copying of data before termination.

If 2 policies are triggered at the same time, Auto Scaling follows the policy that provides the larger capacity. E.g., if two policies trigger at once, one scaling to 2 EC2 instances and the other to 4, then 4 EC2 instances will result
If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in an Auto Scaling group, then it is recommended that you use target tracking scaling policies. Otherwise, it is better to use step scaling policies instead.

An Auto Scaling group is associated with one launch configuration at a time, and you can't modify a launch configuration after you've created it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration. Then, update the Auto Scaling group to use the new launch configuration.
After you change the launch configuration for an Auto Scaling group, any new instances are launched using the new configuration options, but existing instances are not affected. In this situation, you can allow automatic scaling to gradually replace older instances with newer instances based on your termination policies. With the maximum instance lifetime and instance refresh features, you can also replace existing instances in the Auto Scaling group to launch new instances that use the new configuration.


  • What is Cloudformation?

    CloudFormation lets developers create AWS and third-party resources via code

  • How does CloudFormation work?
  • What is the AWS CloudFormation Registry?

    AWS CloudFormation Registry is a managed service that lets you register, use, and discover AWS and third party resource providers.

  • What is a resource schema?

    In a resource provider, a resource type is expressed using a CloudFormation Resource Schema to define its properties and attributes. This schema is also used to validate the definition of a resource type.

  • What are the limits to the number of parameters or outputs in a template?

    Limitation of 60 Parameters

    Limitation of 60 Outputs

  • DeletionPolicy Attribute
    • With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted.
    • You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.
  • Detecting unmanaged configuration changes to stacks and resources

    Drift detection: detects differences between the configuration CloudFormation expects and changes made outside CloudFormation.

    By default, it detects drift only on properties that are explicitly set in the CloudFormation template.

  • Traffic-shifting options for deployments (e.g., CodeDeploy for Lambda)

    Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.

    Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.

    All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.

HA Architecture

Adding resilience and autoscaling
elastic beanstalk

Section 7 - Applications


  • What is SQS?
    • Simple Queue Service
    • Web service that gives you access to a message queue. Data can be saved as messages stored in the queue
    • SQS is pull based, i.e., EC2 should pull the messages
  • Can messages be saved?

    Yes. Large message payloads (over 256 KB) can be stored in Amazon S3 using the SQS Extended Client Library.

  • Message size limit of text?

    256 KB is the max size of a message in text format

  • Max visibility timeout of a message?

    12 hours

  • Types of Queue
    • Standard Queue
      • The default queue type
      • Can perform an unlimited number of transactions per second
      • It tries to deliver messages in the order they were sent (not guaranteed)
      • Sometimes a message is delivered more than once
    • First In First Out
      • Message processed exactly once
      • FIFO limitation

        Limited to 300 transactions per second

      • FIFO also supports multiple ordered message groups within a single queue
      • FIFO processes one message at a time; duplicates are not introduced into the queue
  • Does Amazon SQS provide message ordering?

    FIFO queues preserve ordering

  • Does Amazon SQS guarantee delivery of messages?

    Standard queues provide at-least-once delivery, which means that each message is delivered at least once.

  • SQS can be used with what AWS services?

    with compute services such as Amazon EC2 Container Service (Amazon ECS), and AWS Lambda, as well as with storage and database services such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

  • What is a visibility timeout?
    • If a message is selected for processing, it becomes hidden. The message becomes visible again unless processing finishes before the visibility timeout expires.

    The above may result in a message being processed twice.
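A toy model (logical clock, illustrative only, not the actual SQS implementation) of how a received message disappears for the visibility timeout and reappears if it isn't deleted in time:

```python
class ToyQueue:
    """Minimal model of the SQS visibility timeout, using a logical clock."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self._visible_at = {}  # message body -> time it becomes visible again

    def send(self, body):
        self._visible_at[body] = 0  # visible immediately

    def receive(self, now):
        for body, visible_at in self._visible_at.items():
            if now >= visible_at:
                # hide the message for the visibility timeout
                self._visible_at[body] = now + self.visibility_timeout
                return body
        return None  # nothing currently visible

    def delete(self, body):
        self._visible_at.pop(body, None)

q = ToyQueue(visibility_timeout=30)
q.send("job-1")
# receive at t=0 hides the message; at t=10 nothing is visible;
# at t=31 it is redelivered because it was never deleted (at-least-once)
```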

  • Max visibility timeout?

    12 hours

  • What is Amazon SQS long polling?

    A way to retrieve messages from your SQS queue.

    Long polling doesn't return a response until a message arrives in the queue or the long poll times out.

  • Message Retention period?

    Default Retention period is 4 Days

    Retention period can vary from 1 minute - 14 Days

  • Can I share messages between queues in different regions?

    No. Each Amazon SQS message queue is independent within each region.

  • SQS decouples the component of an application, so that they run independently
  • Moving from Standard Queue to FIFO

    You can't move existing messages in a standard queue to a FIFO queue

    • Ensure below
      • If your application can send messages with identical message bodies, you can modify your application to provide a unique message deduplication ID for each sent message.
      • If your application sends messages with unique message bodies, you can enable content-based deduplication.
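Content-based deduplication is in fact a SHA-256 hash of the message body; a sketch of both deduplication options above (toy model, ignoring the real 5-minute deduplication interval):

```python
import hashlib

def dedup_key(body, message_dedup_id=None):
    """An explicit deduplication ID wins; otherwise hash the body (content-based)."""
    return message_dedup_id or hashlib.sha256(body.encode()).hexdigest()

class ToyFifo:
    def __init__(self):
        self._seen = set()  # real SQS keeps these only for a 5-minute interval
        self.messages = []

    def send(self, body, message_dedup_id=None):
        key = dedup_key(body, message_dedup_id)
        if key in self._seen:
            return False  # duplicate: accepted but not enqueued again
        self._seen.add(key)
        self.messages.append(body)
        return True

fifo = ToyFifo()
fifo.send("order-42")          # enqueued
fifo.send("order-42")          # dropped: identical body, content-based dedup
fifo.send("order-42", "id-2")  # enqueued: unique explicit dedup ID
```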
SQS uses redundant infrastructure, which provides high availability, so there is no need to worry about failover.

Kinesis Datastream

  • What is Kinesis?

    Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs.

  • How it works?
  • What is Kinesis Data stream?

    A Kinesis data stream is an ordered sequence of data records meant to be written to and read from in real time.

  • Types of Kinesis
    • Kinesis Streams
      • You can use Amazon Kinesis Data Streams to collect and process large streams of data records in real time. You can create data-processing applications, known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a data stream as data records. These applications can use the Kinesis Client Library, and they can run on Amazon EC2 instances.
      • Place where the data stored
      • Data is stored from 1 day (default) up to 7 days
    • Kinesis Firehose

      Data is loaded directly into a destination here; for example, use Kinesis Firehose to deliver data to S3.

      Data is processed in near real time (not precise real time).

    • Kinesis Analytics

      Works with Streams or Firehose to perform analytics

  • What is a shard?

    Shard is the base throughput unit of an Amazon Kinesis data stream.

  • Data input and output capacity of one shard

    One shard provides a capacity of 1MB/sec data input and 2MB/sec data output.

  • One shard can support up to how many PUT records per second?

    1,000 PUT records per second.
  • Retention period of Datasream stores

    A Kinesis data stream stores records from 24 hours by default to a maximum of 168 hours (7 days).

  • Kinesis Data Streams resharding

    Amazon Kinesis Data Streams supports resharding, which lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream. Resharding is considered an advanced operation.

    There are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. Resharding is always pairwise in the sense that you cannot split into more than two shards in a single operation, and you cannot merge more than two shards in a single operation. The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.

    Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, merging reduces the number of shards in your stream and therefore decreases the data capacity—and cost—of the stream.

    If your data rate increases, you can also increase the number of shards allocated to your stream to maintain the application performance. You can reshard your stream using the UpdateShardCount API. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream.
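The per-shard limits quoted above (1 MB/s in, 2 MB/s out, 1,000 PUT records/s) give a simple sizing rule: the shard count must cover the largest of the three ratios. A back-of-the-envelope sketch:

```python
import math

def shards_needed(in_mb_per_sec, out_mb_per_sec, records_per_sec):
    """Shard sizing from the per-shard limits:
    1 MB/s in, 2 MB/s out, and 1,000 PUT records/s."""
    return max(
        math.ceil(in_mb_per_sec / 1.0),
        math.ceil(out_mb_per_sec / 2.0),
        math.ceil(records_per_sec / 1000.0),
    )

# A stream ingesting 5 MB/s, read at 8 MB/s, with 4,500 records/s:
print(shards_needed(5, 8, 4500))  # -> 5 (ingest needs 5, egress 4, records 5)
```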


  • Who should use Amazon MQ?

    Best suited to customers who manage a message broker themselves, whether on-premises or in the cloud, and want to move to a fully managed cloud service without rewriting the messaging code in their applications.


  • What is STS?

    AWS Security Token Service, which provides temporary, limited-privilege credentials.


  • What is CloudHSM?

    AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.

    Use CloudHSM if you want complete control over your encryption keys, including the underlying hardware and the key lifecycle

  • To encrypt its data, the HSM uses a unique, ephemeral encryption key known as the ephemeral backup key (EBK). The EBK is an AES 256-bit encryption key generated inside the HSM when AWS CloudHSM makes a backup. The HSM generates the EBK, then uses it to encrypt the HSM's data with a FIPS-approved AES key wrapping method that complies with NIST special publication 800-38F. Then the HSM gives the encrypted data to AWS CloudHSM. The encrypted data includes an encrypted copy of the EBK.

AWS Opswork

  • What is AWS Opswork?

    AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings, AWS Opsworks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks.

AWS Trusted Advisor

  • What is AWS Trusted Advisor?

    AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices.

    Trusted Advisor checks help optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.


  • What are Lexicons?

    Pronunciation lexicons enable you to customize the pronunciation of words.

    Amazon Polly provides API operations that you can use to store lexicons in an AWS region. Those lexicons are then specific to that particular region. You can use one or more of the lexicons from that region when synthesizing the text by using the SynthesizeSpeech operation. This applies the specified lexicon to the input text before the synthesis begins. For more information, see SynthesizeSpeech.

Amazon Polly

  • What is Amazon Polly?

    Amazon Polly is a cloud service that converts text into lifelike speech. You can use Amazon Polly to develop applications that increase engagement and accessibility. Amazon Polly supports multiple languages and includes a variety of lifelike voices, so you can build speech-enabled applications that work in multiple locations and use the ideal voice for your customers. With Amazon Polly, you only pay for the text you synthesize. You can also cache and replay Amazon Polly’s generated speech at no additional cost.


  • What is SWF?
    • Simple WorkFlow
    • Amazon Simple Workflow Service (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components
    • Tracks all tasks and events

  • Where it can be used?

    Used to coordinate multiple tasks, for example coordinating everything from receiving an order to delivering it on an e-commerce website.

    Here a task can be anything, even a manual (human) action.

  • SWF Actors
    • Starters

      The application that initiates the workflow

    • Deciders

      Controls the flow of tasks in the activity and decides what to do next

    • Activity workers

      Carry out the activity tasks

Amazon SWF provides useful guarantees around task assignments. It ensures that a task is never duplicated and is assigned only once. Thus, even though you may have multiple workers for a particular activity type (or a number of instances of a decider), Amazon SWF will give a specific task to only one worker (or one decider instance). Additionally, Amazon SWF keeps at most one decision task outstanding at a time for a workflow execution. Thus, you can run multiple decider instances without worrying about two instances operating on the same execution simultaneously.
This is a fully-managed state tracker and task coordinator service. It does not provide serverless orchestration to multiple AWS resources.

Step Function

  • What is AWS Step Function?

    AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps.

    As your applications execute, Step Functions maintains application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components.

    That means that if networks fail or components hang, your application can pick up right where it left off.

Step Functions supports serverless orchestration
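The idea of breaking a workflow into steps while tracking state and an event log can be sketched as a toy orchestrator (illustrative only, not the Step Functions API or its Amazon States Language):

```python
def run_workflow(steps, payload):
    """Toy orchestrator in the spirit of Step Functions: runs steps in
    order, tracks the current state, and logs each step's input and
    output so a failed run could resume where it left off."""
    event_log = []
    state = payload
    for name, fn in steps:
        result = fn(state)
        event_log.append({"step": name, "input": state, "output": result})
        state = result
    return state, event_log

# Hypothetical two-step order workflow:
steps = [
    ("validate", lambda s: {**s, "valid": True}),
    ("charge",   lambda s: {**s, "charged": True}),
]
final, log = run_workflow(steps, {"order": 42})
```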

AWS Batch

  • AWS Batch

    is primarily used to efficiently run hundreds of thousands of batch computing jobs in AWS.


  • What is SNS?
    • Simple Notification Service
    • Enables developers to publish messages from an application and immediately deliver them to subscribers or other applications.
  • SNS allows to push notifications via?

    Mobile push, Email, HTTP/HTTPS endpoints, and SMS text messages

  • Recipients can be grouped into multiple topics
  • SNS Attributes
    • Name – The message attribute name can contain the following characters: A-Z, a-z, 0-9, underscore(_), hyphen(-), and period (.).

    The name must not start or end with a period, and it should not have successive periods. The name is case-sensitive and must be unique among all attribute names for the message. The name can be up to 256 characters long. The name cannot start with "AWS." or "Amazon." (or any variations in casing) because these prefixes are reserved for use by Amazon Web Services.

    • Type – The supported message attribute data types are String, String.Array, Number, and Binary. The data type has the same restrictions on the content as the message body. The data type is case-sensitive, and it can be up to 256 bytes long. For more information, see the Message attribute data types and validation section.
    • Value – The user-specified message attribute value. For string data types, the value attribute has the same restrictions on the content as the message body. For more information, see the Publish action in the Amazon Simple Notification Service API Reference.
  • Delivery protocols at SNS
Note: SQS and SNS are both messaging services → SQS is pull-based, i.e. consumers poll and process messages → SNS is push-based
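The attribute-name rules quoted above (allowed characters, no leading/trailing or successive periods, 256-character limit, reserved "AWS."/"Amazon." prefixes) can be checked with a small validator; a sketch:

```python
import re

def valid_sns_attribute_name(name: str) -> bool:
    """Checks the SNS message-attribute naming rules quoted above."""
    if not 0 < len(name) <= 256:
        return False
    if not re.fullmatch(r"[A-Za-z0-9_\-.]+", name):
        return False  # only A-Z, a-z, 0-9, underscore, hyphen, period
    if name.startswith(".") or name.endswith(".") or ".." in name:
        return False  # no leading/trailing or successive periods
    if name.lower().startswith(("aws.", "amazon.")):
        return False  # prefixes reserved by Amazon Web Services
    return True

assert valid_sns_attribute_name("order.total") is True
assert valid_sns_attribute_name(".hidden") is False
assert valid_sns_attribute_name("AWS.internal") is False
```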

Elastic Transcoder

  • What is Elastic Transcoder?

    A service to convert media from one format to another.

  • Billing based on?

    The minutes of content and the resolution you transcode

API Gateway

  • What is API Gateway?

    Service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale.

  • What API types are supported by Amazon API Gateway?
    • HTTP API

      Optimized for building APIs that proxy to AWS Lambda functions or HTTP backends

    • REST API

      REST APIs offer API proxy functionality and API management features in a single solution.

    • WebSocket API
      • WebSocket APIs maintain a persistent connection with connected clients to enable real-time message communication.
      • Charged for messages sent and received, and for the time the connection is maintained.

      You can invoke Lambda if specific message received from client

  • With what backends can Amazon API Gateway communicate?
    • Amazon API Gateway can execute AWS Lambda functions in your account or in other AWS accounts
    • Can start AWS Step Functions state machines
    • call HTTP endpoints hosted on AWS Elastic Beanstalk, Amazon EC2, and also non-AWS hosted HTTP based operations that are accessible via the public Internet.
    • API Gateway can generate a response as if it were sent by the backend system
    • You can also integrate API Gateway with other AWS services directly – for example, you could expose an API method in API Gateway that sends data directly to Amazon Kinesis.

  • What can be done with Resource policy in API?
    • You can use a Resource Policy to enable users from a different AWS account to securely access your API
    • Allow the API to be invoked only from specified source IP address ranges or CIDR blocks.
  • Throttling and Burst

    The AWS default throttling value is 10,000 rps (requests per second)

    The default burst is 5,000

    • What is Burst?

      The maximum number of requests that can be served concurrently at any given point in time

    • What is Throttling?

      The maximum number of requests that can be served per second

    API Gateway returns a 429 status code when the throttling/burst limits kick in
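The rate and burst limits behave like a token bucket: tokens refill at the rate limit, the bucket capacity is the burst, and an empty bucket yields a 429. A toy model (not how API Gateway is implemented internally):

```python
class TokenBucket:
    """Token-bucket model of API Gateway throttling: `rate` tokens are
    added per second up to `burst` capacity; each request costs one
    token, and an empty bucket means HTTP 429."""

    def __init__(self, rate=10000, burst=5000):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now):
        # refill proportionally to the elapsed time, capped at the burst
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200
        return 429

bucket = TokenBucket(rate=2, burst=3)        # tiny limits for illustration
codes = [bucket.allow(0) for _ in range(4)]  # burst of 4 requests at t=0
# codes -> [200, 200, 200, 429]: the 4th request exceeds the burst
```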

  • How are throttling rules applied?

    First, API Gateway checks against your AWS account limit. If the traffic is below the set account limit, API Gateway checks the limit you have set on a stage or method. If the traffic is below the stage limit, then API Gateway applies the usage plans limits you set on a per-API key basis.

  • API Gateway with private integrations

    Private integrations were made possible via VPC Link and Network Load Balancers, which support backends such as EC2 instances, Auto Scaling groups, and Amazon ECS using the Fargate launch type.

  • As a default security feature, API Gateway provides DDoS protection.
  • API caching at API Gateway
    • You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
    • Cache data encryption may increase the size of the response when it is being cached.

  • What is the Max size of data that can be cached in API Gateway?

    The maximum size of a response that can be cached is 1,048,576 bytes (1 MB).

  • When you enable caching for a stage

    API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
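The stage-cache behavior described above (TTL-based lookup before calling the backend) can be sketched as:

```python
import time

class TtlCache:
    """Sketch of stage-level response caching: entries expire after
    `ttl` seconds (API Gateway default is 300, max 3600; 0 disables)."""

    def __init__(self, ttl=300):
        self.ttl, self.store = ttl, {}

    def get(self, key, fetch, now=None):
        now = time.time() if now is None else now
        hit = self.store.get(key)
        if self.ttl > 0 and hit and hit[1] > now:
            return hit[0], "cache-hit"      # serve from cache, skip backend
        value = fetch()                     # call the backend endpoint
        self.store[key] = (value, now + self.ttl)
        return value, "cache-miss"

cache = TtlCache(ttl=300)
v1 = cache.get("/users", lambda: "backend-response", now=0)    # miss
v2 = cache.get("/users", lambda: "backend-response", now=100)  # hit
v3 = cache.get("/users", lambda: "backend-response", now=301)  # expired -> miss
```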

Web identity federation and Cognito

  • What is Web identity federation and Cognito?
    • Provides sign-up, sign-in, and guest user features for your app without any additional code.
    • Cognito acts as a broker between an identity provider (Google/Facebook) and AWS
  • User pools?

    User directories that manage sign-in/sign-up functionality for your app

  • Identity pools?

    Provide temporary AWS credentials to access services like S3 and EC2

Cognito identity providers are to authenticate users not services


  • What is SES?
    • SES - Simple Emailing Service
    • Service for sending and receiving email. Amazon SES eliminates the complexity and expense of building an in-house email solution or licensing, installing, and operating a third-party email solution.


Section 8 - Serverless

  • What is Lambda?

    Lets you run your code without provisioning servers.

  • How Lambda Works?
  • What is server less computing?

    Allows you to build and run applications without managing servers.

  • What all can directly trigger Lambda?

    ALB, Cognito, Lex, Alexa, API Gateway, CloudFront, and Kinesis Data Firehose are all valid direct (synchronous) triggers for Lambda.

  • What languages does AWS Lambda support?

    Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows you to use any additional programming languages to author your functions.
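A minimal Python handler, as a sketch: the runtime calls handler(event, context) with the trigger's payload, and the return value becomes the response (the statusCode/body shape shown is what an API Gateway proxy integration expects):

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler; `event` carries the trigger payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,  # shape expected by an API Gateway proxy integration
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Invoked locally for illustration (the context object is unused here):
print(handler({"name": "SAA-C02"}, None))
```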

  • How to troubleshoot a serverless application?

    Use AWS X-Ray to track and troubleshoot Lambda calls

  • Limitation of codesize in Lambda?

    Uploads must be no larger than 50MB (compressed).

  • What is temporary storage space for lambda function

    Each Lambda function receives 500MB of non-persistent disk space in its own /tmp directory.

  • EFS for Lambda

    Lambda functions can mount EFS to read and process data, rather than relying on the default 500 MB of temporary storage

  • What is Lambda@Edge?

    Maps Lambda to CloudFront so end users can access the API endpoint with low latency

  • What Amazon CloudFront events can be used to trigger my functions?

    Viewer Request and Response

    Origin Request and Response

  • How does Lambda Scales?
    • Lambda scales out (not up) automatically

    ie) If 5 people trigger a Lambda function, then 5 separate instances of the function are run.

  • Security and Access for lambda functions

    For Lambda to use other AWS services, an IAM role with the required permissions must be attached to the Lambda function

  • Lambda functions are independent , 1 event = 1 function
  • service vs serverless

    With a "service", actual servers still run at the backend; they are just maintained by AWS, which means there is still a chance of failover

  • One Lambda function can trigger multiple other functions
  • Lambda can do things globally

    like backing up S3

  • Lambda billing is based on?
    • Execution time, billed in fractions of a second
    • Total memory assigned
  • What is the max timeout of a Lambda function?

    Max of 15 minutes.

  • Lambda function calling resource in VPC

    AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.

    Your Lambda function automatically scales based on the number of events it processes. If your Lambda function accesses a VPC, you must make sure that your VPC has sufficient ENI capacity to support the scale requirements of your Lambda function.

    If your VPC does not have sufficient ENIs or subnet IPs, your Lambda function will not scale as requests increase, and you will see an increase in invocation errors with EC2 error types like EC2ThrottledException.

  • Monitoring functions in the AWS Lambda console
    • Invocations – The number of times that the function was invoked in each 5-minute period.
    • Duration – The average, minimum, and maximum execution times.
    • Error count and success rate (%) – The number of errors and the percentage of executions that completed without error.
    • Throttles – The number of times that execution failed due to concurrency limits.
    • IteratorAge – For stream event sources, the age of the last item in the batch when Lambda received it and invoked the function.
    • Async delivery failures – The number of errors that occurred when Lambda attempted to write to a destination or dead-letter queue.
    • Concurrent executions – The number of function instances that are processing events.
    AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include total invocation requests, latency, and error rates. The throttles, Dead Letter Queues errors and Iterator age for stream-based invocations are also monitored.
    You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports AWS X-Ray tracing for all API Gateway endpoint types: regional, edge-optimized, and private. You can use AWS X-Ray with Amazon API Gateway in all regions where X-Ray is available.

If a question is about a long-running, high-load application, don't prefer Lambda, because of its 15-minute max timeout

AWS AppSync

  • What is AppSync?

    AWS AppSync simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources.

    AppSync is a managed service that uses GraphQL to make it easy for applications to get exactly the data they need.


Alexa skills

AWS X-Ray

  • What is AWS X-Ray?

    AWS X-Ray receives data from services as segments. X-Ray then groups segments that have a common request into traces. X-Ray processes the traces to generate a service graph that provides a visual representation of your application.

  • What are Segments?

    The compute resources running your application logic send data about their work as segments.

    A segment provides the resource's name, details about the request, and details about the work done.

    For example, when an HTTP request reaches your application, it can record the following data:

    • The host – hostname, alias or IP address
    • The request – method, client address, path, user agent
    • The response – status, content
    • The work done – start and end times, subsegments
  • What are Subsegments?

    A segment can break down the data about the work done into subsegments.

    Subsegments provide more granular timing information and details about downstream calls that your application made to fulfill the original request.

    A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database.

  • What is Service graph?

    X-Ray uses the data that your application sends to generate a service graph. Each AWS resource that sends data to X-Ray appears as a service in the graph.

  • What is Trace?

    A trace ID tracks the path of a request through your application.

    A trace collects all the segments generated by a single request.

    That request is typically an HTTP GET or POST request that travels through a load balancer, hits your application code, and generates downstream calls to other AWS services or external web APIs.

    The first supported service that the HTTP request interacts with adds a trace ID header to the request, and propagates it downstream to track the latency, disposition, and other request data.

  • What is Sampling ?

    To ensure efficient tracing and provide a representative sample of the requests that your application serves, the X-Ray SDK applies a sampling algorithm to determine which requests get traced. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests.

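The default rule above (trace the first request each second, plus 5% of the rest) can be modeled as a reservoir plus a fixed rate; a sketch, not the X-Ray SDK itself:

```python
import random

class Sampler:
    """Model of the X-Ray SDK's default rule: trace the first request
    in each one-second window (the reservoir), plus `rate` of the rest."""

    def __init__(self, reservoir=1, rate=0.05):
        self.reservoir, self.rate = reservoir, rate
        self.window, self.used = -1, 0

    def sampled(self, now):
        second = int(now)
        if second != self.window:          # new one-second window
            self.window, self.used = second, 0
        if self.used < self.reservoir:
            self.used += 1
            return True                    # reservoir: first request this second
        return random.random() < self.rate  # 5% of the remainder

s = Sampler()
first = s.sampled(10.0)  # always True: fills the reservoir for second 10
```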

  • What is Tracing header?

    All requests are traced, up to a configurable minimum. After reaching that minimum, a percentage of requests are traced to avoid unnecessary cost. The sampling decision and trace ID are added to HTTP requests in tracing headers named X-Amzn-Trace-Id. The first X-Ray-integrated service that the request hits adds a tracing header, which is read by the X-Ray SDK and included in the response.
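A tracing header value can be split into its fields with a few lines; a sketch using the documented X-Amzn-Trace-Id format:

```python
def parse_trace_header(header: str) -> dict:
    """Parses an X-Amzn-Trace-Id value such as
    'Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1'
    into its key/value fields (trace ID and sampling decision)."""
    return dict(part.split("=", 1) for part in header.split(";") if "=" in part)

fields = parse_trace_header("Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1")
# fields["Root"] is the trace ID; fields["Sampled"] == "1" means traced
```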

  • What is Filter expressions?

    Even with sampling, a complex application generates a lot of data. The AWS X-Ray console provides an easy-to-navigate view of the service graph. It shows health and performance information that helps you identify issues and opportunities for optimization in your application. For advanced tracing, you can drill down to traces for individual requests.

    Use filter expressions to find traces related to specific paths or users.
  • Groups?

    Extending filter expressions, X-Ray also supports the group feature. Using a filter expression, you can define criteria by which to accept traces into the group.

    You can call the group by name or by Amazon Resource Name (ARN) to generate its own service graph, trace summaries, and Amazon CloudWatch metrics. Once a group is created, incoming traces are checked against the group’s filter expression as they are stored in the X-Ray service. Metrics for the number of traces matching each criteria are published to CloudWatch every minute.

  • Annotations

    Annotations are simple key-value pairs that are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

    X-Ray indexes up to 50 annotations per trace.

  • Metadata

    Metadata are key-value pairs with values of any type, including objects and lists, but that are not indexed. Use metadata to record data you want to store in the trace but don't need to use for searching traces.

  • Errors, Faults & Exceptions

    X-Ray tracks errors that occur in your application code, and errors that are returned by downstream services. Errors are categorized as follows.

    • Error – Client errors (400 series errors)
    • Fault – Server faults (500 series errors)
    • Throttle – Throttling errors (429 Too Many Requests)