AWS Developer Associate Notes

Last Updated on March 16, 2025

------------------------------------
CORS
  the server (e.g. EC2) hosting the files needs a CORS policy that allows the requesting origin
  -----
  to restrict servicing unwanted requests
    configure cross-origin resource sharing to allow only selected methods/origins
------------------------------------
Multi Part Upload to S3
  recommended for any files over 100 MB; required for files over 5 GB
------------------------------------
Cookies
  custom cookie names cannot start with "AWS" (reserved prefix)
------------------------------------
------------------------------------
------------------------------------
EC2
  query metadata: http://169.254.169.254/latest/meta-data
    a special link-local IP address for the Instance Metadata Service (IMDS)
  provides metadata about the instance, such as:
    instance ID
    availability zone
    security credentials
  -----
  enable detailed monitoring:
    aws ec2 monitor-instances --instance-ids i-1234567890abcdef0
  -----
  to capture lifecycle events of the EC2 instances, create an EventBridge 
    rule that matches all EC2 instance lifecycle events
  -----
  user data runs only on the first boot cycle, not on restarts
  ----------
  basic monitoring collects metrics in 5-minute periods
  detailed monitoring collects metrics in 1-minute periods
  ----------
  IAM Instance Role and
  Instance Profile
    used to gain access to AWS services, common setup when deploying any apps on EC2
    -----
    a container for an IAM role that you can use to pass role information 
      to an EC2 instance when the instance starts
    -----
    the SDK will use the EC2 metadata service to obtain temporary credentials
      thanks to the IAM instance role
  ----------
  Zonal Reserved Instances
    provide a capacity reservation in a specific Availability Zone
  ----------
  Regional Reserved Instances
    provide a discount on instance usage in any AZ in a region, without reserving capacity 
  ----------
  To get instance RAM data
    use a cron job that pushes the EC2 RAM statistics as a Custom metric into CloudWatch
  ----------
  T3, T3a, and T2 instances, are designed to provide a baseline level of CPU performance
    with the ability to burst to a higher level when required by your workload
  ----------
  Burstable performance instances are the only instance types that use credits for CPU usage
------------------------------------
------------------------------------
------------------------------------
AWS CLI
  v1 requires a Python runtime (v2 bundles its own)
  -----
  a Role on an EC2 running CLI commands obtains temporary credentials via the instance metadata service
  -----
  cannot retrieve the IAM policy documents themselves on the EC2 (only the role's temporary credentials)
  -----
  use GetSessionToken - to make calls to MFA protected API to get TEMP credentials
  -----
  order in which it finds credentials:
    command-line options -> environment variables -> CLI credentials/config files 
      -> container credentials -> instance profile (checked last)
  -----
  SDK signs API req with: SigV4
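  The SigV4 signing-key derivation is a documented HMAC-SHA256 chain over the date, region, service, and the literal "aws4_request"; a minimal sketch (the secret key shown is AWS's published example value, not a real credential):

```python
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step of the SigV4 key derivation."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: date -> region -> service -> 'aws4_request'."""
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# The final request signature is HMAC-SHA256(signing_key, string_to_sign), hex-encoded.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                         "20150830", "us-east-1", "iam")
print(key.hex())
```

  In practice the SDK/CLI does all of this for you; the sketch just shows why a leaked signing key is scoped to one date, region, and service.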
  -----
  3 options to control the number of items returned:
    --max-items
      limits the total number of items in the CLI output
    --starting-token
      resumes pagination where a previous call left off
    -----------------
    --page-size
      still retrieves the full list, just in smaller API calls behind the scenes
      useful to avoid API timeouts, not to reduce the result set
------------------------------------
------------------------------------
Security Groups - stateful
-----------------
NACL            - stateless
------------------------------------
------------------------------------
ECS
  binpack is cheapest
  -----
  terminating a container instance while in the STOPPED state can cause sync issues
    it's not automatically removed from the cluster
  -----
  cluster names are in "/etc/ecs/ecs.config"
  -----
  supports Docker volumes, but only when running tasks on Amazon EC2 instances
  -----
  you must specify the volume and mount point in the TASK DEFINITION - for EC2 only
  ----------
  to run a serverless data store service on two docker containers that share resources...
    put the two containers into a SINGLE task definition using a Fargate Launch Type
  ----------
  Logging
    awslogs log driver - sends log information to CloudWatch Logs
    Add the required logConfiguration parameters to your task definition
  ---------------
  Step Scaling Policy (ECS)
    define sets of thresholds & corresponding scaling adjustments based on CloudWatch metric
    ----------
    scale your ECS service in response to changes in a specific metric, 
      such as CPU utilization or request count
    ----------
    more flexibility than a Target Tracking Scaling Policy
    ----------
    Use backlog per instance metric with target tracking scaling policy
    --------
    ApproximateNumberOfMessagesVisible is NOT a part of Target Tracking Scaling Policy
      This is a CloudWatch Amazon SQS queue metric
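  The threshold-to-adjustment mapping a step scaling policy defines can be sketched like this (the thresholds and adjustments below are made-up examples, not AWS defaults):

```python
def step_scaling_adjustment(metric_value, steps):
    """Return the capacity adjustment for the highest step whose lower bound
    the metric value crosses. `steps` is a list of (lower_bound, adjustment)
    pairs sorted ascending, mirroring how a step scaling policy maps
    CloudWatch metric ranges to scaling adjustments."""
    adjustment = 0
    for lower_bound, step in steps:
        if metric_value >= lower_bound:
            adjustment = step
    return adjustment

# e.g. CPU utilization steps: >=50% add 1 task, >=70% add 2, >=90% add 3
steps = [(50, 1), (70, 2), (90, 3)]
print(step_scaling_adjustment(83.0, steps))  # -> 2
```

  A target tracking policy, by contrast, hides this mapping entirely: you pick one target value and AWS computes the adjustments.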
  ----------------
  Target Tracking Scaling Policy Metrics (either ASG or ECS)
    ASGAverageCPUUtilization
    ASGAverageNetworkOut
    ALBRequestCountPerTarget
------------------------------------
------------------------------------
------------------------------------
Security Groups
  do not block an EC2 instance from registering with the ECS service, since by default all outbound traffic is allowed
  to ensure only an ALB can access an EC2 on port 80 (demo)
    add inbound rule with port 80
    make the ALBs security group as the source
------------------------------------
Ephemeral Ports 
  temporary ports assigned by the operating system for outgoing connections
  typically used for return traffic from a server to a client
    (NACLs must allow the ephemeral range, e.g. 1024-65535, for return traffic)
------------------------------------
------------------------------------
------------------------------------
EBS - HIGH PERFORMANCE
  to detach volumes safely, stop the instance, then detach
  root volumes are deleted upon EC2 termination by default
  an EBS volume is tied to a single instance (except with Multi-Attach)
  all non-root volumes have "Delete On Termination" disabled by default
  boot volume types: gp2, gp3, io1, io2 
  Multi-Attach: one io1/io2 volume attached to multiple EC2s in ONE AZ
  -----
  Set the DeleteOnTermination attribute to False using the command line
  -----
  AWS CloudTrail event logs for 'CreateVolume' aren't available for EBS volumes created
    during an Amazon EC2 launch
  -----
  performance of gp2 volumes is tied to volume size
  -----
  Provisioned IOPS:
    gp3: 3,000 IOPS baseline, configurable up to 16,000 (not tied to size)
    gp2: 3 IOPS per GiB, bursts to 3,000, max 16,000
    io1: 50:1 IOPS:GiB   Ex: 200 GiB * 50  = 10,000  IOPS
    io2: 500:1 IOPS:GiB  Ex: 200 GiB * 500 = 100,000 IOPS
  ------
  GP3: Good Performance, No Bursting
  IO1: Intensive Operations, High Performance
  IO2 Block Express: Incredible Performance, Extremely Low Latency
  -----
  The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1 for io1. 
  So, for a 200 GiB io1 volume, max IOPS possible is 200 * 50 = 10,000 IOPS.
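  The ratio math above can be expressed as a one-liner (per-volume hard caps are not modeled; the function name is illustrative):

```python
def max_provisioned_iops(size_gib: int, volume_type: str) -> int:
    """Max IOPS you can request for a given volume size, per the IOPS:GiB
    ratio (50:1 for io1, 500:1 for io2). Absolute per-volume caps apply on
    top of this and are not modeled here."""
    ratios = {"io1": 50, "io2": 500}
    return size_gib * ratios[volume_type]

print(max_provisioned_iops(200, "io1"))  # -> 10000
print(max_provisioned_iops(200, "io2"))  # -> 100000
```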
  -----
  supports encryption in flight (between instance and volume) and at rest, using KMS
  ------
  automatically replicated within its Availability Zone to prevent data loss due to the
    failure of any single hardware component
  CAN attach to an EC2 instance in the SAME AZ
  EBS volumes are AZ LOCKED
  ------
  Encryption
  - A volume restored from encrypted snapshot, or copy of encrypted snapshot is encrypted
  - Encryption by default is a Region-specific setting. 
  - If you enable it for a Region, you cannot disable it for individual volumes 
    or snapshots in that Region
------------------------------------
------------------------------------
------------------------------------
EFS - HIGH AVAILABILITY
  file-level storage using NFS protocol
  shared file system, can share between EC2 instances & ECS tasks
  good for multi-AZ shared storage for containers
  can be mounted across MULTIPLE AZs
------------------------------------
Instance Store - i3, i4 instance families
  very high IOPS from local NVMe disks
  data is lost on stop or termination (it does survive an OS reboot)
------------------------------------
------------------------------------
------------------------------------
ELB - High Availability
  provides static DNS but NOT static IP
  enable health checks
  can separate public traffic from private traffic
  Elastic IPs DO NOT need to be assigned to EC2 instances while using an ALB
  ----------
  to automate the replacement of unhealthy EC2 instances,
    change the health check type of your instance's ASG from EC2 to ELB with a config file
  ----------
  When all instances return as UNHEALTHY
    likely cause - the route for the health check is misconfigured
------------------------------------
------------------------------------
------------------------------------
ALB - Application Load Balancer
  use Cognito Authentication via Cognito User Pools for your ALB
  can enable multi value headers
  provides static DNS but NOT static IP
  -----
  ALB itself is replaced on each new deployment, 
    so maintaining sticky sessions via the Application Load Balancer will not work
  -----
  Sticky sessions rely on a cookie - which is NOT consistent across devices
  -----
  cannot attach Elastic IPs
  -----
  3 target types: Instance, IP and Lambda
  you cannot specify publicly routable IP addresses as targets
    when the target type is IP, you can specify IP addresses from 
      specific private CIDR blocks only
  -----
  cannot target:
    GeoLocation, Network Load Balancers
  --------
  Dynamic Port Mapping 
    feature to expose apps for use with ECS
  --------
  route to target groups based on: 
    URLPath, Headers, Query Strings, Hostname, Source IP 
  --------
  503 Service Unavailable - when no targets registered
  ----------
  ALB access logs
    view incoming requests for latencies and the client's IP address patterns
------------------------------------
------------------------------------
------------------------------------
Network Load Balancer
  provides static DNS & IP
  has one static IP per AZ and can have Elastic IP attached
  lowest latency, high performance
  -----
  handles millions of requests per second
  Supports TCP and UDP
  -----
  can capture IP address and source port WITHOUT the use of X-Forwarded-For
------------------------------------
Cross Zone Load Balancing
  distributes traffic evenly across all registered targets in all enabled AZs
------------------------------------
------------------------------------
------------------------------------
ASG
  can be configured to use ALB health checks instead of EC2 health checks
  -----
  when an EC2 instance fails the ALB health check, the ASG terminates the unhealthy instance and launches a new one
  -----
  can span Availability Zones, but not regions
  ----------------
  Scaling Policies
    enable rulesets for scaling including:
    Target, Scheduled, Step, Simple
  ----------------
  Cooldown Period - default 300 seconds (5 minutes)
    after scaling activities, will not launch/terminate EC2s
    gives time for metrics to stabilize
------------------------------------
------------------------------------
------------------------------------
CloudWatch
  -----
  collects monitoring and operational data in the form of logs, metrics, and events, 
    and visualizes it using automated dashboards to get a unified view of your resources,
    applications, and services that run in AWS and on-premises
  -----
  CMK - customer master key
    Log group data is always encrypted in CloudWatch Logs
    Optionally use AWS Key Management Service for this encryption
    If you do, the encryption is done using an AWS KMS customer master key (CMK)
    -----
    to encrypt log data 
    use the AWS CLI associate-kms-key command and specify the KMS key ARN
  -----
  dimensions - name/value pairs that identify a metric, e.g. instance ID, location...
  -----
  when monthly database backups must be retained:
  create a scheduled (cron) rule in CloudWatch Events, which triggers an AWS Lambda 
    function that takes the database snapshot
  -----
  metric filters
    match terms and patterns in LOG DATA as it is sent to CloudWatch Logs
    CloudWatch turns matches into numerical data you can graph or set an ALARM on
  -----
  custom metrics
    can perform action on alarm
    min resolution of 1 second
    -----
    High res: every second
    Standard: every minute
    -----
    CloudWatch considers metrics with a period less than 60 seconds (1 minute) 
      as high-resolution custom metrics
  -----
  PutMetricData
    custom metric api to cloudwatch
    such as scaling to meet increase in request demands
    ensure the correct IAM Role Policy is attached to enable the API call
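  A sketch of assembling a PutMetricData entry (the namespace and metric name are placeholders; the boto3 call is left commented since it needs AWS credentials):

```python
def build_custom_metric(name: str, value: float, high_resolution: bool = False) -> dict:
    """Assemble one PutMetricData entry. StorageResolution=1 marks it as
    high-resolution (sub-minute); 60 is standard one-minute resolution."""
    return {
        "MetricName": name,
        "Value": value,
        "Unit": "Percent",
        "StorageResolution": 1 if high_resolution else 60,
    }

metric = build_custom_metric("MemUsedPercent", 72.5, high_resolution=True)
print(metric["StorageResolution"])  # -> 1

# With credentials and the right IAM policy in place, publishing looks
# roughly like this (untested sketch; "Custom/EC2" is a made-up namespace):
# import boto3
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="Custom/EC2", MetricData=[metric])
```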
  -----
  can't help you debug microservice-specific issues on AWS (use X-Ray for distributed tracing)
  -----
  Method Level Logging
    API Gateway uses the Security Token Service (STS) when logging data to CloudWatch Logs,
      so AWS STS has to be enabled for the Region that you're using
    -----
    to enable CloudWatch Logs for all or only some of the methods,
      you must also specify the ARN of an IAM role that enables API Gateway 
      to write information to CloudWatch Logs on behalf of your user
    the IAM role must also contain the required trust relationship statement
  -----
  CloudWatch Events (Rules)
    near real-time stream of system events that describe changes in AWS resources
    match events and route them to target functions or streams
  -----
  Alarms
    can be sent directly to SNS topic, no need for Lambda function
    the Unified CloudWatch Agent can push EC2 memory usage as a custom metric
    Log Retention - defined at Log Group level
  -----
  Metrics
    are variables you can measure for your resources and applications
    Metric data is kept for 15 months
    and enables you to view both up-to-the-minute data and historical data
  -----
  Agent
    to read on-prem data - install the agent on the server and configure the agent to use
    IAM user credentials with permissions for cloudwatch
------------------------------------
------------------------------------
------------------------------------
RDS
  To encrypt an unencrypted db: take a SNAPSHOT, copy the snapshot with encryption enabled, then restore from the copy
  rds supports: 
    MySQL, PostgreSQL, MariaDB, Oracle, MS SQLServer and Aurora
  does NOT support MongoDB
  has ability to AUTO SCALE STORAGE for unpredictable loads
  -----
  Use RDS Enhanced Monitoring to see how different processes or threads on a 
    DB instance use the CPU
  -----
  Read Replicas
    can have up to 15 replicas
    asynchronous replication
    improves scalability, and support an increase in demands
    can be used to run intensive tasks without downtime
    set in a different region from the source, can be used as a standby DR pattern,
      and can then become the new PROD in the case of regional disruption
    -----
    for faster reads, add a connection string so read queries go to the read replicas
  -----
  Multi AZ
    good for disaster recovery
    automatically inits failover to standby, in case primary fails
    applies OS updates on the standby, then promotes the standby to primary, 
      then patches the old primary, which becomes the new standby
    synchronous replication
    keeps same connection string regardless of DB
  -----
  Disaster Recovery
    Use cross-Region Read Replicas
    SINGLE REGION ONLY backups
    Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates
      backups in a SINGLE AWS Region  
  ------
  Eventual Consistency
    default type, but can be set
    provides high read throughput and scalability
    not always reflect the most recent writes
    replica lag can range from a few seconds to several minutes
      depending on the workload and network conditions
  Strongly Consistent
    Data consistency is paramount
    when you require the most recent data, even if it means reduced read throughput
    when data is sensitive or critical, such as financial or healthcare information
  ------
  ConsistentRead = true (a DynamoDB read option)
    application requires most up-to-date data, reflecting all prior successful writes
  ------
  ConsistentRead = false (the DynamoDB default)
    application can tolerate some delay between writes and reads
------------------------------------
------------------------------------
------------------------------------
Aurora
  allows up to 15 read replicas in different AZs
  primary should be closest to main headquarters
  supports MySQL, PostgreSQL
  -----
  Aurora supports global TRANSACTION GROUPINGS across multiple tables and regions - similar 
    to the DynamoDB Transactions API
------------------------------------
Athena
  can be used to analyze S3 logs
  use the 'ALTER TABLE ADD PARTITION' command to register new partitions in the table metadata
------------------------------------
ElastiCache
  good for READ heavy, and compute-intensive workloads, NOT WRITE heavy workloads
  -----
  commonly used to store user session state
  -----
  can be used with any DB
  -----
  use ElastiCache to maintain user sessions
  -----
  ElastiCache defined in .ebextensions/ and will get deleted if the environment is terminated
  -----
  can be used to implement the cache-aside (lazy loading) strategy
------------------------------------
Redis
  supports advanced data structures like lists, hashes, sets, and sorted sets
  -----
  offers replication, persistence, transactions, and pub/sub capabilities
  -----
  suitable for more complex use cases that require data persistence and advanced operations
  -----
  all the nodes in a Redis cluster must reside in the same region
  -----
  a good fit for improving the performance and availability of app session data
  -----
  self-managed Redis apps work seamlessly with ElastiCache for Redis with no code changes
  -----
  While using Redis with cluster mode enabled
    you cannot manually promote any of the replica nodes to primary
  ----------
  Redis Cluster
    when cluster mode is disabled: one shard, with a MAX of 5 read replicas
    ability to horizontally scale your Redis cluster, with almost 0 impact on the performance 
    also, enhance reliability and availability with little change to your existing workload
------------------------------------
Memcached
  designed for simplicity, in-memory key-value store, multithreaded, 
    efficient on larger EC2 instances with multiple cores
  use when you need a simple caching model without advanced data types or persistence
------------------------------------
Cache Evictions
------------------------------------
------------------------------------
------------------------------------
Cache Strategies
-----
Lazy Loading
  caching strategy that loads data into the cache only when necessary
  -----
  With TTL
    stale data may be served to users until the TTL expires
  -----
  On a cache miss
    if the data doesn't exist in the cache or has expired, the app requests it 
      from the data store and writes it back to the cache
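  A minimal cache-aside (lazy loading) sketch, with plain dicts standing in for ElastiCache and the database:

```python
cache: dict = {}
db = {"user:1": {"name": "Ada"}}  # stand-in for the backing data store

def get(key: str):
    """Cache-aside (lazy loading): check the cache first; only on a miss
    read from the data store and populate the cache for later reads."""
    if key in cache:
        return cache[key]          # cache hit
    value = db.get(key)            # cache miss: load from the store
    if value is not None:
        cache[key] = value         # populate for subsequent reads
    return value

get("user:1")
print("user:1" in cache)  # -> True
```

  Only data that is actually requested ever lands in the cache, which is why the first read of any item is slow and stale data can linger until a TTL evicts it.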
------------------------------------
Write Through
  longer writes, but faster reads
  cache data is always up to date
  the application updates the cache whenever it writes to the database
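  A minimal write-through sketch, again with dicts standing in for the cache and database:

```python
cache: dict = {}
db: dict = {}

def put(key: str, value) -> None:
    """Write-through: update the data store AND the cache on every write,
    so reads always find fresh data (at the cost of slower writes)."""
    db[key] = value
    cache[key] = value

put("user:1", {"name": "Ada"})
print(cache["user:1"] == db["user:1"])  # -> True
```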
------------------------------------
TTL
  determines how long a client will cache a DNS record
  higher TTLs reduce the load on DNS resolvers
------------------------------------
------------------------------------
------------------------------------
Route 53
  Routing Policies:
    Latency
    Weighted
    Geolocation
  Health Check Types
    Endpoints, Other Health Checks, CloudWatch Alarms
------------------------------------
Public Hosted Zones
  used for requests from internet
------------------------------------
------------------------------------
------------------------------------
NAT Gateway - Network Address Translation
  least admin, seamless scaling
  highly available and horizontally scalable
  allows private subnets with IPv4 to ACCESS to the INTERNET while remaining PRIVATE
  -----
  target resources can be in the same VPC, a different VPC, on the internet, 
    or within your on-premises network
  -----
  deployed in public subnets and act as a bridge between instances
    in private subnets and the internet
------------------------------------
VPC Gateway Endpoint
  ONLY for S3 and DynamoDB
  all others have INTERFACE Endpoint  (powered by PrivateLink, meaning private IP)
------------------------------------
VPC Interface Endpoint
  private IP
  privately connect VPC to SQS
  -----
  since it restricts all access to inside the network, 
    no need for Internet Gateway, or NAT or Virtual Private Gateway
------------------------------------
------------------------------------
------------------------------------
VPC Flow Logs
  captures IP traffic info in/out
------------------------------------
Direct Connect (Gateway)
  private, consistent, AVOIDS the public internet
  HIGH BANDWIDTH dedicated physical connection from on-prem network to the cloud
  is NOT a VPN; does not EXTEND your VPC to on-premises
  hybrid connection 
------------------------------------
------------------------------------
------------------------------------
S3
  100 - default max number of buckets per account (soft limit, can be increased)
  bucket names must be globally unique
  explicit DENY will take precedence over S3 policy
  -----
  REPLICATION:
  replication can be from region to region
  S3 LIFECYCLE ACTIONS are not replicated with S3 replication
  Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at
    bucket level, shared prefix level, or object level using object tags
  ------------
  Use Cognito identity PREFIX to restrict users to use THEIR OWN folders in Amazon S3
  ------------
  Lifecycle Rules - Expiration Actions: to delete old object versions in batch
  Lifecycle Actions can delete old/unused (e.g. incomplete multipart upload) file parts
  (private) object owners can share objects with others by creating a pre-signed URL
  -----
  Ensure authorized users access their own files only:
  Leverage an IAM policy with Cognito identity prefix to restrict users to their own folders
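  A sketch of such a policy, built as a dict (the bucket name is a placeholder; the ${cognito-identity.amazonaws.com:sub} policy variable is what scopes each identity to its own prefix):

```python
import json

# Hypothetical bucket name; the policy variable below is substituted by IAM
# with the caller's Cognito identity ID at evaluation time.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/${cognito-identity.amazonaws.com:sub}/*",
    }],
}
print(json.dumps(policy, indent=2))
```

  Because the identity ID is baked into the allowed Resource ARN, user A can never read or write under user B's prefix.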
  ------------
  Object Retention
    -----
    Governance Mode
      Protects objects against deletion by most users, 
        but allows some users to alter retention settings or delete objects if necessary
    -----
    Compliance Mode
      Ensures that objects cannot be overwritten or deleted by any user
        including the root user, until the retention period has passed
  ------------
  Authorization Access
    IAM Policies
    Bucket Policies
    Access Control Lists (ACLs)
    Pre-Signed URLs
    Signature Version 4 (SigV4)
    CloudFront Functions
    Query String Authentication
      Use query string parameters to authenticate requests. Deprecated in favor of SigV4
  ------------
  S3 Events
    Object - Create, delete, update, restore 
    Bucket - Create, Delete
  ------------
  Strongly Consistent - Object Reads
    delete an existing object and immediately try to read it
    S3 will not return any data as the object has been deleted
  ------------
  Eventually Consistent - Bucket Configurations
    If you delete a bucket and immediately list all buckets
    the deleted bucket might still appear in the list
  ------------
  S3 notification feature
    in the use case of object modification notifications, you
    can invoke a Lambda function that inserts records into DynamoDB
  ------------
  S3 Select
    enables applications to retrieve a subset of data from objects using SQL expressions
  ------------
  Data at rest
    SSE, Client-Side
  ------------
  PutObject API
    to encrypt at rest set the x-amz-server-side-encryption header as AES256
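  A minimal sketch of that header (the boto3 equivalent is the ServerSideEncryption parameter; bucket and key names in the comment are placeholders):

```python
# Header that requests SSE-S3 encryption on a PutObject request.
headers = {"x-amz-server-side-encryption": "AES256"}
print(headers["x-amz-server-side-encryption"])  # -> AES256

# Untested boto3 sketch of the same thing:
# import boto3
# boto3.client("s3").put_object(
#     Bucket="my-bucket", Key="report.csv", Body=b"...",
#     ServerSideEncryption="AES256")
```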
  ------------
  Encryption
    Encrypted by default with SSE-S3 (Managed Keys) AES-256
    SSE-S3 -> Server-Side Encryption with S3-Managed Keys
      is the default; S3 manages the keys; protects data at rest; 
        encryption status appears in CloudTrail logs
      S3 encrypts your object before saving it, then decrypts it when you download it
    SSE-S3: 'x-amz-server-side-encryption': 'AES256'
    SSE-KMS:'x-amz-server-side-encryption': 'aws:kms'
    ------
    You have three choices for server-side encryption
      SSE-S3, SSE-KMS, SSE-C
    ------
    SSE-C -> Server-Side Encryption with Customer-Provided Keys
      customer managed, at rest, 
      requires specifying key for each upload
    ------
    Client-Side Encryption
      you can encrypt the data client-side and upload the encrypted data to Amazon S3
      you manage the encryption process, the encryption keys, and related tools
  ------------
    Secrets Manager
      for managing sensitive data
      automatic key rotation, auditing, and integration with services IAM and Lambda
      not for encrypting data at rest
      in place of hard-coded credentials or table lookups, the app calls Secrets Manager
      -----
      designed to handle sensitive information like access tokens securely
      also allows easy retrieval and is integrated with encryption and policy management
      requiring less setup and management overhead compared to other options
      -----
      Secrets Manager is more feature-rich for sensitive credentials like access tokens,
       ESPECIALLY when CROSS ACCOUNT access is needed
  ------------------------------------
  Bucket Policies
    access policy option available for you to grant permission to your Amazon S3 resources
  ------------------------------------
  S3 Analytics 
    provides a way to analyze and optimize storage usage, activity trends, 
    and data access patterns that help you decide when to transition 
      the right data to the right storage class
    help make informed decisions about data storage, retrieval, and management
    -----
    NOT used to identify unintended access to your S3 resources
  ------------------------------------
  Cross Region Replication
    S3 feature to replicate across regions
  ------------------------------------
  Presign URLs
    grant time-limited access to some S3 actions and objects
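  Real pre-signed URLs come from the SDK (e.g. boto3's generate_presigned_url, which signs with SigV4); this toy sketch only illustrates the idea of a time-limited signed grant — the URL shape, secret, and helper names are invented:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # placeholder signing secret, NOT how S3 signs

def presign(key: str, expires_at: int) -> str:
    """Produce a URL whose signature covers the object key and expiry time."""
    msg = f"{key}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://example-bucket.s3.amazonaws.com/{key}?Expires={expires_at}&Signature={sig}"

def is_valid(key: str, expires_at: int, sig: str, now: int) -> bool:
    """Accept only an untampered signature that has not yet expired."""
    msg = f"{key}:{expires_at}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and now < expires_at

url = presign("report.pdf", int(time.time()) + 3600)
print("Signature=" in url)  # -> True
```

  The key property is that the expiry is inside the signed message, so a client cannot extend its own access by editing the Expires parameter.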
------------------------------------
------------------------------------
------------------------------------
Cloudfront
  CloudFront Key Pairs can ONLY be made by the AWS Account root user
  ------
  use multiple origins to serve both STATIC and DYNAMIC content at low latency globally
  ------
  encrypt all traffic between USER > CLOUDFRONT > APP:
    set origin protocol policy to 'HTTPS ONLY'
    set viewer protocol 'HTTPS ONLY' or 'Redirect HTTP to HTTPS'
  ------
  Cache Behavior 
    you must create at least as many cache behaviors (including the default cache
      behavior) as you have origins if you want CloudFront to serve objects 
      from all of the origins
  -----
    each cache behavior specifies the ONE origin from which you want CloudFront 
      to get objects
  ------
    if you have two origins and only the default cache behavior, the default 
      cache behavior will cause CloudFront to get objects from one of the 
      origins, but the other origin is never used
  -----
  Cache Policy
  Query string forwarding and caching - to fix incorrect forwarding with url parameters
------------------------------------
CloudFront with Origin Groups
  routes all incoming requests to the primary origin,
    even when a previous request failed over to the secondary origin
  -----
  fails over to the secondary origin only when the HTTP method of the viewer request is:
    GET, HEAD or OPTIONS
------------------------------------
Cloudfront Distributions
  only 2 CF keypairs per account
  creating a signer: the public key is registered with CloudFront, the private key signs the URLs

------------------------------------
Cloudfront Signed URLs
  to distribute paid content through dynamically generated signed urls
------------------------------------
Cloudfront Geo-Restriction
  restrict locations/countries access
------------------------------------
Cloudfront Signed Cookies
  provides access to multiple files
------------------------------------
------------------------------------
------------------------------------
MFA Delete - Enable
  to prevent accidental deletions
-----
Adding MFA protection to API operations:
  administrator configures an AWS MFA device
  administrator creates policies for the users that include a Condition element 
    that checks whether the user authenticated with an AWS MFA device
  user calls one of the STS API operations that support the MFA parameters:
    AssumeRole 
    GetSessionToken
------------------------------------
------------------------------------
------------------------------------
------------------------------------
Encryption
------------------------------------
SSE-C
  encryption happens server-side in AWS, but you provide and manage the keys yourself
  mandates use of HTTPS when uploading/downloading
------------------------------------
SSE-KMS
  managed, you control rotation policy
  use your own master key
  --------------
  associate-kms-key
   Associates the specified KMS key with either one log group in the account, 
     or with all stored CloudWatch Logs query insights results in the account
  --------------
  automatically rotates AWS-managed keys every year
  --------------
  how to encrypt a LARGE (111 GB) object:
    create IAM permissions for, and call, the GenerateDataKey API, which returns a 
      plaintext key and an encrypted copy of the data key
    use the plaintext key to encrypt the data
  --------------
  use GenerateDataKeyWithoutPlaintext when you need only the encrypted version of the
    data key, as this operation omits the plaintext data key
  -----
  Use GenerateDataKey when you need both the plaintext and encrypted versions 
    of the data key
  --------------
  use generate-data-key when you need to encrypt data immediately 
    and have the plaintext key available. 
  Use generate-data-key-without-plaintext when you need to store the encrypted data key 
   for later use or when you want to ensure that the plaintext key is 
   not exposed to certain components of your system.
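  The GenerateDataKey flow above, sketched as a toy (the XOR "cipher" and the wrap key stand in for real AES and for KMS wrapping the data key under the CMK — none of this is real cryptography, it only shows the envelope-encryption shape):

```python
import hashlib
import secrets

def toy_keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream from a key; a stand-in for a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts (toy only!)."""
    return bytes(a ^ b for a, b in zip(data, toy_keystream(key, len(data))))

# 1. GenerateDataKey returns a plaintext key plus a KMS-encrypted copy of it.
plaintext_key = secrets.token_bytes(32)
encrypted_key = toy_encrypt(b"pretend-this-is-KMS", plaintext_key)  # "wrapped" key

# 2. Encrypt the (large) payload locally with the plaintext key, then discard it;
#    only small data keys ever go through KMS, never the 111 GB object.
ciphertext = toy_encrypt(plaintext_key, b"111 GB of data, abridged")

# 3. Store ciphertext + encrypted_key together; to decrypt, ask KMS to unwrap
#    the data key, then run the symmetric cipher again.
recovered_key = toy_encrypt(b"pretend-this-is-KMS", encrypted_key)
print(toy_encrypt(recovered_key, ciphertext))  # -> b'111 GB of data, abridged'
```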
  --------------
  KMS Encryption
    maximum data size 4kb
    request limit is 10,000 requests per second
    Generate symmetric encryption keys with KMS
    -----
    KMS stores the Customer Managed Key
      receives data from the clients,
      which it encrypts and sends back
------------------------------------
SSE-S3
  is simpler and provides basic encryption
SSE-KMS
  offers more advanced key management and control features
  -----
  Choose SSE-S3 for simplicity and ease of use, or SSE-KMS for security and customization
------------------------------------
Data Key Caching
  reuse data keys instead of generating a new one for each encryption operation
  improve performance, reduce latency, decrease costs, minimizing requests to KMS
------------------------------------
Client-Side
  you do encryption, full control
  you send encrypted data to AWS
------------------------------------
Secrets Manager
  can do third-party key rotation with a Lambda function
  The Lambda function used for rotation requires an IAM execution role 
    with permissions to access the secret 
  -----
  uses AWS KMS to encrypt secrets (it is NOT inside CloudHSM)
  -----
  manage and ROTATE application secrets, passwords, API keys, and other sensitive data
  Access to the secret is controlled by policies attached to the secret itself.
------------------------------------
------------------------------------
------------------------------------
Fargate
  run containers on AWS serverless
  no need to provision any infrastructure
  cheaper for running TASKs, since charged by task, not infrastructure
  use EFS volumes for persistent cross-AZ shared access to the data volumes 
    configured for the container tasks
------------------------------------
ECR
  fully managed container registry
  store, manage, share and deploy container images
  buildspec.yml (CodeBuild)
    can use a post_build phase to run commands after the Docker push to ECR
  -----
  Command to pull Docker images:
    $(aws ecr get-login --no-include-email)
    docker pull 1234567890.dkr.ecr.eu-west-1.amazonaws.com/demo:latest
    ^^^ this may be outdated (CLI v1 style) ^^^
    -----
    this is the suggested command now:
    aws ecr get-login-password | docker login --username AWS --password-stdin <your-registry-url>
  ------------------------------------
  Launch Types
    EC2, Fargate
  ------------------------------------
  Task Roles
    when you want to call other services like S3, SQS
------------------------------------
------------------------------------
------------------------------------
Elastic Beanstalk
  uses CloudFormation under the hood
  -----
  include a env.yaml manifest in the root of your application source
    to configure: env name, solution stack & env links when creating your environment
  -----
  LifeCycle Policies can allow you to delete old versions automatically
  Supports cloning
  -----
  deployments run in IMMUTABLE or TRAFFIC SPLITTING mode will LOSE BURST BALANCES
  -----
  To reduce the length of time resolving dependencies on all X# of target EC2 instances
    bundle the dependencies IN the SOURCE code during the BUILD stage of CodeBuild
  -----
  To expose an HTTPS endpoint instead of an HTTP endpoint
  and add in-flight encryption between your clients and your web servers:
    create a config file in the .ebextensions folder to configure the Load Balancer
  -----
  To migrate an app to a different account:
    Create a saved configuration in Team A's account and download it to your local machine
    Make the account-specific parameter changes, upload to the S3 bucket in Team B's account
    From Elastic Beanstalk console, create an application from 'Saved Configurations'
  -----
  For redirecting http traffic to https:
    Configure your EC2 instances to redirect HTTP traffic to HTTPS
    Open up port 80 & port 443
    Assign an SSL certificate to the Load Balancer
  -----
  Has Two Modes - WORKER and WEB SERVER
    ----------
    Worker Environment - cron.yaml
      to decouple tasks from the environment, 
        use a dedicated worker env to offload background tasks
      decoupled from the main web application
      setup includes using a cron.yaml file to define the cron jobs
      worker environment runs a daemon process that reads messages from the SQS queue
    -----
    Web Server - 
      Single Instance Mode, one instance, one Elastic IP
  -----
  To set a configuration mechanism that automatically applies settings for you:
    Include config files in .ebextensions/ at the root of your source code
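A sketch of such a config file; all option names shown are real namespaces, but the values are illustrative:

```yaml
# .ebextensions/options.config (sketch; values are illustrative)
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production            # plain environment variable for the app
  aws:autoscaling:asg:
    MinSize: 2                     # Auto Scaling group bounds
    MaxSize: 4
```

Any file ending in .config under .ebextensions/ is applied automatically on deploy.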
  -----------------
  Resources created as part of your .ebextensions are part of your Elastic Beanstalk
    template -- and will get DELETED if the environment is terminated
  -----------------
  Deployment Types:
  -----------------
    Immutable
        zero downtime & minimal impact
        Cost-effectiveness: Medium-High
        maintains FULL capacity
        quick, safe rollback
        changes applied to new resources (instances)
        new version serves traffic along with old until the new instances pass health checks
        avoid introducing unintended changes or bugs into production
        attackers cannot exploit vulnerabilities in existing resources
        retains clear audit trail of changes & ensures compliance
  -----------------
    Blue-Green
        zero downtime & minimal impact
        creates a separate environment with new instances running the new application 
        -----
        once verified, traffic is routed to the new environment
        provides a clean slate for each deployment
        -----
        creates new instances with the new application version and 
          terminates the old instances
        -----
        then swap CNAMEs (via Route 53) of the two environments to redirect traffic
          to the new version instantly
  -----------------
    In Place
        maintain existing instances
        results in downtime
        updates all existing instances with the new application version
  -----------------
    Rolling
        maintain existing instances
        reduction in performance
        Cost-effectiveness: High
        new instances with the new app version gradually replace the existing instances
  -----------------
    Rolling with additional Batches
        No downtime
        Cost-effectiveness: Medium
        split into batches
        each batch is deployed before moving on to the next one
  -----------------
    All at Once
        requires downtime
        Cost-effectiveness: Low
        replaces all instances with the new version simultaneously
  -----------------
    Traffic Splitting
        reduction in performance
        Cost-effectiveness: Medium
        route a portion of traffic to new environment while old environment remains active
        enables gradual rollouts and canary testing for verification
        replaces failed instances with instances running the app version
          of the most recent successful deployment
------------------------------------
------------------------------------
------------------------------------
CloudFormation
  -----
  nested stacks to increase maintainability of templates:
  AWSTemplateFormatVersion: '2010-09-09'
  Resources:
    MyFirstNestedStack:
      Type: 'AWS::CloudFormation::Stack'
      Properties:
        TemplateURL: 'https://s3.amazonaws.com/mybucket/first.yaml'   # placeholder URL

    MySecondNestedStack:
      DependsOn: MyFirstNestedStack
      Type: 'AWS::CloudFormation::Stack'
      Properties:
        TemplateURL: 'https://s3.amazonaws.com/mybucket/second.yaml'  # placeholder URL
  -----
  cfn-init is a helper script that initializes and configures EC2 instances
  -----
  Change Sets: a preview summary of changes that will be made to a stack when you update it
  -----
  Stack Sets: to create, update, or delete stacks across multiple accounts and regions 
  -----
  when a resource in a stack CANNOT be created:
    the already created resources are deleted, and the creation terminates
  -----
  aws cloudformation package 
    command is used to package CF templates that reference local resources, 
      such as a Lambda function or API Gateway definition,
      then uploads these local resources to an Amazon S3 bucket
      then updates the template to reference these resources in S3 instead of local paths
  -----
  Custom Resources - can automate cleanup tasks during stack deletion
  Always uploads templates to S3
  The Resources section is mandatory
  ----------
  The optional Conditions section contains statements that define the circumstances 
    under which entities are created or configured
  ----------
  PARAMETERS CANNOT be associated with Condition
  ----------
  Zip and Upload Templates:
    cloudformation package 
    cloudformation deploy 
  -----
  ZipFile Parameter - CloudFormation Template
    Add function source inline in the ZipFile parameter of the "AWS::Lambda::Function"
    to provide the Node.js code inline within the template
  -----
  Pseudo Parameters
    AWS::Region    -- The AWS Region in which the stack is being created
    AWS::AccountId -- The AWS account ID of the account creating the stack
    AWS::StackId   -- The ID of the stack being created
    AWS::NotificationARNs -- A list of ARNs for notifications related to the stack
  -----
  AllowedValues
    optional property that can be used with string and comma-delimited list
    specifies a set of allowed values for a parameter
    helps validate user input and ensures that only valid values are used in the stack
  -----
  Fn::GetAtt
    returns the value of an attribute from a resource in the template
  -----
  !Ref
    returns the value of the specified parameter or resource
  -----
  !Sub
    substitutes variables in an input string with values that you specify
  -----
  AWS::Region "pseudo parameter"
    the pseudo parameter returns a string of the Region where the resource is being created
  -----
  get AMI info of EC2s across regions: 
    !FindInMap [ MapName, TopLevelKey, SecondLevelKey]
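A sketch of the Mappings block that !FindInMap reads from; the AMI IDs and logical names are placeholders:

```yaml
# Region-to-AMI lookup (sketch; AMI IDs are placeholders)
Mappings:
  RegionMap:
    us-east-1:
      HVM64: ami-0ff8a91507f77f867
    eu-west-1:
      HVM64: ami-047bb4163c506cd98
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      # AWS::Region pseudo parameter picks the right top-level key at deploy time
      ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', HVM64]
      InstanceType: t2.micro
```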
  -----
  DescribeImages - ec2 api call to retrieve a list of AMIs
  -----
  !ImportValue
    returns the value of an output exported by another stack
  -----
  Exported Output Values - in Outputs section
    must have unique names within a single Region
  -----
  Outputs:
    S3BucketName:
      Value: !Ref MyS3Bucket
      Export:
        Name: !Sub '${AWS::StackName}-S3BucketName'
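A consuming stack can then read that export with Fn::ImportValue; a sketch, assuming the exporting stack above was named 'producer-stack':

```yaml
# Consuming stack (sketch; 'producer-stack' is a placeholder stack name)
Resources:
  ReaderPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action: 's3:GetObject'
            Resource: !Sub
              - 'arn:aws:s3:::${BucketName}/*'
              # resolves to the exported value '<stack-name>-S3BucketName'
              - BucketName: !ImportValue 'producer-stack-S3BucketName'
```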
  ----------
  CloudFormation currently supports the following parameter types:
    String – A literal string
    Number – An integer or float
    List<Number> – An array of integers or floats
    CommaDelimitedList – An array of literal strings that are separated by commas
    ------
    AWS::EC2::KeyPair::KeyName – An Amazon EC2 key pair name
    AWS::EC2::SecurityGroup::Id – A security group ID
    AWS::EC2::Subnet::Id – A subnet ID
    AWS::EC2::VPC::Id – A VPC ID
    ------
    List<AWS::EC2::VPC::Id> – An array of VPC IDs
    List<AWS::EC2::SecurityGroup::Id> – An array of security group IDs
    List<AWS::EC2::Subnet::Id> – An array of subnet IDs
------------------------------------
------------------------------------
------------------------------------
SQS 
  scales automatically
  cannot change queue type after creation
  delay queues - max 15 mins
  max retention 14 days
  default visibility of a message is 30sec
  max size 256 KB
  max retrieval at one time - 10 messages
  no limit on message in queue
  ------
  messages will be delivered one or more times, and delivery order is indeterminate
  ------
  to store data in encrypted queues, Enable SQS KMS encryption
  ------
  Fan Out method
  ------
  MessageGroupId
    allows ordering
    messages that belong to the same message group are always processed one by one
  ------
  MessageDeduplicationId
    prevents redelivery of a message with the same ID
      within the 5-minute deduplication interval
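For FIFO queues, content-based deduplication derives the ID from an SHA-256 hash of the message body; a sketch computing an explicit MessageDeduplicationId the same way (the function name is my own):

```python
import hashlib

def dedup_id(message_body: str) -> str:
    """Mimics content-based deduplication: SHA-256 hex digest of the body.

    SQS FIFO queues compute this automatically when content-based
    deduplication is enabled; you can also pass it explicitly as the
    MessageDeduplicationId parameter of SendMessage.
    """
    return hashlib.sha256(message_body.encode("utf-8")).hexdigest()

# Identical bodies yield identical IDs, so a duplicate sent within the
# 5-minute deduplication interval is accepted but not redelivered.
assert dedup_id('{"order": 42}') == dedup_id('{"order": 42}')
```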
  ------
  CreateQueue   
  ------
  DelayQueues, DelaySeconds
    postpone the delivery of new messages
  ---------------
  Short Polling - default
    sends the response right away, even if the query found no messages
    fast response times, can tolerate occasional empty responses
  -----
  Long Polling
   reduce the number of empty responses
   potentially lowers costs
   maximum long polling wait time is 20 seconds
   sends a response after it collects at least one available message
   up to the maximum number of messages specified in the request
  ------
  ReceiveMessageWaitTimeSeconds
    wait time in seconds for long polling requests
    can help reduce empty responses
  ------
  MaxNumberOfMessages
    maximum number of messages to receive in a single batch
  ------
  ReceiveMessage API
    to set MaxNumberOfMessages to greater than the default of 1
  ------
  Visibility Timeout
    time a message is invisible to subsequent receives
    prevents other consumers from receiving and processing a message
    default 30sec, min 0, max 12hr
    use ChangeMessageVisibility action to extend a message's visibility timeout
  ---------------
  Dead Letter Queue (DLQ)
    Lambda sends an event here when it fails all processing attempts
    applies to asynchronous function invocations
  ---------------
  SQS Extended Client
    2GB max
    Integration with Amazon S3
------------------------------------
------------------------------------
------------------------------------
Kinesis
  shards allow 1 MB/s in, 2 MB/s out - increase shards for more capacity
  partition keys associated with shards will organize the data
  at most one KCL consumer instance (e.g., one EC2) per shard
  enable Server Side Encryption in Kinesis Streams for data at rest
  ------
  Enhanced Fanout Kinesis
    receive records with dedicated throughput of up to 2 MB of data per second per shard
    improves the read performance and scalability of Kinesis consumers
  ------
  Producer Library KPL
    simplifies producer application development
    allowing developers to achieve HIGHER write throughput to a Kinesis data stream
  ------
  Kinesis Agent
    optimal way of sending LOG DATA from the EC2 instances to Kinesis Data Streams 
    install and configure it on each of the instances
  ------
  Kinesis Data Streams
    Requires manual scaling and shard management
    for ingesting and storing large-scale data streams
    real-time analytics, complex processing, or temporary data storage
    massively scalable, GB per second
    cannot subscribe to SNS
    Encryption in flight with HTTPS endpoint
  ------
  Kinesis Analytics - serverless -  Apache Flink
    for real-time analytics of streams of data
    easiest way to transform and analyze streaming data in real-time with Apache Flink
  ------
  Data Firehose (delivery of data)
    does not store data
    -----
    can ingest from thousands of sources
    -----
    custom data transformation using Lambda
    -----
    easiest way to reliably load streaming data into lakes, stores, and analytics services
    -----
    can also batch, compress, transform, and encrypt your data streams before loading
    -----
    Use Kinesis Firehose to ingest data and Kinesis Data Analytics to generate 
      leaderboard scores and time-series analytics
  ------
  Differences:
    Data Streams is for analytics and insights
    Firehose is for data delivery
    -----
    Scaling
      Data Streams requires manual scaling and shard management
      Firehose automatically scales to handle large volumes of data
    -----
    Data Storage:     
      Data Streams stores data for a specified retention period
      Firehose does not store data and directly delivers it to specified destinations
    -----
    Integration: 
      Data Streams requires integration with other AWS services for data storage or analytics
      Kinesis Firehose integrates directly with services
        S3, Redshift, and Elasticsearch for immediate data delivery
      ElastiCache is NOT a supported destination for Data Firehose
  ------
    Is Supported
     ElasticSearch Service (ES), Redshift, S3
  ------
  ProvisionedThroughputExceeded Exception
    hot partition
    retry with an EXPONENTIAL backoff
    Increase the number of SHARDS to provide enough capacity
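The retry guidance above can be sketched as a generic backoff wrapper; this is pure Python with my own names, not an AWS SDK API (the SDKs have similar retry logic built in):

```python
import random
import time

def with_backoff(call, max_attempts=5, base=0.1, cap=5.0, sleep=time.sleep):
    """Retry a throttled call with exponential backoff and full jitter.

    `call` is any zero-argument function that raises (e.g. on a
    ProvisionedThroughputExceeded-style throttling error) until it succeeds.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # delay doubles each attempt: base * 2^attempt, capped,
            # then randomized ("full jitter") to spread out retries
            delay = random.uniform(0, min(cap, base * (2 ** attempt)))
            sleep(delay)
```

The `sleep` parameter is injectable so the wrapper can be unit-tested without waiting.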
------------------------------------
------------------------------------
------------------------------------
CloudTrail
  use the console to view logs less than 90 days old
  use Athena to analyze older logs stored in S3
  the S3 bucket owner needs to be the object owner to get the object access logs
------------------------------------
------------------------------------
------------------------------------
X-Ray - xray
  X-Ray daemon listens for traffic on UDP port 2000,
  -----
  aws xray put-trace-segments - uploads segment documents to AWS X-Ray
  -----
  X-Ray creates a map of services used by your application with trace data
    You can use the trace data to drill into specific services or issues
  ----- 
  This data provides a view of connections between services in your application and
    aggregated data for each service, including average latency and failure rates.
  -----
  cross-account debugging and tracing data and visualize it in a centralized account
  -----
  to have a unified account to view all the traces on EC2 instances and AWS accounts
    1. Create a role in the target unified account
       allow roles in each sub-account to assume the role
    2. Configure the X-Ray daemon to use an IAM instance role
  -----
  deploy the X-Ray daemon agent as a SIDECAR CONTAINER
  -----
  to run on DOCKER you need the correct IAM task role for the X-Ray container 
    the role needs IAM permissions to upload and view trace data
  -----
  EC2 needs daemon running on it when using CodeDeploy
  'instrument' the application code
  -----
  X-Ray Sampling
    to obtain tracing trends while reducing costs with minimal disruption
    needs IAM role permissions to view
    control the amount of data that you record
    modify sampling behavior on the fly without modifying or redeploying your code
  -----
  X-Ray daemon uses the AWS SDK to upload trace data to X-Ray
    On Amazon EC2, the daemon uses the instance's instance profile role automatically
  -----
  X-Ray Annotations 
    to refine and filter results
    simple key-value pairs that are indexed for use with filter expressions
  -----
  AWS_XRAY_DAEMON_ADDRESS
    check to ensure that the daemon is correctly discovered on ECS
------------------------------------
------------------------------------
------------------------------------
Lambda
  max time 15 mins
  10240 MB - max size for TMP space
  Lambda@Edge - for global deployment
    use edge-function to redirect requests for cache misses
  -----
  aws lambda update-function-code command is used to update the code of a Lambda function
  -----
  specify the path to a .zip file containing the updated code using the --zip-file
    file path should be prefixed with fileb://
  or specify the S3 bucket/key where the .zip file is: --s3-bucket --s3-key
  -----
  grant the Execution Role permissions to send log data to CloudWatch
  -----
  Unable to import module fix:
    install locally, choose current dir as target, re-zip, re-upload
  -----
  When you connect a function to a VPC, Lambda creates an elastic network interface
   for each combination of security group and subnet in your function's VPC configuration
  -----
  event source mappings
   Instead of using ARNs for the Lambda function in event source mappings
   you can use an alias ARN
  -----
  Event Object
    first argument passed to the handler function and contains information about the event
      that triggered the Lambda function. 
    contains details such as HTTP requests, S3 object uploads, or other AWS service events
  -----
  Context Object
    offers information about the invocation, function, and execution environment such as:
      function’s ARN, log stream name, log group name, Request Identifier, 
      memory limit in MB, and remaining time in milliseconds before the execution times out
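A minimal handler sketch showing both objects; FakeContext is a hypothetical stand-in for the runtime-supplied context (local testing only), but its attribute names match the real Python context object:

```python
class FakeContext:
    """Hypothetical stand-in for the runtime-provided context object."""
    invoked_function_arn = "arn:aws:lambda:us-east-1:123456789012:function:demo"
    memory_limit_in_mb = 128
    log_group_name = "/aws/lambda/demo"

    def get_remaining_time_in_millis(self):
        return 3000  # the real runtime counts down toward the timeout

def handler(event, context):
    # event: dict describing the trigger (HTTP request, S3 upload, ...)
    # context: metadata about this invocation and execution environment
    return {
        "message": f"hello {event.get('name', 'world')}",
        "function_arn": context.invoked_function_arn,
        "ms_left": context.get_remaining_time_in_millis(),
    }
```

Locally, call `handler({"name": "dev"}, FakeContext())`; in Lambda, the service supplies both arguments.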
  -----
  register function with a Target Group to use with an ALB
  -----
  an asynchronous invocation event exceeds the maximum age or fails all retry attempts,
    Lambda discards it. Or sends it to dead-letter queue if you have configured one
  -----
  Configure Application Auto Scaling to manage Lambda provisioned concurrency on a schedule
  -----
  to bundle your Lambda function to add the dependencies?
    Put the function and the dependencies in one folder and zip them together
  -----
  to use resources for current env, use environment variables for the Lambda functions
  -----
  when developer invokes the API method, it returns an "Internal server error" 500: 
    setup a resource-based AWS Identity and Access Management (IAM) policy 
    so that it grants invoke permission to API Gateway.
  -----
  to avoid latency bottlenecks when you expect large increase in requests,
    configure App Auto Scaling to manage provisioned concurrency
  -----
  Error: Memory Size: 10,240 MB Max Memory Used => function ran out of RAM
  -----
  Aliases
    acts as a pointer to a specific function version
    gives clients one stable ARN to call
    you can update an alias to point to a different function version
  -----
  Versions
    creates a new version of your function each time that you publish the function
    by publishing a version of your function, you can store your code and configuration
      as a separate resource that cannot be changed
  -----
  Roll Back
  slowly shifting incoming traffic over to the new versions to easily roll back 
    to the old versions if any issues are detected
  -----
  Use AWS Lambda aliases to route different percentages of the incoming traffic 
    stages will include prod, test, and dev 	
  ------------------------------
  To deploy a container image to Lambda, container image must implement Lambda Runtime API
  -----
  AWS Lambda service doesn't support Lambda funcs using multi-architecture container images
  ------------------------------
  does NOT support WINDOWS runtimes
  -----
  When a Lambda function is invoked while a request is still being processed, 
    another instance is allocated, which increases the function's concurrency
  -----
  Concurrency Limit
    1000 concurrent executions per account across all functions in a region
  -----
  Reserved Concurrency (not provisioned concurrency)
    no other function can use that concurrency
    limits the maximum concurrency for the function
    applies to the function as a whole, including versions and aliases
  -----
  Provisioned Concurrency
    keeps the desired number of lambda runtimes always ready to process the request 
    which results in faster responses as there is no cold start
  -----
  use a dead letter queue (an SQS queue or SNS topic) to capture failed async events
  -----
  to reduce average runtime of a long function
    Deploy the function with its memory allocation set to the maximum amount
  -----
  use weighted load balance to test versions
  keep DB connection strings outside of function handler
  -----
  For Use In A CloudFormation Template
    put functions and dependencies in one folder and zip together, 
    upload to S3, then refer the object in "AWS::Lambda::Function" block
  -- OR --
    write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block
    ensure there are no third-party dependencies
------------------------------------
------------------------------------
  Lambda Authorizer
    control access using a 3rd party authorization mechanism
    -----
    can also reference authentication data in a DynamoDB table
    -----
    Lambda authorizer is an API Gateway feature that uses a Lambda function 
      to control access to your API.
------------------------------------
------------------------------------
  SNS does not require "Event Source Mapping"
  -----
  event source mapping: must occur on the Lambda side to associate with DynamoDB stream
  -----
  use Destinations to send results of async code to an SQS queue
  -----
  does not natively support C++
  -----
  use lambda layers to compile dependencies once
  -----
  to optimize CPU bound function, increase function memory
  -----
  use /tmp for up to 512 MB of ephemeral storage (default), but execution context/data is not guaranteed to persist
------------------------------------
------------------------------------
  Lambda deployments
   Canary: 
    traffic is shifted in two increments
    first interval, set in minutes
    remaining traffic is shifted in the second increment
   Linear: 
    traffic is shifted in equal increments with equal minutes between each increment
    uses predefined increments
   All-at-once:
    All traffic is shifted at once.
------------------------------------
------------------------------------
------------------------------------
Lambda Custom interceptors
  allow you to execute code before and/or after the execution of a Lambda function
    to implement common logic, such as logging, authentication, 
    or caching, in a centralized and reusable manner
------------------------------------
------------------------------------
------------------------------------
DynamoDB 
  max item size 400 kb
  ------
  web identity federation - secure way to sign requests to the DynamoDB API
  ------
  ProvisionedThroughputExceededException
    indicates that your request rate is too high and exceeds the 
      provisioned read or write capacity of your table
    can also occur when your hash keys are not evenly distributed (hot partition)
  ------
  uses optimistic concurrency control
  uses conditional writes for consistency
  Supports conditional writes preventing multiple users from modifying the same item
  ------
  AWS Support can increase:
    number of tables per account
    number of provisioned throughput units per account
  ------
  A hash index is part of a primary key that uniquely identifies each item in a table
  A range index is part of a composite primary key, includes both a hash key and a range key
  -----
  strongly consistent reads consume more throughput than eventually consistent reads, because they guarantee the latest data
  ------
  key value store
  does not support WebSockets protocol
  commonly used for storing session data
  RCU, WCU
    are decoupled
    are spread across all table partitions
  use a DAX cluster for higher read performance (microsecond latency)
  -----
  Global Tables
    provide an ACTIVE/ACTIVE replication architecture 
    allow you to replicate data seamlessly across multiple AWS Regions
  -----
  Built-in backup methods
    On-demand
    Point-in-time recovery
  ------
  Transactions API
    solution with MINIMUM IAM permissions 
      dynamodb:UpdateItem
      dynamodb:GetItem
    -----
    free to enable, pay for read/writes only
    -----
    visible in CloudWatch
    -----
    read/write can manage complex business workflows
    -----
    can group: put, update, delete, conditionCheck & submit as a single transaction
    -----
    use DynamoDB Transactions to make all-or-nothing changes to multiple items 
      both within and across tables
    -----
    TransactWriteItems or TransactGetItems
      ensures that either ALL SUCCEED or NONE do
      maintains data consistency
      ensuring transactional integrity
    -----
    BatchWriteItem
      If one operation fails, the others can still succeed
      handle up to 16MB of data per request, consisting of up to 25 item PUT or DELETE
  ------
  the combination of partition key and sort key must be unique for each item
  ------
  TTL - time to live
    --max-items & --starting-token (CLI pagination options)
    -----
    allows you to define a per-item expiration timestamp,
    indicating when an item is no longer needed
  ------
  Streams 
    records cannot be sent to SQS
    captures a time-ordered sequence of item-level modifications in any DynamoDB table
    stored for up to 24 hours
    view the data items as before and after they were modified, in near real-time
  -------
  WCU:
    item size is rounded up to the nearest 1 KB
    Item size: 1 KB, Write request rate: 10 WPS
    WCUs required: 1 KB × 10 WPS = 10 WCUs
  -------
  RCU: 
    item size is rounded up to the nearest 4 KB
  -------

--------------
The questions will give you:
--------------
S  item size,
R  read rate (per second)
--------------
strongly consistent
S / 4 KB (round up) = X
X * R = total RCUs
--------------
eventually consistent needs half as many capacity units:
(X * R) / 2 = total RCUs
--------------
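The capacity math above as a small calculator (function names are my own):

```python
import math

def wcu(item_size_kb: float, writes_per_sec: int) -> int:
    # one WCU = one 1 KB write per second; item size rounds up to nearest 1 KB
    return math.ceil(item_size_kb) * writes_per_sec

def rcu(item_size_kb: float, reads_per_sec: int, strongly_consistent=True) -> int:
    # one RCU = one strongly consistent 4 KB read per second
    units = math.ceil(item_size_kb / 4) * reads_per_sec
    # eventually consistent reads need half as many units
    return units if strongly_consistent else math.ceil(units / 2)

assert wcu(1, 10) == 10   # the example above: 1 KB x 10 WPS
assert rcu(8, 10) == 20   # strongly consistent: ceil(8/4) * 10
assert rcu(8, 10, strongly_consistent=False) == 10
```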


  ------------------
  ProjectionExpression
    returns only a specified subset of attributes from an item
  ------
  filter expression
    determines which items within the Query results should be returned to you
    all other results are discarded
    applied after Query finishes, but before the results are returned
  ------------------
  Scan operations
  ------------------
  Parallel Scans
    increase performance to retrieve items
    returns data to the application in 1 MB increments
    logically divide a table or secondary index into multiple segments
      with multiple application workers scanning the segments in parallel
  ------
  Sequential Scans
    reads items from a table or secondary index in a sequential manner
    starts from the beginning of the table or index
  ------------------
  ------------------
  Primary Key 
    uniquely identifies each item in the table
    no two items can have the same key
    create one or more secondary indexes on a table
  ------
  Secondary Index
    lets you query the data using an alternate key, in addition to the primary key
  ------
  LSI - Local Secondary Index
    query your data by the PRIMARY & ALTERNATE key
    index has the same partition key as the base table
    different sort key
    uses the RCU and WCU of the main table, 
      so you can't provision more RCU and WCU to the LSI
  ------
  GSI - Global Secondary Index
    to query a table using an attribute NOT part of the Primary Key
    index with partition key and sort key
    can be DIFFERENT than BASE TABLE
    To avoid throttling, the provisioned write capacity for a global secondary index 
      should be equal or greater than the write capacity of the base table
  ------
  When you create a DynamoDB table, in addition to the table name
    you must specify the primary key of the table. 
------------------------------------
------------------------------------
------------------------------------
API Gateway
  -----
  exposed REST GET methods are in the gateway, not Lambda function
  similarly, CORS is enabled in the method in Gateway (not in the bucket/resource)
  -----
  To enable CloudWatch Logs for all or only some of the methods, 
    you must also specify the ARN of an IAM role that enables API Gateway to write 
    information to CloudWatch Logs on behalf of your user
  -----
  cannot integrate with cloudshell
  -----
  resource policies are JSON policy documents used to control access to your APIs
  -----
  not supported - Security Token Service (STS)
  -----
  to test new versions without causing any disturbance
    create a dev stage on the API Gateway API
    have the developers point the endpoints to the development stage
  -----
  stage variables
    key-value pairs, act like environment variables
    not intended to be used for sensitive data, such as credentials
    enable caching for stages that have the same payload, and few changes
  -----
  supports swagger/openAPI export as code
  supports websockets
  supports Standard IAM roles & policies, Lambda Authorizer, Cognito User Pools
  Mapping Templates
    mask fields in output data returned by a Lambda function
  caching is defined per Stage, with default TTL of 300 seconds
  -----
  Create and manage APIs to back-end systems on EC2, AWS Lambda, or any public web service
  Can call Lambda function to create front door of serverless app
  Can be configured to send data to Kinesis Data Stream
---------------------
  API Gateway Caching
    use case: unauthenticated read access to daily updated statistical information
      served via Amazon API Gateway (with caching enabled) and AWS Lambda
---------------------
  Usage Plans
    expose public APIs for the application-specific functionality
    -----
    To create an API key and a usage plan in AWS API Gateway:
      Use the AWS CLI command to create an API key
      aws apigateway create-api-key --name "MyApiKey" --description "My API Key"
      aws apigateway create-usage-plan ... 
      aws apigateway create-usage-plan-key ... 
--------------------
  Mapping Templates
    scripts written in Velocity Template Language (VTL)
    used with Lambda functions, AWS services, or HTTP endpoints
    allow you to manipulate, modify data exchanged between API Gateway and its integrations
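A minimal VTL sketch that passes through only two fields from an integration response, masking the rest; the field names are illustrative, while `$input.path` is the real API Gateway variable:

```
#set($inputRoot = $input.path('$'))
{
  "id": "$inputRoot.id",
  "status": "$inputRoot.status"
}
```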
------------------------------------
------------------------------------
------------------------------------
Cognito
  offers MFA, including email, SMS text messages, and time-based one-time passwords (TOTP)
  supports SAML federation; users can also sign in through Google, Facebook, and Amazon
  ------------
  User Pools
    to SIGN UP and SIGN IN to your web or mobile app through Amazon Cognito or third party
      such as Facebook, Amazon, Google or Apple
    YES - JWT token handling
    cannot directly integrate with CloudFront distribution, use on the Load Balancer instead
  ------------
  Identity Pools
    for creating TEMPORARY, limited-privilege AWS credentials to perform 
      API calls to services like S3 and DynamoDB
    provide temporary AWS credentials for users who are guests (unauthenticated) 
      and for users who have been authenticated and received a token.
    also can use social sign in
    NO - JWT token handling
  ------------
  Sync - 'push synchronization feature'
    enables cross-device syncing of application-related user data
    synchronize user profile data across mobile devices and web applications
      without requiring a custom-built backend
------------------------------------
------------------------------------
------------------------------------
SAM - Serverless Application Model
  sam build
  sam package
  sam deploy
  ------------
  supports swagger (now openApi) definitions either INLINE or from a SWAGGER FILE reference
  ------------
  To package a SAM app, use "sam package" command from the AWS SAM CLI. 
  this creates a .zip file of your code and dependencies and uploads it to S3
  SAM enables encryption for all files stored, then returns a copy of your template,
    replacing references to local artifacts with S3 location of where the command 
    uploaded the artifacts.
  ------------
  Order of development:
  Develop the SAM template locally => upload template to S3 => deploy to the cloud
-----
  "Transform" section at the top of the template indicates it is a (SAM) template
  ------------
  Resource types (AWS::Serverless::*):
   Api - Represents an Amazon API Gateway REST API
   HttpApi - Amazon API Gateway HTTP API
   Function - Lambda function
   LayerVersion - Lambda layer version
   SimpleTable - Amazon DynamoDB table
   StateMachine - AWS Step Functions state machine
   Application - a nested application (e.g., from the Serverless Application Repository)
  ------------
  Required
    Transform
    Resources
  Optional
    Globals
    Conditions
    Mappings
    Metadata
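A minimal SAM template sketch tying the required Transform and Resources sections together; the handler path, runtime, and route are illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31    # marks this as a SAM template
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler               # file app.py, function handler
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api                      # creates an implicit API Gateway REST API
          Properties:
            Path: /hello
            Method: get
```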
------------------------------------
------------------------------------
------------------------------------
AppSync
  flexible APIs facilitate secure, scalable mobile & web apps
  publish and subscribe to REAL-TIME events over serverless WebSockets
  access, manipulate, and combine data from multiple sources 
    through a single GraphQL API endpoint
------------------------------------
CDK
  create app from CDK template -> add code -> build app -> synth stacks -> deploy
------------------------------------
------------------------------------
------------------------------------
Step Functions (State Machine)
  Step Functions coordinate and manage components of a task-driven workflow
  All work in your state machine is done by tasks
  serverless, function orchestrator, to sequence Lambda functions and services
  visual workflow to orchestrate Lambda funcs, ec2, API Gateway and more
  used for Asynchronous integration between components
  ------
  ErrorEquals: error names to match, use the States.ALL wildcard to catch all errors
  ResultPath: handle the error output, use it to add the error output to the input, 
    or overwrite the input with the error output
  ----------
  Standard Workflows
    long-running, durable, and auditable workflows
    support an execution start rate of over 2K executions per second
    can run for up to a year
    exactly-once model: an execution is never run more than once
  ----------
  Express Workflows
    for workloads with high event rates and short duration
    fast, event-driven workflows
    do not support activities, job-run (.sync), and Callback patterns
    maximum duration of five minutes
    rapid execution and cost efficiency
    at-least-once model, an execution could potentially run more than once
  ----------
  Task Types:
  Lambda Task: Invokes an AWS Lambda function to perform a specific task.
  Activity Task: Calls an external activity, worker process or a mobile app
  Service Task: Calls a supported AWS service Amazon SNS, Amazon DynamoDB, or Amazon SQS
  Choice Task: decision based on input data and transitions to a specific state
  Map Task: Iterates over an input array and performs a specific task for each element
  Parallel Task: Executes multiple tasks concurrently and combines their results
  Wait Task: Pauses execution for a specified amount of time or until a specified timestamp
  Succeed Task: Marks the end of execution with a success
  Fail Task: Marks the end of execution with failure
  Retry Task: retry policy for handling runtime errors
  -----
  TimeoutSeconds - defines the maximum task duration before the task is considered failed
  HeartbeatSeconds - defines the maximum interval a task will wait for a heartbeat signal
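  A Retry policy is driven by three Amazon States Language fields: IntervalSeconds, BackoffRate, and MaxAttempts. A small illustrative sketch (the function itself is not an AWS API, just the arithmetic) of how the wait before each attempt grows:

  ```python
  def retry_wait_times(interval_seconds: float, backoff_rate: float, max_attempts: int) -> list[float]:
      """Wait before each retry attempt: IntervalSeconds * BackoffRate ** (attempt - 1)."""
      return [interval_seconds * backoff_rate ** (attempt - 1)
              for attempt in range(1, max_attempts + 1)]

  # IntervalSeconds=2, BackoffRate=2.0, MaxAttempts=3 -> waits of 2s, 4s, 8s
  print(retry_wait_times(2, 2.0, 3))  # [2.0, 4.0, 8.0]
  ```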
  ----------
  Troubleshooting
    enable CloudWatch Logs
    Catch, Retry, ErrorFallback
      "ErrorEquals": ["States.ALL"]
    -----
    ResultPath in a Catch statement:
      controls the combination of input & result that is passed to the state output
      "ResultPath": "$.error"
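    A Task state tying Retry, Catch, ErrorEquals, and ResultPath together (state names, function ARN, and the HandleFailure target are hypothetical):

    ```json
    {
      "ProcessOrder": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:process",
        "Retry": [
          { "ErrorEquals": ["States.Timeout"], "IntervalSeconds": 2, "BackoffRate": 2.0, "MaxAttempts": 3 }
        ],
        "Catch": [
          { "ErrorEquals": ["States.ALL"], "ResultPath": "$.error", "Next": "HandleFailure" }
        ],
        "End": true
      }
    }
    ```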
------------------------------------
------------------------------------
------------------------------------
CodeBuild
  fully managed CI service that compiles, tests,
    and produces software packages ready to deploy
  -----
  caching dependencies in S3 (do not keep as part of the source code)
  ----
  to encrypt output artifacts, specify KMS key to use
  -----
  scales automatically, no need to do anything for scaling or for parallel builds
  -----
  set TIMEOUTS to prevent a build running too long
  -----
  to troubleshoot, enable S3 and CloudWatch Logs integration, includes:
    total, failed, successful and duration of builds
  -----
  to avoid resolving dependencies with every build, cache dependencies on S3
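  Dependency caching is declared under the buildspec's cache section (the Maven path and build command below are hypothetical; the S3 cache location itself is set in the build project configuration):

  ```yaml
  version: 0.2
  phases:
    build:
      commands:
        - mvn package          # hypothetical build command
  cache:
    paths:
      - '/root/.m2/**/*'       # directories persisted between builds
  ```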
  -----
  CodeBuild Agent
    used to troubleshoot by running CodeBuild locally with the Agent
  -----
  to encrypt the build output artifact, specify the KMS key for CodeBuild to use
    (the build project's encryptionKey setting, not the buildspec), e.g.:
    encryptionKey: 'arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID'
------------------------------------
------------------------------------
------------------------------------
SAR - Serverless Application Repository
  managed repository of prebuilt serverless applications to store, share, and reuse
  no cloning, building, packaging, or publishing required
------------------------------------
------------------------------------
------------------------------------
CodeCommit
  use GIT credentials generated from IAM to migrate cloned repos over HTTPS
  CodeCommit repositories ARE encrypted in TRANSIT and at REST
  ----------
  IAM username and password credentials cannot be used to access CodeCommit
    SSH Keys, GIT credentials and AWS access keys are FINE
------------------------------------
------------------------------------
------------------------------------
Certificate Manager
  provisions X.509 certs for TLS/SSL
  compatible with AWS IoT Core / Greengrass
------------------------------------
------------------------------------
------------------------------------
Amazon EMR (Elastic MapReduce)
  platform service that processes large-volume datasets using distributed 
   computing frameworks such as Apache Hadoop and Apache Spark.
------------------------------------
------------------------------------

Setting up HTTPS Listener on ALB
  SSL Termination
    LB terminates the SSL/TLS encryption, off-loading tasks from your servers
  -----
  Preserves Source IP
    LB preserves the client’s IP address, allows your backend apps to see original client IP
  -----
  SNI support
    enables serving multiple secure websites (with diff certs) using a single TLS listener
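  Conceptually, SNI lets one listener pick a certificate from the hostname the client sends in the TLS ClientHello. A toy stand-in for that selection (hostnames and cert labels are made up; an ALB does this internally):

  ```python
  # Hypothetical mapping of hostnames to certificates installed on one listener.
  CERTS = {
      "app.example.com": "cert-app",
      "api.example.com": "cert-api",
  }

  def select_certificate(sni_hostname: str, default: str = "cert-default") -> str:
      """Return the cert matching the SNI hostname, else the listener's default cert."""
      return CERTS.get(sni_hostname, default)

  print(select_certificate("api.example.com"))    # cert-api
  print(select_certificate("other.example.com"))  # cert-default
  ```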
  -----
  Certificate management
    manage server certs using ACM or IAM
------------------------------------
Server Name Indication SNI
  allows both to load multiple SSL certs on one listener
  expose multiple HTTPS apps, each with an SSL cert, all on one listener
------------------------------------
AppSpec Hooks Order (in-place deployment behind a load balancer)
-----
  Start: The initial event in the deployment process.
  BeforeBlockTraffic: Runs before traffic is blocked from the instance.
  BlockTraffic: Deregisters the instance from the load balancer.
  AfterBlockTraffic: Runs after traffic is blocked.
-----
  ApplicationStop: Stops the application.
  DownloadBundle: Downloads the application revision.
  BeforeInstall: Runs before the installation of the application.
  Install: Copies the revision files to their destinations.
  AfterInstall: Runs after the installation of the application.
  ApplicationStart: Starts the application.
  ValidateService: Verifies the deployment completed successfully.
-----
  BeforeAllowTraffic: Runs before traffic is allowed back to the instance.
  AllowTraffic: Re-registers the instance with the load balancer.
  AfterAllowTraffic: Runs after traffic is allowed.
  End: The final event in the deployment process.
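A minimal appspec.yml wiring scripts into a few of these hooks (the script paths and destination directory are hypothetical):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app        # hypothetical install destination
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
  AfterInstall:
    - location: scripts/configure.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh
```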
------------------------------------
GraphQL
  with AppSync, a single GraphQL endpoint can query and combine MULTIPLE data sources
    (not possible with a single REST endpoint)
------------------------------------
------------------------------------
------------------------------------
Scenario Questions:
-------------------
Q. The .aws/credentials file is set up with the user's IAM user name and password. 
The developer runs the code and receives this error message:
  An error occurred (InvalidAccessKeyId)
A. (SDKs) require an access key ID and a secret access key to make programmatic calls to AWS
-------------------
To give on-prem server access to AWS services, create a new IAM user with programmatic access
-------------------
In the application server, create the credentials file at ~/.aws/credentials with the access keys of the IAM user.
-------------------
functions returns Access denied errors. 
Upon investigation, the developer discovered that the Lambda function is using the AWS SDK to make API calls: how to fix?
----
-- Use the aws configure command with the --profile parameter to add a named profile with the sandbox AWS account’s credentials.
-- Run the function using sam local invoke with the --profile parameter
------------------------------------
------------------------------------
------------------------------------
Systems Manager Parameter Store
  can store data such as passwords, database strings, and license codes
  -----
  does NOT rotate credentials automatically (use Secrets Manager for automatic rotation)
  -----
  can be used for Lambda functions to pull connection strings to connect to a RDS DB
  -----
  manage configuration externally, securely and
    have it load dynamically into the application at runtime
  -----
  helps to AVOID REDEPLOY by storing the configuration externally
  -----
  CloudTrail is integrated with SSM Parameter Store to capture API calls made to the store
  -----
  Hierarchical Unique Paths
    Environment-specific configurations
    Store separate configurations for diff envs (dev, prod, staging) by creating unique paths
    Enforces consistency
    Groups make it easier to find and retrieve them
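  The hierarchical lookup behaves like a prefix filter over parameter names. A small stand-in for GetParametersByPath against a hypothetical parameter set (a real call would go through boto3's ssm client):

  ```python
  # Hypothetical parameters keyed by hierarchical path.
  PARAMS = {
      "/myapp/dev/db-url": "dev-db.example.internal",
      "/myapp/dev/api-key": "dev-key",
      "/myapp/prod/db-url": "prod-db.example.internal",
  }

  def get_parameters_by_path(path: str) -> dict:
      """Return all parameters under a path prefix, like SSM's GetParametersByPath."""
      prefix = path.rstrip("/") + "/"
      return {name: value for name, value in PARAMS.items() if name.startswith(prefix)}

  print(sorted(get_parameters_by_path("/myapp/dev")))
  # ['/myapp/dev/api-key', '/myapp/dev/db-url']
  ```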
------------------------------------
------------------------------------
------------------------------------
A Record
  maps a domain name to an IP address
----------
CNAME (Canonical Name)
  an alias for another domain name
  always point to another domain name, never to an IP address
  useful when multiple subdomains need to point to the same IP address
----------
Alias
  non-standard DNS record that maps a domain name to the IP address of a 
  load balancer, server, or another resource
  supported by fewer providers
  return the IP address directly, reducing the lookup time
----------
PTR - Pointer Record - reverse DNS 
  also known as a reverse DNS record lookup
  maps an IP address to a domain name
------------------------------------
------------------------------------
------------------------------------
IAM 
  to grant access to a repository for a separate team, who operate inside a different,
  isolated AWS account within the organization:
    Create a new cross account role with repository access 
    & provide the role ARN to the marketing team
  -----
  when needed, use the IAM Resource ID as a unique Identifier
  -----
  can be used as a certificate manager ONLY when supporting HTTPS connections
    in a Region that is not supported by ACM (AWS Certificate Manager)
  -----
  to test controls use the CLI --dry-run option
  -----
  BILLING and COST MANAGEMENT
    You need to activate IAM user access to the Billing and Cost Management console
    for all the users who need access
  -----
  Service Role
    a role that you attach to the EC2 instance to give temporary security credentials
    to applications running on the instance
  -----
  IAM Access Analyzer
    helps identify security issues
    can identify unused IAM roles and remove them without disrupting any service
    -----
    helps you identify resources in your org and accounts, such as S3 buckets or IAM roles
      that are shared with an external entity
  -----
  IAM Database Authentication
    not supported by ORACLE
    no need to make a DB user for each dev
    supports MySQL, PostgreSQL
  -----
  IAM Trust policy
    resource policy that the IAM service supports
  -----
  IAM policy variables
    set up member access to user-specific folders
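  A policy using the aws:username variable to scope each member to their own S3 folder (the bucket name is hypothetical):

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-bucket/home/${aws:username}/*"
      }
    ]
  }
  ```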
------------------------------------
------------------------------------
------------------------------------
CodeDeploy - deployment service
  -----
  CodeDeploy provides TWO deployment types:
    In-place deployments (the only option for on-premises servers)
    Blue/green deployments (EC2, ECS, Lambda)
  ----------
  appspec.yml - deployment steps
    root directory
    map source files in your application revision to their destinations on the instance
    specify custom permissions for deployed files
    specify scripts to be run on each instance at various stages of the deployment process
  ----------
  buildspec.yml - build steps
    root directory
    programmatically define your build steps
  ----------
  controls deployment steps
  rapidly release new features
  avoid downtime during deployment
  for EC2, Fargate, Lambda & on-prem servers
  ----------
  Agent
    creates deployment group ID folders
    manages app revisions, log files
    allows cleanup and retention of deployment history
  ----------
  Deployment Groups
    ensures applications get deployed to different sets of EC2 instances at different times
      allowing for a smooth transition
    contains settings and configurations used during the deployment
    rollbacks, triggers, and alarms can be configured for any compute platform
    contains individually tagged instances, EC2 instances in Auto Scaling groups
------------------------------------
------------------------------------
------------------------------------
CodeBuild
  for failed build troubleshooting, check 'project build history'
------------------------------------
------------------------------------
------------------------------------
CodePipeline - continuous delivery service
  fast and reliable application and infrastructure updates
  automates the build, test, and deploy phases of a release when there is a code change
  does not deploy code itself; the deploy stage delegates to providers such as CodeDeploy
  -----
  Use a Lifecycle Policy to delete unused old app versions
  -----
  to automate sending notifications on state changes
    setup CloudWatch Events Rule that uses CodePipeline as an event source
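    The matching event pattern for such a rule (filtering on FAILED here is just one example state):

    ```json
    {
      "source": ["aws.codepipeline"],
      "detail-type": ["CodePipeline Pipeline Execution State Change"],
      "detail": {
        "state": ["FAILED"]
      }
    }
    ```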
  -----
  Source Actions: retrieve code changes from a source repository
  Build Actions
  Test Actions
  Deploy Actions: such as AWS Elastic Beanstalk, AWS CodeDeploy, or Amazon ECS.
  Approval Actions: manual approval before next stage, can be for code reviews
  Invoke Actions
------------------------------------
AWS requires approximately 5 weeks of usage data to generate budget forecasts
------------------------------------
Envelope Encryption
  significant performance benefits
  -----
  reduces network load, only the request & delivery of the smaller data key go over network
  -----
  KMS also has an upper limit of 4 KB for the data payload
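  The flow behind envelope encryption can be sketched in a few lines. The XOR "cipher" below is a deliberately insecure toy standing in for real AES, and the wrap/unwrap calls stand in for KMS GenerateDataKey/Decrypt; only the shape of the flow is the point:

  ```python
  import secrets

  def toy_cipher(key: bytes, data: bytes) -> bytes:
      """Toy XOR 'cipher' standing in for real AES -- NOT secure, illustration only."""
      return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

  def envelope_encrypt(master_key: bytes, plaintext: bytes):
      data_key = secrets.token_bytes(32)              # 1. generate a fresh data key
      ciphertext = toy_cipher(data_key, plaintext)    # 2. encrypt the payload locally
      wrapped_key = toy_cipher(master_key, data_key)  # 3. only the small key is wrapped (KMS)
      return wrapped_key, ciphertext

  def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
      data_key = toy_cipher(master_key, wrapped_key)  # unwrap the data key (KMS Decrypt)
      return toy_cipher(data_key, ciphertext)

  master = secrets.token_bytes(32)
  wrapped, ct = envelope_encrypt(master, b"a payload far larger than 4 KB would go here")
  assert envelope_decrypt(master, wrapped, ct) == b"a payload far larger than 4 KB would go here"
  ```

  Only the 32-byte wrapped key ever travels to the key service, which is why this pattern sidesteps both the network load and the 4 KB payload limit.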
------------------------------------
Encryption SDK 
  To encrypt a 1 MB payload (beyond the 4 KB KMS Encrypt limit), use the Encryption SDK
    and pack the encrypted file with the Lambda function
  -----
  provides end-to-end protection for your data in transit and at rest.
  -----	
------------------------------------
------------------------------------
------------------------------------
Reusing SSH keys in your AWS Regions:
  Generate a public SSH key (.pub) file from the private SSH key (.pem) file.
  Set the AWS Region you wish to import to.
  Import the public SSH key into the new Region.
------------------------------------
Data Pipeline
  web service that enables regular, dependable data processing between 
  various AWS computing, storage, and on-premises data sources
  can be used to deploy ?? dev, qa, then prod
------------------------------------
STS - Security Token Service
  generates temporary security credentials, also known as security tokens,
    with a limited lifetime (15 minutes up to 36 hours depending on the API;
    AssumeRole defaults to 1 hour)
  -----
  to decode an encoded authorization error, run "aws sts decode-authorization-message"
  -----
  use AssumeRole API to get short lived credentials
  -----
  AWS Security Token Service (STS) is used by API Gateway for logging data to CloudWatch logs
   Hence, AWS STS has to be enabled for the Region that you're using
------------------------------------

Network ACL (firewall)
  a custom network ACL denies all inbound and outbound traffic until you add rules
  Add a rule to the Network ACLs to allow outbound traffic on ports 1024 - 65535

------------------------------------
Pilot Light is a Disaster Recovery Mode
  Primary region, Secondary Region, Data replication
  runs core services in a standby mode in a secondary region
  ASGs deploy new instances when a disaster occurs
------------------------------------