Thursday, November 26, 2020

AZR - Function App

Azure Functions provides "compute on demand", and it does so in two significant ways.

First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These code blocks are called "functions". Different functions can run anytime you need to respond to critical events.

Second, as requests increase, Azure Functions meets the demand with as many resources and function instances as necessary - but only while needed. As requests fall, any extra resources and application instances drop off automatically.
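As a sketch of what such a "function" looks like, here is a minimal HTTP-triggered function using the Azure Functions Python programming model (the route name and greeting are illustrative, not from any particular app):

import azure.functions as func

app = func.FunctionApp()

# Runs whenever an HTTP request hits /api/hello; Azure allocates
# instances on demand and scales them back down when traffic drops.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")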

Azure Functions has 3 pricing plans:

  • Consumption plan: Azure provides all of the necessary computational resources. You don't have to worry about resource management, and only pay for the time that your code runs.
    - Pay only when your function(s) are running
    - Scale out/in as and when required

  • Premium plan: You specify a number of pre-warmed instances that are always online and ready to immediately respond. When your function runs, Azure provides any additional computational resources that are needed. You pay for the pre-warmed instances running continuously and any additional instances you use as Azure scales your app in and out.

  • App Service plan: Run your functions just like your web apps. If you use App Service for your other applications, your functions can run on the same plan at no additional cost.

          Session Affinity

Wednesday, November 25, 2020

AWS - S3

S3 : Simple Storage Service
S3 (Simple Storage Service) : S3 is a simple service interface on the internet that can be used to store and retrieve any amount of data, from anywhere, at any time on the web.
S3 is designed to make web-scale computing easier.
S3 provides highly scalable, reliable, fast, and inexpensive data storage infrastructure on the web. Amazon itself uses it to run its own global network of websites.

Terminology

Versioning
  • Once versioning is enabled on a bucket it can't be disabled, but it can be suspended
  • S3 stores all versions of an object, even after you delete it. So the space consumed by a file with versioning on is the sum of all versions of that file. Don't enable versioning for large files that change frequently unless a life cycle rule is configured
  • Integrates with life-cycle rules
  • Versioning's MFA (Multi Factor Authentication) Delete capability adds an additional security layer. It asks for an MFA token before a version can be permanently deleted.
  • Versioning must be enabled on both source and destination buckets to enable Cross Region Replication
  • You cannot configure replication between buckets in the same region
  • Existing files in a bucket are not replicated when you configure Cross Region Replication on it; only files uploaded or updated after replication is enabled are replicated
  • Deleting a specific version or a 'delete marker' in the source bucket is not replicated to the destination bucket
  • Multiple and multi-level cross region replication are not supported
    • If you configure cross region replication from bucket 1 to bucket 2 and from bucket 2 to bucket 3, then a file added or updated in bucket 1 is replicated to bucket 2 but not to bucket 3. Only files added or updated directly in bucket 2 are replicated to bucket 3
    • You cannot configure cross region replication from one bucket to multiple buckets, like bucket A to bucket B and bucket A to bucket C

AWS highly recommends that you choose Create new role to have Amazon S3 create a new IAM role for you. When you save the rule, a new policy is generated for the IAM role that matches the source and destination buckets that you choose. The name of the generated role is based on the bucket names and uses the following naming convention: replication_role_for_source-bucket_to_destination-bucket.
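As a concrete sketch of the points above, the following boto3 snippet enables versioning on both buckets and then configures Cross Region Replication (the bucket names and IAM role ARN are placeholders; it assumes the destination bucket already exists in another region):

import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both source and destination buckets first.
for bucket in ("source-bucket", "destination-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new or updated object to the destination bucket.
# Existing objects are NOT copied retroactively.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication_role_for_source-bucket_to_destination-bucket",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)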

Life Cycle Management
  • Life Cycle Management can be used in conjunction with Versioning (it works with or without versioning enabled)
  • The whole bucket or specific folder(s) can be transitioned
  • An object can be transitioned to S3 IA (S3 Infrequent Access Storage) only if it is at least 128 KB in size and at least 30 days past its creation date
  • An object can be archived to Glacier after 30 days in S3 IA (or 60 days after its creation date)
  • Objects can be deleted permanently from Glacier after 90 days (a transition to Glacier is billed for a minimum of 90 days)
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
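A minimal sketch of such a life cycle rule with boto3, following the 30/60-day pattern above (the bucket name, rule ID, and 150-day expiration are placeholders):

import boto3

s3 = boto3.client("s3")

# Transition to S3 IA after 30 days, archive to Glacier at 60 days,
# and expire the object entirely at 150 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 150},
            }
        ]
    },
)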

Bucket Policy: Below is a sample bucket policy that allows the GetObject action on the bucket's objects for requests coming from CloudFront.
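A minimal sketch of such a policy, applied with boto3 (the bucket name and the CloudFront origin access identity ID are placeholders):

import json
import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront origin access identity to read objects,
# so viewers must go through the CloudFront distribution URL.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))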


Deleting Multiple objects from an S3 Bucket: The Multi-Object Delete operation enables us to delete multiple objects from a bucket using a single HTTP request. If we know the object keys that we want to delete, this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.


POST /?delete HTTP/1.1
Host: bucketname.s3.amazonaws.com
Authorization: authorization string
Content-Length: Size
Content-MD5: MD5

<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Quiet>true</Quiet>
  <Object>
    <Key>Key</Key>
    <VersionId>VersionId</VersionId>
  </Object>
  <Object>
    <Key>Key</Key>
  </Object>
  ...
</Delete>
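The same operation through boto3, as a sketch (the bucket name, keys, and version ID are placeholders):

import boto3

s3 = boto3.client("s3")

# One request deletes up to 1,000 objects; Quiet=True suppresses
# per-object success entries in the response.
response = s3.delete_objects(
    Bucket="my-bucket",
    Delete={
        "Quiet": True,
        "Objects": [
            {"Key": "photos/a.jpg", "VersionId": "example-version-id"},
            {"Key": "photos/b.jpg"},
        ],
    },
)
print(response.get("Errors", []))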

Access Control List (ACL): In an ACL you can configure access for your account, other accounts, public access, and the S3 log delivery group, with Read/Write permissions on objects and Read/Write permissions on the bucket itself.


S3 CORS Configuration: Below is a sample Cross Origin Resource Sharing (CORS) configuration on a bucket.
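A minimal sketch of such a configuration with boto3 (the bucket name and allowed origin, here an S3 website URL, are placeholders):

import boto3

s3 = boto3.client("s3")

# Allow GET requests from the S3 website origin so pages served from
# that site can load images out of this bucket.
s3.put_bucket_cors(
    Bucket="my-image-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://my-site.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)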



Transfer Acceleration: You can enable transfer acceleration on an S3 bucket, but it incurs additional charges.



Labs

Lab 1: S3 CORS configuration - Access an image from another S3 bucket using the website URL.
Lab 2: S3 Versioning - Store multiple versions, delete a version, delete the object, and restore a version.
Lab 3: S3 Cross Region Replication - Create multiple buckets and configure Cross Region Replication with multiple scenarios.
Lab 4: S3 Life Cycle Management - Configure with the old and new console.

AWS - S3 Transfer Acceleration

Transfer Acceleration uses the CloudFront edge network to accelerate your uploads to S3.
Instead of uploading data directly to the S3 bucket, it sends the data to the nearest Edge Location, which then forwards it to S3.

After enabling this feature on an S3 bucket, you can use the provided URL to read and write objects in the bucket with transfer acceleration.



How to use this feature?
You need to enable this feature for the S3 bucket; Amazon then provides you a distinct URL for the bucket.
If you upload using this URL, it uses Transfer Acceleration.
URL Syntax: bucketName.s3-accelerate.amazonaws.com
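As a sketch, enabling the feature and routing a client through the accelerate endpoint with boto3 (the bucket and file names are placeholders):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn on Transfer Acceleration for the bucket (extra charges apply).
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# This client talks to bucketName.s3-accelerate.amazonaws.com instead
# of the regular regional endpoint.
fast_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
fast_s3.upload_file("big-file.zip", "my-bucket", "big-file.zip")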



AWS - S3 Security and Encryption

S3 Security
A newly created bucket is private by default, so objects in the bucket can't be accessed from outside until you change the security settings.

S3 bucket security can be configured with the below 2 options
  • Bucket Policies: Bucket policies are applicable to the whole bucket
  • Access Control List: After creating an access control list you can apply it at the object level in the bucket.
    You can grant the below access to your account, other accounts, and Everyone as well:
    Read, Write, Read Permissions, Write Permissions


An S3 bucket can be configured to log all requests made to the bucket. The logs can be used for audits as and when required.
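A sketch of enabling such access logging with boto3 (it assumes a separate log bucket that the S3 log delivery group is allowed to write to; all names are placeholders):

import boto3

s3 = boto3.client("s3")

# Deliver access logs for my-bucket into my-log-bucket under logs/.
s3.put_bucket_logging(
    Bucket="my-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "logs/my-bucket/",
        }
    },
)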

S3 Encryption

There are 2 types of encryption
  • In transit: Data is protected while you send information to/from your bucket using SSL/TLS (HTTPS)
  • At Rest
    • Server Side Encryption (see the sketch after this list)
      • S3 Managed Keys (SSE-S3): Objects are encrypted with a unique key, and Amazon encrypts that key itself with a master key and regularly rotates the master key. The unique key is an AES-256 encryption key, which Amazon handles entirely on its own.
      • AWS Key Management Service (SSE-KMS): It's similar to S3 managed keys, but it comes with some additional benefits, additional charges, and the need for additional permissions to use the keys.
        The additional benefit of these keys is that they provide an audit trail of who used your keys and when.
        You can create your own customized keys for your region or for S3.
      • Server side encryption with customer provided keys (SSE-C):
        • You manage the keys
        • Amazon manages encryption and decryption
    • Client Side Encryption: You encrypt the data on the client side before uploading to S3
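A minimal server-side-encryption sketch with boto3, uploading one object with SSE-S3 and one with SSE-KMS (the bucket, keys, and KMS key alias are placeholders):

import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon generates and manages the AES-256 key.
s3.put_object(
    Bucket="my-bucket",
    Key="docs/report.txt",
    Body=b"hello",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a KMS key, which adds an audit trail of key usage.
s3.put_object(
    Bucket="my-bucket",
    Key="docs/secret.txt",
    Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-s3-key",  # hypothetical key alias
)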

Wednesday, November 18, 2020

AZR - Virtual Machine Scale Set

(Azure Scale Sets are like Auto Scaling groups in AWS)

Azure virtual machine scale sets enable you to create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
Scale sets provide high availability to your applications and allow you to centrally manage, configure, and update a large number of VMs.
With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and container workloads.


Why Scale Set
  • Easy to create & maintain multiple VMs: Scaling to hundreds of VMs is easy because all of them are created from the same OS image.
  • Networking: Uses 'Load Balancers' for basic layer-4 traffic distribution and 'Azure Application Gateway' for advanced layer-7 traffic distribution.
    • Load Balancers: Used for IP/port based mapping to VMs
    • Azure Application Gateway: Used for URL based mapping to VMs
  • Provides high availability and application resiliency: With the help of multiple VMs using Availability Sets or Availability Zones.
  • Supports Spot instances: You can set a Spot price, the maximum price per hour (in US $) you want to pay for an instance. Azure allots an instance if your set price is greater than the platform price at that time.
  • Auto Scaling: Automatically increases or decreases the number of VM instances in response to demand or a defined schedule.


  • Large Scale handling:
    • Scale sets support up to 1,000 VM instances. If you create and upload your own custom VM images, the limit is 600 VM instances.
    • For the best performance with production workloads, use Azure Managed Disks.
  • Supports additional disks for VMs


Features
  1. Control it like IaaS, scale it like PaaS: Deploy Virtual Machine Scale Sets using Azure Resource Manager (ARM) templates, with support for Windows and Linux platform images as well as custom images and extensions (see the sketch after this list).
  2. Run Cassandra, Cloudera, Hadoop, MongoDB, and Mesos
  3. Quickly scale your big compute and big data applications
  4. Attach additional data disks as per your application's requirements
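To keep the code samples in one language, here is a sketch of driving such an ARM template deployment from Python with azure-identity and azure-mgmt-resource (the subscription ID, resource group, template file name, and instanceCount parameter are all placeholders; the template file itself would describe the scale set):

import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# vmss-template.json is assumed to be an ARM template defining the scale set.
with open("vmss-template.json") as f:
    template = json.load(f)

# "Incremental" mode leaves unrelated resources in the group untouched.
poller = client.deployments.begin_create_or_update(
    "my-resource-group",
    "vmss-deployment",
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {"instanceCount": {"value": 2}},
        }
    },
)
print(poller.result().properties.provisioning_state)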

CI/CD - Safe DB Changes/Migrations

Safe DB Migrations means updating your database schema without breaking the running application and without downtime. In real systems (A...