Thursday, November 26, 2020

AZR - Function App

Azure Functions provides "compute on-demand" in two significant ways.

First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These code blocks are called "functions". Different functions can run anytime you need to respond to critical events.

Second, as requests increase, Azure Functions meets the demand with as many resources and function instances as necessary - but only while needed. As requests fall, any extra resources and application instances drop off automatically.

Azure Functions offers three pricing plans:

  • Consumption plan: Azure provides all of the necessary computational resources. You don't have to worry about resource management, and you only pay for the time that your code runs.
    - Pay only while your function(s) are running
    - Scales out/in automatically as demand requires

  • Premium plan: You specify a number of pre-warmed instances that are always online and ready to immediately respond. When your function runs, Azure provides any additional computational resources that are needed. You pay for the pre-warmed instances running continuously and any additional instances you use as Azure scales your app in and out.

  • App Service plan: Run your functions just like your web apps. If you use App Service for your other applications, your functions can run on the same plan at no additional cost.

Wednesday, November 25, 2020

AWS - S3

S3 (Simple Storage Service): a simple service interface over the internet that can be used to store and retrieve any amount of data, from anywhere, at any time on the web.
S3 is designed to make web-scale computing easier.
S3 provides a highly scalable, reliable, fast, and inexpensive data-storage infrastructure on the web. Amazon itself uses it to run its own global network of websites.

Terminology

Versioning
  • Once versioning is enabled on a bucket it can't be disabled, but it can be suspended
  • S3 stores all versions even after you delete an object, so the space consumed by a file with versioning on is the sum of all versions of that file. Don't enable versioning for large files that change frequently unless a lifecycle rule is configured
  • Integrates with lifecycle rules
  • Versioning's MFA (Multi-Factor Authentication) Delete capability adds an additional security layer: it asks for a token before deleting
  • Versioning must be enabled on both source and destination buckets to enable Cross Region Replication
  • Cross Region Replication cannot be configured between buckets in the same region
  • Existing objects in the bucket are not replicated when you configure Cross Region Replication; only objects uploaded or updated after that are replicated
  • Deleting a specific version, or a delete marker, in the source bucket is not replicated to the destination bucket
  • Multi-level (chained) cross-region replication is not supported
    • If you configure cross-region replication from bucket 1 to bucket 2 and from bucket 2 to bucket 3, then a file added or updated in bucket 1 is replicated to bucket 2 but not to bucket 3. Only files added or updated in bucket 2 directly are replicated to bucket 3
    • You cannot configure cross-region replication from one bucket to multiple buckets, like bucket A to bucket B and bucket A to bucket C

AWS highly recommends that you choose Create new role so that Amazon S3 creates a new IAM role for you. When you save the rule, a new policy is generated for the IAM role, matching the source and destination buckets that you chose. The generated role's name is based on the bucket names and uses the following naming convention: replication_role_for_source-bucket_to_destination-bucket.
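A sketch of what these pieces look like in code: the role name S3 generates per the convention above, plus a minimal replication configuration in the dict shape boto3's put_bucket_replication expects. The bucket names, account id, and ARNs below are hypothetical.

```python
def replication_role_name(source_bucket: str, destination_bucket: str) -> str:
    """Role name S3 generates, following the documented naming convention."""
    return f"replication_role_for_{source_bucket}_to_{destination_bucket}"

# Minimal replication configuration (shape used by boto3's
# put_bucket_replication); names and ARNs below are made up.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/"
            + replication_role_name("src-bucket", "dst-bucket"),
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",  # empty prefix = replicate the whole bucket
            "Destination": {"Bucket": "arn:aws:s3:::dst-bucket"},
        }
    ],
}
```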

Life Cycle Management
  • Life Cycle Management can be used with or without Versioning enabled
  • The whole bucket or specific folder(s) can be transitioned
  • Objects can be transitioned to S3-IA (S3 Infrequent Access Storage) a minimum of 30 days after creation; objects smaller than 128 KB are not transitioned to IA
  • Objects can be archived to Glacier 30 days after moving to S3-IA (i.e., a minimum of 60 days after creation)
  • Objects can be deleted permanently from Glacier after 90 days (Glacier charges for a minimum of 90 days of storage)
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
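The transitions described above can be expressed as a lifecycle configuration; here is a sketch in the dict shape boto3's put_bucket_lifecycle_configuration expects. The rule name, prefix, and day counts are examples, not fixed values.

```python
# Sketch of a lifecycle configuration (shape used by boto3's
# put_bucket_lifecycle_configuration); rule id, prefix, and
# day counts below are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-logs",            # hypothetical rule name
            "Filter": {"Prefix": "logs/"},   # a folder; use {} for the whole bucket
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # to IA after 30 days
                {"Days": 60, "StorageClass": "GLACIER"},      # to Glacier after 60 days
            ],
            "Expiration": {"Days": 150},     # permanent delete after 150 days
        }
    ]
}
```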

Bucket Policy: below is a sample bucket policy that allows the GetObject action for requests originating from CloudFront.
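A minimal sketch of such a policy, built as a Python dict so it can be serialized with json. The Origin Access Identity id and bucket name are hypothetical placeholders.

```python
import json

# Sketch of a bucket policy allowing CloudFront (via an Origin Access
# Identity) to call s3:GetObject; the OAI id and bucket name are made up.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/"
                       "CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}
policy_json = json.dumps(bucket_policy, indent=2)
```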


Deleting multiple objects from an S3 bucket: the Multi-Object Delete operation enables us to delete multiple objects from a bucket with a single HTTP request. If we know the object keys that we want to delete, this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.


POST /?delete HTTP/1.1
Host: bucketname.s3.amazonaws.com
Authorization: authorization string
Content-Length: Size
Content-MD5: MD5

<?xml version="1.0" encoding="UTF-8"?>
<Delete>
    <Quiet>true</Quiet>
    <Object>
        <Key>Key</Key>
        <VersionId>VersionId</VersionId>
    </Object>
    <Object>
        <Key>Key</Key>
    </Object>
    ...
</Delete>
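The request body for Multi-Object Delete can also be built programmatically; a sketch using Python's standard library (the object keys are hypothetical):

```python
import xml.etree.ElementTree as ET

def build_delete_body(keys, quiet=True):
    """Build the XML payload for S3 Multi-Object Delete (POST /?delete)."""
    root = ET.Element("Delete")
    ET.SubElement(root, "Quiet").text = "true" if quiet else "false"
    for key in keys:
        obj = ET.SubElement(root, "Object")
        ET.SubElement(obj, "Key").text = key
    return ET.tostring(root, encoding="unicode")

# Hypothetical keys for illustration
body = build_delete_body(["photos/a.jpg", "photos/b.jpg"])
```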

Access Control List (ACL): in an ACL you can configure Read/Write object and Read/Write bucket permissions for your account, other accounts, public access, and the S3 log delivery group.


S3 CORS Configuration: below is a sample Cross-Origin Resource Sharing configuration for a bucket.
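A minimal sketch of such a configuration in the XML form the S3 console of that era accepts; the wildcard origin is for illustration only, and in practice you would restrict AllowedOrigin to the calling site's URL.

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```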



Transfer Acceleration: you can enable Transfer Acceleration on an S3 bucket, but it incurs additional charges.



Labs

Lab 1: S3 CORS configuration - access an image from another S3 bucket using the website URL.
Lab 2: S3 Versioning - store multiple versions, delete a version, delete an object, and restore a version.
Lab 3: S3 Cross Region Replication - create multiple buckets, configure Cross Region Replication with multiple scenarios.
Lab 4: S3 Life Cycle Management - configure with the old and the new console.

AWS - S3 Transfer Acceleration

Transfer Acceleration uses the CloudFront edge network to accelerate your uploads to S3.
Instead of uploading data directly to the S3 bucket, it sends the data to the nearest Edge Location, which then forwards it to S3.

Transfer Acceleration can be used to read and write objects in an S3 bucket via the distinct URL provided after enabling this feature on the bucket.



How to use this feature?
You need to enable this feature for the S3 bucket; Amazon then provides a distinct URL for the bucket.
If you upload using this URL, the transfer uses Transfer Acceleration.
URL syntax: bucketName.s3-accelerate.amazonaws.com
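The accelerate endpoint can be derived from the bucket name; a small sketch (note that acceleration only works for DNS-compliant bucket names without dots, which is why the helper rejects them):

```python
def accelerate_endpoint(bucket_name: str) -> str:
    """Return the Transfer Acceleration endpoint for a bucket.

    Bucket names containing dots cannot use Transfer Acceleration.
    """
    if "." in bucket_name:
        raise ValueError("bucket names with dots cannot use Transfer Acceleration")
    return f"{bucket_name}.s3-accelerate.amazonaws.com"

url = accelerate_endpoint("mybucket")
```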



AWS - S3 Security and Encryption

S3 Security
A newly created bucket is private by default, so objects in the bucket can't be accessed from outside until you change the security settings.

S3 bucket security can be configured with the 2 options below:
  • Bucket Policies: bucket policies apply to the whole bucket.
  • Access Control Lists: after creating an access control list you can apply it at the object level in the bucket.
    You can grant access to your account, other accounts, and Everyone.
    Permissions: Read, Write, Read Permissions, Write Permissions.


An S3 bucket can be configured to log all requests made to the bucket; the log can be used for auditing as and when required.

S3 Encryption

There are 2 types of encryption:
  • In transit: information you send to/from your bucket is protected using SSL/TLS (HTTPS)
  • At rest
    • Server-Side Encryption

      • S3 Managed Keys (SSE-S3): each object is encrypted with a unique key; Amazon encrypts that key itself with a master key and regularly rotates the master key. The unique key is an AES-256 encryption key, and Amazon handles it entirely on its own.
      • AWS Key Management Service (SSE-KMS): similar to S3 managed keys, but with some additional benefits, additional charges, and additional permissions needed to use the keys.
        The main additional benefit is an audit trail of who used your keys and when.
        You can also create your own customized keys for your region or for S3.
      • Server-side encryption with customer-provided keys (SSE-C):
        • We manage the keys
        • Amazon manages encryption and decryption
    • Client-Side Encryption: you encrypt the data on the client side before uploading to S3
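The server-side options above are selected per request via headers; a sketch of the header each mode sends with a PUT (the KMS key id is a hypothetical placeholder):

```python
# Header sent with a PUT to select each server-side encryption mode.
sse_s3 = {"x-amz-server-side-encryption": "AES256"}            # SSE-S3

sse_kms = {
    "x-amz-server-side-encryption": "aws:kms",                 # SSE-KMS
    "x-amz-server-side-encryption-aws-kms-key-id": "example-key-id",  # hypothetical
}

# SSE-C instead sends the customer-provided key material with each request.
sse_c = {"x-amz-server-side-encryption-customer-algorithm": "AES256"}
```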

Wednesday, November 18, 2020

AZR - Virtual Machine Scale Set

(Azure Scale Set is like Autoscaling group in AWS)

Azure virtual machine scale sets enable you to create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
Scale sets provide high availability to your applications and allow you to centrally manage, configure, and update a large number of VMs.
With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and container workloads.


Why Scale Set
  • Easy to create & maintain multiple VMs: easy scaling of hundreds of VMs because all are created from the same OS image.
  • Networking: Uses 'Load Balancers' with Basic layer-4 traffic distribution and 'Azure Application Gateway' with advanced layer-7 traffic distribution.
    • Load Balancers: Used for IP/port based mapping with VMs
    • Azure Application Gateway: Used for URL based mapping with VMs




  • Provides high availability and application resiliency: by spreading multiple VMs across an Availability Set or Availability Zones.
  • Supports Spot instances: you set a Spot price, the maximum price per hour (US $) you want to pay for an instance. Azure allocates the instance while your set price is greater than the current platform price.

  • Auto Scaling: provides auto scaling in response to demand or a defined schedule.


  • Large Scale handling: 
    • Scale sets support up to 1,000 VM instances. If you create and upload your own custom VM images, the limit is 600 VM instances. 
    • For the best performance with production workloads, use Azure Managed Disks.
  • Supports additional disks for VMs


Features
  1. Control it like IaaS, scale it like PaaS: Deploy Virtual Machine Scale Sets using Azure Resource Manager (ARM) templates with support for Windows and Linux platform images, as well as custom images and extensions.
  2. Run Cassandra, Cloudera, Hadoop, MongoDB, and Mesos
  3. Quickly scale your big compute and big data applications
  4. Attach additional data disks as per your application requirement

Tuesday, October 20, 2020

AWS - Security Token Service (STS)

Sources from which users can come to access AWS services:

  1. Federation
  2. Federation with mobile apps
  3. Cross-account access

Federation: grouping of users from multiple identity domains, like IAM, Facebook, Google, etc.
Identity Broker: the service that connects a user to a federation.
It takes a user from point X and joins it to point Y.
Identity Store: a service having its own user database, like Facebook or Google.
Identity: a user.


Steps to Remember:
  1. Create an Identity Broker that connects to the organisation's LDAP directory and to AWS STS.
  2. The Identity Broker first connects to the organisation's LDAP to verify the user, then connects to AWS Security Token Service (STS).
  3. Call to STS:
    Scenario 1:
    The Identity Broker calls the GetFederationToken API with IAM credentials, a duration (15 minutes to 36 hours: the validity of the new token), and an IAM policy specifying which permissions the token should carry.
    Scenario 2:
    If the Identity Broker gets an IAM role associated with the user from LDAP, it calls STS and the returned token carries permissions based on that role's permissions.
  4. STS returns a temporary token consisting of an Access Key, a Secret Access Key, a Session Token, and its Duration (lifetime of the token).
  5. The application then uses this token to call the S3 bucket.
  6. S3 confirms permissions with IAM and allows the application to access the bucket.
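The broker flow above can be sketched as a local simulation. The directory contents, user name, role name, and token fields are all made up; a real broker would call LDAP and the STS API instead.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical LDAP directory: user -> IAM role
LDAP_DIRECTORY = {"megha": "s3-readonly-role"}

def identity_broker(username: str) -> dict:
    """Verify the user against the directory, then mint a temporary
    token shaped like the credentials STS returns."""
    role = LDAP_DIRECTORY.get(username)
    if role is None:
        raise PermissionError("user not found in LDAP")
    return {
        "AccessKeyId": "ASIA" + secrets.token_hex(8).upper(),
        "SecretAccessKey": secrets.token_hex(20),
        "SessionToken": secrets.token_hex(32),
        "Expiration": datetime.now(timezone.utc) + timedelta(hours=1),
        "Role": role,
    }

creds = identity_broker("megha")
```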

Tuesday, September 29, 2020

Azure - Queues

Azure Queue Storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled, so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.

It is a FIFO approach

Storage Account > Queue > Messages

Important Classes & Methods

Class: CloudStorageAccount
Method: CreateCloudQueueClient

Class: CloudQueueClient

Class: CloudQueue
Method: CreateIfNotExists
Method: PeekMessage
Method: UpdateMessage
Method: DeleteMessage

---Dequeue---
Method: GetMessage
Method: GetMessages - reads all visible messages of the queue, or the number of messages you pass as a parameter.

By default, a fetched message is invisible to other clients for 30 seconds. When fetching with GetMessage or updating with UpdateMessage you can override the default visibility time and set it as you wish by passing a TimeSpan object as a parameter.

A message must be processed by one client only.




Dequeue a message
GetMessage   >  Process Message > Delete Message

GetMessage: fetches a message and blocks it from other clients, meaning it is not visible to them for the visibility time.
You need to process and delete the message before the visibility time expires; after that, the message becomes visible again and another client may block it.

PeekMessage: returns the first available message of the queue without blocking it the way GetMessage does.
This means other clients may read the same message in parallel.
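The Get → Process → Delete cycle and the visibility timeout can be sketched with a tiny in-memory queue. This is a teaching simulation, not the Azure SDK; timings are in seconds and the real default is 30.

```python
import time

class TinyQueue:
    """Minimal sketch of queue-storage visibility semantics."""

    def __init__(self):
        self._messages = []  # each entry: [content, invisible_until]

    def add_message(self, content):
        self._messages.append([content, 0.0])

    def get_message(self, visibility_timeout=30.0):
        """Return the first visible message and hide it from other
        clients for visibility_timeout seconds (like GetMessage)."""
        now = time.monotonic()
        for msg in self._messages:
            if msg[1] <= now:
                msg[1] = now + visibility_timeout
                return msg[0]
        return None

    def peek_message(self):
        """Return the first visible message without hiding it (like PeekMessage)."""
        now = time.monotonic()
        for msg in self._messages:
            if msg[1] <= now:
                return msg[0]
        return None

    def delete_message(self, content):
        self._messages = [m for m in self._messages if m[0] != content]

q = TinyQueue()
q.add_message("order-1")
m = q.get_message(visibility_timeout=30.0)  # hidden from other clients now
q.delete_message(m)                         # delete before the timeout expires
```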

Class: CloudQueueMessage
Method: SetMessageContent
Property: Id
Property: PopReceipt

Saturday, September 19, 2020

Azure - Azure Storage Table (AZ)

Azure tables are ideal for storing structured, non-relational data. Common uses of Table storage include:
  • Storing TBs of structured data capable of serving web scale applications
  • Storing datasets that don't require complex joins, foreign keys, or stored procedures and can be denormalized for fast access
  • Quickly querying data using a clustered index
  • Accessing data using the OData protocol and LINQ queries with WCF Data Service .NET Libraries

    Retrieve Entity
    TableOperation TO = TableOperation.Retrieve(PartitionKey, RowKey);
    TableResult TR = TableEmp.Execute(TO);
    EmpEntity emp = (EmpEntity)TR.Result;

    Update Entity
    // update properties of emp first
    TableOperation TO = TableOperation.Replace(emp);
    TableEmp.Execute(TO);

    Delete Entity
    TableOperation TO = TableOperation.Delete(emp);
    TableEmp.Execute(TO);


    Optimization Techniques

  1. Read First: read the entity first using Partition key + Row key (a point query)
  2. Multiple Keys: keep multiple keys; if data gets duplicated, no worries
  3. Compound Key: you can make the Row key a compound key
    Ex. if you store 2 values (Id and Email) as row keys, Id_<Id> and Email_<Email>, you can search with either the Id or the email.

    PartitionKey   RowKey    EmpName
    Employee       Id_1001   Megha
    Employee       Id_1002   Renuka
    Employee                 Tomar
    Employee                 Mukesh
  4. Avoid unnecessary tables: Try to keep all related entities in one table separated by Partition key. Makes transactions smooth (commit/rollback)
    Ex. Emp, EmpDetails
  5. Intra Partition Pattern: keeping multiple types of values in the row key, within the same partition.
    Keeping multiple value types divides the search load; e.g. people searching by email id will search with a key like "Email_%".

    The Compound Key example (point no. 3) is an intra-partition pattern example.
  6. Inter Partition Pattern: dividing the search across multiple partition keys.

    PartitionKey    RowKey   EmpName
    EmployeeId      1001     Megha
    EmployeeId      1002     Renuka
    EmployeeEmail            Tomar
    EmployeeEmail            Mukesh
     
  7. Delete Partition Pattern: enables bulk delete, where you delete data based on the partition key.
    Ex. you can delete any month's data in a single operation.

    PartitionKey      RowKey   EmpName
    EMPLOYEE-JAN20    1001     Megha
    EMPLOYEE-JAN20    1002     Renuka
    EMPLOYEE-JAN20    1003     Tomar
    EMPLOYEE-FEB20    1004     Mukesh
    EMPLOYEE-FEB20    1005     Kailash

  8. Large Entity Pattern: in case you are storing image/binary data, you can use a blob to store it
  9. Long Table Pattern: in case you have a large number of columns in your entity
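The compound-key idea from optimization no. 3 above can be sketched in code: store the same logical entity under two row keys so a point query works on either value. The id and email below are hypothetical.

```python
def compound_row_keys(emp_id: str, email: str):
    """Build the two row keys under which the same entity is stored,
    so a point query works on either Id or Email."""
    return [f"Id_{emp_id}", f"Email_{email}"]

# Hypothetical employee for illustration
keys = compound_row_keys("1001", "megha@example.com")

def search_prefix(term_type: str) -> str:
    """Prefix used when querying by one of the two key types."""
    return {"id": "Id_", "email": "Email_"}[term_type]
```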

Azure - Azure Storage (AZ)

Azure categorizes storage items into 4 categories:
  1. File
    Used for file storage: text files, Word files, PDF files, etc.
  2. Blob
    Used to store binary data, like an image file or library files.
  3. Table
    Used to store key-value pairs.
  4. Queue
    Used to store queue messages. Works in a FIFO manner.


Account Kinds of Storage
  1. Storage (General Purpose v1)
    A general-purpose account used for legacy deployments (built before 2014); can store files, blobs, tables, and queues.
  2. StorageV2 (General Purpose v2)
    Recommended, as it has the latest features and the option to choose an Access Tier.
    A general-purpose account for files, blobs, tables, and queues.
  3. Blob Storage
    Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transaction rates, scenarios that use smaller objects, or those requiring consistently low storage latency.


Replication or Data Redundancy
There are multiple options available for your requirements of Durability and High Availability
  1. LRS (Locally Redundant Storage)
    Stores 3 copies of your data synchronously in a single physical location in the primary region.
    Cheapest option.
    Not recommended for applications requiring high availability.
    Durability: 99.999999999% (11 9's)
  2. ZRS (Zone-Redundant Storage)
    Copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability, at economical rates.
    Durability: 99.9999999999% (12 9's)
  3. GRS (Geo-Redundant Storage)
    Keeps 3 local copies synchronously (using LRS) in the primary region and copies your data asynchronously to a secondary region in a different geo-location. You can think of it as geo + locally redundant storage.
    Durability: 99.99999999999999% (16 9's)
  4. GZRS (Geo-Zone-Redundant Storage)
    Copies data synchronously across 3 Azure availability zones in the primary region and asynchronously to a secondary region in a different geo-location.
    Durability: 99.99999999999999% (16 9's)
  5. RA-GRS (Read-Access Geo-Redundant Storage)
    Same as GRS, with read access to the secondary region's data.
    Secondary region data is available to read in case your primary region is unavailable.
  6. RA-GZRS (Read-Access Geo-Zone-Redundant Storage)
    Same as GZRS, with read access to the secondary region's data. You can read secondary region data in case the primary is unavailable.
    Durability: 99.99999999999999% (16 9's)
Performance
This section basically defines the disk type used to store the data:
  • Standard: data backed by magnetic HDD drives; offers cheap rates.
  • Premium: data backed by solid-state drives (SSD); provides a high IOPS rate with a 99.9% SLA.

Access Tier

  • Hot: for frequently accessed data.
  • Cool: for infrequently accessed data.
  • Archive: for rarely accessed data. Blob only.
    Can't be set at the storage-account level.
    Can be set at the blob level.

                              Premium           Hot tier          Cool tier         Archive tier
                              performance
    Availability              99.9%             99.9%             99%               Offline
    Availability
    (RA-GRS reads)            N/A               99.99%            99.9%             Offline
    Usage charges             higher storage,   higher storage,   lower storage,    lowest storage,
                              lower access &    lower access &    higher access &   highest access &
                              transaction       transaction       transaction       transaction
                              costs             costs             costs             costs
    Minimum object size       N/A               N/A               N/A               N/A
    Minimum storage duration  N/A               N/A               30 days           180 days
    Latency
    (time to first byte)      single-digit ms   milliseconds      milliseconds      hours


Saturday, September 12, 2020

Azure - Some Basic Concepts of Azure

Some Basic Terminologies 

SAAS: Software as a service
PAAS: Platform as a service
IAAS: Infrastructure as a service


2 O's of Cloud:
  1. On-Demand
  2. Out Sourced


Resource Group: a logical grouping of resources.
Location: the location you pick when creating a resource group is metadata for the resources; it does not constrain where the actual resources run.

Ex. if you are creating a website for HR management, you can create a resource group HR-Management and keep all the resources you create for this website in that group.


Deployment: the options available in VS:

  1. Deploy to FTP
  2. Deploy to a local directory
  3. Deploy to Azure: you can deploy the site directly to Azure.
    If you use an Azure publish profile, there is no need to provide credentials every time you deploy.


App Service Editor: an online VS Code-style tool that you can use to edit files in the cloud.


Monday, June 15, 2020

AZR - Availability Zone

(Same like Availability zone in AWS)

Availability Zones is a high-availability offering that protects your applications and data from datacenter failures. Availability Zones are unique physical locations within the Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking.
To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
The physical separation of Availability Zones within a region protects applications and data from datacenter failures.
Zone-redundant services replicate your applications and data across Availability Zones to protect from single-points-of-failure.
With the Availability Zones, Azure offers industry best 99.99% VM uptime SLA.

Availability zones are subscription-based, which means AZ1 in a specific region of one subscription might be a different physical zone than AZ1 of the same region in another subscription.




Availability Zone Support available by Oct 19

Regions with Availability Zone support:
  • America: Central US, East US, East US 2, West US 2
  • Europe: France Central, North Europe, UK South, West Europe
  • Asia Pacific: Japan East, Southeast Asia, Australia East

Services with Availability Zone support across those regions:
  • Compute: Linux Virtual Machines, Windows Virtual Machines, Virtual Machine Scale Sets, Azure App Service Environments ILB, Azure Kubernetes Service
  • Storage: Managed Disks, Zone-redundant Storage
  • Networking: Standard IP Address, Standard Load Balancer, VPN Gateway, ExpressRoute Gateway, Application Gateway (V2), Azure Firewall
  • Databases: Azure Data Explorer, SQL Database (Preview in some regions), Azure Cache for Redis, Azure Cosmos DB
  • Analytics: Event Hubs
  • Integration: Service Bus (Premium Tier only), Event Grid
  • Identity: Azure AD Domain Services


CI/CD - Safe DB Changes/Migrations

Safe DB Migrations means updating your database schema without breaking the running application and without downtime . In real systems (A...