Wednesday, August 10, 2022

Resolving Web Addresses

Some important concepts

  • DNS: Domain Name System, translates human-readable domain names (for example, www.Azure.com) into machine-readable IP addresses (for example, 20.43.132.131)

  • Some top level domain examples: 
    • .com
    • .gov
    • .gov.uk
    • .co.in
    • .com.au

  • Some domain registrars:
    • Amazon
    • GoDaddy
    • 123-reg.co.uk

  • Start of Authority (SOA): the SOA record stores essential administrative information about a domain that helps validate the domain on the Internet. It contains information such as:
    • Administrator
    • Server
    • time-to-live, etc.

  • NS Record: Name Server Record tells the Internet where to go to find out a domain's IP address. A domain often has multiple NS records which can indicate primary and secondary nameservers for that domain. Without properly configured NS records, users will be unable to load a website or application.

    Example (NS record for example.com):

      Name    Record type    Value                    TTL
      @       NS             ns1.exampleserver.com    21600

Note that an NS record must never point to a CNAME record.

  • A Record: the A record is the fundamental record type; it provides the IP address associated with a domain name. Example: www.Azure.com --> 20.43.132.131

  • TTL (Time-to-live): the resolving server or the local user machine caches the IP information for a domain name for the duration of the TTL. Any change of IP for a domain therefore takes up to the TTL to take effect across the internet.

  • Canonical Name Record (CNAME): a CNAME simply points one name to another.
    Example: example.com to www.example.com; both ultimately resolve to the same IP address.

  • Alias Record: it works like a CNAME in referencing another name; the difference is that a CNAME can't be used for a naked (apex) domain name, while an Alias record can.
    Example: used for CloudFront distributions, load balancers, and S3 buckets configured as websites.
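
To see these records in practice, here is a minimal lookup sketch in Python, assuming the third-party dnspython package is installed (example.com is a placeholder domain):

    import dns.resolver  # third-party package: pip install dnspython

    # A record: resolve the domain name to its IPv4 address
    for rr in dns.resolver.resolve("example.com", "A"):
        print("A  ", rr)

    # NS records: the authoritative name servers for the domain
    for rr in dns.resolver.resolve("example.com", "NS"):
        print("NS ", rr)

    # The answer's TTL is how long resolvers may cache these records
    answer = dns.resolver.resolve("example.com", "A")
    print("TTL", answer.rrset.ttl)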

Thursday, August 26, 2021

AWS - Dynamo DB Primary Keys, Indexes & Streams

Two types of Primary Keys 

Single Attribute: a single-attribute key, called the Partition Key or Hash Key (a unique ID).
It is passed to an internal hash function that returns the partition (the physical location) where the item is actually stored.

Composite Key: a multiple-attribute key, the combination of a Partition Key and a Sort Key. The partition key decides the physical location where the item is stored, and the sort key decides the order in which items are stored within that location.
In this scenario, multiple items can have the same partition key with different sort keys.

Two types of Indexes (a create-table sketch showing both follows the two lists below)

Local Secondary Index: 
  • Has the same partition key as the table but a different sort key
  • Can be created only when creating the table; can't be added after table creation
  • It can't be deleted
    Ex. user id + the threads that user posted on a forum
  • A local secondary index lets you query over a single partition, as specified by the partition key value in the query.
  • When you query a local secondary index, you can choose either eventual consistency or strong consistency.
Global Secondary Index: 
  • Can have a different partition key and a different sort key than the table
  • Can be created with the table or added later, after table creation
  • It can be deleted
  • A global secondary index lets you query over the entire table, across all partitions.
  • Queries on global secondary indexes support eventual consistency only.
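
A create-table sketch with boto3 showing a composite key plus one index of each type; the table and attribute names (ForumPosts, UserId, ThreadId, PostDate) are hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="ForumPosts",
        AttributeDefinitions=[
            {"AttributeName": "UserId",   "AttributeType": "S"},
            {"AttributeName": "ThreadId", "AttributeType": "S"},
            {"AttributeName": "PostDate", "AttributeType": "S"},
        ],
        # Composite primary key: partition key + sort key
        KeySchema=[
            {"AttributeName": "UserId",   "KeyType": "HASH"},
            {"AttributeName": "ThreadId", "KeyType": "RANGE"},
        ],
        # LSI: same partition key, different sort key; only possible at creation time
        LocalSecondaryIndexes=[{
            "IndexName": "ByPostDate",
            "KeySchema": [
                {"AttributeName": "UserId",   "KeyType": "HASH"},
                {"AttributeName": "PostDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }],
        # GSI: different partition key; could also be added after table creation
        GlobalSecondaryIndexes=[{
            "IndexName": "ByThread",
            "KeySchema": [
                {"AttributeName": "ThreadId", "KeyType": "HASH"},
                {"AttributeName": "PostDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
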
Streams
A stream captures modifications to table data (similar to CDC in SQL):
  • New item inserted: it captures an image of the whole item, including all of its attributes
  • Item updated: it captures the before and after images of the modified attributes of the item.
  • Item deleted: it captures an image of the whole item before deletion.
A stream holds the change records for 24 hours; after that they are deleted from the stream.

Streams are used with triggers for events
A Lambda function can be created to be triggered by these events: whenever an insert/update/delete happens, the Lambda function is invoked (a handler sketch follows below). Typical uses:
  • Saving data to a replica table in another region (DR)
  • Triggering an email on insert/update/delete
    Ex. sending a welcome mail to a newly registered user
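
A minimal handler sketch for such a trigger, assuming the stream is configured with a view type that includes old and new images; send_welcome_mail is a hypothetical helper:

    def lambda_handler(event, context):
        # Each record describes one modification captured by the stream
        for record in event["Records"]:
            if record["eventName"] == "INSERT":
                new_image = record["dynamodb"]["NewImage"]    # whole new item
                send_welcome_mail(new_image)                  # hypothetical helper
            elif record["eventName"] == "MODIFY":
                before = record["dynamodb"].get("OldImage")   # attributes before update
                after = record["dynamodb"].get("NewImage")    # attributes after update
            elif record["eventName"] == "REMOVE":
                deleted = record["dynamodb"].get("OldImage")  # item image before delete
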
ElastiCache: ElastiCache can be used in conjunction with DynamoDB to achieve high performance.
ElastiCache provides Redis and Memcached services.
Query results cached in ElastiCache are retrieved faster by the application.

Amazon DynamoDB Accelerator (DAX): It is a fully-managed, highly-available, in-memory caching service for DynamoDB.

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads  from single-digit milliseconds to microseconds.

2. DAX is a service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.

3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Capacity Planning: options available for capacity planning of your DynamoDB table.
If you choose the 'On-Demand' read/write capacity mode, you cannot set read/write throughput or auto scaling,
because AWS manages all of that for you, with some additional charges.


Auto Scaling - Below are the default values if you opt for auto scaling for your DynamoDB table:
scaling starts at 70% target utilization and scales between 5 and 40,000 read/write units (a configuration sketch follows below).
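
Those defaults can also be set programmatically via Application Auto Scaling; a sketch with boto3, using a hypothetical table name:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target (5 - 40,000 units)
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/ForumPosts",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=40000,
    )

    # Target-tracking policy: start scaling at 70% read-capacity utilization
    autoscaling.put_scaling_policy(
        PolicyName="ForumPostsReadScaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/ForumPosts",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )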


    

Wednesday, April 7, 2021

AWS - Snowball

Snowball is Amazon's data import/export solution for moving large amounts of data into and out of the Amazon cloud.


Amazon provides Snowball appliances onto which you load (or from which you retrieve) data and ship to Amazon, which loads/saves the data in the cloud. Once the data transfer completes and is verified, Amazon performs a software erasure of the appliance to clean the data from the device.

Benefits:
It comes in 80 TB and 50 TB capacities, with multiple layers of security.
  • Simple 
  • Fast
  • Secured 
  • Encrypted with AES-256
  • Cost saving (about one-fifth the cost of high-speed internet transfer)
  • Tamper resistant 
  • Industry standard Trusted Platform Module (TPM): End to end full chain of custody for your data


Types of Snowball
  1. Snowball: 80 and 50 TB storage
  2. Snowball Edge: like the normal Snowball, with 100 TB storage plus computation capability
    • Computation capabilities
    • Can run Lambda functions
    • It's a kind of on-premises cloud
    • It can create a tier between the local system and the cloud; with poor connectivity or offline remote locations (offices), the system/application can still work
  3. Snowmobile: a 45-foot-long ruggedized (shock-proof) shipping container with 100 PB data transfer capacity.

It can be used to move massive volumes of data to Amazon, such as video libraries, image databases, or even a complete data center migration.

Key Points
  1. Snowball can import data from S3 and export data to S3. If your data is stored in Glacier, you need to restore the data from Glacier to S3 first, then move it from S3 to Snowball.

Tuesday, April 6, 2021

Weak & Strong Reference

Weak Reference: if we reference other DLLs/assemblies in our program and those assemblies are not signed, the references to those assemblies are called weak references.
Why does this matter? An assembly with the same name, namespaces, and class names can be swapped in for the original one, and our program can't identify the fake assembly.
If we sign the assembly, our program also checks the signing key of the referenced assembly.

Strong Reference: referencing a signed (strong-named) assembly in our program is a strong reference.

Monday, March 8, 2021

AWS - General Design Principles (Serverless)

Take the general design principles below into consideration while designing a serverless application:

  1. Simple & singular: keep functions single-purpose and simple
  2. Design for concurrent requests, not total requests: leverage concurrency
  3. Orchestrate the application with a state machine: use Step Functions for big processes rather than chaining the underlying functions
  4. Do not share anything: as a function's lifespan is short, you can't rely on storing and sharing state in the function's runtime memory
  5. Do not write hardware-dependent code: the underlying hardware is short-lived and gets changed
  6. Design for failure & care for duplicates: make sufficient retries on failure, and ensure retries do not generate duplicates

Saturday, March 6, 2021

AWS - Serverless Layers

 There are 6 layers of serverless application architecture

  • Compute Layer
    • Lambda
    • API Gateway
    • Step functions

  • Data Layer
    • Database
    • S3
    • AppSync
      An API that provides data combined from multiple sources
    • Elasticsearch
      Fast search across multiple data sources

  • Messaging & Streaming Layer
    • SNS
    • Kinesis Streams (real-time data loading and processing)
    • Kinesis Data Firehose
      Captures, transforms, and loads streaming data into Kinesis Data Analytics, Amazon S3, Amazon Redshift, and Amazon ES

  • User Management & Identity
    • Cognito
      You can easily add user sign-up, sign-in, and data synchronization to serverless applications. Amazon Cognito user pools provide built-in sign-in screens and federation with Facebook, Google, Amazon, and Security Assertion Markup Language (SAML)

  • Edge Layer
    • CloudFront

  • System Monitoring & Deployment
    • CloudWatch
    • X-Ray
      Analyze and debug serverless applications by providing distributed tracing and service maps to easily identify performance bottlenecks by visualizing a request end-to-end.
    • Serverless Application Model (AWS SAM)
      An extension of AWS CloudFormation that is used to package, test, and deploy serverless applications. The AWS SAM CLI can also enable faster debugging cycles when developing Lambda functions locally

Tuesday, December 1, 2020

AZ - Cosmos DB

Today’s applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in data centers that are close to their users. Applications need to respond in real-time to large changes and make this data available to users in milliseconds.

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. With a click of a button, Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure regions worldwide. You can elastically scale throughput and storage, and take advantage of fast, single-digit-millisecond data access using your favorite API.
Cosmos DB provides comprehensive service level agreements (SLAs) for throughput, latency, availability, and consistency guarantees, something no other database service offers.

Databases Supported in CosmosDB
  1. SQL
  2. MongoDB
  3. Cassandra
  4. Azure Tables
  5. Gremlin (Graph) 



Key Benefits 

  1. Global Distribution of Data: Cosmos DB enables you to build highly responsive and highly available applications worldwide. Cosmos DB transparently replicates your data wherever your users are, so your users can interact with a replica of the data that is closest to them.
    Cosmos DB allows you to add or remove any of the Azure regions to your Cosmos account at any time, with a click of a button. Cosmos DB will seamlessly replicate your data to all the regions associated with your Cosmos account while your application continues to be highly available.
  2. Highly Available: 99.999%
  3. Scalable Throughput and Storage
  4. Low Latency
    It guarantees less than 10 ms response time throughout the world
  5. Five consistency choices (see the client sketch after this list)

    • Strong: reads are guaranteed to see the most recent write.
      Clients keep getting the old value from all read regions until a value newly committed to the write region has synced to all read regions.
      Once all regions confirm that the value has synced, clients start to receive the new value.
      Ensures the order of the data the client receives.
    • Bounded staleness: most frequently chosen by globally distributed applications that expect low write latencies but need total global order guarantees.
      All read regions sync at a specified time lag; until then all regions serve the old value, and after that specified point in time all regions serve the new value.
      Ensures the order of the data the client receives.
    • Session: session consistency is the most widely used consistency level, for single-region as well as globally distributed applications.
      With a DB distributed across regions, a client reads whatever value is present in the DB of the region where it started its session.
      When a new value is committed in the write region, clients in other read regions get that value once it has synced to their region (where each client made its session).
      However, the client that committed the new value starts getting the new value from the moment it committed it.
      Ensures the order of the data the client receives.
    • Consistent Prefix: guarantees that reads never see out-of-order writes.
      High performance, like eventual.
      Ensures the order of the data the client receives.
    • Eventual: the weakest form of consistency, wherein over time a client may get values that are older than the ones it had seen before.
      Does not ensure the order of the data the client receives.
  6. Schema & Index management
    Keeping database schema and indexes in-sync with an application’s schema is especially painful for globally distributed apps. With Cosmos DB, you do not need to deal with schema or index management.
  7. Battle Tested: Microsoft's mission-critical applications use it
  8. Global Presence: 54+ regions globally
  9. Secured: Data is encrypted at rest and in motion
  10. Fully Managed: you don't need to worry about managing deployments across multiple data centers; Azure takes care of it under the licensing you opt for.
  11. Spark: you can run Spark directly on data stored in Cosmos DB. This capability allows you to do low-latency, operational analytics at global scale without impacting transactional workloads operating directly against Cosmos DB.
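
As referenced in point 5, a minimal sketch of picking a consistency level per client with the azure-cosmos Python package; the endpoint, key, database, and container names are hypothetical:

    from azure.cosmos import CosmosClient

    # Hypothetical endpoint and key; the account's default consistency is set in the
    # portal, and a client can only relax (never strengthen) it per connection.
    client = CosmosClient(
        url="https://myaccount.documents.azure.com:443/",
        credential="<account-key>",
        consistency_level="Session",  # Strong | BoundedStaleness | Session | ConsistentPrefix | Eventual
    )

    database = client.get_database_client("mydb")
    container = database.get_container_client("mycontainer")
    item = container.read_item(item="item-id", partition_key="pk-value")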



AWS - CloudFormation

CloudFormation is a service used to build your infrastructure (AWS resources) in an automated way with a script file, across regions as well as for multiple accounts.


A Stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack's AWS CloudFormation template.

You need to create a template that describes all the AWS resources that you want (like EC2 instances or Amazon RDS DB instances, S3 buckets), and AWS CloudFormation takes care of provisioning and configuring those resources for you.
You don't need to individually create and configure AWS resources and figure out what's dependent on what, AWS CloudFormation handles all of that.



You can write CloudFormation templates in JSON or YAML.


Points to Remember
  1. By default, the automatic rollback feature is enabled.
  2. You are charged for the resources created while launching CloudFormation even if errors occur and the stack rolls back.
  3. CloudFormation itself is free; you pay for the resources you use, like EC2 instances and S3 buckets.
  4. Stacks can wait for applications to be provisioned using a "WaitCondition".
  5. Fn::GetAtt can be used to get output data like an instance IP, an ELB address, an S3 bucket name, etc. (see the sketch below).
  6. Route 53 is supported for new as well as existing hosted zones.
  7. Aliases and 'A' records can be created (DNS settings).
  8. IAM resource creation and assignment are supported.
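
A sketch of points 4 and 5, launching a stack with boto3 whose template uses a WaitCondition and Fn::GetAtt; the stack name and AMI ID are hypothetical placeholders:

    import boto3

    # Minimal template illustrating a WaitCondition and Fn::GetAtt (hypothetical AMI ID)
    TEMPLATE = """
    AWSTemplateFormatVersion: "2010-09-09"
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-00000000000000000
          InstanceType: t2.micro
      WebServerWaitHandle:
        Type: AWS::CloudFormation::WaitConditionHandle
      WebServerWaitCondition:
        Type: AWS::CloudFormation::WaitCondition
        DependsOn: WebServer
        Properties:
          Handle: !Ref WebServerWaitHandle
          Timeout: "300"
    Outputs:
      WebServerPublicIp:
        Value: !GetAtt WebServer.PublicIp
    """

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(StackName="demo-stack", TemplateBody=TEMPLATE)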

Benefits
  • Modelling Infrastructure 
AWS CloudFormation allows you to model your entire infrastructure in a text file. It helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting.

  • Quickly Replicate the infrastructure
AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.

When you use AWS CloudFormation, you can reuse your template to set up your resources consistently and repeatedly. Just describe your resources once and then provision the same resources over and over in multiple regions.

  • Easily Control and Track Changes to Your Infrastructure
Whenever you need to update AWS resources, like changing an EC2 instance size or the maximum number of instances,
if you do it through CloudFormation and any error occurs while executing the new script, it rolls back and restores the previous one.

You can source-control the CloudFormation text files, so you have a record of changes to the file and can restore or check out any previous version as well.

Thursday, November 26, 2020

AZR - Function App

 Azure Functions provides "compute on-demand" - and in two significant ways.

First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These code blocks are called "functions". Different functions can run anytime you need to respond to critical events.

Second, as requests increase, Azure Functions meets the demand with as many resources and function instances as necessary - but only while needed. As requests fall, any extra resources and application instances drop off automatically.

Azure Functions has 3 pricing plans

  • Consumption plan: Azure provides all of the necessary computational resources. You don't have to worry about resource management, and only pay for the time that your code runs.
    -Pay only when your function(s) are running
    -Scale out/in as and when required 

  • Premium plan: You specify a number of pre-warmed instances that are always online and ready to immediately respond. When your function runs, Azure provides any additional computational resources that are needed. You pay for the pre-warmed instances running continuously and any additional instances you use as Azure scales your app in and out.

  • App Service plan: Run your functions just like your web apps. If you use App Service for your other applications, your functions can run on the same plan at no additional cost.

(Figure: Session Affinity)

Wednesday, November 25, 2020

AWS - S3

S3 (Simple Storage Service): S3 is a simple service interface on the internet that can be used to store and retrieve any amount of data, from anywhere, at any time on the web.
S3 is designed to make web scaling easier.
S3 provides highly scalable, reliable, fast, and inexpensive data storage infrastructure on the web; Amazon itself uses it to run its own global network of websites.

Terminology

Versioning
  • Once versioning is enabled on a bucket it can't be disabled, but it can be suspended
  • S3 stores all versions even if you delete an object, so the space consumed by a file with versioning on is the sum of all versions of that file. Don't turn on versioning for big files that change frequently unless a lifecycle rule is configured
  • Integrates with lifecycle rules
  • Versioning's MFA (Multi-Factor Authentication) capability adds an additional security layer: it asks for a token before deleting.
  • Versioning must be enabled on both source and destination buckets to enable Cross-Region Replication
  • You cannot configure replication between buckets in the same region
  • Existing files in the bucket are not replicated when you configure Cross-Region Replication; only newly uploaded or updated files are replicated
  • Deleting a specific version or a 'delete marker' in the source bucket is not replicated to the destination bucket
  • Multiple and multi-level cross-region replication is not supported
    • If you configured cross-region replication from bucket 1 to bucket 2 and from bucket 2 to bucket 3, then a file added or updated in bucket 1 is replicated to bucket 2 but not to bucket 3. Only files added or updated in bucket 2 directly are replicated to bucket 3
    • You cannot configure cross-region replication from one bucket to multiple buckets, like bucket A to bucket B and bucket A to bucket C

AWS highly recommends that you choose Create new role to have Amazon S3 create a new IAM role for you. When you save the rule, a new policy is generated for the IAM role that matches the source and destination buckets you chose. The generated role's name is based on the bucket names and uses the following naming convention: replication_role_for_source-bucket_to_destination-bucket.

Life Cycle Management
  • Life Cycle Management can be used in conjunction with versioning (works with or without versioning enabled)
  • The whole bucket or specific folder(s) can be transitioned
  • An object can be transitioned to S3 IA (S3 Infrequent Access storage) once it is at least 128 KB in size and at least 30 days past its creation date
  • An object can be archived to Glacier after 30 days in S3 IA (or 60 days after creation)
  • Objects can be deleted permanently from Glacier after 90 days (a transition to Glacier is billed for a minimum of 90 days); a configuration sketch follows the link below
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
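
A sketch of the transitions above with boto3; the bucket name and rule ID are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # whole bucket; use a prefix for specific folders
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # S3 IA after 30 days
                    {"Days": 60, "StorageClass": "GLACIER"},      # Glacier after 60 days
                ],
                "Expiration": {"Days": 150},  # permanent delete (90+ days after Glacier)
            }]
        },
    )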

Bucket Policy: below is a sample bucket policy that allows the GetObject action on the bucket's objects for a CloudFront origin.
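
The original sample image is missing here; a sketch of such a policy applied with boto3, where the bucket name and CloudFront origin access identity ID are hypothetical placeholders:

    import json

    import boto3

    s3 = boto3.client("s3")

    # Allow CloudFront (via a hypothetical origin access identity) to read objects
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }],
    }

    s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))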


Deleting Multiple Objects from an S3 Bucket: the Multi-Object Delete operation enables us to delete multiple objects from a bucket using a single HTTP request. If we know the object keys we want to delete, this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.


POST /?delete HTTP/1.1
Host: bucketname.s3.amazonaws.com
Authorization: authorization string
Content-Length: Size
Content-MD5: MD5

<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Quiet>true</Quiet>
  <Object>
    <Key>Key</Key>
    <VersionId>VersionId</VersionId>
  </Object>
  <Object>
    <Key>Key</Key>
  </Object>
  ...
</Delete>
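
The same operation through the SDK, sketched with boto3 and hypothetical bucket/key names:

    import boto3

    s3 = boto3.client("s3")

    # One request deletes up to 1,000 keys
    response = s3.delete_objects(
        Bucket="my-example-bucket",
        Delete={
            "Objects": [
                {"Key": "photos/a.jpg"},
                {"Key": "photos/b.jpg", "VersionId": "some-version-id"},
            ],
            "Quiet": True,  # only report errors in the response
        },
    )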

Access Control List (ACL): in an ACL you can configure access for your account, other accounts, public access, and S3 log delivery groups, with Read/Write objects and Read/Write bucket permissions.


S3 CORS Configuration: below is a sample cross-origin resource sharing configuration for a bucket.
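
The original sample is missing here; a minimal sketch applying a CORS rule with boto3 (bucket name and origin are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_cors(
        Bucket="my-example-bucket",
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["https://www.example.com"],  # who may call the bucket
                "AllowedMethods": ["GET"],                      # which HTTP verbs
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,                          # preflight cache time
            }]
        },
    )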



Transfer Acceleration: you can enable Transfer Acceleration on an S3 bucket, but it incurs additional charges.



Labs

Lab 1: S3 CORS configuration - access an image from another S3 bucket using the website URL.
Lab 2: S3 Versioning - store multiple versions; delete a version; delete an object and restore a version.
Lab 3: S3 Cross-Region Replication - create multiple buckets and configure Cross-Region Replication with multiple scenarios.
Lab 4: S3 Life Cycle Management - configure with the old and new console.

AWS - S3 Transfer Acceleration

Transfer Acceleration uses the CloudFront network to accelerate your uploads to S3.
Instead of uploading data directly to the S3 bucket, it sends the data to the nearest Edge Location, and the Edge Location then sends the data on to S3.

Transfer Acceleration can be leveraged to read and write objects in an S3 bucket by using the provided URL after enabling the feature on the bucket.



How to use this feature?
You need to enable this feature on the S3 bucket; Amazon then provides you a distinct URL for the bucket (an enabling sketch follows below).
If you upload using this URL, it uses Transfer Acceleration.
URL syntax: bucketName.s3-accelerate.amazonaws.com
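
Enabling the feature with boto3, sketched with a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")

    # Turn on Transfer Acceleration for the bucket
    s3.put_bucket_accelerate_configuration(
        Bucket="my-example-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Uploads via my-example-bucket.s3-accelerate.amazonaws.com now go through edge locations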



AWS - S3 Security and Encryption

S3 Security
A newly created bucket is private by default, so the bucket's objects can't be accessed from outside until you change the security settings.

S3 bucket security can be configured with the following 2 options:
  • Bucket Policies: bucket policies apply to the whole bucket
  • Access Control List: after creating an access control list, you can apply it at the object level in the bucket.
    You can grant the access below to your account, other accounts, and Everyone as well:
    Read, Write, Read Permissions, Write Permissions


An S3 bucket can be configured to log all requests made to the bucket; the logs can be used for audits as and when required.

S3 Encryption

There are 2 types of encryption
  • In transit: when you access your bucket and send information to/from it using SSL/TLS (https)
  • At rest
    • Server-side encryption (see the sketch after this list)

      • S3 Managed Keys (SSE-S3): objects are encrypted with a unique key, and Amazon encrypts that key itself with a master key that it regularly rotates. The unique key is an AES-256 encryption key that Amazon handles on its own.
      • AWS Key Management Service (SSE-KMS): similar to S3 managed keys, but with some additional benefits, additional charges, and additional permissions needed to use the keys.
        The additional benefit of these keys is an audit trail of who used your keys and when.
        You can create your own customized keys for your region or for S3.
      • Server-side encryption with customer-provided keys (SSE-C):
        • We manage the keys
        • Amazon manages encryption and decryption
    • Client-side encryption: you encrypt the data on the client side before uploading it to S3
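
A sketch of requesting server-side encryption per object with boto3; the bucket, keys, and KMS key ID are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # SSE-S3: Amazon manages the AES-256 key
    s3.put_object(
        Bucket="my-example-bucket",
        Key="docs/report.pdf",
        Body=b"...",
        ServerSideEncryption="AES256",
    )

    # SSE-KMS: encrypt with a KMS key (audit trail, extra charges/permissions)
    s3.put_object(
        Bucket="my-example-bucket",
        Key="docs/secret.pdf",
        Body=b"...",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="hypothetical-kms-key-id",
    )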

Wednesday, November 18, 2020

AZR - Virtual Machine Scale Set

(Azure Scale Set is like Autoscaling group in AWS)

Azure virtual machine scale sets enable you to create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
Scale sets provide high availability to your applications and allow you to centrally manage, configure, and update a large number of VMs.
With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and container workloads.


Why Scale Set
  • Easy to create & maintain multiple VMs: easy scaling to hundreds of VMs because all are created from the same OS image.
  • Networking: uses 'Load Balancers' for basic layer-4 traffic distribution and 'Azure Application Gateway' for advanced layer-7 traffic distribution.
    • Load Balancers: used for IP/port-based mapping to VMs
    • Azure Application Gateway: used for URL-based mapping to VMs




  • Provides high availability and application resiliency: with the help of multiple VMs using Availability Sets or Availability Zones.
  • Supports Spot instances: you can set a Spot price, the maximum price per hour (in US$) you want to pay for an instance. Azure allocates an instance if your set price is greater than the platform price at that time.

  • Auto Scaling: provides auto scaling


  • Large-scale handling:
    • Scale sets support up to 1,000 VM instances. If you create and upload your own custom VM images, the limit is 600 VM instances.
    • For the best performance with production workloads, use Azure Managed Disks.
  • Supports additional disks for VMs


Features
  1. Control it like IaaS, scale it like PaaS: Deploy Virtual Machine Scale Sets using Azure Resource Manager (ARM) templates with support for Windows and Linux platform images, as well as custom images and extensions.
  2. Run Cassandra, Cloudera, Hadoop, MongoDB, and Mesos
  3. Quickly scale your big compute and big data applications
  4. Attach additional data disks as per your application requirement

Tuesday, October 20, 2020

AWS - Security Token Service (STS)

Sources from where users can come to access AWS services

  1. Federations
  2. Federations with Mobile
  3. Cross Account Access 

Federation: a grouping of users from multiple domains like IAM, Facebook, Google, etc.
Identity Broker: a service used to connect a user to a federation.
It allows taking a user from point X and joining them to point Y.
Identity Store: services that have their own user databases, like Facebook and Google.
Identity: a user


Steps to Remember:
  1. Create an Identity Broker that will connect to the organisation's LDAP directory and to AWS STS.
  2. The Identity Broker first connects to the organisation's LDAP to verify the user, then it connects to the AWS Security Token Service (STS).
  3. Call to STS (see the sketch after this list)
    Scenario 1:
    The Identity Broker calls the GetFederationToken API with IAM credentials, a duration (1-36 hrs: the validity of the new token), and an IAM policy that specifies which permissions are to be assigned.
    Scenario 2:
    If the Identity Broker gets an IAM role associated with the user from LDAP, then the Identity Broker calls STS and the returned token contains permissions based on that role's permissions.
  4. STS returns a temporary token with an Access Key, a Secret Access Key, a Token, and its Duration (the lifetime of the token).
  5. The application then uses this token to call the S3 bucket.
  6. S3 confirms the permissions with IAM and allows the application to access the bucket.
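
A sketch of Scenario 1 with boto3; the user name, bucket, and policy are hypothetical, and the temporary credentials come back in the Credentials field:

    import json

    import boto3

    sts = boto3.client("sts")

    # Hypothetical scoped-down policy for the federated user
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }],
    }

    response = sts.get_federation_token(
        Name="federated-user",       # shows up in CloudTrail
        Policy=json.dumps(policy),
        DurationSeconds=3600,        # token validity
    )

    creds = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration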

Tuesday, September 29, 2020

Azure - Queues

Azure Queue Storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled, so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.

It is a FIFO approach

Storage Account > Queue > Messages

Important Classes & Methods

Class: CloudStorageAccount
Method: CreateCloudQueueClient

Class: CloudQueueClient

Class:CloudQueue
Method: CreateIfNotExists
Method: PeekMessage
Method: UpdateMessage
Method: DeleteMessage

---Dequeue--
Method:GetMessage
Method: GetMessages - reads all visible messages, or the number of messages you pass as a parameter, from the queue.

Messages fetched by a client are hidden from other clients for a visibility time of 30 seconds by default. While fetching a message with GetMessage or updating it with UpdateMessage, you can change the default visibility time and set it as you wish by passing a TimeSpan object as a parameter.

A message must be processed by one client only 




Dequeue a message
GetMessage   >  Process Message > Delete Message

GetMessage: fetches a message and blocks it from other clients, meaning it will not be visible to other clients for the visibility time.
You need to process and delete the message before the visibility time finishes, because after that the message becomes visible to other clients again and another client may block it (see the sketch at the end of this section).

PeekMessage: returns the first available message of the queue and does not block it the way GetMessage does.
That means other clients may read the same message in parallel.

Class: CloudQueueMessage
Method: SetMessageContent
Property: Id
Property: PopReceipt
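
The classes above belong to the .NET storage SDK; purely for illustration, the same get-process-delete pattern sketched with the Python azure-storage-queue package (connection string, queue name, and process() are hypothetical):

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>",  # hypothetical
        queue_name="orders",
    )

    # Fetch messages; each stays invisible to other clients for the visibility timeout
    for msg in queue.receive_messages(visibility_timeout=30):
        process(msg.content)       # hypothetical processing step
        queue.delete_message(msg)  # delete before the timeout, or another client may take it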

Saturday, September 19, 2020

Azure - Azure Storage Table (AZ)

Azure tables are ideal for storing structured, non-relational data. Common uses of Table storage include:
  • Storing TBs of structured data capable of serving web scale applications
  • Storing datasets that don't require complex joins, foreign keys, or stored procedures and can be denormalized for fast access
  • Quickly querying data using a clustered index
  • Accessing data using the OData protocol and LINQ queries with WCF Data Service .NET Libraries

    Retrieve Entity
    TableOperation TO = TableOperation.Retrieve<EmpEntity>(PartitionKey, RowKey);
    TableResult TR = TableEmp.Execute(TO);
    EmpEntity emp = (EmpEntity)TR.Result;  // cast the result to the entity type

    Update Entity
    // update the properties of emp, then:
    TableOperation TO = TableOperation.Replace(emp);
    TableEmp.Execute(TO);

    Delete Entity
    TableOperation TO = TableOperation.Delete(emp);
    TableEmp.Execute(TO);


    Optimization Techniques

  1. Read First: read the entity first using Partition key + Row key
  2. Multiple Keys: keep multiple keys; if data gets duplicated, no worries
  3. Compound Key: you can make the Row key a compound key
    Ex. if you store 2 values (Id and Email) in the Row key, you can search with either the Id or the Email; this is a compound key: Id_<Id> and Email_<Email>

    PartitionKey    RowKey           EmpName
    Employee        Id_1001          Megha
    Employee        Id_1002          Renuka
    Employee        Email_<Email>    Tomar
    Employee        Email_<Email>    Mukesh
  4. Avoid unnecessary tables: try to keep all related entities in one table, separated by Partition key. This makes transactions smooth (commit/rollback)
    Ex. Emp, EmpDetails
  5. Inter Partition Pattern: keeping multiple types of values in the Row key
    Keeping multiple values divides the search load; for example, people searching by email id will search with a key like "Email_%"

    The compound key example (point no. 3) is an Inter Partition Pattern example.
  6. Intra Partition Pattern: dividing the search by using multiple Partition keys is the Intra Partition Pattern.

    PartitionKey     RowKey     EmpName
    EmployeeId       1001       Megha
    EmployeeId       1002       Renuka
    EmployeeEmail    <Email>    Tomar
    EmployeeEmail    <Email>    Mukesh
     
  7. Delete Partition Pattern: this enables bulk deletes, when you delete data based on the partition key.
    Ex. you can delete any month's data in a single operation.

    PARTITIONKEY      ROWKEY    EMPNAME
    EMPLOYEE-JAN20    1001      Megha
    EMPLOYEE-JAN20    1002      Renuka
    EMPLOYEE-JAN20    1003      Tomar
    EMPLOYEE-FEB20    1004      Mukesh
    EMPLOYEE-FEB20    1005      Kailash

  8. Large Entity Pattern: if you are storing image/binary data, use a blob to store it
  9. Long Table Pattern: for the case where your entity has a large number of columns

Azure - Azure Storage (AZ)

Azure categorizes storage items into 4 categories
  1. File
    Used for file storage like text files, Word files, PDF files, etc.
  2. Blob
    Used to store binary data like image files or library files, etc.
  3. Table
    Used to store key-value pairs
  4. Queue
    Used to store queue messages. It works in a FIFO manner.


Account Kind of Storage
  1. Storage (General Purpose v1)
    General-purpose, used for legacy deployments (built before 2014); can store files, blobs, tables, and queues.
  2. StorageV2 (General Purpose v2)
    Recommended, as it has the latest features plus the option to choose an Access Tier.
    General-purpose; stores files, blobs, tables, and queues.
  3. Blob Storage
    Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency.


Replication or Data Redundancy
There are multiple options available depending on your durability and high-availability requirements:
  1. LRS (Locally Redundant Storage)
    Stores 3 copies of your data synchronously in a single physical location in the primary region.
    Cheapest option.
    Not recommended for applications requiring high availability.
    Durability: 99.999999999% (11 9's)
  2. ZRS (Zone-Redundant Storage)
    Copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability at economic rates.
    Durability: 99.9999999999% (12 9's)
  3. GRS (Geo-Redundant Storage)
    Keeps 3 local copies synchronously (using LRS) in the primary region and copies the data asynchronously to a secondary region in a different geo-location. You can think of it as geo + locally redundant storage.
    Durability: 99.99999999999999% (16 9's)
  4. GZRS (Geo-Zone-Redundant Storage)
    Copies data synchronously across 3 Azure availability zones in the primary region and asynchronously to a secondary region in a different geo-location.
    Durability: 99.99999999999999% (16 9's)
  5. RA-GRS (Read-Access Geo-Redundant Storage) Not supported currently
    Same as GRS, with read access to the secondary region's data.
    The secondary region's data is available to read in case your primary region is unavailable.
  6. RA-GZRS (Read-Access Geo-Zone-Redundant Storage) Not supported currently
    Same as GZRS, with read access to the secondary region's data; you can read the secondary region's data if the primary is unavailable.
    Durability: 99.99999999999999% (16 9's)
Performance
This section basically defines the disk type used to store the data:
  • Standard: data is backed by magnetic HDD drives; offers cheap rates.
  • Premium: data is backed by solid-state drives (SSD); provides a high IOPS rate with a 99.9% SLA.

Access Tier

  • Hot: Can be used to store frequently accessed data. 
  • Cool: Can be used if data access is infrequent. 
  • Archive: can be used to store data that is accessed rarely. Only for blobs.
    Can't be set at the storage-account level
    Can be set at the blob level

Tier comparison (Premium Performance / Hot tier / Cool tier / Archive tier):
  • Availability: 99.90% / 99.90% / 99% / Offline
  • Availability (RA-GRS reads): N/A / 99.99% / 99.90% / Offline
  • Usage charges: higher storage costs, lower access and transaction costs (Premium); higher storage costs, lower access and transaction costs (Hot); lower storage costs, higher access and transaction costs (Cool); lowest storage costs, highest access and transaction costs (Archive)
  • Minimum object size: N/A for all tiers
  • Minimum storage duration: N/A (Premium) / N/A (Hot) / 30 days (Cool) / 180 days (Archive)
  • Latency, time to first byte: single-digit milliseconds / milliseconds / milliseconds / hours


CI/CD - Safe DB Changes/Migrations

Safe DB Migrations means updating your database schema without breaking the running application and without downtime . In real systems (A...