Two types of Primary Keys
Single Attribute: A key made of a single attribute, called the Partition Key or Hash Key (a unique ID).
It is passed to an internal hash function that returns the partition (the physical location where the item is actually stored).
Composite Key: A key made of multiple attributes, a combination of a Partition Key and a Sort Key. The partition key decides the physical location where the item is stored, and the sort key decides the order of items within that location.
In this scenario, multiple items can have the same partition key as long as their sort keys differ.
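The placement logic above can be sketched in plain Python. This is only an illustration of the idea, not DynamoDB's actual internals: the partition count, the use of MD5, and the attribute names (`user_id` as partition key, `ts` as sort key) are all assumptions.

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative; DynamoDB manages partition counts internally

def partition_for(partition_key: str) -> int:
    """Mimic the internal hash function: hash the partition key to
    choose the physical partition that stores the item."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def store(partitions: dict, item: dict) -> int:
    """Place an item by its partition key, keeping each partition's
    items ordered by the sort key (here: 'ts')."""
    p = partition_for(item["user_id"])
    partitions.setdefault(p, []).append(item)
    partitions[p].sort(key=lambda i: i["ts"])
    return p

partitions = {}
# Same partition key, different sort keys -> same partition, kept in sort-key order:
store(partitions, {"user_id": "u1", "ts": 200})
store(partitions, {"user_id": "u1", "ts": 100})
print([i["ts"] for i in partitions[partition_for("u1")]])  # -> [100, 200]
```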
Two types of Indexes
Local Secondary Index:
- It has the same partition key as the table but a different sort key
- Can be created only when creating the table; it can't be added after table creation
- It can't be deleted independently of the table
Ex. a user ID + the threads that user posted on a forum. A local secondary index lets you query within a single partition, as specified by the partition key value in the query.
- When you query a local secondary index, you can choose either eventual consistency or strong consistency.
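A local secondary index query can be sketched as low-level `Query` parameters. The table name, index name, and attribute names are illustrative; the point is that `ConsistentRead=True` is valid here because LSIs support strongly consistent reads.

```python
def lsi_query_params(user_id: str) -> dict:
    """Query parameters for a local secondary index (low-level API shape).
    Names are assumptions; ConsistentRead may be True on an LSI."""
    return {
        "TableName": "ForumThreads",
        "IndexName": "UserThreadDateIndex",          # same partition key, different sort key
        "KeyConditionExpression": "user_id = :uid",  # query stays within one partition
        "ExpressionAttributeValues": {":uid": {"S": user_id}},
        "ConsistentRead": True,                      # eventual OR strong: your choice
    }

# With boto3 (requires AWS credentials; sketch only):
#   boto3.client("dynamodb").query(**lsi_query_params("u1"))
```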
Global Secondary Index:
- It can have a different partition key and a different sort key from the table
- It can be created with the table or added later, after table creation
- It can be deleted
- A global secondary index lets you query over the entire table, across all partitions.
- Queries on global secondary indexes support eventual consistency only.
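For contrast, a global secondary index query looks like this (again with illustrative names). Note the absence of `ConsistentRead`: GSI queries are eventually consistent only, and the index keys need not match the table's keys.

```python
def gsi_query_params(status: str) -> dict:
    """Query parameters for a global secondary index (low-level API shape).
    Table/index/attribute names are assumptions."""
    return {
        "TableName": "Orders",
        "IndexName": "StatusDateIndex",                  # entirely different keys from the table
        "KeyConditionExpression": "order_status = :s",   # searches across all partitions
        "ExpressionAttributeValues": {":s": {"S": status}},
        # no ConsistentRead option: GSIs support eventual consistency only
    }
```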
Streams
A stream is used to capture any kind of modification to table data (like CDC in SQL):
- New item insert: it captures an image of the whole item, including all of its attributes.
- Update of an item: it captures before and after images of the modified item's attributes.
- Delete of an item: it captures an image of the whole item before the delete.
The stream holds the change records for 24 hours, after which they are deleted from the stream.
Streams are used with triggers for events:
A Lambda function can be created as a trigger; whenever an insert/update/delete happens, the Lambda function gets triggered. Examples:
- Saving the data to a replica table in another region (DR)
- Triggering an email on insert/update/delete
Ex. Send a welcome mail to a newly registered user
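The welcome-mail example can be sketched as a stream-triggered Lambda handler. The record shapes follow the DynamoDB Streams event format, assuming the stream is configured to emit both old and new images; `send_welcome_mail` and the attribute names are hypothetical.

```python
sent = []  # addresses "mailed", collected here for illustration

def send_welcome_mail(address):
    # hypothetical stand-in for a real mail call (e.g. via SES)
    sent.append(address)

def handler(event, context):
    """Sketch of a Lambda handler attached to a DynamoDB stream."""
    for record in event["Records"]:
        change = record["dynamodb"]
        if record["eventName"] == "INSERT":
            new_image = change["NewImage"]              # whole item, all attributes
            send_welcome_mail(new_image["email"]["S"])
        elif record["eventName"] == "MODIFY":
            before, after = change["OldImage"], change["NewImage"]  # before/after images
        elif record["eventName"] == "REMOVE":
            deleted = change["OldImage"]                # item image before the delete

# Minimal fake event, shaped like a real stream record:
event = {"Records": [{"eventName": "INSERT",
                      "dynamodb": {"NewImage": {"email": {"S": "new.user@example.com"}}}}]}
handler(event, None)
print(sent)  # -> ['new.user@example.com']
```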
ElastiCache: ElastiCache can be used in conjunction with DynamoDB to achieve high performance.
ElastiCache provides Redis and Memcached services.
Query results cached in ElastiCache are retrieved faster by the application.
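The usual pattern for this is a cache-aside read: check ElastiCache first, fall back to the table, then populate the cache. Below is a minimal sketch; `cache` is any redis-py-style client with `get`/`setex`, and the loader callback and stub are hypothetical stand-ins so the example runs without a live cluster.

```python
import json

def get_item_cached(cache, load_item, key, ttl_seconds=300):
    """Cache-aside read: serve from cache on a hit; on a miss, load
    from the table and cache the result with a TTL."""
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    item = load_item(key)                      # e.g. a DynamoDB get_item call
    cache.setex(key, ttl_seconds, json.dumps(item))
    return item

# In-memory stub standing in for a Redis client, just to demonstrate:
class StubCache:
    def __init__(self): self.store = {}
    def get(self, k): return self.store.get(k)
    def setex(self, k, ttl, v): self.store[k] = v

loads = []
def load_item(key):                            # pretend DynamoDB read
    loads.append(key)
    return {"user_id": key, "name": "Alice"}

cache = StubCache()
get_item_cached(cache, load_item, "u1")        # miss: reads the "table"
get_item_cached(cache, load_item, "u1")        # hit: served from cache
print(len(loads))  # -> 1 (the table was read only once)
```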
Amazon DynamoDB Accelerator (DAX): It is a fully managed, highly available, in-memory caching service for DynamoDB.
DAX is a DynamoDB-compatible caching service that lets you benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads from single-digit milliseconds to microseconds.
2. DAX is a service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
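Point 2 above, "minimal functional changes", can be sketched as: only the client construction differs, while every read/write call stays the same. The `amazondax` package (`pip install amazon-dax-client`), the cluster endpoint, and the table name here are all assumptions; the function is defined but not invoked, since it needs real AWS resources.

```python
def make_table(use_dax, endpoint="dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"):
    """Return a Table handle; get_item/put_item/query calls on it are
    identical whether or not DAX sits in front of the table."""
    if use_dax:
        import amazondax                        # assumed: amazon-dax-client package
        resource = amazondax.AmazonDaxClient.resource(endpoint_url=endpoint)
    else:
        import boto3
        resource = boto3.resource("dynamodb")
    return resource.Table("Orders")             # same API surface either way
```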
Capacity Planning: Options available for capacity planning of your DynamoDB table.
If you choose the 'On-Demand' read/write capacity mode, you cannot set read/write throughput or auto scaling,
because AWS then manages all of that for you, at an additional charge.
Auto Scaling - Below are the default values if you opt for auto scaling on your DynamoDB table:
on reaching 70% target utilization and above, it starts to scale, between a minimum of 5 and a maximum of 40,000 read/write capacity units.
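Those defaults map onto the Application Auto Scaling API. The sketch below builds the request bodies for the read side (the write side is symmetric); the table name and policy name are assumptions, and the actual boto3 calls are shown only in comments since they need AWS credentials.

```python
def autoscaling_requests(table_name, min_units=5, max_units=40000, target_pct=70.0):
    """Request bodies mirroring the console defaults above:
    target utilization 70%, capacity between 5 and 40,000 units."""
    resource_id = f"table/{table_name}"
    dimension = "dynamodb:table:ReadCapacityUnits"
    target = {
        "ServiceNamespace": "dynamodb",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "MinCapacity": min_units,
        "MaxCapacity": max_units,
    }
    policy = {
        "ServiceNamespace": "dynamodb",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "PolicyName": f"{table_name}-read-scaling",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,   # scale when utilization passes 70%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    }
    return target, policy

# With boto3 (sketch; requires credentials):
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**target)
#   client.put_scaling_policy(**policy)
```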