## Notes
- 3 copies of data in 3 AZs
- Rows are *Items* and Columns are *Attributes*
- Provides intelligent partitioning and scaling
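As a sketch of the Item/Attribute terminology, here is a made-up item in the low-level attribute-value format used by boto3's DynamoDB client (table and attribute names are assumptions):

```python
# A DynamoDB "row" is an Item; each "column" is an Attribute.
# In the low-level API, every value carries a type tag:
# S = string, N = number (sent as a string), BOOL = boolean, ...
item = {
    "user_id": {"S": "u-123"},         # partition key attribute
    "signup_ts": {"N": "1700000000"},  # numbers travel as strings
    "active": {"BOOL": True},
}
# With boto3 this would be written as (commented out; needs AWS credentials):
# client.put_item(TableName="Users", Item=item)
```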
## Secondary Indexes
### Local Secondary Index
- Supports *strongly* and *eventually* consistent reads
- Can only be created at table creation (the table must be recreated to add or remove one)
- Only composite keys
- 10 GB or less per partition key value
- Must share its partition key with the base table; only the sort key differs
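A minimal sketch of defining an LSI at table creation, expressed as boto3 `create_table` parameters (the table, index, and attribute names are made up). Note the index reuses the base table's partition key:

```python
create_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    # Base table: composite key (partition + sort)
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_date", "KeyType": "RANGE"},
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "StatusIndex",
        # LSI keeps the base table's partition key; only the sort key differs
        "KeySchema": [
            {"AttributeName": "customer_id", "KeyType": "HASH"},
            {"AttributeName": "status", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# client.create_table(**create_params)  # commented out; needs AWS credentials
```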
### Global Secondary Index
- Only *eventually* consistent reads; cannot serve *strongly* consistent reads
- Can create/modify/delete them any time
- Supports Simple **and** Composite keys
- No size restriction
- Has own capacity settings (doesn't share with base table)
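A GSI, by contrast, can be added to an existing table later. A sketch of boto3 `update_table` parameters for creating one with its own capacity settings (all names and numbers are assumptions):

```python
gsi_update = {
    "TableName": "Orders",
    # Any new key attributes must be declared here
    "AttributeDefinitions": [
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexUpdates": [{
        "Create": {
            "IndexName": "StatusIndex",
            # A GSI may use a simple key unrelated to the base table's keys
            "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            # Own capacity, separate from the base table (omit if on-demand)
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    }],
}
# client.update_table(**gsi_update)  # commented out; needs AWS credentials
```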
## Scans
- Should be avoided where possible (reads every item in the table, consuming capacity)
- Returns all Attributes by default
- Sequential by default; can be parallelized across segments
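A sketch of how a scan is parallelized: each worker passes a distinct `Segment` plus the shared `TotalSegments` to `scan` (table name, attribute names, and segment count are made up):

```python
TOTAL_SEGMENTS = 4  # one scan worker per segment

# Parameter sets for four workers scanning the table concurrently
worker_params = [
    {
        "TableName": "Orders",
        "Segment": i,                     # which slice this worker reads
        "TotalSegments": TOTAL_SEGMENTS,  # how many slices exist in total
        # Scans return all attributes by default; trim with a projection:
        "ProjectionExpression": "customer_id, order_date",
    }
    for i in range(TOTAL_SEGMENTS)
]
# Each worker would run: client.scan(**worker_params[i])
```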
## Query
- Requires an exact partition key value; a sort key condition is optional (composite keys are what make queries useful)
- Default: Eventually Consistent; can be Strongly consistent
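A sketch of `query` parameters against a composite-key table, opting into a strongly consistent read (all names and values are made up):

```python
query_params = {
    "TableName": "Orders",
    # Exact match on the partition key; range condition on the sort key
    "KeyConditionExpression": "customer_id = :cid AND order_date BETWEEN :a AND :b",
    "ExpressionAttributeValues": {
        ":cid": {"S": "u-123"},
        ":a": {"S": "2024-01-01"},
        ":b": {"S": "2024-12-31"},
    },
    "ConsistentRead": True,  # default is False (eventually consistent)
}
# client.query(**query_params)  # commented out; needs AWS credentials
```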
## Capacity Modes
### Provisioned
- For predictable and steady-state workloads
- Generally the most cost-effective mode when utilization is steady
- Enable *Auto Scaling* with ceiling and floor
### On-Demand
- Pay per request
- For unpredictable workloads
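The two modes map to `BillingMode` settings at table creation (the capacity numbers are illustrative; Auto Scaling for provisioned mode is configured separately through Application Auto Scaling):

```python
# Provisioned: fixed read/write capacity, suited to steady traffic
provisioned_mode = {
    "BillingMode": "PROVISIONED",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 10,
        "WriteCapacityUnits": 5,
    },
}

# On-demand: no capacity planning, pay per request
on_demand_mode = {"BillingMode": "PAY_PER_REQUEST"}
```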
## Calculating Reads and Writes
### RCUs
- One *strongly consistent* read per second, or two *eventually consistent* reads per second
- For an item up to 4 KB (item sizes round up to the nearest 4 KB)
#### Examples for Strong
- 50 reads at 40 KB = 40/4 * 50 = 500 RCUs
- 10 reads at 6 KB = 8/4 * 10 = 2 * 10 = 20 RCUs (6 KB rounds up to 8 KB)
- 33 reads at 17 KB = 20/4 * 33 = 5 * 33 = 165 RCUs (17 KB rounds up to 20 KB)
#### Examples for Eventual
- 50 reads, 40 KB items = 40 / 4 * 50 / 2 = 10 * 25 = 250 RCUs
- 11 reads, 9 KB items = 12/4 * 11 / 2 = 3 * 11 / 2 = 16.5, rounded up to 17 RCUs (9 KB rounds up to 12 KB)
- 14 reads, 24 KB items = 24 / 4 * 14 / 2 = 6 * 7 = 42 RCUs
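The arithmetic above can be captured in a small helper (a sketch of the rounding rules, not an AWS API): round the item size up to a 4 KB multiple, halve for eventually consistent reads, and round the final result up:

```python
import math

def rcus(reads_per_sec: int, item_kb: float, strong: bool = True) -> int:
    """Read capacity units needed for a steady read rate."""
    per_read = math.ceil(item_kb / 4)  # item size rounds up to 4 KB units
    units = reads_per_sec * per_read
    if not strong:
        units /= 2                     # eventual consistency costs half
    return math.ceil(units)            # fractional units round up

# Matches the worked examples:
# rcus(50, 40) -> 500, rcus(33, 17) -> 165
# rcus(11, 9, strong=False) -> 17, rcus(14, 24, strong=False) -> 42
```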
### WCUs
- One write per second
- For an item up to 1 KB (item sizes round up to the nearest 1 KB)
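The write-side rule is simpler (again a sketch of the rounding, not an API): item size rounds up to the nearest 1 KB, one unit per write per second:

```python
import math

def wcus(writes_per_sec: int, item_kb: float) -> int:
    """Write capacity units needed for a steady write rate."""
    per_write = math.ceil(item_kb)  # item size rounds up to 1 KB units
    return writes_per_sec * per_write

# e.g. 10 writes/sec of 1.5 KB items -> wcus(10, 1.5) -> 20
```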
## DynamoDB Accelerator
- Fully managed, in-memory, write-through cache for DynamoDB, runs in a cluster
- Best for
- Fastest response times
- Read a small number of items more frequently
- Read-intensive
- Not ideal for
  - Workloads needing *strongly* consistent reads (DAX serves eventually consistent reads)
- Write intensive
## Transactions
- All-or-nothing: either every operation in the transaction succeeds, or none are applied
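A sketch of the all-or-nothing behavior as boto3 `transact_write_items` parameters: if the `ConditionExpression` on the second item fails, the first write is rolled back too (all table, key, and attribute names are made up):

```python
transact_params = {
    "TransactItems": [
        {"Put": {
            "TableName": "Orders",
            "Item": {
                "customer_id": {"S": "u-123"},
                "order_date": {"S": "2024-06-01"},
            },
        }},
        {"Update": {
            "TableName": "Inventory",
            "Key": {"sku": {"S": "widget-1"}},
            "UpdateExpression": "SET stock = stock - :one",
            # If this condition fails, the Put above is also discarded
            "ConditionExpression": "stock >= :one",
            "ExpressionAttributeValues": {":one": {"N": "1"}},
        }},
    ]
}
# client.transact_write_items(**transact_params)  # needs AWS credentials
```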
## Global Tables
- Multi-Region, multi-active replication of a table
- Replication across Regions is asynchronous (eventually consistent)