You are producing data to AWS Kinesis using an AWS Lambda function that sits behind an API
Gateway. The data represents a clickstream from users navigating your website. If you want to
make sure your Kinesis stream can scale over time as volume increases, what would you need to
do? (Select two.)
Correct answers - "The partition key must take a great number of different values & You need
to add shards" : Kinesis Data Streams segregates the data records belonging to a stream into
multiple shards. It uses the partition key that is associated with each data record to determine
which shard a given data record belongs to. Partition keys are Unicode strings with a maximum
length limit of 256 bytes. A stream is composed of one or more shards, each of which provides a
fixed unit of capacity. Each shard can support up to 5 transactions per second for reads, up to
a maximum total data read rate of 2 MB per second and up to 1,000 records per second for writes,
up to a maximum total data write rate of 1 MB per second (including partition keys). The data
capacity of your stream is a function of the number of shards that you specify for the stream.
The total capacity of the stream is the sum of the capacities of its shards.
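Since total stream capacity is just the per-shard limits multiplied by the shard count, the
arithmetic can be sketched in a few lines. The function name and return shape below are
illustrative, not part of any AWS SDK; the per-shard numbers are the documented limits quoted
above.

```python
def stream_capacity(shards: int) -> dict:
    """Total provisioned capacity of a Kinesis stream with `shards` shards.

    Per-shard limits (from the Kinesis Data Streams docs):
      writes: 1 MB/s or 1,000 records/s
      reads:  2 MB/s or 5 transactions/s
    """
    return {
        "write_mb_per_s": shards * 1,
        "write_records_per_s": shards * 1_000,
        "read_mb_per_s": shards * 2,
        "read_tx_per_s": shards * 5,
    }

# Doubling the shard count from 4 to 8 doubles every capacity dimension.
print(stream_capacity(4))
print(stream_capacity(8))
```

This is why "you need to add shards" is a correct answer: resharding is the only way to raise
a provisioned stream's throughput ceiling.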
Incorrect:
"You need to enable auto-scale" - There is no auto-scale setting you simply turn on for a
Kinesis stream; instead, you can combine other AWS services (for example, CloudWatch alarms
triggering a Lambda that reshards the stream) to automate scaling.
"The partition key must only take few values" - Records with the same partition key always land
on the same shard, so you need many distinct partition key values to spread records across
different shards.
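The routing behind that last point can be sketched locally: Kinesis takes the MD5 hash of the
partition key as a 128-bit integer and routes the record to the shard whose hash-key range
contains it. The helper below is a minimal sketch, assuming shards split the hash space evenly
(the default when a stream is created); `shard_for_key` is an illustrative name, not an AWS API.

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    # MD5 of the partition key, interpreted as a 128-bit integer,
    # mapped onto evenly sized shard hash-key ranges.
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(h // range_size, num_shards - 1)

# A high-cardinality key (e.g. a per-user ID) spreads load across shards.
many_keys = {shard_for_key(f"user-{i}", 8) for i in range(1000)}
print(len(many_keys))

# A single constant key hot-spots exactly one shard, no matter how many exist.
one_key = {shard_for_key("clickstream", 8) for _ in range(1000)}
print(len(one_key))  # 1
```

With 1,000 distinct keys across 8 shards, effectively every shard receives traffic, while the
constant key never leaves its one shard; adding shards cannot help a stream whose partition key
takes only a few values.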