Amazon Kinesis
External
- https://aws.amazon.com/kinesis/
- Amazon Kinesis Whitepaper https://d0.awsstatic.com/whitepapers/whitepaper-streaming-data-solutions-on-aws-with-amazon-kinesis.pdf
- https://docs.aws.amazon.com/kinesis/
- Amazon Kinesis Data Streams Developer Guide https://docs.aws.amazon.com/streams/latest/dev/introduction.html
Internal
Overview
Kinesis acts as a highly available conduit to stream messages between data producers and data consumers.
Concepts
Stream
A Kinesis data stream is a named set of shards. Streams can be created from the AWS Management Console, with the AWS CLI, or via the Kinesis Data Streams API.
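A minimal sketch of the API route using boto3; the stream name (example-stream), shard count, and Region are illustrative assumptions, not values from these notes.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Request a stream with two shards, then block until it reaches ACTIVE.
kinesis.create_stream(StreamName="example-stream", ShardCount=2)
kinesis.get_waiter("stream_exists").wait(StreamName="example-stream")

summary = kinesis.describe_stream_summary(StreamName="example-stream")
print(summary["StreamDescriptionSummary"]["StreamStatus"])  # ACTIVE
```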
Stream Name
The namespace is defined by the AWS account and AWS Region: within the same account, streams with the same name can exist in different Regions.
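A small sketch of what that scoping means in practice, assuming boto3 and two example Regions; the stream name is again illustrative.

```python
import boto3

# The same account can hold a stream called "example-stream" in each Region,
# because the (account, Region) pair defines the namespace.
for region in ("us-east-1", "eu-west-1"):
    boto3.client("kinesis", region_name=region).create_stream(
        StreamName="example-stream", ShardCount=1
    )
```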
Shard
A shard is a uniquely identified sequence of records within a stream. Each shard is automatically assigned a Shard ID, which can be obtained with the AWS CLI describe-stream command; the partition key of a record determines which shard it is written to.
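A sketch of the programmatic equivalent of describe-stream using boto3, assuming the example-stream name from above; it prints each shard's ID together with the hash key range that partition keys are mapped into.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

description = kinesis.describe_stream(StreamName="example-stream")
for shard in description["StreamDescription"]["Shards"]:
    # Each shard carries an auto-assigned ShardId and a HashKeyRange.
    print(shard["ShardId"], shard["HashKeyRange"])
```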
Shard Iterator
A shard iterator specifies the position in the shard from which the consumer will start reading records sequentially.
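A minimal sketch of obtaining and using a shard iterator with boto3; the stream name and shard ID are assumptions. TRIM_HORIZON starts at the oldest available record, while LATEST would start just after the most recent one.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

iterator = kinesis.get_shard_iterator(
    StreamName="example-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

# The iterator is consumed by get_records; each call also returns a
# NextShardIterator to use for the following read.
response = kinesis.get_records(ShardIterator=iterator, Limit=10)
print(len(response["Records"]), response["NextShardIterator"] is not None)
```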
Record
Units of data stored in a stream. Records are made up of a sequence number, partition key and data blob. After the data blob is stored in a record, Kinesis does not inspect, interpret or change it in any way.
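A sketch of writing one record with boto3; the stream name, partition key, and payload are illustrative. The data blob is whatever bytes the producer sends, and the response shows the two fields Kinesis assigns on top of it: the shard the record landed on and its sequence number.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

response = kinesis.put_record(
    StreamName="example-stream",
    Data=json.dumps({"event": "signup", "user": 42}).encode(),  # the data blob, opaque to Kinesis
    PartitionKey="user-42",
)
print(response["ShardId"], response["SequenceNumber"])
```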
Data Blob
The data blob is the payload of data contained within a record.
Partition Key
The partition key is used to group records by shard within a stream, allowing a data producer to distribute data across shards.
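Kinesis documents that it maps each partition key to a 128-bit integer with an MD5 hash and routes the record to the shard whose hash key range contains that value. The sketch below recomputes that mapping with boto3 and hashlib; the stream name and key are assumptions, and it ignores closed parent shards left behind by resharding.

```python
import hashlib
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
shards = kinesis.list_shards(StreamName="example-stream")["Shards"]

def shard_for(partition_key: str) -> str:
    """Return the ShardId whose hash key range covers this partition key."""
    hash_key = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    for shard in shards:
        low = int(shard["HashKeyRange"]["StartingHashKey"])
        high = int(shard["HashKeyRange"]["EndingHashKey"])
        if low <= hash_key <= high:
            return shard["ShardId"]
    raise ValueError("no shard covers this hash key")

print(shard_for("user-42"))
```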
Sequence Number
Unique identifiers for records inserted into a shard. They increase monotonically, and are specific to individual shards.
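A small sketch of that behavior, assuming the example stream from above: two records written with the same partition key land on the same shard, and the later write gets the larger sequence number.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

first = kinesis.put_record(StreamName="example-stream", Data=b"a", PartitionKey="sensor-1")
second = kinesis.put_record(StreamName="example-stream", Data=b"b", PartitionKey="sensor-1")

assert first["ShardId"] == second["ShardId"]                         # same key, same shard
assert int(second["SequenceNumber"]) > int(first["SequenceNumber"])  # later write, larger number
```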
Producer
Producers continually push data to Kinesis Data Streams.
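A sketch of a simple producer that batches writes with put_records; the stream name, payloads, and partition keys are assumptions. put_records can partially fail, so real code should retry the entries whose result carries an ErrorCode.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def produce_batch(events):
    entries = [
        {
            "Data": json.dumps(event).encode(),
            "PartitionKey": str(event["device_id"]),  # spreads devices across shards
        }
        for event in events
    ]
    response = kinesis.put_records(StreamName="example-stream", Records=entries)
    if response["FailedRecordCount"]:
        print("failed records:", response["FailedRecordCount"])  # retry these in real code

while True:
    produce_batch([{"device_id": i, "reading": 21.5, "ts": time.time()} for i in range(5)])
    time.sleep(1)
```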
Consumer
Consumers process the data in real time. A Kinesis Data Streams consumer can be a custom application running on Amazon EC2 or an Amazon Kinesis Data Firehose delivery stream.
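A sketch of the custom-application style of consumer, polling a single shard with the low-level API via boto3; the stream name and LATEST starting position are assumptions, and production consumers would more typically use the Kinesis Client Library or Firehose.

```python
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

shard_id = kinesis.list_shards(StreamName="example-stream")["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="example-stream",
    ShardId=shard_id,
    ShardIteratorType="LATEST",      # only read records written from now on
)["ShardIterator"]

while iterator:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        # Data comes back as the raw blob the producer wrote.
        print(record["SequenceNumber"], record["PartitionKey"], record["Data"])
    iterator = response["NextShardIterator"]
    time.sleep(1)                    # stay within the per-shard read throughput limits
```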