Performance Concepts
External
- Gil Tene on How NOT to Measure Latency https://www.infoq.com/presentations/latency-response-time https://www.youtube.com/watch?v=lJ8ydIuPFeU
Internal
Load
Load is a statement of how much stress a system is under. Load can be numerically described with load parameters.
Load Parameters
A load parameter is a numerical representation of a system's load. For example, in the case of a web server, an essential load parameter is the number of requests per second (RPS) or queries per second (QPS). For a database, it could be the ratio of reads to writes. For a cache, it is the miss rate. Understanding the load parameters of a specific system is important during the system design phase: an architecture that scales well for a particular application is built around assumptions about those parameters, and requires an understanding of which operations will be common and which will be rare.
Load Testing
Coordinated Omission
The coordinated omission problem is a systematic error introduced by some load testing tools: when the system under test stalls, the tool waits for the outstanding response instead of issuing requests on schedule, so the requests that would have experienced the stall are never sent, and the slow period is under-represented in the measured data. Vegeta is one of the load testing tools that avoid the coordinated omission problem.
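What follows is a minimal sketch of where coordinated omission comes from, contrasting a closed-loop generator, which only sends the next request after the previous response arrives and therefore never samples the requests it should have sent during a stall, with an open-loop generator, which fires on a fixed schedule regardless of outstanding responses (roughly what Vegeta does). The target URL and request rate are arbitrary placeholders.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// target is a placeholder; point it at the system under test.
const target = "http://localhost:8080/"

// closedLoop sends the next request only after the previous response arrives.
// If the server stalls for a second, a whole second's worth of intended
// requests is simply never sent, so the stall is under-represented in the data.
func closedLoop(n int) []time.Duration {
	samples := make([]time.Duration, 0, n)
	for i := 0; i < n; i++ {
		start := time.Now()
		resp, err := http.Get(target)
		if err == nil {
			resp.Body.Close()
		}
		samples = append(samples, time.Since(start))
	}
	return samples
}

// openLoop fires requests on a fixed schedule (100 requests/s here) regardless
// of whether earlier responses have come back, so requests issued during a
// stall record the queueing delay they actually experienced.
func openLoop(n int) []time.Duration {
	samples := make([]time.Duration, n)
	done := make(chan struct{})
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < n; i++ {
		<-ticker.C
		go func(i int) {
			start := time.Now()
			resp, err := http.Get(target)
			if err == nil {
				resp.Body.Close()
			}
			samples[i] = time.Since(start)
			done <- struct{}{}
		}(i)
	}
	for i := 0; i < n; i++ {
		<-done
	}
	return samples
}

func main() {
	fmt.Println("closed loop samples:", closedLoop(100))
	fmt.Println("open loop samples:", openLoop(100))
}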
Tools
Written in Go
(in descending order of GitHub stars):
- https://github.com/grafana/k6
- Vegeta https://github.com/tsenart/vegeta
- https://github.com/getanteon/anteon
- https://github.com/codesenberg/bombardier
- Fortio https://github.com/fortio/fortio
- https://github.com/rogerwelin/cassowary
Performance
The performance of a system is described by performance metrics.
Performance Metrics
Resource Consumption
CPU, memory, disk I/O.
Latency (Response Time)
The latency, or response time, is the minimum time required to get any form of response from a service, even if the work to be done is nonexistent (Martin Fowler, Patterns of Enterprise Application Architecture). Another definition is the length of time it takes for something interesting to happen. In practice, latency can be measured as the time between a client finishing sending a request and fully receiving the response. This measured interval includes the time the request travels over the network from the client to the backend, the time the request waits in the backend queue, the service time, and the time the response takes to travel back to the client. The name comes from the fact that, from the client's perspective, once the request is fully submitted it is latent, awaiting service. Latency is an important metric for an online system, such as a web site or a mobile application backend.
Latency and response time are often used synonymously, but some authors argue that they are not synonymous (DDIA).
Latency is especially relevant in remote systems, because the time spent propagating the request over the network, and the response back, is not negligible, and in many cases it constitutes the majority of the measured time. Some monitoring systems describe the request time as the time the backend takes to process the request, in which case the travel time is not accounted for.
A single response time value is not that relevant for a system; it makes more sense to think of response time as a distribution of values measured over a significant interval of time. There is no one single value for latency: a latency dataset is made of a large number of data points, measured over an interval of time, and trying to characterize such a dataset with only one number ("the average latency is 100 ms") is in most cases misleading. For a system that works well, most requests over the measurement interval are reasonably fast, but there are occasional outliers that take much longer. This can be caused by the requests in question being intrinsically more expensive, but the additional latency may also be introduced by infrastructure-related factors: a context switch, TCP packet loss and retransmission, a garbage collection pause, a page fault, etc.
The challenge is to come up with a characterization of latency that is expressive enough, across all requests and over time, to be useful. Showing a P95 value simply throws away the worst 5% of the data, which is exactly what should be investigated first, so for troubleshooting do not throw away the max values. The max values show you how "bad the bad stuff is".
Standard deviation is not a meaningful statistic for a latency dataset. Latency must also be measured in the context of load; measuring latency without load is misleading.
Average Response Time
The arithmetic mean: given n request values, add up all the values and divide by n. This is not a very good metric, because it does not reflect the "typical" response time: it does not tell you how many users actually experienced a given delay.
Median Response Time
The median response time for an interval is the response time of the request for which 50% of the requests are faster, and 50% of the requests are slower. The median is also known as the 50th percentile or P50.
Response Time Percentiles
The nth percentile (a quantile; e.g. the 99th, abbreviated P99) is the response time threshold below which n% (99%) of the requests fall, while the remaining (100-n)% are slower. See DDIA, Chapter 1: Reliable, Scalable and Maintainable Applications → Scalability → Describing Performance.
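A minimal sketch of computing percentiles from a batch of measured response times with the nearest-rank method, by sorting the samples and indexing into the sorted slice; the sample values below are made up:

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of the samples.
// The slice must already be sorted in ascending order.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(math.Ceil(float64(len(sorted))*p/100.0)) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(sorted) {
		rank = len(sorted) - 1
	}
	return sorted[rank]
}

func main() {
	// Made-up samples; in practice they come from the load testing tool or
	// from instrumentation.
	samples := []time.Duration{
		12 * time.Millisecond, 15 * time.Millisecond, 13 * time.Millisecond,
		11 * time.Millisecond, 90 * time.Millisecond, 14 * time.Millisecond,
		16 * time.Millisecond, 450 * time.Millisecond, 12 * time.Millisecond,
		13 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("P50:", percentile(samples, 50))
	fmt.Println("P95:", percentile(samples, 95))
	fmt.Println("P99:", percentile(samples, 99))
	fmt.Println("max:", samples[len(samples)-1])
}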
Also see:
Articles and Talks on Latency
To Process:
- Your Load Generator is Probably Lying to You - Take the Red Pill and Find Out Why https://highscalability.com/your-load-generator-is-probably-lying-to-you-take-the-red-pi/
- Everything You Know About Latency Is Wrong by Tyler Treat https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
Throughput
Throughput is the rate at which something can be produced, consumed or processed per unit of time. Throughput is usually the relevant metric for batch processing systems, such as Hadoop, where it describes the number of records that can be processed per second.
Saturation
Define saturation. Identify where the saturation point is. Do not run a system at or beyond saturation.
Scalability
Scalability is a measure of how adding resources (usually hardware) affects performance and describes the ability of a system to cope with increased load. Also see:
HDR Histogram
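An HDR (High Dynamic Range) histogram, introduced by Gil Tene, records values across a wide range at a configurable number of significant figures, so a full latency distribution can be kept cheaply and any percentile extracted from it later. A minimal sketch, assuming the github.com/HdrHistogram/hdrhistogram-go package and made-up latencies recorded in microseconds:

package main

import (
	"fmt"
	"math/rand"
	"time"

	hdrhistogram "github.com/HdrHistogram/hdrhistogram-go"
)

func main() {
	// Track values between 1 µs and 30 s with 3 significant figures of precision.
	h := hdrhistogram.New(1, 30_000_000, 3)

	// Record made-up latencies in microseconds; in practice these come from
	// instrumented requests.
	for i := 0; i < 10_000; i++ {
		latency := time.Duration(10+rand.Intn(20)) * time.Millisecond
		if err := h.RecordValue(latency.Microseconds()); err != nil {
			fmt.Println("value out of range:", err)
		}
	}

	fmt.Println("P50 (µs):  ", h.ValueAtQuantile(50))
	fmt.Println("P99 (µs):  ", h.ValueAtQuantile(99))
	fmt.Println("P99.9 (µs):", h.ValueAtQuantile(99.9))
	fmt.Println("max (µs):  ", h.Max())
}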
Queueing Theory
TODO:
- https://en.wikipedia.org/wiki/Queueing_theory
- Response Time in Queueing Theory.
- Service Time in Queueing Theory.
Response time and service time diverge as saturation becomes worse.
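A minimal sketch of why they diverge, using the textbook M/M/1 approximation R = S / (1 - ρ), where S is the mean service time and ρ is utilization; the service time and utilization values below are made up:

package main

import "fmt"

func main() {
	const serviceTime = 10.0 // mean service time in milliseconds (made-up value)

	// In an M/M/1 queue the mean response time is R = S / (1 - rho). As
	// utilization rho approaches 1 (saturation), the queueing delay, and with
	// it the response time, grows without bound while the service time stays
	// constant.
	for _, rho := range []float64{0.1, 0.5, 0.8, 0.9, 0.95, 0.99} {
		responseTime := serviceTime / (1 - rho)
		fmt.Printf("utilization %.2f: service time %.0f ms, mean response time %.0f ms\n",
			rho, serviceTime, responseTime)
	}
}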
Organizatorium
- xth percentile (quantile) - the value of the performance parameter at which x% of the requests are better (faster); https://www.vividcortex.com/blog/why-percentiles-dont-work-the-way-you-think
- Tail latency amplification, illustrated by the sketch after this list. See: Jeffrey Dean and Luiz André Barroso: "The Tail at Scale" https://cacm.acm.org/magazines/2013/2/160173-the-tail-at-scale/fulltext
- Don't censor bad data, don't throw away data selectively.
- Never average percentiles.
- Coordinated omission. Coordinated omission usually makes something that you think is a response time metric represent only a service time component.
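A minimal sketch of tail latency amplification: if serving one user request requires calls to many backends in parallel, the chance that at least one call lands in a backend's latency tail grows quickly with the fan-out; the per-call slow probability and the fan-out values below are made up:

package main

import (
	"fmt"
	"math"
)

func main() {
	// Assume each individual backend call is slower than its P99 threshold
	// 1% of the time, independently of the others.
	const pSlow = 0.01

	// Probability that a user request touching n backends in parallel is
	// affected by at least one slow call: 1 - (1 - pSlow)^n.
	for _, n := range []int{1, 10, 100, 500} {
		pAffected := 1 - math.Pow(1-pSlow, float64(n))
		fmt.Printf("fan-out %3d: %.1f%% of user requests hit at least one slow backend call\n",
			n, 100*pAffected)
	}
}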