Business Scenario-Based Performance Diagnostics Procedures
Internal
Assumptions
Load Generator
We need a load generator that is capable of the following:
- Generate load on behalf of multiple application users concurrently, in such a way that each simulated user can set up an interactive session, interact repeatedly with the application by executing a set of arbitrary business scenarios in a loop, and then end the session.
- Insert arbitrary data into the previously recorded requests that are replayed as part of the load tests. At minimum, the load generator must be able to insert custom headers with pre-defined (constant) values. More sophisticated analyses require additional capabilities, such as inserting unique identifiers per request or per iteration, inserting request sequence numbers, etc.
NeoLoad qualifies.
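The per-user behavior described above (session setup, a scenario loop with injected headers, session teardown) can be sketched as follows. This is an illustrative model only, not NeoLoad's API; the URLs and all header names except the scenario markers are hypothetical.

```python
import itertools
import uuid

def simulated_user(scenarios, iterations):
    """Yield (url, headers) pairs for one simulated user: session setup,
    a loop of business scenarios with injected headers, then teardown.

    scenarios: list of (scenario_type, [urls]) pairs.
    Illustrative names: "Iteration-Id" and "Request-Sequence-Number" are
    hypothetical headers showing per-iteration ids and sequence numbers."""
    seq = itertools.count(1)
    yield "/login", {"Request-Sequence-Number": str(next(seq))}
    for _ in range(iterations):
        iteration_id = str(uuid.uuid4())  # unique identifier per iteration
        for scenario_type, urls in scenarios:
            for pos, url in enumerate(urls):
                headers = {
                    "Iteration-Id": iteration_id,
                    "Request-Sequence-Number": str(next(seq)),
                }
                if pos == 0:
                    headers["Business-Scenario-Start-Marker"] = scenario_type
                if pos == len(urls) - 1:
                    headers["Business-Scenario-Stop-Marker"] = scenario_type
                yield url, headers
    yield "/logout", {"Request-Sequence-Number": str(next(seq))}
```

For example, `simulated_user([("TypeA", ["/account", "/account/value"])], iterations=2)` produces a login request, two marked TypeA scenarios sharing a per-iteration id, and a logout request.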
Load Test Environment
- The load test environment components (load balancers, application servers, databases, etc.) run on physical or virtualized hardware equivalent to the hardware the application will run on in production.
- The load test environment is isolated from external influences. For example, if the load test environment runs on virtualized hardware, the underlying resources must be dedicated to the environment and not shared with other jobs; if this condition is not met, the load test results can be skewed in subtle and in some cases undetectable ways.
- The application is configured similarly to production.
Business Scenario Recording
This step consists of recording the business scenarios whose performance will be measured under load. It is highly dependent on the tool used to record the interaction with the application, and it usually involves intercepting the traffic with a proxy. The load generation tool records the HTTP request details, stores them, and uses the stored data to replay the application traffic.
Multiple business scenarios can be recorded during the same session; they can be post-processed and individually replayed or combined later.
Business Scenario Post-Processing
The business scenarios recorded by the load generation tool must go through a post-processing phase, where metadata and markers are added to the scenarios. These pieces of data will be used by the performance data processing tools.
Mark the Beginning and the End of the Interesting Scenarios
The scenarios to be measured must be "annotated" by configuring the load generator to send specific HTTP headers with the first and the last request of the scenario.
Business-Scenario-Start-Marker
The first request of the scenario must include a custom HTTP header named Business-Scenario-Start-Marker. The value of the header must be a string designating the scenario type, for example "Read bank account value". The load test will consist of sending multiple scenarios of the same type, in a loop, and after data processing we will be able to tell whether the specific scenario is within the required performance limits. Shorter values (e.g. "TypeA", "TypeB", etc.) are preferred because they ultimately translate into sending fewer bytes over the network, though the traffic generated by the marker headers is most likely insignificant relative to the application traffic.
Business-Scenario-Stop-Marker
The last request of the scenario must include a custom HTTP header named Business-Scenario-Stop-Marker. The value of the header must be the same string used when marking the start of the scenario with Business-Scenario-Start-Marker.
If the end of the scenario is not explicitly marked, the data processing utilities will assume a scenario ends when the next Business-Scenario-Start-Marker header carrying the same scenario type string is encountered. Assuming each iteration contains just one scenario of a specific type, this default behavior will in most cases produce invalid data, effectively equating a scenario with the whole iteration.
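The closing rules above can be sketched as follows. This is a simplified model of a single-user request stream: the marker header names are the ones defined above, but the function and the two close states it models (explicit stop marker vs. implicit close by a new start marker of the same type) are an illustration, not the actual data processing utility.

```python
def close_scenarios(requests):
    """Scan a stream of per-request header dicts and return a list of
    (scenario_type, close_state) tuples. A scenario is CLOSED_EXPLICITLY
    by a matching stop marker, or CLOSED_BY_START_MARKER when a new start
    marker of the same type arrives while a scenario is still open."""
    open_scenarios = {}  # scenario type -> True while a scenario is open
    closed = []
    for headers in requests:
        start = headers.get("Business-Scenario-Start-Marker")
        stop = headers.get("Business-Scenario-Stop-Marker")
        if start is not None:
            # A start marker implicitly closes a still-open scenario
            # of the same type before opening a new one.
            if open_scenarios.pop(start, False):
                closed.append((start, "CLOSED_BY_START_MARKER"))
            open_scenarios[start] = True
        if stop is not None and open_scenarios.pop(stop, False):
            closed.append((stop, "CLOSED_EXPLICITLY"))
    return closed
```

A scenario that is never closed by either rule simply stays open, which is why leaving out the stop marker tends to merge a scenario with the rest of its iteration.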
Target Environment Preparation
Target environment preparation operations are usually performed once, when the target test environment is built. The configuration, tools and utilities created at this step will be repeatedly used for each load test executed within the environment.
Component Life Cycle
To eliminate artifacts introduced by application or environment state left over from previous load tests, the environment should be capable of stopping and restarting all its active elements (load balancers, application servers, databases, caches) between load test runs.
If the tests are executed in an environment managed by em, the best way to restart the environment is to restart its VMs via em commands. This way all services are automatically restarted and initialized. Alternatively, the VMs can be restarted manually.
If restarting the VMs is not practical, at least the application servers must be restarted between load test runs.
Instrumentation
As a one-time operation, the active elements of the environment must be instrumented to collect performance data. Various metric sources and data collection strategies can be used.
Log Management
In most cases, various logs are directly used as timed event sources. Even when that is not the case, they contain useful context data, so they must be collected and archived after each load run. Automating this process is part of the test environment preparation.
Load Test Execution
This step will be executed repeatedly, for different versions or configurations that need to be performance tested.
Restart the Environment
All components of the test environment should be restarted prior to a load test run for reasons explained in the "Component Life Cycle" section.
pt restart
The command will stop the environment components, clean logs and temporary data directories, where necessary, and restart the environment components, leaving the environment ready for testing.
Apply the Load
Collect and Archive the Raw Data
pt collect
The command will collect all relevant configuration, logs and test-related data and archive them in the data repository, using a unified format known to all data processing and report generation utilities.
Data Processing and Report Generation
events < ./access_log.log --input-format-file=./access_log.def business-scenario --stats

Counters

business scenarios: 7743 (3 different states)
  CLOSED_NORMALLY: 7371, duration min/avg/max: 38/13397/33304 ms, reqs/scenario min/avg/max: 30/36.73/37
  CLOSED_EXPLICITLY: 342, duration min/avg/max: 1270/11662/20310 ms, reqs/scenario min/avg/max: 4/20.82/37
  CLOSED_BY_START_MARKER: 30, duration min/avg/max: 5343/18836/32691 ms, reqs/scenario min/avg/max: 48/51.03/52
faults: 134355 (3 different types)
  NO_JSESSIONID_COOKIE: 1017
  NO_ACTIVE_BUSINESS_SCENARIO: 132502
  MISSING_ITERATION_ID: 836
other events: 1
HTTP requests: 412922
HTTP sessions: 434
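The per-state min/avg/max shape of this report can be reproduced from individual scenario records with a small aggregation sketch. The record format (close state plus duration in milliseconds) is an assumption for illustration; the actual events utility works from the raw log stream.

```python
from collections import defaultdict

def scenario_stats(records):
    """Aggregate (close_state, duration_ms) records into per-state
    count and min/avg/max duration, mirroring the report shape above."""
    by_state = defaultdict(list)
    for state, duration_ms in records:
        by_state[state].append(duration_ms)
    return {
        state: {
            "count": len(durations),
            "min": min(durations),
            "avg": sum(durations) / len(durations),
            "max": max(durations),
        }
        for state, durations in by_state.items()
    }
```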