Business Scenario-Based Performance Diagnostics Procedures

From NovaOrdis Knowledge Base

=Assumptions=

==Load Generator==

We need a load generator that is capable of the following:


* Generate load on behalf of multiple application users, concurrently, in such a way that each simulated user can perform an interactive [[Business_Scenario-Based_Performance_Concepts#Session|session]] setup, interact repeatedly with the application by executing a set of arbitrary [[Business_Scenario-Based_Performance_Concepts#Business_Scenario|business scenarios]] in a loop, and then end the session.
* Insert arbitrary data into the requests it has previously recorded and is replaying as part of the load tests. At a minimum, the load generator must be able to insert arbitrary constant strings as custom header values in specific requests in the run sequence. More sophisticated analyses require additional capabilities, such as inserting unique identifiers per [[Business_Scenario-Based_Performance_Concepts#Request|request]] or [[Business_Scenario-Based_Performance_Concepts#Iteration|iteration]], inserting [[Business_Scenario-Based_Performance_Concepts#Request_Sequence_Number|request sequence numbers]], etc.
[[NeoLoad]] qualifies.
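The required capabilities can be sketched as follows. This is a hypothetical toy driver, not NeoLoad's API; the marker header names are the ones used later in this article, while everything else (user count, paths, the <tt>X-User</tt> header) is invented for illustration:

```python
import itertools
import threading

_seq = itertools.count(1)  # global request sequence numbers

def build_request(method, path, extra_headers=None):
    """Assemble a replayed request, injecting the custom headers."""
    headers = {"Business-Scenario-Request-Sequence-ID": str(next(_seq))}
    headers.update(extra_headers or {})
    return {"method": method, "path": path, "headers": headers}

def run_user(user_id, scenarios, iterations, sink):
    """One simulated user: session setup, scenario loop, session end."""
    sink.append(build_request("POST", "/login", {"X-User": str(user_id)}))
    for iteration in range(iterations):
        for name, requests in scenarios:
            for i, (method, path) in enumerate(requests):
                extra = {"Business-Scenario-Iteration-ID": str(iteration)}
                if i == 0:
                    extra["Business-Scenario-Start-Marker"] = name
                if i == len(requests) - 1:
                    extra["Business-Scenario-Stop-Marker"] = name
                sink.append(build_request(method, path, extra))
    sink.append(build_request("POST", "/logout"))  # end the session

# three concurrent users, each running one scenario twice
scenarios = [("TypeA", [("GET", "/account"), ("GET", "/balance")])]
log = []
threads = [threading.Thread(target=run_user, args=(u, scenarios, 2, log))
           for u in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each simulated user produces a login, two iterations of the two-request scenario with start/stop markers on the first and last request, and a logout.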
==Load Test Environment==

* The load test environment components (load balancers, application servers, databases, etc.) run on physical or virtualized hardware that is equivalent to the hardware the application will run on in production.
* The load test environment is isolated from external influences. For example, if the load test environment runs on virtualized hardware, the underlying resources must be dedicated to the environment and not shared with other jobs; if this condition is not met, the load test results can be skewed in subtle and in some cases undetectable ways.
* The application is configured similarly to production.

=Business Scenario Recording=


This step consists of recording the business scenarios whose performance will be measured under load. It is highly dependent on the tool used to record the interaction with the application, and it usually consists of intercepting the traffic with a proxy. The load generation tool records the HTTP request details, stores them, and uses the stored data to replay the application traffic.
 
Multiple business scenarios can be recorded during the same session; they can be post-processed and individually replayed or combined later.
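A record-and-replay flow of this kind might be modeled as follows. This is an in-memory toy, not any particular tool's storage format:

```python
from dataclasses import dataclass, field

@dataclass
class RecordedRequest:
    """One HTTP request captured by the recording proxy."""
    method: str
    path: str
    headers: dict = field(default_factory=dict)
    body: bytes = b""

@dataclass
class Recording:
    """A named set of captured requests that can be replayed later."""
    name: str
    requests: list = field(default_factory=list)

    def record(self, req: RecordedRequest):
        self.requests.append(req)

    def replay(self, send):
        """Replay the stored requests through a caller-supplied transport."""
        return [send(r) for r in self.requests]

rec = Recording("read-account")
rec.record(RecordedRequest("GET", "/account"))
rec.record(RecordedRequest("GET", "/balance"))
# a stand-in transport that just formats the request line
responses = rec.replay(lambda r: f"{r.method} {r.path}")
```

In a real tool the `send` callable would perform actual HTTP calls; separating storage from transport is what makes post-processing and recombination of recordings possible.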
 
=Business Scenario Post-Processing=
The business scenarios recorded by the load generation tool must go through a post-processing phase, in which metadata and markers are added to the scenarios. These pieces of data will later be used by the performance data processing tools.


==Mark the Beginning and the End of the Interesting Scenarios==

The scenarios to be measured must be "annotated" by configuring the load generator to send specific HTTP headers with the first and the last request of the scenario.

===<tt>Business-Scenario-Start-Marker</tt>===

The first request of the scenario must include a custom HTTP header named <tt>Business-Scenario-Start-Marker</tt>. The value of the header must be a string designating the scenario type, for example "Read bank account value". The load test will consist of sending multiple scenarios of the same type, in a loop, and after data processing we will be able to tell whether the specific scenario type is within the required performance limits. Shorter values (e.g. "TypeA", "TypeB", etc.) are preferred, because they translate into fewer bytes sent over the network, though the traffic generated by the marker headers is most likely insignificant relative to the application traffic.

===<tt>Business-Scenario-Stop-Marker</tt>===

The last request of the scenario must include a custom HTTP header named <tt>Business-Scenario-Stop-Marker</tt>. The value of the header must be the same string used when marking the start of the scenario with <tt>Business-Scenario-Start-Marker</tt>.

If the end of the scenario is not explicitly marked, the data processing utilities will assume a scenario ends when the next <tt>Business-Scenario-Start-Marker</tt> header carrying the same scenario type string is encountered. Assuming each iteration contains just one scenario of a specific type, this default behavior will in most cases produce invalid data, effectively equating a scenario with the whole iteration.
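As a sketch, a data processing utility might group a recorded request stream into scenarios using the <tt>Business-Scenario-Start-Marker</tt> and <tt>Business-Scenario-Stop-Marker</tt> headers like this. The grouping logic shown, including the implicit-close fall-back on a repeated start marker, is an illustration, not the actual tool's implementation:

```python
def group_scenarios(requests):
    """Group a flat request stream into (scenario_type, requests) pairs.

    A scenario opens on Business-Scenario-Start-Marker and closes on a
    matching Business-Scenario-Stop-Marker; if the stop marker is missing,
    it closes implicitly when the next start marker carrying the same
    scenario type is encountered.
    """
    open_scenarios = {}  # scenario type -> requests collected so far
    finished = []
    for req in requests:
        headers = req["headers"]
        start = headers.get("Business-Scenario-Start-Marker")
        stop = headers.get("Business-Scenario-Stop-Marker")
        if start is not None:
            if start in open_scenarios:
                # implicit close: a new start marker of the same type
                # ends the previous scenario of that type
                finished.append((start, open_scenarios.pop(start)))
            open_scenarios[start] = []
        for collected in open_scenarios.values():
            collected.append(req)
        if stop is not None and stop in open_scenarios:
            finished.append((stop, open_scenarios.pop(stop)))
    return finished

stream = [
    {"headers": {"Business-Scenario-Start-Marker": "TypeA"}},
    {"headers": {}},
    {"headers": {"Business-Scenario-Stop-Marker": "TypeA"}},
]
scenarios = group_scenarios(stream)
```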


=Target Environment Preparation=
Target environment preparation operations are usually performed once, when the target test environment is built. The configuration, tools, and utilities created in this step will be used repeatedly, for each load test executed within the environment.


==Component Life Cycle==


In order to eliminate artifacts introduced by application or environment state modified by previous load tests, the environment should be capable of stopping and restarting all its active elements (load balancers, application servers, databases, caches) between load test runs.


If the tests are executed in an environment managed by [[em]], the best way to restart the environment is to restart the environment's VMs via <tt>em</tt> commands. This way all services are automatically restarted and initialized. Alternatively, the VMs can be restarted manually.


If restarting the VMs is not practical, at least the application servers must be restarted between load test runs.
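The restart sequence can be sketched as follows; the component names and the returned action strings are placeholders, and a real script would invoke the environment's own stop/start commands (<tt>em</tt>, <tt>systemctl</tt>, init scripts, etc.):

```python
# Sketch of a between-runs reset: stop components in reverse dependency
# order, then start them in forward order. Component names are invented.
COMPONENTS = ["load-balancer", "app-server-1", "app-server-2", "database"]

def restart_all(components):
    """Return the ordered list of reset actions to perform."""
    actions = []
    for c in reversed(components):
        actions.append(f"stop {c}")   # placeholder for the real stop command
    for c in components:
        actions.append(f"start {c}")  # placeholder for the real start command
    return actions

plan = restart_all(COMPONENTS)
```

Stopping in reverse dependency order (front tier first, database last) avoids components erroring out against already-stopped backends.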


==Instrumentation==


As a one-time operation, the active elements of the environment must be instrumented to collect performance data. Various metric sources and data collection strategies can be used:
 
===WildFly <tt>access_log</tt>===
 
Configuration example (inside the attribute value, use <tt>&amp;quot;</tt> instead of <tt>"</tt>):
 
<pre>
<subsystem xmlns="urn:jboss:domain:undertow:1.1">
    ...
    <host name="default-host" alias="localhost">
        ....
        <location name="/WebAccess" handler="WebAccess" />
        <access-log pattern="&quot;%I&quot; %h %u [%t] &quot;%r&quot; &quot;%q&quot; %s %b %D %{i,Business-Scenario-Start-Marker} %{i,Business-Scenario-Stop-Marker} %{i,Business-Scenario-Request-Sequence-ID} %{i,Business-Scenario-Iteration-ID} %{c,JSESSIONID}"/>
        ....
    </host>
</subsystem>
</pre>
 
More info: [[Undertow WildFly Subsystem Configuration - access-log|WildFly <tt>access_log</tt>]]
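A log line produced by the pattern above could be decomposed with a regular expression along these lines. The sample line and the field names are invented for illustration; the unit of the <tt>%D</tt> field follows the server's own semantics:

```python
import re

# An invented sample line matching the access-log pattern:
# "%I" %h %u [%t] "%r" "%q" %s %b %D <start> <stop> <seq> <iteration> <session>
LINE = ('"task-1" 10.0.0.5 bob [21/Apr/2016:10:00:00 +0000] '
        '"GET /WebAccess/account HTTP/1.1" "?id=1" 200 512 1234 '
        'TypeA - 7 2 A1B2C3')

PATTERN = re.compile(
    r'"(?P<thread>[^"]*)" (?P<host>\S+) (?P<user>\S+) \[(?P<time>[^\]]*)\] '
    r'"(?P<request>[^"]*)" "(?P<query>[^"]*)" (?P<status>\d+) (?P<bytes>\S+) '
    r'(?P<duration>\d+) (?P<start_marker>\S+) (?P<stop_marker>\S+) '
    r'(?P<request_seq>\S+) (?P<iteration>\S+) (?P<session>\S+)')

def parse_line(line):
    """Split one access_log line into named fields, or return None."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line(LINE)
```

Requests that carry no marker headers show up with the server's placeholder value (here <tt>-</tt>) in those positions, which the downstream processing has to treat as "absent".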
 
===WildFly Request Start Time Recording===
 
By default, the request start time is not recorded, so the request duration cannot be calculated. To enable request start time recording, follow this procedure:
 
<blockquote style="background-color: #f9f9f9; border: solid thin lightgrey;">
:[[Undertow_WildFly_Subsystem_Configuration#record-request-start-time|Enable request start time recording]]
</blockquote>
 
===<tt>httpd</tt> logs===


* [[Httpd_Logging_Configuration|<tt>httpd</tt> logs]]


==Log Management==
 
In most cases, various logs are directly used as timed event sources. Even if that is not the case, they contain useful contextual data, so they must be collected and archived after each load run. Automation of this process is part of the test environment preparation.
 
=Load Test Execution=
 
This step is executed repeatedly, for each version or configuration that needs to be performance tested.


==Reset the Environment==


All components of the test environment should be restarted prior to a load test run, for the reasons explained in the "[[#Component_Life_Cycle|Component Life Cycle]]" section.
 
<pre>
pt reset
</pre>
 
The command stops the environment components, cleans logs and temporary data directories where necessary, and restarts the environment components, leaving the environment ready for testing.
 
==Apply the Load==
 
==Collect and Archive The Raw Data==
 
<pre>
pt collect
</pre>
 
The command collects all relevant configuration, logs, and test-related data and archives them in the data repository, using a unified format known to all data processing and report generation utilities.
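<tt>pt</tt> is this environment's own utility, so the following is only a generic sketch of what a collect step can look like; the paths, naming scheme, and archive layout are assumptions, not <tt>pt</tt>'s actual behavior:

```python
import tarfile
import time
from pathlib import Path

def collect(sources, repository):
    """Archive run artifacts (configs, logs) into a timestamped tarball
    inside the data repository, so downstream tools find a uniform layout."""
    repository = Path(repository)
    repository.mkdir(parents=True, exist_ok=True)
    archive = repository / f"run-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in map(Path, sources):
            if src.exists():
                # store each artifact under its base name
                tar.add(src, arcname=src.name)
    return archive
```

Keying the archive name on the run timestamp is one simple way to give every load test run a distinct, sortable entry in the repository.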


=Data Processing and Report Generation=
See the <tt>events</tt> User Manual:
<blockquote style="background-color: #f9f9f9; border: solid thin lightgrey;">
:[[Events_User_Manual#Business_Scenario_Data_Processing|"events" User Manual - Business Scenario Data Processing]]
</blockquote>

Latest revision as of 05:25, 17 May 2016