Web Application Performance
Internal
- Performance
- NeoLoad
- Performance Concepts (Load Testing)
Methodology
Set up the performance testing environment.
Start a diagram document, an "Environment" text document and a "Findings and Recommendations" document.
Platform
Map it (Performance Testing Environment.odg) - get a diagram that shows:
- Hosts (names, IP addresses/subnet masks, external ports).
- Represent all network interfaces and their connectivity to the various networks on the diagram; this helps in understanding how the hosts are interconnected.
- Represent processes:
  - Load Agent
  - Proxy
  - Application Server
  - Database
- Document the procedures to stop/start the processes - some load tests may need the processes to be completely shut down and restarted. Usually there's an "Environment" document associated with the environment where all these procedures are documented.
- Encode log locations as shell aliases (al, jbl, etc.).
- If the targets are Windows machines, it's a good idea to install Cygwin; it improves productivity and provides a lot of good tools.
- Annotate the environment diagram with the amount of RAM and the number of CPUs on each host.
Web Proxy
Application Server
- Identify the application server configuration file and its location on disk.
- Represent the database connection pools and the connections to their respective databases on the diagram. Their min and max pool sizes are also useful, so represent them as [10-100] on the diagram. For JBoss, look for the jboss:domain:datasources subsystem.
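A minimal sketch, assuming Python 3.8+ is available on the workstation, of pulling the pool sizes out of a JBoss standalone.xml so they can be copied onto the diagram as [min-max]. The file path is an assumption; the element names follow the jboss:domain:datasources subsystem mentioned above.

  # Sketch: list datasource pools from a JBoss standalone.xml (path is assumed).
  import xml.etree.ElementTree as ET

  CONFIG = "/opt/jboss/standalone/configuration/standalone.xml"  # assumed location

  root = ET.parse(CONFIG).getroot()
  for elem in root.iter():
      # Plain (non-XA) datasources under the jboss:domain:datasources subsystem.
      if elem.tag.endswith("}datasource"):
          name = elem.get("pool-name") or elem.get("jndi-name")
          # The "{*}" namespace wildcard requires Python 3.8+.
          min_size = elem.findtext(".//{*}min-pool-size", default="?")
          max_size = elem.findtext(".//{*}max-pool-size", default="?")
          print(f"{name}: [{min_size}-{max_size}]")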
Database
Distribute in the Right Place
- Put in place monitoring for CPU, physical RAM, open file descriptors and other system resources.
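A rough monitoring sketch, assuming Python with the psutil package is acceptable on the target hosts; in practice the samples would be fed into whatever collector the environment already uses.

  # Sample CPU, physical RAM and open file descriptors every few seconds.
  import time
  import psutil  # pip install psutil

  INTERVAL_S = 5  # assumed sampling interval

  while True:
      cpu_pct = psutil.cpu_percent(interval=None)
      mem = psutil.virtual_memory()
      open_fds = 0
      for proc in psutil.process_iter():
          try:
              open_fds += proc.num_fds()  # POSIX; use num_handles() on Windows
          except (psutil.AccessDenied, psutil.NoSuchProcess):
              pass
      print(f"{time.strftime('%H:%M:%S')} cpu={cpu_pct}% "
            f"ram_used={mem.used // 2**20}MiB fds={open_fds}")
      time.sleep(INTERVAL_S)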
TODO: https://home.feodorov.com:9443/wiki/Wiki.jsp?page=PerformanceTroubleshooting
TODO: https://home.feodorov.com:9443/wiki/Wiki.jsp?page=LoadTesting
Get the Right Load
Use a load generator (JMeter, LoadRunner, NeoLoad).
Start with simple scenarios (a minimal scripted example follows the list):
- Login, read, logout
- Login, update, logout
- Login, create, logout
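As a sketch of the first scenario, driven from Python with the requests library and timing each step; the URLs, form fields and credentials are hypothetical, and a real load generator (JMeter, LoadRunner, NeoLoad) would replay the same steps per virtual user.

  # Login, read, logout - one pass, with per-request timings.
  import time
  import requests  # pip install requests

  BASE = "https://app.example.com"  # hypothetical application under test

  def timed(session, method, path, **kwargs):
      start = time.perf_counter()
      response = session.request(method, BASE + path, **kwargs)
      elapsed_ms = (time.perf_counter() - start) * 1000
      print(f"{method} {path}: {response.status_code} in {elapsed_ms:.0f} ms")
      return response

  with requests.Session() as s:
      timed(s, "POST", "/login", data={"user": "probe", "password": "secret"})
      timed(s, "GET", "/orders/12345")  # the "read" step; path is hypothetical
      timed(s, "POST", "/logout")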
Identify typical business interaction scenarios.
For each scenario, define performance metrics for a specific concurrent load (a pass/fail check is sketched below):
- Average response time per request.
- Maximum acceptable response time; exceeding it would trigger test failure.
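A sketch of the pass/fail check these two metrics imply; the thresholds are placeholders to be agreed per scenario.

  # Fail the test if any sample exceeds the maximum acceptable response time,
  # or if the average misses its target.
  from statistics import mean

  MAX_ACCEPTABLE_MS = 2000  # assumed per-request ceiling
  TARGET_AVG_MS = 500       # assumed target for the average

  def evaluate(samples_ms):
      avg, worst = mean(samples_ms), max(samples_ms)
      passed = worst <= MAX_ACCEPTABLE_MS and avg <= TARGET_AVG_MS
      print(f"avg={avg:.0f} ms, worst={worst:.0f} ms -> {'PASS' if passed else 'FAIL'}")
      return passed

  evaluate([180, 220, 450, 1900])  # example samples in milliseconds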
Apply different concurrent loads: 1, 10, 100, 1000.
What is the nominal throughput?
Calculate the aggregated score automatically for each load.
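A sketch of stepping through the load levels with a thread pool and deriving throughput per level; run_scenario stands in for one scripted scenario pass, and the aggregation into a single score is left to whatever scoring the team agrees on.

  # Step through 1, 10, 100, 1000 concurrent users and report scenarios/second.
  import time
  from concurrent.futures import ThreadPoolExecutor

  def run_scenario():
      # Placeholder for one login/read/logout pass (see the earlier sketch).
      time.sleep(0.05)

  def run_load(concurrent_users, iterations_per_user=10):
      start = time.perf_counter()
      with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
          futures = [pool.submit(run_scenario)
                     for _ in range(concurrent_users * iterations_per_user)]
          for f in futures:
              f.result()
      elapsed = time.perf_counter() - start
      print(f"{concurrent_users} users: "
            f"{concurrent_users * iterations_per_user / elapsed:.1f} scenarios/s")

  for load in (1, 10, 100, 1000):
      run_load(load)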
If the test fails, the load framework should provide ways to investigate why it failed at the request level (the "layer cake"). Are there automated ways of building the layer cake?
We should consider measurements from the mass load and also from a completely separate individual probe (a real user or an automated one).
The load framework should provide ways to mark individual requests or request categories.
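One way to do that, sketched here, is to tag every timed request with a category label and slice the results per category afterwards; commercial tools (NeoLoad, LoadRunner) have their own transaction-naming mechanisms for the same purpose, so this is only an illustration.

  # Collect response times per request category and summarise per category.
  from collections import defaultdict

  results = defaultdict(list)  # category -> list of response times (ms)

  def record(category, elapsed_ms):
      results[category].append(elapsed_ms)

  # During the run, every timed request reports under its category:
  record("login", 140)
  record("read-order", 310)
  record("logout", 90)

  for category, samples in results.items():
      print(f"{category}: n={len(samples)}, avg={sum(samples) / len(samples):.0f} ms")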