Thursday, 3 April 2014

What is Performance Testing?



Definition: 
                   In software engineering, performance testing is the process of measuring a system's (application, network, database, or device) speed (responsiveness), stability, and scalability under a particular workload.

Core Performance Testing Activities:
 
                                      


 1. Identify the Test Environment
 2. Identify Performance Acceptance Criteria
 3. Plan and Design Tests
 4. Configure the Test Environment
 5. Implement the Test Design
 6. Execute the Test
 7. Analyze Results, Report, and Retest


1. Identify the Test Environment: 
                     Identify the physical test environment. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project.

2. Identify Performance Acceptance Criteria: 
                     
   Identify the response time, throughput, and resource utilization goals and constraints. In general:
·         Response time is a user concern.
·         Throughput is a business concern.
·         Resource utilization is a system concern.

   Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics. 
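   For example (illustrative numbers only, not from any particular project), acceptance criteria might read: 90% of login transactions should complete in under 3 seconds, the system should sustain 100 transactions per second at peak load, and application-server CPU utilization should stay below 75%.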

3. Plan and Design Tests: 

 Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.    

4. Configure the Test Environment: 

 Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary. 

5. Implement the Test Design: 

 Develop the performance tests in accordance with the test design. 

6. Execute the Test: 

Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment. 

7. Analyze Results, Report, and Retest:

 Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Re-prioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.

A few examples of why performance testing is conducted:

Validating that the application performs properly
Validating that the application conforms to the performance needs of the business
Finding, analyzing, and helping fix performance problems
Validating that the hardware for the application is adequate
Doing capacity planning for future demand on the application


There are many different performance testing tools, and every vendor claims theirs is the best. Here are my picks and what I think of them.

Commercial Performance Testing Tools:
·         HP Performance Center
·         HP LoadRunner
·         Neotys NeoLoad
Open Source Performance Testing Tools:
·         Apache JMeter

Performance Tuning

                       Let us look at a tuning concept that came up during performance testing. In my scenario, the application supported only 18 concurrent users. Beyond this number, every user failed to log in with a database-related error. The error I was getting was "Couldn't create Database Instance".

When I started looking into this issue, there were a few areas where we could look for the solution. They were:
     1. Connection pools
     2. Tomcat (if Tomcat is the web server)
     3. OS system file
     4. Hardware resources


I encountered this issue on a Solaris box. Below are the areas I looked into while searching for a solution:

1. I started by looking at the application server connection pools. We had a few parameters in the web.xml file of the middle tier where the min_connex and max_connex parameters for the database connections were defined; they were set to 2 and 5 respectively. The best place to start looking for the cause of any issue is the log that is being written. In the middle-tier log, 3 out of 4 connection pools were always idle, so there was no problem on the middle-tier side.

2. I tried increasing the Tomcat memory from 512 MB to 1024 MB in the catalina.sh file. This didn't resolve the issue.

3. I looked into the /etc/system file on the Solaris OS and saw parameters like maxusers and maxuprc (maximum user processes). Setting maxusers to a value controls max_nprocs and maxuprc. The formulas are:
max_nprocs = 10 + (16 x maxusers)
maxuprc = max_nprocs - reserved_procs (default is 5)
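For example (an illustrative value, not the setting on my box): if maxusers were set to 40, then max_nprocs = 10 + (16 x 40) = 650 and maxuprc = 650 - 5 = 645.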
Even this setting didn't resolve the issue.

4. Last but not least, I looked into the system resources and found that the root path ("/") did not have enough disk space left to allocate database resources beyond 18 users. Since Oracle was set up under the "/" path and the Oracle folder had not been assigned any additional space, Oracle was consuming the space allocated to "/". Due to insufficient disk space on "/", users could not make a connection to the Oracle DB, and the logins were failing.
Increasing the root disk space eliminated the user login issue.


LoadRunner Key Components


VuGen - The Virtual User Generator (VuGen) records Vuser scripts that emulate the steps of real users using the application.

Controller - The Controller is an administrative center for creating, maintaining, and executing scenarios. The Controller assigns Vusers and load generators to scenarios, starts and stops load tests, and performs other administrative tasks.

Analysis - LoadRunner Analysis uses the load test results to create graphs and reports that are used to correlate system information and identify bottlenecks and performance issues.

Load Generator - Load generators (also known as hosts) are used to run the Vusers that generate load on the application under test.

Agent Process - The agent process establishes communication between the load generator and the Controller.


What is correlation?

What is correlation? What is the difference between automatic correlation and manual correlation?
Correlation: It is the concept of capturing dynamic values that are generated on the server side.
                                                          ( OR )
The values that are generated by the server/application at run time and used by the application to process subsequent requests are correlated; we capture these values at run time and use them in the script.

Example - Session IDs, transaction IDs, etc. are always generated by the server at run time. If we knew them beforehand, we could have parameterized them in the script, but since they are generated by an algorithm or random number at run time, they have to be handled at run time.
LoadRunner provides a function to capture these values; whether you correlate manually or automatically, the same function is used, i.e. web_reg_save_param().

Correlation is of two types: 1) Auto Correlation and 2) Manual Correlation.

Auto Correlation
LoadRunner handles the process of capturing the dynamic value in each run of the script.
If auto correlation is enabled, LoadRunner identifies the values that are generated at run time and correlates them. In this method all the handling is done by LoadRunner itself, leaving almost no control to the user. It also correlates many other values that do not need to be correlated.
We have two ways of achieving automatic correlation:
a.       Enable correlation during recording.
b.      Scan for correlation.
a. Enable correlation during recording:
·         Built-in correlation
·         User-defined correlation
        LoadRunner maintains a set of predefined rules called the AutoCorrelation library. If a dynamic value matches one of these rules, LoadRunner automatically inserts web_reg_save_param("ParameterName","LB=<left boundary>","RB=<right boundary>",LAST); into the script pane.
               1. Select "Enable correlation during recording". We can enable or disable a specific rule by selecting or clearing the check box adjacent to the rule.
               2. Click on the new rule; we will pass the LB, RB, and parameter name.
               3. We can also specify where to look for the dynamic value, such as the header, body, etc.
           web_reg_save_param(); will be written in the script wherever the dynamic value occurs within the specified boundaries.
b. Scan for correlation: Once you record the script, you have to replay it and then use "Scan for Correlation".
Once you replay the script, a Replay Log is created.
When we scan for correlation, VuGen compares the Recording Log with the Replay Log and highlights the differences between them.
We click the correlate button on the right side of the log window to correlate the differences.
Automatic correlation cannot ensure that we have captured all the dynamic values (it may miss user-specific data in particular). This is the main reason we cannot depend on automatic correlation alone.

Manual Correlation
          To ensure the script is fully correlated, we need to go for manual correlation.
In manual correlation, we have to identify the dynamic data that is generated by the server and correlate it using the function web_reg_save_param("ParameterName","LB=<left boundary>","RB=<right boundary>",LAST);
We can identify the dynamic values by recording the script three times:
                   1. Recording with user1 (business flow)
                   2. Recording with user2 (same as above)
                   3. Recording with user3 (same steps as above but with different input values)
Then compare the scripts.
We can run the script after recording and check the point of failure to see whether any value is generated at run time. If we find one, we check the same step in Tree view, select the step where the value was generated, go to the Server Response tab, and search for that value. Then we select the left and right boundaries for that value, meaning some text from its immediate left side and some from its immediate right side.

Example 1 - Suppose the server response contains: part_the-value;sessionid=543218;newvalue;
Here, left boundary = part_the-value;sessionid= and right boundary = ;newvalue; so any value between these boundaries will be captured.

Example 2 -
web_reg_save_param("SessionID", "LB=part_the-value;sessionid=", "RB=;newvalue;", LAST);
Now replace the value in the script with the parameter {SessionID}.
This way we have more control over the correlation.
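
Putting the two examples together, here is a minimal VuGen (C) sketch of how the capture and the reuse fit into an Action. The URLs and step names are placeholders I have assumed for illustration; only the boundaries come from Example 1 above.

Action()
{
    // Register the capture BEFORE the request whose response
    // contains the dynamic value (boundaries taken from Example 1).
    web_reg_save_param("SessionID",
                       "LB=part_the-value;sessionid=",
                       "RB=;newvalue;",
                       LAST);

    // Placeholder login step -- this URL is assumed, not from the article.
    web_url("login",
            "URL=http://example.com/login",
            "Resource=0",
            "Mode=HTML",
            LAST);

    // Optional: print the captured value to the Replay Log to verify the boundaries.
    lr_output_message("Captured session id: %s", lr_eval_string("{SessionID}"));

    // Reuse the captured value in a later request in place of the
    // hard-coded value that VuGen recorded.
    web_url("next_step",
            "URL=http://example.com/next?sessionid={SessionID}",
            "Resource=0",
            "Mode=HTML",
            LAST);

    return 0;
}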