
ERP5 Practical Scalability Testing

This document explains ERP5 Practical Scalability Testing
  • Last Update: 2016-11-10
  • Version: 009
  • Language: en

ERP5 scalability testing

Agenda

  • How to write scripts to perform scalability testing for ERP5

This visual guide has been created for learning and teaching scalability testing for ERP5. It is mostly useful to ERP5 developers who need to understand and perform scalability testing, to users who want to understand ERP5, and to marketing people who need to explain scalability testing for ERP5.

Readers should first have a quick look at the illustration on the upper part of each page, then read the short text below the illustration carefully and associate each word written in bold with the corresponding item(s) in the illustration. For example, the term Creative Commons License is written in bold because it defines the license of the above illustration.

Copyright

You are free to copy, distribute, display, and perform the work under the following conditions: you must attribute the work in the manner specified by the author or licensor; you may not use this work for any commercial purposes including training, consulting, advertising, self-advertising, publishing, etc.; you may not alter, transform, or build upon this work.

For any reuse or distribution, you must make clear to others the license terms of this work. Any of these conditions can be waived if you get permission from the copyright holder through a commercial license or an educational license. For more information, contact

Overview of scalability testing framework

The main script to run benchmarks is runBenchmark from the erp5.utils.benchmark package. erp5.utils.benchmark relies upon erp5.utils.test_browser, a stateful programmable web browser, in order to simulate user interactions.

runBenchmark allows you to specify the number of times the suite is repeated, and either a constant number of users or a range of users (in which case the suite is repeated the given number of times for each number of users in the range). It is also possible to give a maximum average time which stops the script when it is reached, therefore allowing you to check how many users can be served at the same time within an acceptable timeframe.

For each user, a process is spawned which writes two files in the report directory: a result file containing raw results, and a log file reporting error tracebacks, if any, along with the statistics (minimum, maximum, average and standard deviation). These results can later be used to generate a report, by collecting all the results scattered across the per-user result files, thanks to the generateReport script.

Note that it is very important to repeat the benchmark suite as many times as possible in order to get valid and relevant results, as explained in the first presentation about scalability testing.

In the next slides, we will explain how to write a suite benchmarking the addition of persons and bugs, based on what has already been done in the previous presentation about performance testing.

 

Benchmark suite

As an example showing how to write scalability tests, we will benchmark adding persons and bugs.

So, we will create a directory containing three files, namely addPerson.py (the same script as the example of the previous presentation about performance testing), addBug.py (similar to addPerson.py but working on the bug module) and userInfo.py (specifying the usernames and passwords to be used to run the benchmark suite). These files will be described in further detail in the next slides.
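For instance, calling the directory scalability-suite (the name itself is arbitrary and does not matter), the layout would be:

scalability-suite/
    addPerson.py
    addBug.py
    userInfo.py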

Adjust performance testing script

In the presentation about performance testing, a single performance testing script was written. We will now modify this script a bit so that it can be run as part of a benchmarking suite.

Before doing anything, you should have installed the erp5.utils.benchmark package by running the following command line:

easy_install -f 'http://www.nexedi.org/static/packages/source/' erp5.utils.benchmark

First of all, we get rid of everything but the main function, as the number of iterations to execute and the handling of results are controlled by arguments given to the runBenchmark script.

Secondly, we rename the function to match the file name, because the name of the script is given to runBenchmark, which expects to find a function with the same name as the script (without the extension, of course).

Thirdly, the result and browser parameters are handled by erp5.utils.benchmark: the former allows storing a result along with its label, and the latter is a Browser instance (from the erp5.utils.test_browser.browser module).

Benchmarking suites (1)

This is the complete script of the previous presentation after being modified to be run by runBenchmark.
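As an illustration, a minimal sketch of what such an addPerson.py could look like is given below. The link text, form control names, field names and values are assumptions and have to be adapted to the actual ERP5 instance; the way timings are recorded here, with the standard time module and a result(label, seconds) call, is likewise an assumption based on the description above.

import time

def addPerson(result, browser):
    # runBenchmark logs the user in and opens the ERP5 homepage before
    # calling this function, so we can go to the person module directly.
    start = time.time()
    browser.getLink('Persons').click()                                 # link text assumed
    result('Display person module', time.time() - start)

    start = time.time()
    browser.getControl(name='Base_createNewDocument:method').click()   # control name assumed
    result('Create new person', time.time() - start)

    # Fill in the new person form; the field names are assumptions too.
    browser.getControl(name='field_my_first_name').value = 'John'
    browser.getControl(name='field_my_last_name').value = 'Doe'

    start = time.time()
    browser.getControl(name='Base_edit:method').click()                # control name assumed
    result('Save person', time.time() - start)

Each call to result() defines one operation, which will later appear as a separate line in the generated report.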

Benchmarking suites (2)

This script is very similar to the previous one and only benchmarks adding bugs.

Note that there is no need to take care of going back to the ERP5 homepage, as this is performed at the beginning of each benchmark script; therefore any script can later be re-used as part of another benchmark suite.
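Along the same lines, a hypothetical addBug.py could look like the sketch below; again, the link text, control names and field values are assumptions to be adapted to the instance being tested.

import time

def addBug(result, browser):
    # Same structure as addPerson.py, but working on the bug module;
    # the browser again starts from the ERP5 homepage.
    start = time.time()
    browser.getLink('Bugs').click()                                    # link text assumed
    result('Display bug module', time.time() - start)

    start = time.time()
    browser.getControl(name='Base_createNewDocument:method').click()   # control name assumed
    result('Create new bug', time.time() - start)

    browser.getControl(name='field_my_title').value = 'Benchmark bug'  # field name assumed

    start = time.time()
    browser.getControl(name='Base_edit:method').click()                # control name assumed
    result('Save bug', time.time() - start)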

Specifying users information

This file specifies the users available for running the benchmark suite.

The maximum number of users given to the runBenchmark script must be less than or equal to the number of users specified in this file.

This is important to avoid conflicts and to be more realistic.

It is also possible to specify another users file thanks to the --users-file command line argument of runBenchmark.

In any case, this file must always contain user_tuple, which is automatically imported by the script.
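As an illustration, a userInfo.py declaring four users could look as follows. The logins and passwords are placeholders for accounts that actually exist on the tested ERP5 instance, and the exact shape of each entry, here a (login, password) pair, is an assumption.

# userInfo.py: users available to the benchmark suite.
# The variable must be named user_tuple, as it is imported automatically;
# each entry is assumed to be a (login, password) pair of an existing ERP5 user.
user_tuple = (('test_user_1', 'test_password_1'),
              ('test_user_2', 'test_password_2'),
              ('test_user_3', 'test_password_3'),
              ('test_user_4', 'test_password_4'))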

Simulating users

The first command line executes the benchmark suite, made up of addPerson and addBug, 10 times with 2 concurrent users.

The results and log files will be written into the results-constant directory, which must have been created beforehand.

The second command line executes the same benchmark suite, but with a range of users: the benchmark suite will be executed 10 times with 1 user, then 10 times with 2 concurrent users, and so on up to 4 concurrent users. It is also possible to specify the stepping thanks to the --users-range-increment command line option.
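As an indication only, such command lines could look roughly like the sketch below. Apart from --users-file and --users-range-increment, which are mentioned in this presentation, the option names, the instance URL and the positional arguments are assumptions reconstructed from the description above and should be checked against the help of runBenchmark.

runBenchmark --users=2 --repeat=10 --report-directory=results-constant \
    http://www.example.com/erp5 addPerson addBug

runBenchmark --users=1,4 --repeat=10 --report-directory=results-range \
    http://www.example.com/erp5 addPerson addBug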

Generating a report

After running the benchmark suites, we can generate a nice report from the results scattered across the files in the results directory, which computes the minimum, average (and the standard deviation) and maximum for each operation (defined by result()) of the benchmark scripts.
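For instance, assuming the results of the constant-users run above were written to results-constant, the report could be generated along the lines of the command below; the exact arguments are an assumption and should be checked against the help of generateReport.

generateReport results-constant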

Further readings

  • Performance testing presentation
  • Help of runBenchmark and generateReport scripts
  • erp5.utils.test_browser API documentation: see README.txt in the source code

You can download tarballs of the erp5.utils.test_browser and erp5.utils.benchmark packages on the Nexedi website. You can then generate the documentation as explained in README.txt. There is also an example in the examples/ directory. z3c.testbrowser and especially zope.testbrowser are extensively documented with examples on their respective project pages, and, as erp5.utils.test_browser follows exactly the same API, it is worth having a look.