Test the Performance and Scalability of Your Web Applications With Tsung

What is Tsung?

The purpose of Tsung is to simulate users in order to test the scalability and performance of IP based client/server applications. You can use it to do load and stress testing of your servers.

(Definition coming from the Tsung website)

In this post, I will introduce Tsung so you can use it to stress test your web applications.

Why Tsung?

Because it’s an Open Source project and, to tell the truth, mainly because this application is coded in Erlang, which gives Tsung a little advantage over other tools: it has the potential to simulate A LOT of concurrent requests … without crashing. That’s what we expect from a stress-testing app, isn’t it?

Let’s start the installation

We will need the Perl Template Toolkit and the GNU plotting utility (gnuplot) in order to create nice HTML and graphical reports from the result data set.

So, back to your command prompt:

~$ sudo apt-get install gnuplot-nox libtemplate-perl libhtml-template-perl libhtml-template-expr-perl

And now, let’s download, configure and install Tsung:

~$ wget http://tsung.erlang-projects.org/dist/tsung-1.3.0.tar.gz
~$ tar -zxvf tsung-1.3.0.tar.gz
~$ cd tsung-1.3.0
~/tsung-1.3.0$ ./configure && make && sudo make install
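If everything went fine, the tsung binary should now be on your path (assuming the default /usr/local prefix) and you can check its version:

~$ tsung -v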

To properly remove Tsung from your system:

~/tsung-1.3.0$ sudo make uninstall

First test setup

Tsung is configured through the tsung.xml file located in the ~/.tsung directory.

Update: you will also find some sample configuration files in the extracted Tsung tarball, under the examples directory: ~/tsung-1.3.0/examples/ (thanks Zak).
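For reference, here is a minimal sketch of the overall structure of a tsung.xml file (the DTD path below assumes the default /usr/local install prefix, and the host names are placeholders):

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/local/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <!-- machine(s) generating the load -->
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <!-- server under test: see 1) below -->
  <servers> ... </servers>
  <!-- arrival phases: see 2) below -->
  <load> ... </load>
  <!-- recorded sessions: see 3) below -->
  <sessions> ... </sessions>
</tsung>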

1) First, we set up the server IP/host address

<!-- Server side setup -->
<servers>
  <server host="myserver" port="80" type="tcp"></server>
</servers>
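If your application is served over HTTPS, the same element accepts an ssl type (a sketch, with a placeholder host):

<servers>
  <server host="myserver" port="443" type="ssl"></server>
</servers>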

2) Next, we define the load that we want our server to be exposed to.

Tsung simulates user arrivals in the application. The example below describes 3 different phases (figures A and B).

The first one will last 15 minutes (the unit attribute can be set to ‘hour’, ‘minute’ or ‘second’), during which 1 new user will start a session every second.

This phase will be followed by a more aggressive 30-second-long one, simulating the arrival of 8 (1/0.125) new users each second.

We will end this test with a third phase simulating the arrival of 1 user/second for 25 minutes.

<load>
  <!-- several arrival phases can be set: for each phase, you can set the mean inter-arrival time between new clients and the phase duration -->
  <arrivalphase phase="1" duration="15" unit="minute">
    <users interarrival="1" unit="second"></users>
  </arrivalphase>
  <arrivalphase phase="2" duration="30" unit="second">
    <users interarrival="0.125" unit="second"></users>
  </arrivalphase>
  <arrivalphase phase="3" duration="25" unit="minute">
    <users interarrival="1" unit="second"></users>
  </arrivalphase>
</load>
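Note that, instead of an inter-arrival time, the load can also be expressed directly as an arrival rate; the snippet below should be equivalent to phase 2 above:

<arrivalphase phase="2" duration="30" unit="second">
  <users arrivalrate="8" unit="second"></users>
</arrivalphase>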

3) Finally, we create one or more HTTP sessions. The easiest way to populate ours is to record them.

To achieve that, we must first tell our browser to pass through a proxy listening on port 8090 (Firefox: Preferences – Advanced – Network – Settings).

When done, we start the Tsung recorder, record our HTTP session, and stop the recorder:

~/.tsung$ tsung recorder
~/.tsung$ tsung stop_recorder

We will end up with an XML file representing the newly recorded session, located in ~/.tsung/tsung_recorderyyyymmdd-HH:MM.xml.

Just copy-paste this session into the main Tsung XML file, under the <sessions> element, and set its usage probability:

<sessions>
  <session name='login_settings' probability='25' type='ts_http'>
    <request> <http ... /> </request>
    <thinktime random="true" value="4"/>
    <transaction name="Login">
      <request> ... </request>
        ...
    </transaction>
    <request> ... </request>
      ...
  </session>
  <session name='login_add_people' probability='75' type='ts_http'>
    ...
  </session>
  ...
</sessions>

For each session, we can adjust the think time (in seconds) to suit our needs, as well as group some requests into a specific transaction in order to get dedicated stats about it (duration, etc.).
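To give an idea of what the recorder produces, a plain GET and a form POST pasted into a session look roughly like this (the URL and form fields are purely illustrative):

<request>
  <http url="/people" method="GET" version="1.1"></http>
</request>
<thinktime random="true" value="3"/>
<request>
  <http url="/login" method="POST" version="1.1"
        contents="user=joe&amp;password=secret"
        content_type="application/x-www-form-urlencoded"></http>
</request>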

We are now ready to go:

~/.tsung$ tsung start

… and wait … until Tsung comes back to us specifying the location of the result log.
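While the test is running, you can also check its progress from another terminal (provided your Tsung version supports the status command):

~/.tsung$ tsung status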

For your first tests, it’s a good idea to have an interactive Erlang console open on the server, displaying some io messages (e.g. number of processes, service name, …), in order to keep track of what is really going on.

Generate the HTML and graph report

Tsung logs data in the ~/.tsung/log/yyyymmdd-HH:MM folder. In order to use this info, we launch the tsung_stats.pl script, which will produce a nice HTML summary report.

~$ cd .tsung/log/yyyymmdd-HH:MM
~/.tsung/log/yyyymmdd-HH:MM$ /usr/local/lib/tsung/bin/tsung_stats.pl

The interpretation of the results shown in the report.html page is more or less straightforward.

Just two remarks:

1) The most important indicators, IMHO, are the requests/sec rate and the total request count. We have reached the optimum load the system can handle when the number of requests processed over the same time span begins to decline.

So, basically, it’s good to start with tests using slow arrival phases and to incrementally increase the number of users in the system.

Along with the total number of active sessions, the req/sec rate goes up, which is perfectly fine: there are just more requests to process. That holds until the server starts to suffer and begins to slow down the delivery, increasing the time needed to process each request.

2) Another cool indicator is the number of simultaneous users actually running their sessions (not to be confused with concurrent requests).

Below is an example (figure A) where you can see a typical overload situation. Around the 1000th second, the concurrent users (8 new users/sec, each starting a 100-request session) simply exceed the limit of what the system could handle (I don’t know the exact point of failure yet).

At that point, I just lost contact with Mochiweb and was receiving a lot of Mnesia transaction messages.

I will update this post after having reproduced the scenario … and this time I will log the errors to disk :|

Figure B shows the direct consequence of all this: a bunch of HTTP 500 (internal server error) responses began to rise from that moment:

[Figures A and B: simultaneous users and HTTP response codes during the overload test]

(old AMD dual-core with 2 GB of RAM – 140 requests/session with a mean think time of 2 sec)

My dev load test

Here is the result of a load test I ran against our web app BeeBole (Erlang back-end) hosted on my dev machine ;)

I have kept it simple in order to have a relatively meaningful basis for comparison.

The setup

  • AMD 4000+ desktop with 2 GB of RAM
  • Ubuntu Intrepid Ibex (GNOME, htop and an Erlang console running)
  • Nginx + Mochiweb + Mnesia (db populated with +/- 70K records)
  • 1 Erlang node – no load balancing
  • Tsung was launched from my laptop, connected to the same (wired) network
  • +/- 100 dynamic requests/session (db CRUD, pattern-based search, …) with a mean think time of +/- 3 sec between requests
  • Total duration: 1 hour, divided into 5 arrival phases (get the xml code)

The result

[Figure: Tsung report summary for the BeeBole load test]

3660 users ran their sessions and a total of 506,595 requests were processed successfully, with a mean processing time of 15.52 ms. Not bad!

Conclusion

Tsung is a rich and flexible stress-testing tool with which you can build anything from very simple to quite complex scenarios, as well as simulate a ton of concurrent users.

You can do a lot more than what is described in this introductory article: play with load balancing, simulate several different IP addresses and client user agents (Mozilla, IE, Safari, …), or even monitor your server.

The results I have obtained so far, testing our Erlang back-end with different load configurations, just confirm what I already knew about the Nginx + Mochiweb + Mnesia stack: it is a fantastic, first-class service delivery machine.

Just try out several arrival phases, compare the results, and you will certainly have fun stressing your web application.
