As I announced in my previous post, Tellago Studios has released the SO-Aware Test Workbench tool. With this tool, load testing WCF services becomes easy: you can use the SO-Aware service repository's service and testing configuration and execute a load test with one of four load test strategies in just a few clicks.
The “Simple” (or default) strategy simulates a certain number of clients, each of which waits for a specified delay between calls to the service. This type of load test is very common, and it is good at providing average response times and average error counts. The chart for this test usually looks like a horizontal line if the service is stable throughout the entire test.
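To make the “Simple” strategy concrete, here is a minimal sketch of what a constant-concurrency load loop does. This is an illustration, not the tool's implementation; the function name, parameters, and the `call_service` callback are all hypothetical.

```python
import threading
import time

def simple_load_test(call_service, clients=10, delay_s=0.5, duration_s=5.0):
    """Run `clients` worker threads, each calling the service and then
    sleeping `delay_s` between calls, until `duration_s` elapses.
    Returns the list of observed call latencies in seconds."""
    latencies = []
    lock = threading.Lock()
    deadline = time.monotonic() + duration_s

    def worker():
        while time.monotonic() < deadline:
            start = time.monotonic()
            call_service()                 # one request to the service under test
            elapsed = time.monotonic() - start
            with lock:                     # latencies list is shared across workers
                latencies.append(elapsed)
            time.sleep(delay_s)            # fixed think-time between calls

    threads = [threading.Thread(target=worker) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies
```

Because every client keeps the same pace for the whole run, the averages computed from `latencies` are exactly the kind of flat, horizontal-line result described above.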
One of the most interesting uses of this type of test is to establish baseline information about your service's performance. At the beginning of a sprint or iteration, you can run this test and record the results by exporting the data. After you finish your new features or refactoring, you execute the test with the same parameters and compare the results with your baseline. You can then decide whether your changes affected the service's performance, and you can do this very early in your development cycle.
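The baseline comparison described above can be boiled down to a simple regression check on the exported averages. This helper is a sketch under assumed names; the 10% tolerance is an illustrative threshold, not something the tool prescribes.

```python
def compare_to_baseline(baseline_ms, current_ms, tolerance=0.10):
    """Return the relative change in average response time and whether it
    exceeds the allowed regression tolerance (10% by default)."""
    change = (current_ms - baseline_ms) / baseline_ms
    regressed = change > tolerance
    return change, regressed
```

For example, if the baseline average was 100 ms and the new run averages 120 ms, the change is +20% and the check flags a regression before the code ever reaches production.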
The linear strategy simulates a linear increase (or decrease) in the number of clients hitting the service. The chart for this type of test is usually a line going from the initial number of clients to the final number, and depending on the service's performance, the average time may start to drop (that is, increase in value) or improve (decrease in value) once a certain level of concurrency is reached.
Notice how the thread count (yellow) and average time (blue) increase linearly, while the test count (green) decreases over time. When the concurrency level reached 30 (30 concurrent clients), the performance dropped drastically and then continued to degrade at the previous pace. It turns out the WCF throttling configuration is set to a maximum of 30 concurrent calls, so when the service reaches that limit it starts to queue requests (which also increases the average time). By contrast, the previous example kept the average time and test count fairly constant; the different lines are hard to distinguish in that picture precisely because there was almost no variation (the dots overlap).
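The ramp itself is simple to model: the client count at any point in time is a linear interpolation between the initial and final counts. This sketch uses hypothetical names and parameters, but it makes it easy to see when a ramp will cross a throttling limit like the 30 concurrent calls above.

```python
def linear_clients(t, duration, start_clients, end_clients):
    """Number of simulated clients at elapsed time t, ramping linearly
    from start_clients to end_clients over the test duration."""
    frac = min(max(t / duration, 0.0), 1.0)   # clamp to the test window
    return round(start_clients + frac * (end_clients - start_clients))
```

With a ramp from 0 to 40 clients over 60 seconds, the count passes 30 at the 45-second mark, which is roughly where you would expect the average-time line to bend if the service throttles at 30 concurrent calls.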
The burst strategy simulates high peaks of traffic followed by a “quiet” period of time. The aim of this test is to see whether requests start to fail or time out, and whether the quiet period is enough for the service to recover. The chart for this test, as you might have guessed by now, shows peaks separated by a certain amount of time (seconds). This is very easy to see in the “Passed vs Failed” chart that sits below the “Results” chart we have been looking at:
The test is configured to stress the service with a concurrency level of 20 for 2 seconds (Burst 2000ms), and then stand still for 5 seconds (delay 5000ms). Notice in the second highlighted peak how the service receives requests at second 6, finishes the last request at second 10, and then requests start appearing again at second 13. So if we take the 2 seconds of burst plus the 5 seconds of delay, those 7 seconds are exactly what is illustrated from seconds 6 to 13. In this case, the service handled the load pretty well: as you can see there were no failed tests, and it finished processing the load before the next burst started.
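The burst schedule above is just a repeating cycle: traffic for `burst_ms`, silence for `delay_ms`. This small sketch (hypothetical names, not the tool's API) shows the arithmetic behind the 7-second cycle from seconds 6 to 13.

```python
def in_burst(t_ms, burst_ms=2000, delay_ms=5000):
    """True if elapsed time t_ms falls inside a burst window, for a
    repeating cycle of burst_ms of traffic followed by delay_ms of quiet."""
    cycle_ms = burst_ms + delay_ms          # 2000 + 5000 = 7000 ms per cycle
    return (t_ms % cycle_ms) < burst_ms
```

With the defaults, bursts occupy milliseconds 0–1999 of every 7-second cycle, so a burst that starts at second 6 is followed by the next one at second 13, exactly as observed in the chart.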
I’m not a big fan of this last strategy: it may be the most sophisticated one, but it’s also the hardest to read. In this test, the number of clients goes from an initial value to a maximum, then to a minimum, and finally back to the initial value. This produces a saw-tooth pattern in the chart.
It’s very easy to understand this test if you follow the yellow line, which is the thread count. In this case it starts at 40 and has a variation of 0.90 (90%), so it climbs to 76, then drops to 4, and ends back at 40. As for the data it yields, it can be seen as a combination of the burst and linear strategies: you might be interested in recording the point at which the service's performance starts to drop (as with the linear strategy), but you are also interested in whether your service can recover after a peak of load (as with the burst strategy).
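The saw-tooth bounds follow directly from the initial count and the variation fraction: peak = initial × (1 + variation), trough = initial × (1 − variation). A tiny sketch of that arithmetic, with illustrative names:

```python
def sawtooth_bounds(initial_clients, variation):
    """Peak and trough client counts for a saw-tooth run that starts at
    initial_clients and varies by the given fraction in both directions."""
    peak = round(initial_clients * (1 + variation))
    trough = round(initial_clients * (1 - variation))
    return peak, trough
```

For the run described above, 40 clients with a 0.90 variation gives a peak of 76 and a trough of 4, matching the yellow thread-count line.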
As you can see, load testing with SO-Aware Test Workbench is extremely easy, and yet the information you can gather from its results is very valuable. And because the service configuration lives in SO-Aware, you can change your service configuration and see how that impacts your service's performance (extremely useful for tweaking throttling and even binding configuration).