Stop Cheating on Your Tests!

The following is a blog post about load testing and performance testing, posted by Dave Murphy.

"I suppose we could have used a less inflammatory title for our recent webinar. It makes it sound like testers have been purposely doing something wrong. Perhaps we could have titled the webinar “Now you can execute more accurate and informative tests!” But the folks in marketing were right, and the intriguing title attracted our largest group of attendees ever. For those of you who didn’t attend, or if you did and would like to review the messages, you can watch the webinar here. This was the first in SOASTA’s latest webinar series, “Cloud Testing – Rewriting the Rules of Performance Testing”. Future webinars include “Run More Tests and Find More Issues” on October 27th and “Test On Your Schedule across the Lifecycle” on November 15th.

In this webinar, Scott Barber, President and CTO of PerfTestPlus, joins SOASTA’s VP of Performance Engineering, Rob Holcomb, to discuss what performance engineers have done in the past to measure performance and to find and fix issues, and why some of those techniques no longer reflect best practices. The focus is on web and mobile testing, and on why the higher-scale, more distributed, and often more complex nature of that traffic is not well served by traditional testing tools and techniques.

After an introduction by SOASTA’s Brad Johnson, Scott, in his inimitable style, speaks to great effect about the four most common ‘cheats’ that performance testers have leveraged to overcome the constraints of inflexible test hardware, poor tool scalability, expensive pricing models, and the lack of real-time information while testing. Scott begins his presentation with the practice of modifying think times, typically done to work around the licensing and/or hardware limitations imposed by the high cost of traditional load testing. His primary assertion: the only way to simulate production…is to simulate production. Interestingly, a question came up during the webinar: since we’re testing computers, not humans, why is accurately simulating user activity so important? In response, it was noted that variance in how users behave absolutely has an impact on what happens to the infrastructure.
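To make the think-time cheat concrete, here is a minimal Python sketch (not from the webinar; the endpoint, timing values, and distribution are illustrative assumptions) contrasting a realistic, randomized pause between requests with the zeroed-out delay testers sometimes use to squeeze more load out of fewer licensed virtual users:

```python
import random
import time
import urllib.request

# Hypothetical target; stands in for whatever page the virtual user requests.
TARGET_URL = "http://localhost:8080/"

def think_time(mean=8.0, stddev=3.0, minimum=1.0):
    """Sample a realistic pause between user actions.

    Real users don't click at a fixed cadence, so we draw the pause from
    a distribution rather than using a constant (or zero) delay.
    """
    return max(minimum, random.gauss(mean, stddev))

def virtual_user(iterations=10):
    for _ in range(iterations):
        start = time.monotonic()
        with urllib.request.urlopen(TARGET_URL) as resp:
            resp.read()
        print(f"response time: {time.monotonic() - start:.3f}s")
        # The 'cheat' is replacing this pause with time.sleep(0) so each
        # virtual user fires requests back to back. That changes concurrency,
        # session counts, and cache behavior on the server, so the load no
        # longer resembles production.
        time.sleep(think_time())

if __name__ == "__main__":
    virtual_user()
```

With realistic think times, ten virtual users generate roughly the request rate and session overlap of ten real users; with think times stripped out, the same ten scripts behave like a far larger, and differently shaped, crowd.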

The second point discussed is the common practice of extrapolating results from a staging environment to predict what will happen in production. Architectures can be complicated, and the differences between staging and production, along with the additional complexities of ‘the real world’, make extrapolation problematic at best. The best way to validate that your production environment will handle the expected load is to test in production as part of your overall test strategy. (For more on testing in production, check out this SOASTA webinar.) Modeling user flows incorrectly is the third point Scott addresses, reinforcing the notion that we’re not functionally testing the application; we need to make sure we’re putting a realistic load on the back end.
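Realistic user-flow modeling can be sketched just as simply. In the hypothetical Python example below (the flow names and weights are made up; in practice they would come from production analytics), each virtual user's journey is chosen in proportion to real traffic instead of sending everyone down the same path:

```python
import random

# Hypothetical flow mix; real weights would come from production analytics
# (e.g., what fraction of sessions browse versus actually buy).
USER_FLOWS = {
    "browse_catalog":  0.60,  # most visitors only browse
    "search_and_view": 0.25,
    "add_to_cart":     0.10,
    "checkout":        0.05,  # rare, but the most expensive on the back end
}

def pick_flow():
    """Choose the next virtual user's journey in proportion to real traffic."""
    flows, weights = zip(*USER_FLOWS.items())
    return random.choices(flows, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Sanity-check that the simulated mix converges on the target weights.
    sample = [pick_flow() for _ in range(10_000)]
    for flow in USER_FLOWS:
        print(f"{flow}: {sample.count(flow) / len(sample):.1%}")
```

A script that pushes every virtual user through checkout would hammer payment and inventory services at rates production never sees while understating the load on search and catalog pages, which is exactly the mis-modeling Scott warns about.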

Finally, Scott presents a very interesting problem to illustrate the challenges associated with measuring performance, and how it can be as much an art as a science. Rob follows Scott’s presentation and, using SOASTA’s CloudTest, illustrates how we can use modern tools to, well, stop cheating. We hope you enjoy(ed) the webinar."
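One classic illustration of why performance measurement is as much art as science (not necessarily the problem Scott presents, which you'll need to watch the webinar for) is that a slow tail of responses can hide behind a healthy-looking average. A short Python sketch with made-up latency data:

```python
import statistics

# Hypothetical response times in seconds: mostly fast, with a slow tail,
# the typical shape of web latency data.
samples = [0.2] * 90 + [0.3] * 5 + [3.0] * 5

mean = statistics.mean(samples)
cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
p50, p95 = cuts[49], cuts[94]

# The mean says the site is "fast on average", yet 1 user in 20 waits
# about three seconds; which number you report changes the story entirely.
print(f"mean: {mean:.2f}s  p50: {p50:.2f}s  p95: {p95:.2f}s")
```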

A good article by Dave Murphy on performance testing and load testing.

http://www.soasta.com/2011/10/06/stop-cheating-on-your-tests/