What you as a tester know about the value of testing might be just the tip of an iceberg. If you know testing, you can probably already articulate the pitfalls of ignoring the performance dimension. You’ll know that the purpose of the exercise is to mitigate the risk of the system not meeting the NFRs defined by the business.

I’m not going to preach to the converted, but if you’re not sure of the purpose, a cursory look at the media will turn up some high-profile examples of NFRs not being met.

At the end of the day it’s just testing, but are there any hidden depths to consider?

We’ve just completed an engagement with a client with whom we have a long-standing relationship. Our performance test culture has evolved along with the capabilities of their internal test team and the wider technical infrastructure and development community, from what was initially a ‘tick box’ exercise into something more akin to a BAU activity.

In a real sense we’re now treating performance like regression testing: developing the test harness & infrastructure in a way that future-proofs their solution and keeps them ahead of the game for the next innovation coming down the line.

Performance testing is now part of their IT culture. It drives the design & build of solutions and introduces new ways of working (e.g. scripting SOAP and web services, monitoring frameworks, build configuration for multiple instances, tuning & optimisation) before any of the now-routine load testing is carried out to gain user acceptance and check the KPIs that validate the business can grow.
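To make the “routine” part concrete, here’s a minimal sketch of the kind of scripted load check I mean, written in Python with the requests library. The endpoint, SOAP envelope, concurrency figures and KPI threshold are hypothetical placeholders, not the client’s actual harness or numbers.

```python
# Minimal illustrative load check: fire concurrent SOAP requests at a service
# and compare the 95th percentile response time against an agreed KPI.
# Endpoint, envelope and thresholds below are hypothetical placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://example.com/service"   # hypothetical SOAP endpoint
KPI_P95_SECONDS = 2.0                      # example KPI: p95 response time under 2s
CONCURRENT_USERS = 25
REQUESTS_PER_USER = 20

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetStatus xmlns="http://example.com/ns"/></soap:Body>
</soap:Envelope>"""


def one_user(_):
    """Simulate one user sending a fixed number of SOAP requests; return timings."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        response = requests.post(
            ENDPOINT,
            data=SOAP_ENVELOPE,
            headers={"Content-Type": "text/xml; charset=utf-8"},
            timeout=30,
        )
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    # Run the simulated users concurrently and pool all response times.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

    p95 = statistics.quantiles(all_timings, n=20)[18]  # 95th percentile
    print(f"requests: {len(all_timings)}, p95: {p95:.3f}s (KPI: {KPI_P95_SECONDS}s)")
    print("KPI met" if p95 <= KPI_P95_SECONDS else "KPI breached")
```

In practice a check like this lives in a proper load-testing tool (JMeter, LoadRunner, Gatling or similar) and feeds into a monitoring framework, but the principle is the same: drive realistic concurrency, measure, and compare against the agreed KPI.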

So, ‘just testing’ that the solution performs the way it’s supposed to, to get that box ticked? Yes… maybe that’s still a requirement, but that particular activity is just the tip of a very large iceberg.