By Andy Buchanan, European Director, Web Business Unit, Empirix
By now, most people are familiar with the high-profile web disasters where a marketing promotion drives people to a website … and the website crashes.
Earlier this year, retailer The Gap offered a cash voucher as an apology to people who couldn’t get on the company’s website to place orders.
And recently, in July, four ‘Da Vinci’ websites crashed after rumours of photographic proof to support the ‘Da Vinci Code’ theories were posted online.
In an age where people are increasingly buying online – and many companies are encouraging them to do so – website performance needs to be treated with the same care and attention as a high-street storefront.
According to research group Gartner, bugs discovered after release of new website applications can not only spell doom for a project, but also cost 80 to 1,000 times more to fix than if they were found during pre-deployment testing.
Most major web projects involve some form of unforeseen delay that causes a knock-on delay in roll-out. Unfortunately, the same projects tend to have deadlines that are immovable.
The end result is often that quality assurance testing is cut to a bare minimum, risking lost revenue and poor customer satisfaction. The best testing should be quick and simple, but it also needs to be thorough.
Adopting a ‘wait-and-see’ approach is dangerous, and risks turning something positive (a massive spike in people wanting to do something on your website) into a potential PR disaster.
So, how can marketing and technology work together to ensure this kind of nightmare scenario doesn’t happen?
In our experience, every Web application has at least one bottleneck – an element of hardware, software or bandwidth that places defining limits on data flow or processing speed. An application is only as efficient as its least efficient element, and bottlenecks directly hit performance and scalability.
These bottlenecks can only be identified and resolved one at a time.
They can be found throughout an organization’s Web application infrastructure at the system level (firewalls, routers, server hardware, etc.), at the Web server level (hit rate, CPU, etc.), at the application server level (page rate, memory, etc.) or at the database server level (queueing, connections, etc.).
This sounds like a recipe for a very long testing process, but there is a way to speed things up.
Throughput or concurrency?
Our experience has taught us that 80 per cent of all system and application problems arise through limitations in throughput capacity. Throughput is the measure of the flow of data that a system can support, measured in hits per second, pages per second or megabits per second (Mbit/s).
Just one-fifth of issues are down to concurrency – the number of independent users that a system can support.
So if most bottlenecks occur in throughput, it makes sense for performance testing to focus most of its effort there, rather than on levels of concurrent users – the traditional focus.
A faster, simpler way to test
An initial focus on throughput testing saves time. To illustrate the point, imagine you are testing a system expected to handle 5,000 concurrent users, each spending an average of 45 seconds per page.
If the application has a bottleneck that will limit its scalability to 25 pages per second, a typical concurrency test would not find the problem until approximately 1,125 users (25 pages per second × 45 seconds per page), or 94 minutes into the test.
A throughput test would have uncovered the glitch in less than 60 seconds.
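The arithmetic behind that comparison can be checked directly. A minimal sketch of the figures above – note that the ramp rate of roughly 12 users per minute is an assumption implied by the article's 94-minute figure, not something it states:

```python
# Figures from the worked example above
concurrent_users = 5000    # expected peak load
think_time_s = 45          # average seconds a user spends per page
bottleneck_pps = 25        # pages/second the hidden bottleneck allows

# Each user requests one page every 45 s, so the demand is:
required_pps = concurrent_users / think_time_s          # ~111 pages/s

# The bottleneck only shows once enough users are active to exceed it:
users_at_bottleneck = bottleneck_pps * think_time_s     # 1,125 users

# Assumed ramp rate for the concurrency test (hypothetical, implied
# by the 94-minute figure rather than stated in the article)
ramp_users_per_min = 12
minutes_to_find = users_at_bottleneck / ramp_users_per_min

print(users_at_bottleneck, round(minutes_to_find))      # 1125 94
```

A throughput test, by contrast, pushes page rate directly from the start, so it hits the 25 pages-per-second ceiling almost immediately rather than waiting for users to accumulate.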
Very often, performance testing begins with overly complex scenarios that test too many components, which makes it easy for bottlenecks to hide.
By beginning with basic system-level testing, you can check performance before the Web application is even deployed.
We always recommend a modular approach: start with the simplest possible test case and gradually build in complexity.
If the simplest test case works, testing moves on – and if the next stage fails, you know where your bottleneck is. It sounds simple, but all too often testing is done in reverse, starting with the most complicated scenario and working back.
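As an illustration, that modular progression can be sketched as a list of stages run simplest-first, stopping at the first one that misses its target. All stage names, targets and measured numbers here are hypothetical stand-ins for real load-test output:

```python
# Hypothetical test stages, ordered simplest to most complex,
# each with an illustrative pages-per-second target.
stages = [
    ("static homepage", 200),
    ("search results page", 120),
    ("login and session handling", 80),
    ("full checkout transaction", 40),
]

# Made-up measured results standing in for real load-test output.
measured = {
    "static homepage": 250,
    "search results page": 130,
    "login and session handling": 60,   # misses its 80 pages/s target
    "full checkout transaction": 35,
}

def find_bottleneck(stages, measured):
    """Run stages in order; the first stage to miss its target is
    where the bottleneck was introduced."""
    for name, target in stages:
        if measured[name] < target:
            return name
    return None

print(find_bottleneck(stages, measured))  # login and session handling
```

Because each stage adds only one new component, a failure immediately narrows the search: here everything up to search performed to target, so the bottleneck lies in the login and session-handling layer.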
Any performance testing should begin with an assessment of the basic network infrastructure that supports the Web application. If the underlying system can't support the anticipated user load, even infinitely scalable application code won't save you.
After checking that the system is up to the job, it is time to turn to the Web application itself. Again, the approach to testing should be to start with the simplest possible test case and then add complexity.
In a typical e-commerce application, that would mean testing the homepage first, then adding in pages and business functions until complete, real-world transactions are being tested – first individually and then in complex scenarios.
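A single-page throughput probe of this kind can be sketched in a few lines. The fetch function below is a stub so the example is self-contained; in practice it would issue a real HTTP GET against the homepage:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_throughput(fetch, n_requests=200, workers=20):
    """Drive `fetch` from a pool of workers and report requests/second.
    `fetch` is whatever callable issues one request; a stub is used
    here so the sketch runs without a network."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: fetch(), range(n_requests)))
    elapsed = time.perf_counter() - start
    return n_requests / elapsed

# Stub standing in for a real HTTP GET of the homepage (hypothetical)
def fake_homepage_fetch():
    time.sleep(0.01)   # pretend the page takes ~10 ms to serve

print(round(measure_throughput(fake_homepage_fetch)))
```

Run first against the homepage alone, then re-run as each page or business function is added; a sharp drop in the reported rate flags the component that introduced the bottleneck.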
Once this has happened, the transactions can be put into scenario concurrency tests. Any concurrency test must reflect what users really do on the site.
For example: 50 per cent just browse, 35 per cent search, 10 per cent register and log in, and 5 per cent add to the shopping cart and make a purchase.
But the virtual users testing the site must execute the steps of those transactions using the same pacing that real-world visitors do.
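Under those assumptions, the traffic mix can be reproduced with a weighted random choice per virtual user. This is a minimal sketch, not a full load generator:

```python
import random

# The article's example mix of what real visitors do (percentages)
behaviours = {
    "browse": 50,
    "search": 35,
    "register_and_login": 10,
    "purchase": 5,
}

def pick_behaviour(rng):
    """Choose one behaviour for a virtual user, weighted by the mix."""
    names = list(behaviours)
    return rng.choices(names, weights=[behaviours[n] for n in names])[0]

# Sanity check: over many virtual users the mix should match the weights
rng = random.Random(0)
counts = {name: 0 for name in behaviours}
for _ in range(10_000):
    counts[pick_behaviour(rng)] += 1
print(counts)
```

Each virtual user would then execute its chosen transaction with realistic pacing – for instance, sleeping for the measured average think time between page requests rather than hammering the server back-to-back.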
Whether you conduct your performance testing in-house using automated tools or manually, or you rely on a managed service, it is important that it is done methodically and rigorously.
Testing is a bit like insurance: you usually realize it is not up to scratch when it is too late!