No business is immune to computer glitches or network connectivity problems. We saw this recently with the New York Stock Exchange (NYSE), when trading temporarily ground to a halt due to ‘technical difficulties’. While the cause of the outage is not clear, when a high-profile company runs into problems that shut down its entire operation, it raises the question of whether its digital applications were fully prepared.

For any go-to-market strategy, thorough preparation is needed to mitigate the risk of application performance issues, such as a slow page load or an outage. Digital marketers must understand the impact their activity has on site performance; marketing campaigns, for example, often drive large volumes of traffic to a site, making it more likely to load slowly or crash. And while ‘heavy’ features – such as a Twitter plug-in – might look aesthetically pleasing, they can cause a major performance hit or an outage, so the lead-generating campaign may have the exact opposite effect to the one intended.

Marketing must, therefore, work closely with the IT team, not least because testing applications and regularly monitoring performance, both before and after deployment, are a necessity. In particular, taking the following steps is critical to protecting applications from poor performance.

Step one: Carry out full functional and performance end-to-end testing

Enterprises must carry out full functional end-to-end testing on an infrastructure that closely mimics production. With any marketing campaign, the only way companies can ensure their infrastructure is ready to handle the expected increase in traffic is to test it with traffic. This traffic should represent real-user behaviour as accurately as possible, as this will enable companies to identify exactly what is causing an application to load slowly or crash; the fix might be as simple as removing a faulty image or marketing banner.

Also to be considered is that customers are increasingly using digital platforms such as mobile phones and tablets to search for a company or buy products online. Businesses must therefore ensure they carry out performance testing with the appropriate mix of mobile and desktop users. This will reveal the true load limits a company’s infrastructure can handle. If a company fails to take mobile and tablet users into account, it could completely misjudge the amount of traffic coming its way, jeopardising the success of the marketing campaign.
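As a rough sketch of that idea, the snippet below builds the virtual-user mix a load test would replay. The user-agent strings and the 60/40 mobile/desktop split are purely illustrative assumptions – in practice the ratio should come from your own analytics, and the resulting plan would be fed to a load-testing tool rather than printed:

```python
import random

# Illustrative user-agent strings (assumptions, not real device data).
MOBILE_UA = "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"
DESKTOP_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def build_traffic_plan(total_users, mobile_share=0.6, seed=42):
    """Return a list of user-agent strings representing the virtual
    users for a load test, mixed to mirror real visitor behaviour.
    A fixed seed keeps the plan reproducible between test runs."""
    rng = random.Random(seed)
    return [MOBILE_UA if rng.random() < mobile_share else DESKTOP_UA
            for _ in range(total_users)]

plan = build_traffic_plan(1000)
mobile = sum(1 for ua in plan if ua == MOBILE_UA)
print(f"{mobile} mobile / {len(plan) - mobile} desktop virtual users")
```

The point of seeding the generator is that a failed test can be re-run against exactly the same traffic mix, which makes regressions easier to isolate.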

Step two: Test for disaster recovery

A disaster can occur while an application is live in the ‘production’ stage, or during the ‘deployment’ process as it’s about to go live. Worryingly, the vast majority of companies don’t test for failures during the deployment process; they just cross their fingers and hope for the best.

Testing to see how your systems react during deployment, as well as during production, is vital to ensuring that sites get back up and running with minimal impact to the customer experience. Weeding out performance problems before the application goes live – such as a poorly performing third-party service – will reduce the risk of a bad customer experience. Only after a company is confident in its pre-production testing should the application be released into the production phase.
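One lightweight way to act on this is a post-deployment smoke check that decides whether to keep a release or roll it back. The sketch below is illustrative only: `probe` stands in for whatever health check a deployment pipeline already has, such as an HTTP GET against a status endpoint.

```python
import time

def healthy_after_deploy(probe, attempts=3, delay=0.1):
    """Run a post-deployment smoke check: call `probe` a few times
    shortly after the release goes out. Any failure or exception
    means the release should be rolled back rather than left to
    degrade the customer experience."""
    for _ in range(attempts):
        try:
            if not probe():
                return False
        except Exception:
            return False
        time.sleep(delay)
    return True

# Simulate a deployment where a dependency is failing:
if not healthy_after_deploy(lambda: False, delay=0):
    print("smoke check failed - roll back the release")
```

Repeating the probe a few times, rather than checking once, catches the intermittent failures (a flaky third-party service, say) that a single successful request would mask.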

Step three: Use both real-user and synthetic monitoring

Once the application enters the production stage, companies must regularly monitor performance to ensure the application continues to load quickly and is always available for customers to access. Using a combination of real-user and synthetic monitoring is key, as companies can gain accurate insight into how an application is delivered to its users. Real-user monitoring allows companies to view the actual user experience, whereas synthetic monitoring mimics the behaviour of a typical user running a search, viewing a product, logging in or checking out. This will ensure that companies are alerted to anomalies that may cause problems, but are not issues yet – or at least not for the customer.
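A minimal synthetic check can be sketched as a scripted journey with a latency budget per step. The step names and budget below are assumptions for illustration, not the API of any real monitoring product; in production the actions would drive a real browser or HTTP client against the live site.

```python
import time

def run_synthetic_check(steps, budget_seconds=2.0):
    """Replay a scripted user journey (search, view product, log in,
    check out) and flag any step that exceeds its latency budget.
    `steps` maps a step name to a callable performing that action."""
    slow = []
    for name, action in steps.items():
        start = time.perf_counter()
        action()
        elapsed = time.perf_counter() - start
        if elapsed > budget_seconds:
            slow.append((name, elapsed))
    return slow  # an empty list means the journey is within budget

# Stubbed journey for illustration; real steps would hit the site.
journey = {"search": lambda: None, "checkout": lambda: None}
print(run_synthetic_check(journey, budget_seconds=1.0))
```

Because the check runs on a schedule regardless of real traffic, a step drifting over its budget raises an alert before any customer notices the slowdown.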

Step four: Don’t forget third-party services

An increasing number of sites and applications rely heavily on third-party services; anywhere from 30 percent to as much as 90 percent of an application’s network traffic can come from them. These services need to be properly vetted before being added to an application, tested before deployment, and monitored once in production to help maintain a good customer experience. It’s like maintaining a car: you can lock it, regularly change its oil and rotate its tyres, but this will not prevent every accident, break-in or breakdown. It will, however, minimise the chances of problems occurring and help you get back on the road quickly when they do.
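To make that third-party share concrete, a rough calculation over request records – parsed, for example, from a HAR export of a page load – might look like the following. The domain names and byte counts are made up for illustration:

```python
def third_party_share(requests, first_party_domain="example.com"):
    """Given (domain, bytes) request records, return the fraction of
    traffic served by third-party domains. The simple suffix match
    here is a sketch; real tooling should compare registrable
    domains properly."""
    total = sum(size for _, size in requests)
    third = sum(size for dom, size in requests
                if not dom.endswith(first_party_domain))
    return third / total if total else 0.0

# Hypothetical page load: half the bytes come from third parties.
har = [("example.com", 500_000), ("cdn.adnetwork.net", 300_000),
       ("widgets.social.io", 200_000)]
print(f"{third_party_share(har):.0%} of bytes from third parties")
```

Tracking this figure over time shows when a newly added tag or widget starts to dominate page weight, which is exactly the kind of creeping dependency that vetting and monitoring are meant to catch.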

Ultimately, the NYSE’s technical difficulties are just another reminder that all companies should have testing and monitoring in place. This will help to prevent and minimise the impact of application performance problems, ensure leads are nurtured right from landing on the site through to checkout and, therefore, drive business profitability.

 

By Joram Cano, solutions consultant at Keynote, a Dynatrace company. 



