No matter what a provider might tell you, pre- and post-deployment testing remains critical. Here’s some guidance.
4C Director Dave Mailer regularly writes for No Jitter. Here is his most recent article.
I like to think that in over 30 years of helping clients deploy communications solutions, I’ve learned a thing or two. Many of these things have become second nature — I do them without thinking. But every now and then, I find value in stepping back and questioning why we do things the way we do, what’s important, and whether we could do anything differently.
So, I’ve decided to use my No Jitter posts this year to do just that. If you’d like to make any suggestions for future articles, let me know. I’m going to start with the importance of testing. I’ve focused on UC deployment testing, but the principles are sound for other IT projects as well.
With the continued trend toward cloud, we’re now increasingly purchasing communications as a service rather than a system. You may question the need for your own testing given that many cloud communications services already have thousands, if not millions, of users in live operations. If you listen to the suppliers, then you should know “it just works!” But my experience, painfully learned in some cases, is that just because something works for others doesn’t automatically mean it will work for you.
The world has changed dramatically from the days I spent deploying first-generation digital PBXs in the 1980s. Back then, we tested every single aspect of the installation. We even sent out teams of floor walkers to test every individual phone, to make sure each could make and receive calls, put calls on hold, and had the right extension associated with it. This was particularly important for “big bang” changeovers of thousands of users at once. If the new service didn’t work, then our users were left in chaos — a single jumper error could result in hundreds of users getting the wrong telephone number. Today, that would be like everybody being able to log into other users’ accounts!
In the modern software-oriented world and with services accessible from a wide range of devices, testing every feature and function in every scenario is impossible. But those big-bang cutovers of old aren’t so common, either. Today, an implementation plan is much more likely to allow for migrating users over a period of time, with the new communications service deployed and working in parallel with the old system before that legacy gear is decommissioned. So if the new service doesn’t work, then users can always fall back on the old one — at least for a while.
But testing remains critical. I’ve learned to trust nobody. Every installation will have issues, and leaving them to be discovered in business-as-usual (BAU) operations will undermine user confidence in a service for years to come. The trick is to consider what to test, and then design a bespoke test plan that doesn’t repeat what others have already done but focuses on your users’ access, customisation, and configuration requirements. I suggest the following pointers:
- Be proportionate and focused in your testing. Don’t try to do everything. Consider what might go wrong and what really matters to your users and your business.
- Consider testing from the outset — build it into procurement specifications, statements of work, implementation plans, and change control.
- Don’t do your provider’s job. It is reasonable to expect, and to require, your provider to do sufficient testing and hand over a working service to you. But make sure it does this. Ask for the documented test results.
- Structure your testing in layers.
  - System testing should cover the big things, like connectivity and access, compatibility (e.g., with your desktop builds), resilience, and failover.
  - User testing should focus on making sure the service satisfies use cases. Be realistic, but also be cautious about testing in BAU. Consider identifying and simulating any key use cases in advance.
  - Consider whether stress/load testing is appropriate. You probably don’t need it for basic user services, but consider it an imperative for mission-critical functions such as contact centres.
- Don’t forget to test administration and reporting systems.
- It’s not just about the technology. Don’t forget to test your processes, like onboarding and support processes.
- Lastly, test disaster recovery (DR) and business continuity (BC). Are you confident that the service provider has backed up and can quickly restore your configuration? Consider a full DR test. This is very difficult to do once a service has gone live, and probably can’t be done in isolation. So build it into a full-site or organisation-wide DR test in conjunction with your wider IT and operations colleagues.
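The layered structure above can be sketched as a simple checklist harness. This is a minimal illustrative sketch only: every check function here is a hypothetical placeholder for a real probe (a client registration attempt, a test call, an admin-portal login), not any real provider API.

```python
# Hypothetical layered test checklist for a UC cutover.
# Each check is a placeholder; in practice it would probe the live service.

def check_connectivity():    # system layer: can clients reach the service?
    return True

def check_failover():        # system layer: does the secondary path take over?
    return True

def check_call_use_case():   # user layer: a key scenario, e.g. make/hold/transfer
    return True

def check_admin_reporting(): # admin layer: provisioning and reports work
    return True

TEST_LAYERS = {
    "system": [check_connectivity, check_failover],
    "user":   [check_call_use_case],
    "admin":  [check_admin_reporting],
}

def run_layers(layers):
    """Run checks layer by layer; stop at the first failing layer so
    user tests aren't attempted on top of a broken system layer."""
    results = {}
    for layer, checks in layers.items():
        results[layer] = {c.__name__: c() for c in checks}
        if not all(results[layer].values()):
            break  # no point testing higher layers yet
    return results

if __name__ == "__main__":
    for layer, outcome in run_layers(TEST_LAYERS).items():
        print(layer, outcome)
```

Running layers in order, and halting on the first broken one, mirrors the advice above: there is little value in user-level testing until the system-level basics pass.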
Think in advance what your likely course of action will be if something fails in testing. Local hardware issues should be reasonably easy to rectify. Testing may highlight configuration issues. Hopefully these will also be straightforward to fix. But fundamental software/solution capability issues are more challenging. In such cases, you may need to change your operating processes or manage your user expectations rather than wait for a software fix or new feature that may never arrive.
Lastly, remember that the need for testing doesn’t end once your service has gone live. Consider what testing you’ll need to run as part of change control. Major software updates can deliver new functionality, but sometimes they break existing capability. Consider whether you should undertake a baseline test of critical functionality when upgrades take place.
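A baseline test of this kind can be as simple as comparing a stored set of pre-upgrade results against a fresh post-upgrade run. The feature names and results below are hypothetical; in practice each entry would come from an automated probe of a critical function.

```python
# Illustrative baseline comparison for post-upgrade smoke testing.

def compare_to_baseline(baseline, current):
    """Return features that passed at baseline but fail now --
    candidates for an upgrade-induced regression."""
    return sorted(
        feature for feature, passed in baseline.items()
        if passed and not current.get(feature, False)
    )

# Example: pre-upgrade baseline vs. a post-upgrade run (made-up data).
baseline = {"make_call": True, "hold": True, "voicemail": True}
after_upgrade = {"make_call": True, "hold": False, "voicemail": True}

print(compare_to_baseline(baseline, after_upgrade))  # → ['hold']
```

The value is less in the code than in the discipline: capture the baseline before the upgrade, so there is something concrete to compare against afterwards.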
In summary, there is no substitute for effective testing. You cannot abdicate responsibility to your suppliers and service providers. You need to retain control. But be proportionate; testing everything is impossible. Test the elements that really matter for your users.
My dream of utopia is to feel that testing was unjustified because the installation was flawless. It’s just not happened yet!