With the ongoing stampede to public cloud platforms, it is worth a clearer look at some of the factors driving such rapid growth. Amazon, Azure, Google, IBM, and a host of other public cloud services saw continued strong growth in 2018, with revenues up 21% to $175B, extending a long run of robust revenue growth for the industry.
It is worth noting, though, that both traditional SaaS and private cloud implementations are also expected to grow at rates near 30% for the next decade, essentially matching or even exceeding public cloud infrastructure growth over the same period. Financial services has the highest adoption of both private and public cloud of any industry: adoption (usage) rates above 50% are common, and rates close to 100% are occurring, versus a median of 19% across all industries.
In my recent experience leading IT and Operations for Danske Bank (a large Nordic bank), we completed a four-year infrastructure transformation program that migrated the entire application portfolio from proprietary dedicated server farms in five obsolete data centers to a modern private cloud environment in two data centers. Of course, the mainframe complex was migrated and updated as well, and we incorporated some SaaS and public cloud usage too. The migration effort eliminated nearly all of the infrastructure-layer technical debt, reduced production incidents by more than 95%, and correspondingly improved resiliency, security, access management, and performance.
These are truly remarkable results that now enable Danske Bank to deliver superior service to our customers, reduce infrastructure cost, and improve time to market. But how do you get to private or multi-cloud successfully?
Quality over cost
First, it is critical to view cloud as a quality solution rather than a cost solution. Any major infrastructure re-platforming should be judged primarily on improved capabilities, increased quality, and reduced risk. Re-platforming done primarily for cost reasons rarely delivers, especially considering that most corporations can achieve far better cost savings by taking the same investment and using it elsewhere (operations, digitalization, consolidation, etc.). So start the project with the right investment rationale: to improve time to market, reduce security vulnerabilities, eliminate technical debt, improve availability, and so on. The project objectives are then more relevant and important to the corporation, and they require a higher level of design and execution quality to achieve. These imperatives will actually result in a more focused effort that yields better results.
Second, the effort must be comprehensive. If you only do a portion of your server estate, and allow myriad legacy systems to remain, then you have not reduced your complexity (in fact, you may have actually increased it). This complexity, coupled with dated systems, is a major contributor to defects and issues that reduce security, availability and performance. Your architecture should incorporate a proper migration of all systems to the appropriate “flavor” of cloud platform.
Today’s legacy environments are often singular, meaning nearly every server is a custom implementation, with slightly, or even widely, varying configurations and platforms, each requiring custom maintenance and expert care to function and keep current. By architecting a comprehensive set of templates, perhaps 20 or even fewer, the administration and maintenance complexity is reduced dramatically.
My experience is that most legacy applications can be easily ported to a suitable server template in your private cloud. The remainder can be tougher, but it is important to work through them to deliver them on the new platform. This definition of the templates, and then the initial deliveries with proper middleware and database stacks, is often the toughest part of the engineering. It should be jointly done by your infrastructure and application engineering teams and piloted at the start of your migrations. But once defined and packaged, you have now reduced the complexity of the new environment enormously.
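To make the template idea concrete, here is a minimal sketch of a template catalog and the matching step that maps a legacy application's requirements onto an approved template. The template names and stacks are hypothetical illustrations, not Danske Bank's actual catalog:

```python
# Hypothetical catalog of approved private-cloud server templates.
# A real catalog would carry far more detail: sizing, OS hardening,
# monitoring agents, patch baselines, and the middleware/database stacks.
TEMPLATES = {
    "web-linux":   {"os": "linux",   "stack": "nginx",  "db": None},
    "app-java":    {"os": "linux",   "stack": "tomcat", "db": None},
    "db-postgres": {"os": "linux",   "stack": None,     "db": "postgresql"},
    "app-windows": {"os": "windows", "stack": "iis",    "db": None},
}

def match_template(requirements):
    """Map an application's requirements onto an approved template.

    Returns the template name, or None when no template fits, meaning
    the application needs engineering work: it must be re-ported to a
    template or sunset with a clear timeline, per the principles above.
    """
    for name, spec in TEMPLATES.items():
        if all(spec.get(key) == value for key, value in requirements.items()):
            return name
    return None
```

For example, `match_template({"os": "linux", "stack": "tomcat", "db": None})` resolves to the `"app-java"` template, while an application demanding an unsupported platform returns `None` and is flagged as an exception to work through.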
A third critical principle we followed was to minimize exceptions. This meant we would not simply move old servers to the new centers, but instead set up a modern and secure “enclave” private cloud and then migrate everything from old to new. This enabled a far more secure network and a level of data protection that could not be built in or overlaid in a typical legacy environment. Further, all applications would migrate to the new server patterns. The exceptions would be sunset with clear timelines.
Last, all new applications had to be built using approved cloud design templates from the start. Of course, this requires proper sponsorship and discipline to execute, but the reward is then greater. With far fewer exceptions, design is standardized, maintenance and administration can be made common and automated, and security gaps and patch administration become a far smaller task and problem. By minimizing exceptions, you greatly increase standardization, which then increases quality and reduces effort, as well as enabling automation and speed.
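One way to enforce the rule that new applications use approved templates is a simple gate in the deployment pipeline. The following is a hypothetical sketch, with an illustrative approved list and a simplified manifest format, not a description of any real pipeline:

```python
# Hypothetical set of approved template names; in practice this would be
# sourced from the governed template catalog, not hard-coded.
APPROVED_TEMPLATES = {"web-linux", "app-java", "db-postgres", "app-windows"}

def validate_deployment(manifest):
    """Reject any new deployment request not built on an approved template.

    `manifest` is a simplified deployment request (a dict with a
    "template" key). A real gate would validate the full configuration,
    not just the template name.
    """
    template = manifest.get("template")
    if template not in APPROVED_TEMPLATES:
        raise ValueError(
            f"Template {template!r} is not approved; "
            "request an exception with a documented sunset date."
        )
    return True
```

Running such a check automatically on every build request is what turns the "minimize exceptions" principle from a policy document into day-to-day practice.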
The right stuff
The final factor in success is that you must have the right engineers. Properly designing a private and multi-cloud environment requires a robust, multi-disciplined team. There is a great deal of excellent material and industry expertise that improves the jumping-off point, but the cloud must still be adjusted for your environment, and it must still be built by your team. Throughout the process, your engineers will be able to leverage the many proven implementations and industry experts, but in the end, they will need to solve problems unique to their environment and optimize for your organization. In our migration, we benefitted from a strong engineering bench and excellent technology managers who focused on solving problems with quality along the way. Ensuring you have an adequate, and preferably strengthened, team is critical to eventual success.
While it was certainly a lengthy and complex process, we were ultimately successful at Danske Bank, and the organization is now reaping the benefits of a fully modernized cloud environment with rapid server implementation times and lower long-term costs. In fact, we benchmarked our private cloud environment and it proved to be 20% to 70% less expensive than comparable commercial offerings (including AWS and Azure). It is a remarkable achievement indeed, but more important is that the new Danske Cloud2 environment provides the solid platform for the further digitalization of the bank.
Jim Ditmore recently completed 5+ years in Europe as COO, leading IT and Operations for Danske Bank. He has worked in IT for more than 30 years and has enabled technology to become a competitive advantage at both large and medium shops.