Scalable. Flexible. Efficient. It’s obvious why companies want to move their data and applications to the cloud. In this post, we’ll look at the top cloud migration strategies to consider in 2020, explain what each one entails, and help you choose the best strategy for your enterprise’s specific needs.
The Importance of a Cloud Migration Strategy
According to recent research from Gartner, Inc., public cloud revenue will grow at nearly three times the rate of overall IT services through 2022, with the US accounting for over 50% of that growth. Zeroing in on the cloud migration services sector, a market worth $119B in 2019 is expected to reach roughly $450B by 2025.
Obviously, companies are migrating to the cloud for a reason. More accurately, they’re moving for many reasons. However, rather than focusing on the high-level benefits already covered at length, we’ll focus on the potential hurdles that prevent many companies from realizing their expected benefits.
- Downtime: Depending on the strategy and migration tools you use, there could be significant downtime while your data moves from on-premises servers to the cloud. If it lasts long enough, that downtime can severely impact your operations and the customer experience.
- Data loss & inaccuracy: Your enterprise data is most susceptible to damage, loss, and security breaches while it’s in transit during a data migration.
- Inexperience: If your team is used to physical on-prem servers and has less familiarity with cloud platforms and services, the process can be extremely challenging.
- Legacy systems: Your existing systems won’t necessarily sync well with your cloud solution, possibly requiring significant rewriting projects. That makes choosing the right data virtualization solution especially crucial before your migration, saving you and your DevOps team countless hours of work and added expense.
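To put the downtime concern in concrete terms, here is a minimal back-of-the-envelope sketch for estimating how long a bulk data transfer might take. The 70% link-efficiency figure is an assumption for illustration, not a measured value:

```python
def estimate_transfer_hours(data_gb: float, bandwidth_mbps: float,
                            efficiency: float = 0.7) -> float:
    """Rough transfer-time estimate for migration planning.

    data_gb        - total data to move, in gigabytes
    bandwidth_mbps - link speed, in megabits per second
    efficiency     - fraction of theoretical bandwidth actually achieved
                     (protocol overhead, contention); 0.7 is an assumption
    """
    megabits = data_gb * 8 * 1000        # GB -> megabits
    seconds = megabits / (bandwidth_mbps * efficiency)
    return seconds / 3600

# Moving a 10 TB database over a 1 Gbps link:
hours = estimate_transfer_hours(10_000, 1000)  # ≈ 32 hours at 70% efficiency
```

Even a crude estimate like this shows why large datasets often force a phased or replicated cutover rather than a single transfer window.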
The good news, however, is that these pitfalls aren’t inevitable. With diligent planning, effective tools, and an appropriate migration strategy, you can streamline the cloud migration process and eliminate costly delays. So on that note, let’s take a look at the three most common cloud migration strategies.
1. Lift and Shift
Also known as rehosting, a lift-and-shift cloud migration strategy is exactly what it sounds like – an enterprise lifts its stack and shifts it from on-prem servers to cloud servers. It’s the fastest and most cost-effective of the three strategies since it essentially transfers an exact copy of your environment without making any major changes, but it also forgoes some of the major benefits of moving to the cloud.
Some applications are better suited than others for this strategy. Systems running on virtual machines or in containers should be relatively easy to move. However, monolithic applications or systems with large databases running on bare metal can pose a challenge. These massive systems may not fit on the machine images offered by the cloud provider.
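The suitability check above can be sketched as a simple capacity comparison. The instance catalog below is hypothetical – real sizes vary by provider and region – but it illustrates why a large bare-metal system may have no viable target image:

```python
# Hypothetical instance catalog -- real sizes vary by provider and region.
INSTANCE_TYPES = {
    "small":  {"vcpus": 2,  "ram_gb": 8},
    "large":  {"vcpus": 16, "ram_gb": 64},
    "xlarge": {"vcpus": 64, "ram_gb": 256},
}

def find_fit(vcpus_needed: int, ram_gb_needed: int):
    """Return the smallest catalog entry that can host the workload,
    or None if nothing fits (a common outcome for big bare-metal boxes)."""
    candidates = [
        name for name, spec in INSTANCE_TYPES.items()
        if spec["vcpus"] >= vcpus_needed and spec["ram_gb"] >= ram_gb_needed
    ]
    # Pick the candidate with the fewest vCPUs, i.e. the cheapest fit.
    return min(candidates, key=lambda n: INSTANCE_TYPES[n]["vcpus"], default=None)
```

A virtualized app that needs 4 vCPUs and 16 GB of RAM maps cleanly to a catalog entry, while a 96-core monolith with half a terabyte of RAM returns no match – exactly the case where lift-and-shift stalls.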
Since this strategy does not introduce any new technology, application and operations teams will be minimally affected. This approach also reduces data loss and inaccuracy – there is no conversion of data or fundamental change in data access management. Additionally, integrating back to the on-prem legacy systems should be straightforward, focusing on firewall rules, network bandwidth, and latency.
But there’s an opportunity cost involved when using a lift-and-shift strategy. While you’ve technically moved your data and applications to the cloud with minimal impact to your business users, you’re likely not taking full advantage of the cloud environment. Instead, you’re still using the same systems designed within the confines of your on-prem environment. Therefore, you’re probably not maximizing native cloud functionality, flexibility, or scalability.
Also, you’re likely not taking advantage of any cost savings, either. Lift-and-shift often means bringing your own licenses to the cloud, or BYOL. Software licensing costs can make up the majority of the ongoing operational costs for an application, meaning you’re still paying those costs by using a lift-and-shift strategy.
2. Replatforming
Think of a replatforming strategy as an extension of lift-and-shift, where you make minor adjustments to your technology landscape to better optimize it for the cloud. This could include integrating automation functions within your applications or other additional features to leverage the benefits of the cloud environment, all without having to completely overhaul your systems.
A common example of replatforming is moving an Oracle database from an on-prem server (or servers) to a managed offering like AWS RDS. This type of migration comes with some distinct benefits, including reduced management responsibilities. However, there are some definite drawbacks involved as well, particularly a lack of flexibility. Generally speaking, managed offerings do not provide many bells and whistles, and there are fewer software versions supported.
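In a database replatform like the one described above, the application often needs little more than a new endpoint. The sketch below illustrates the idea with invented hostnames (the RDS endpoint shown is a made-up example, not a real one):

```python
# Hypothetical connection settings; hostnames are invented for illustration.
ON_PREM = {
    "host": "ora-db01.corp.internal",
    "port": 1521,
    "service_name": "ORDERS",
}

# After replatforming to a managed offering such as AWS RDS, typically only
# the endpoint changes -- the schema, service name, and the application's
# SQL stay the same.
RDS = {**ON_PREM, "host": "orders.abc123.us-east-1.rds.amazonaws.com"}

def connection_string(cfg: dict) -> str:
    """Build an Oracle EZConnect-style string from a settings dict."""
    return f"{cfg['host']}:{cfg['port']}/{cfg['service_name']}"
```

The point of the sketch is the small diff: everything except the host carries over unchanged, which is why replatforming disrupts application teams far less than a rewrite.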
As you can imagine, there are a number of similarities between this strategy and lift-and-shift. Minimizing the technology changes will reduce disruption to the application and operations teams. Data access and security will not drastically change, and licensing options such as BYOL are similar as well. Cloud providers generally offer migration tools to help customers move their servers and databases to these managed offerings to reduce the effort and risk of migrating.
Overall, replatforming is a cost-effective approach since you can prioritize which applications to optimize and scale up as your resources permit. However, planning is essential to a replatforming strategy to ensure you stay within the project’s intended scope. It’s usually best to start with the applications that offer the quickest wins, followed by those that would see the most immediate benefit from a cloud infrastructure.
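One lightweight way to do that prioritization is a benefit-versus-effort ranking. The applications and scores below are invented purely to illustrate the idea:

```python
# Toy benefit/effort scoring to rank replatforming candidates.
# The app names and scores are invented for illustration.
apps = [
    {"name": "reporting", "benefit": 8, "effort": 2},
    {"name": "billing",   "benefit": 9, "effort": 7},
    {"name": "intranet",  "benefit": 3, "effort": 2},
]

def quick_wins(apps):
    """Highest benefit-per-effort first -- the 'quickest wins'."""
    return sorted(apps, key=lambda a: a["benefit"] / a["effort"], reverse=True)

order = [a["name"] for a in quick_wins(apps)]
```

Here the high-benefit, low-effort reporting app ranks first even though billing has the largest absolute benefit – exactly the quick-win-first ordering described above.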
3. Refactoring
A refactoring strategy is the most in-depth of the three, which also makes it the most expensive, time-consuming, and disruptive. Sometimes referred to as rearchitecting (a broad term with other meanings as well), refactoring involves rebuilding applications from the ground up. When successful, a complete refactoring can deliver superior performance, lower ongoing costs, and new capabilities.
Refactoring also provides an ideal occasion to address technical debt. Because the changes are so pervasive, it can make sense to address design flaws and past corner-cutting decisions made for expedited releases. This approach can give development and testing teams the chance to build CI/CD into their development processes. However, these benefits do not come simply from adopting a refactoring strategy; they must be explicitly included in your migration scope, or you will never realize them.
While the benefits of a successful refactoring strategy are significant, there are also some major drawbacks. Refactoring is a complex undertaking that will introduce different technologies and a new way to manage infrastructure. It can also be a lengthy process to rewrite a large, complex application, particularly if there are a lot of upstream and downstream dependencies.
Companies often want to reduce the risks associated with such an undertaking by using a virtualization tool like Gluent Data Platform to insulate the existing application from the underlying platform. Some of the key benefits these types of solutions provide are:
- Faster time to value
- A smaller, phased refactoring approach that avoids a “big bang” cutover
- Reduced cutover time through automated data conversion and migration
- Insulation of dependent systems from refactoring changes
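Automated data conversion still needs verification at cutover time. A minimal reconciliation sketch, using Python’s built-in SQLite as a stand-in for both the source and target databases (real validations would also compare checksums and sampled rows):

```python
import sqlite3

def row_count(conn, table):
    """Count rows in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def reconcile(src, dst, table):
    """Cheapest post-migration sanity check: do row counts match?"""
    return row_count(src, table) == row_count(dst, table)

# Demo with two in-memory databases standing in for on-prem and cloud.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)",
                     [(i,) for i in range(100)])
```

Running `reconcile(src, dst, "orders")` after the copy gives a fast go/no-go signal before the cutover window closes.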
Choosing the Right Cloud Migration Strategy For You
Realize that no single strategy will work for every application. When reviewing your applications, you’ll need to determine what benefits you want to achieve by moving each one to the cloud, and weigh those benefits against the cost and risks of migrating.
Non-transformational strategies like lift-and-shift and replatforming are often the best approach, depending on your applications; their benefits are typically quick to achieve with relatively low risk. In other cases, refactoring is the more logical choice – or sometimes the only choice. It’s not uncommon to apply multiple strategies to the same system over time, often a quick lift-and-shift or replatform followed by a refactoring effort.
Choosing the “best” strategy depends on your company’s short-term and long-term needs, as well as its ability to execute. Prioritization is critical, especially if you are undertaking a refactoring effort, since it poses more significant risks. Beyond prioritization, you will need to mitigate cost overruns, integrate back to legacy systems, and minimize cutover downtime.
Refactoring for the cloud can also be a significant step toward improved data sharing for a company. A transparent data virtualization solution can provide seamless integration back to legacy systems and consolidate data to the targeted cloud platform. These capabilities reduce the risk of those cost overruns while minimizing cutover downtime.