Moving Your IBM Mainframe to AWS Cloud
Approaches to IBM Mainframe Modernization
You may notice throughout this document that we use the terms “mainframe modernization”, “mainframe migration”, and “IBM mainframe to AWS”. Migration is one type of modernization, whereas modernization encompasses a broader set of strategies. In many cases you will employ a combination of these strategies, and the right mix will be determined during the critical application portfolio rationalization step of the project’s assessment phase. Here are three of the most common approaches:
Reuse

Often called “lift and shift”, this approach reuses the existing code, programs, and applications, typically written in COBOL, by moving them off the mainframe and recompiling them to run in a mainframe emulator hosted on a cloud instance. This approach minimizes upfront risk and shortens the project, delivering hardware and software cost savings sooner.
Running mainframe applications in an AWS-hosted emulator also opens the possibility of new innovation, like leveraging .NET, Java or other APIs to integrate with previously inaccessible programs and data.
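As a concrete illustration of that kind of integration, the sketch below builds a request for a rehosted COBOL program as a wrapper API might accept it. The copybook layout, field names, and widths here are invented for illustration; they are not a real interface.

```python
# Hypothetical example: preparing a call to a COBOL program that has been
# rehosted in a cloud emulator and exposed through an API wrapper. The
# legacy program still expects its copybook's fixed-width record layout:
#
#     01 TXN-REQUEST.
#        05 ACCT-ID     PIC X(10).
#        05 TXN-AMOUNT  PIC 9(9).
#        05 TXN-TYPE    PIC X(2).
#
# (This layout is an assumption made up for the example.)

def build_cobol_request(account_id: str, amount_cents: int, txn_type: str) -> str:
    """Pack fields into the fixed-width record the legacy program expects."""
    return (
        account_id.ljust(10)[:10]              # PIC X(10): left-justified, space-padded
        + str(amount_cents).rjust(9, "0")[:9]  # PIC 9(9): zero-padded numeric
        + txn_type.ljust(2)[:2]                # PIC X(2)
    )

record = build_cobol_request("ACME001", 125000, "DB")
print(record)  # -> ACME001   000125000DB
```

A Java or .NET caller would do the same field packing on its side; the point is that once the program runs in an emulator behind an API, any modern language can drive it.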
Rewrite

It may be tempting to say, “Let’s just write new programs from scratch,” to modernize the mainframe applications. This approach is extremely risky and fails the vast majority of the time. It is complex, costly, and time consuming, and the resources and investment required tend to greatly exceed the forecasted budget and ROI.
A new, modern codebase may still be the correct end objective, but a better approach is to first move the applications to a cloud-based emulator, migrate the database to a cloud-based database, and then replace modules and code in a deliberate, multi-phased manner. When it is time to rewrite, there are several code transformation engines to choose from that reduce the effort and minimize the risk.
Replace

Another mainframe modernization approach is to completely replace the mainframe functionality with a program or suite of programs, typically a Software-as-a-Service (SaaS) application. You typically see this with purpose-built solutions for finance, human resources, manufacturing, enterprise resource planning, and so on. There are also industry-specific applications that may solve the problem a custom mainframe solution was built to address decades ago.
The upside of using SaaS is that your organization no longer has to maintain code. However, while you can configure a SaaS application with the options the vendor provides, customizing your instance (if it is possible at all) can be difficult and costly, because a single shared codebase serves all tenants (the customers and organizations using the service).
There are additional variations on these three modernization approaches, and you will likely use several strategies to achieve a complete migration off the mainframe. The commonly accepted best practice among legacy modernization practitioners is to use the lower-risk, lower-cost Reuse approach first, capturing the gains and benefits in the shortest time possible, followed by a deliberate, phased approach to Rewrite or Replace the applications.
Challenges of Mainframe Modernization
Mainframe migration projects are complex and require close management of the process, budgets, and timelines set as project goals. A Reuse approach will involve rehosting (from the IBM mainframe to AWS) and likely some re-engineering and refactoring to complete an entire mainframe migration. It will also involve data and file conversions to transition the database to the cloud.
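One such conversion step, decoding fixed-width EBCDIC records into text before loading them into a cloud database, can be sketched as follows. Code page 037 is a common EBCDIC variant, but the 20-byte record layout is an assumed example.

```python
# A minimal sketch of one common data-conversion task in a mainframe
# migration: splitting a byte stream of fixed-width EBCDIC records and
# decoding them (code page 037) for loading into a cloud database.
# The 20-byte record length is an illustrative assumption.

RECORD_LEN = 20  # bytes per fixed-width record (assumed layout)

def decode_records(raw: bytes) -> list[str]:
    """Split raw bytes into fixed-width records and decode EBCDIC cp037."""
    return [
        raw[i:i + RECORD_LEN].decode("cp037").rstrip()
        for i in range(0, len(raw), RECORD_LEN)
    ]

# Round-trip demo: encode sample text as EBCDIC, then decode it back.
sample = ("CUST-0001 ACTIVE    ".encode("cp037")
          + "CUST-0002 CLOSED    ".encode("cp037"))
print(decode_records(sample))  # ['CUST-0001 ACTIVE', 'CUST-0002 CLOSED']
```

Real conversions also have to handle packed-decimal (COMP-3) and binary fields, which a plain character decode like this cannot cover.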
As we’ve been emphasizing, the first challenge of any mainframe modernization project is to develop a rock-solid plan built upon a thorough application portfolio assessment and rationalization. As you put your plan together and begin to execute, here are additional factors you’ll need to watch out for:
Many mainframe environments with large and complex application portfolios lack documentation detailing what the applications do and how they do it. Many applications are decades old and have been changed nearly every year, so the original system has become a maintenance nightmare. The business defines these systems by their external interactions, the input and output; the rest of the system is effectively a black box.
Migrating a minimally documented system of this nature is tricky, and testing prior to the “go live” deployment is critical to mitigating this risk. (And, of course, copious documentation should be captured for the resulting system.)
A couple of general points about the application portfolio should be noted. As mentioned above, the lack of documentation on these aging systems makes the migration effort more difficult. The project team driving a migration must then resort to “mining” the actual application source code to determine the application’s exact behavior.
Another important application-specific consideration is discovering the application’s integration requirements and dependencies on other systems and databases. These integrations and dependencies must be clearly identified and, if still needed, re-connected (possibly rebuilt) and made operational along with the migrated system.
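A rough sketch of that source “mining” is below: scanning COBOL text for static CALL targets and COPY (copybook) members. The sample source is invented, and dynamic calls through a data item (CALL WS-PROGRAM-NAME) would need deeper analysis than this simple pattern pass.

```python
# A rough sketch of "mining" application source to surface dependencies:
# scanning COBOL text for static CALL statements and copybook includes.
# Only literal CALL targets are caught; dynamic calls and JCL-level
# dependencies require more sophisticated tooling.
import re

CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9$#@-]+)'", re.IGNORECASE)
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def find_dependencies(source: str) -> dict[str, set[str]]:
    """Return statically visible CALL targets and COPY members in one program."""
    return {
        "calls": set(CALL_RE.findall(source)),
        "copybooks": set(COPY_RE.findall(source)),
    }

sample = """
       COPY CUSTREC.
       PROCEDURE DIVISION.
           CALL 'PAYCALC' USING WS-PAY-REC.
           CALL 'AUDITLOG' USING WS-AUDIT-REC.
"""
deps = find_dependencies(sample)
print(sorted(deps["calls"]))      # ['AUDITLOG', 'PAYCALC']
print(sorted(deps["copybooks"]))  # ['CUSTREC']
```

Run across the whole portfolio, even a crude scan like this yields a dependency graph that helps scope which programs and interfaces must move (or be reconnected) together.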
Running Parallel Systems
For a period, the mainframe application, while still in production use, and the newly migrated system on the new platform may need to run in parallel. Planning and executing this parallel run will be a challenge, and will require extra time and attention to make it successful.
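One way to get value from a parallel run is to reconcile keyed output from both systems and flag differences for investigation before cutover. The record keys and values below are illustrative assumptions.

```python
# A minimal sketch of parallel-run reconciliation: comparing keyed records
# produced by the production mainframe and the migrated system. The keys
# and values here are invented for illustration.

def reconcile(legacy_rows: dict[str, str],
              migrated_rows: dict[str, str]) -> dict[str, list[str]]:
    """Classify keys as matched, mismatched, or present on only one side."""
    legacy_keys, migrated_keys = set(legacy_rows), set(migrated_rows)
    common = legacy_keys & migrated_keys
    return {
        "matched":       sorted(k for k in common if legacy_rows[k] == migrated_rows[k]),
        "mismatched":    sorted(k for k in common if legacy_rows[k] != migrated_rows[k]),
        "legacy_only":   sorted(legacy_keys - migrated_keys),
        "migrated_only": sorted(migrated_keys - legacy_keys),
    }

legacy   = {"A1": "100.00", "A2": "250.50", "A3": "75.25"}
migrated = {"A1": "100.00", "A2": "250.55", "A4": "10.00"}
result = reconcile(legacy, migrated)
print(result["mismatched"])  # ['A2']  -- investigate before go-live
```

An automated comparison like this, run after every parallel batch cycle, turns the parallel-run period into concrete evidence that the migrated system behaves like the original.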
Another case where you may choose to run parallel systems is to achieve quick reductions in mainframe processing consumption by moving the development and test environments to an AWS-based emulator while keeping the production system on the mainframe in the interim.