If you have a mainframe, you have invested in building a reliable platform and application portfolio that has served as the backbone of your business. But the technology landscape of today requires more flexibility and agility at a lower cost than mainframes can provide.

At Astadia, a Microsoft Gold Partner, we have found that customers are turning to Azure as a modern and flexible option for running mainframe application workloads, and they are leveraging past investments in mainframe applications and data.

When carefully planned, managed, and executed, the rewards of moving mainframe workloads to Azure are numerous. Besides the cost savings of the pay-as-you-go model, once your mainframe application set has been fully deployed on Azure, you will have the freedom to integrate proven business logic with modern technologies for data analytics or mobile enablement, expanding your business to new markets, customers, and partners. With that in mind, migrating mainframe applications to the cloud seems more like a necessity than a luxury.

In this post, I will walk through a five-step methodology we have found helpful for moving mainframe applications to Azure.

Figure 1 – The five-step DDMTI methodology for migrating mainframe workloads to Azure: Discover, Design, Modernize, Test, Implement.

We recommend you reuse the original application source components and data, and redeploy them to modern Azure services. Mainframe migration enablement tools can keep existing source code intact, but you should also expect to replace some components and rethink data storage to take advantage of Azure SQL PaaS and other Azure storage offerings.

A least-change approach like this reduces project cost and risk compared to manual rewrites or package replacements, while still letting you integrate with new technologies, reach new markets, and leverage a 20- or 30-year investment.

Once migrated, the application will resemble its old self enough for existing staff to maintain its modern incarnation; they have years of valuable knowledge they can use and pass on to new developers.

By redeploying legacy mainframe assets to Azure, you are poised to easily integrate them into your existing DevOps processes. Doing so brings mission-critical legacy systems into the age of automated testing, continuous improvement, and continuous deployment.

And finally, some legacy migration enablement tools will also allow you to deploy your systems as containers under Docker, which further improves reliability and reduces deployment issues. In some cases, you can even deploy legacy functionality to serverless-computing environments to take advantage of the additional cost savings and flexibility it offers. So now let’s take a look at the five-step process to bring your legacy assets into the modern world.

Step 1: Discover (Mainframe Assessment)

The first step is cataloging and analyzing all applications, languages, databases, networks, platforms, and processes in your environment. Document the interrelationships between applications and all external integration points. Use as much automated analysis as possible, and feed everything into a central repository.

Astadia employs a combination of commercial analysis tools, like Micro Focus Enterprise Analyzer, and our own specially developed parsers to analyze legacy code quickly and efficiently. The output of this analysis is used to establish migration rules that are fed into Astadia's Code Transformation Engine; these rules are updated and refined throughout the project.
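To give a sense of what that automated cataloging can look like, here is a minimal, hypothetical sketch in Python (not Astadia's actual parsers): it walks an unloaded source tree, classifies members by file extension, and records line counts and static CALL targets in a CSV file that can be loaded into the central repository.

    # inventory.py - hypothetical discovery pass over an unloaded mainframe source tree.
    # Classifies members by extension, counts lines, and records static CALL targets.
    import csv
    import re
    from pathlib import Path

    SOURCE_ROOT = Path("unloaded_source")   # assumed local copy of the mainframe libraries
    LANGUAGES = {".cbl": "COBOL", ".cpy": "Copybook", ".jcl": "JCL", ".asm": "Assembler"}
    CALL_PATTERN = re.compile(r"CALL\s+'([A-Z0-9$#@-]+)'", re.IGNORECASE)

    with open("inventory.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["member", "language", "lines", "static_calls"])
        for path in SOURCE_ROOT.rglob("*"):
            if not path.is_file() or path.suffix.lower() not in LANGUAGES:
                continue
            text = path.read_text(errors="replace")
            calls = sorted(set(CALL_PATTERN.findall(text)))
            writer.writerow([path.name,
                             LANGUAGES[path.suffix.lower()],
                             len(text.splitlines()),
                             ";".join(calls)])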

Step 2: Design

After analyzing all of the source code, data structures, and end-state requirements, it’s time to design and architect the solution. The design should include the following details:

Azure instance details: In most cases, load-balanced, general-purpose Dv2 or Dv3 Series instances are suitable for transactional production, pre-production, and performance environments, while general-purpose A or B Series instances fit development, test, and integration environments.

Transaction loads: Non-functional requirements and performance requirements, such as high throughput and scalability, are often critical for mainframe workload execution. This implies careful design and sizing of the underlying Azure network, storage, and computing services.

Batch requirements: Almost every mainframe runs batch applications, which are typically I/O intensive and require very low latency from storage or data stores. Because this can sometimes be a challenge for distributed systems, batch infrastructure needs to be designed and tested early to ensure the appropriate compute and storage resources are selected, and I/O access routines are tuned for maximum efficiency. In most cases, a combination of compute power and well-tuned SQL will result in performance that meets or beats current batch processing windows (see the sketch after this list).

Programming language conversions and replacements: Languages that are not supported or available on the target environment can be converted with tools or replaced with newer components.

Integration with external systems: Mainframes are commonly the back end or system of record for satellite or partner systems, and that integration must be preserved after migration. This includes protocols, interfaces, latency, bandwidth, and more.

Third-party software requirements: Each Independent Software Vendor (ISV) may or may not offer a functionally equivalent product on Azure, so each package needs its own migration path.

Planning for future requirements: Business and IT strategies and priorities dictate architecture decisions, especially around addressing future performance and integration capabilities.

Source code may include languages such as COBOL, PL/I, Natural, Assembler, JCL, etc. Data stores may include networked, hierarchical, relational, or file-based data stores.
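To make the batch-tuning point above concrete, the following sketch shows the kind of set-based SQL that can replace record-at-a-time master-file processing once the data lives in Azure SQL. The table, columns, and connection string are placeholders invented for illustration; Astadia's actual tooling is not shown here.

    # batch_update.py - hypothetical sketch: set-based SQL instead of record-at-a-time I/O.
    import pyodbc

    CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=...;Database=...;"  # placeholder

    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()

    # Legacy batch programs often read a master file record by record and rewrite it.
    # On Azure SQL the same nightly interest posting can run as one set-based statement,
    # letting the database engine do the I/O instead of the application loop.
    cursor.execute("""
        UPDATE accounts
           SET balance = balance + (balance * daily_rate),
               last_posted = CAST(GETDATE() AS date)
         WHERE status = 'OPEN'
    """)
    conn.commit()
    print(f"{cursor.rowcount} accounts posted in a single pass")

Letting the database engine apply the change in one pass, rather than looping over records in the application, is usually the biggest single win in closing batch windows.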

Migrating IBM mainframe applications to Azure:

Figure 2 – The core component of the mainframe migration architecture is Astadia's Mainframe Cloud Framework, which uses a suite of emulators and utilities to execute the legacy code.

The core component of the architecture in Figure 2 is the Mainframe Cloud Framework, which uses a suite of emulators and utilities to execute the legacy code. In this scenario, Micro Focus Enterprise Server provides the necessary transaction processing features of CICS and IMS to support redeployed mainframe code. This Mainframe Cloud Framework runs on Azure Virtual Machines for compute resources.

In most cases, mainframe hierarchical data structures will be converted to a relational database management system (RDBMS) such as Azure SQL. Flat file structures, such as VSAM, are supported by Enterprise Server and can retain their current structure. Elasticity of the solution is provided by Azure services such as Azure Load Balancer and Autoscale.

You’ll want to carefully select your mainframe redeployment tools; we recommend choosing ones that require the least amount of change, since that greatly reduces project cost and risk. For example, Astadia normally uses Micro Focus Enterprise Developer for development and Enterprise Server for emulating transaction monitors and running batch workloads. This combination allows COBOL applications to be migrated to Windows and Linux with minimal change to the original source. However, you will need to design custom-developed solutions to meet requirements that aren't met by emulation tools. COBOL is almost always migrated, but programs written in languages like Assembler will need to be rewritten because they are not supported by the target emulation environment.

Some program functions may be replaced by the target operating system or other target-platform components, so do a little analysis to find the gaps. Some legacy Assembler sort functions, for example, may be replaced by RDBMS SQL clauses. This is also where you will need to define your data migration strategy. You can keep flat files in their legacy flat form, but you may want to consider converting them to relational form to facilitate integration with modern SQL-based tools and scalability on a proven RDBMS. Doing this introduces additional effort and some risk to the project, so you need to weigh that extra cost against the benefits to determine whether it's worth the effort. Hierarchical data should be converted to relational data using conversion tools or extract-transform-load (ETL) programs.
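As an illustration of the flat-file-to-relational option, the sketch below decodes fixed-length EBCDIC records from a hypothetical VSAM extract and inserts them into an Azure SQL table. The record layout, table, and connection string are invented for the example; a real conversion would be generated from the copybooks.

    # vsam_etl.py - hypothetical sketch: load a fixed-length EBCDIC extract into Azure SQL.
    import pyodbc

    # Invented layout: account X(10), name X(40), balance PIC 9(9)V99 (unsigned zoned decimal).
    RECORD_LENGTH = 61
    CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=...;Database=...;"  # placeholder

    def parse_record(raw: bytes) -> tuple:
        text = raw.decode("cp037")            # EBCDIC code page 037 to Unicode
        account = text[0:10].strip()
        name = text[10:50].strip()
        balance = int(text[50:61]) / 100      # two implied decimal places
        return account, name, balance

    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    with open("customer.vsam.extract", "rb") as extract:
        while True:
            raw = extract.read(RECORD_LENGTH)
            if len(raw) < RECORD_LENGTH:
                break
            cursor.execute(
                "INSERT INTO customer (account_id, name, balance) VALUES (?, ?, ?)",
                parse_record(raw))
    conn.commit()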

Step 3: Modernize

This is an iterative, automated process that uses Astadia's Rules-Based Transformation Engine to make mass changes to source code. If the modified code compiles, it's ready for unit testing. If it doesn't, developers review the errors, find a fix, update the migration rules, and run the program(s) through the engine again. Often, a fix made for one program can be applied en masse to correct the same errors in other programs, giving you economies of scale.
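The engine itself is proprietary, but the underlying idea of rules-based mass change can be sketched in a few lines of Python: a rule set of pattern-and-replacement pairs (the rules below are invented examples) is applied to every source member, and the rule set grows as compile errors are diagnosed.

    # transform.py - hypothetical illustration of rules-based source transformation (not Astadia's engine).
    import re
    from pathlib import Path

    # Each rule is a (pattern, replacement) pair; new rules are added as compile errors are analyzed.
    MIGRATION_RULES = [
        (re.compile(r"EXEC\s+CICS\s+LINK\s+PROGRAM\('MAINMENU'\)"),   # invented example rule
         "EXEC CICS LINK PROGRAM('MENU0001')"),
        (re.compile(r"ASSIGN\s+TO\s+SYS\d+-UT-\S+"),                  # invented file-assignment rewrite
         "ASSIGN TO EXTERNAL INFILE"),
    ]

    Path("cobol_out").mkdir(exist_ok=True)
    for source in Path("cobol_src").glob("*.cbl"):
        text = source.read_text(errors="replace")
        for pattern, replacement in MIGRATION_RULES:
            text = pattern.sub(replacement, text)
        Path("cobol_out", source.name).write_text(text)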

Figure 3 – As you go through the modernization process, the Astadia Rules-Based Transformation Engine, with improved migration rules, becomes faster and more accurate at migrating follow-on source code.

As you go through the modernization process with more source code files, the Transformation Engine with improved migration rules gets faster and more accurate for changing follow-on source code. This is because source code files tend to repeat the same coding patterns requiring the same transformation rules. While the legacy code targeted for redeployment is going through these iterations, you should also take steps to write new code to replace those legacy components that will not migrate to Azure.

This step also includes building out and validating the new databases. To make this easier, Astadia has developed a DDL conversion tool that analyzes legacy data file layouts and database schemas, and then generates flat-file and relational schemas for the target databases, as well as the ETL programs needed to migrate the data. Once the target file and database environment has been validated, static data can be migrated in parallel with code migration and development activities.

Dynamic data—data that changes frequently—will be migrated during cutover to production.
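The actual DDL conversion tool is considerably more sophisticated, but the core idea can be illustrated with a short sketch that maps a few common COBOL PIC clauses (the copybook fields below are invented) to Azure SQL column definitions and emits the corresponding CREATE TABLE statement.

    # ddl_sketch.py - hypothetical mapping from COBOL PIC clauses to SQL Server column types.
    import re

    def sql_type(pic: str) -> str:
        """Translate a simplified COBOL PIC clause into a T-SQL type."""
        m = re.fullmatch(r"X\((\d+)\)", pic)
        if m:                                   # alphanumeric -> CHAR(n)
            return f"CHAR({m.group(1)})"
        m = re.fullmatch(r"9\((\d+)\)(?:V9\((\d+)\))?", pic)
        if m:                                   # numeric with optional implied decimals -> DECIMAL(p, s)
            digits, decimals = int(m.group(1)), int(m.group(2) or 0)
            return f"DECIMAL({digits + decimals},{decimals})"
        raise ValueError(f"Unhandled PIC clause: {pic}")

    # Invented copybook fields for a CUSTOMER record.
    fields = [("ACCOUNT_ID", "X(10)"), ("CUST_NAME", "X(40)"), ("BALANCE", "9(9)V9(2)")]

    columns = ",\n  ".join(f"{name} {sql_type(pic)}" for name, pic in fields)
    print(f"CREATE TABLE customer (\n  {columns}\n);")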

Step 4: Test

The good news about testing is that you mostly need to focus on the code that has changed. You may decide not to unit test every line of code, since most of it is untouched; instead, testing should focus on:

• Integration

• Data accesses

• Sorting routines that may be affected by using ASCII vs. EBCDIC (see the sketch after this list)

• Code modifications to accommodate data type changes

• Newly developed code
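The collation point is worth a closer look, because it is easy to demonstrate and to regression test: EBCDIC sorts letters before digits, while ASCII sorts digits before letters, so any migrated job that must reproduce mainframe sequence needs an explicit collation key. A small illustrative sketch (the key values are invented):

    # collation_check.py - hypothetical check that a migrated sort preserves legacy EBCDIC order.
    keys = ["A123", "a123", "0999", "ZEBRA", "zebra"]

    ascii_order = sorted(keys)                                    # native ASCII/Unicode collation
    ebcdic_order = sorted(keys, key=lambda s: s.encode("cp037"))  # legacy EBCDIC (code page 037) collation

    print("ASCII :", ascii_order)    # digits sort before letters
    print("EBCDIC:", ebcdic_order)   # lowercase, then uppercase, then digits

    # The difference between the two sequences is exactly what regression tests must catch.
    assert ascii_order != ebcdic_order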

Any Continuous Integration/Continuous Deployment (CI/CD) pipeline test that executes from a non-mainframe platform (such as a T27 client) can be kept unchanged and continue to follow DevOps best practices.

Because many legacy applications have few, if any, test scripts and documentation, you will likely need to spend time and resources to develop test scripts. We recommend investing the time in developing the proper test procedures to make your applications more robust on Azure.

This is also when you may want to consider implementing automated testing and deployment in support of DevOps. Although the planning and setup to achieve this will add time and effort to the project, it will pay off by accelerating testing during implementation as well as post-production. You will also need to perform load and stress tests to ensure your applications are prepared to handle high volumes.
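A basic load-test harness needs nothing more than the standard library. In the hypothetical sketch below, the endpoint URL is a placeholder for wherever your migrated transactions are exposed, for example a web front end published in front of the emulation environment.

    # load_sketch.py - hypothetical concurrency smoke test against a migrated transaction endpoint.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    ENDPOINT = "https://example-migrated-app.azurewebsites.net/account-inquiry"  # placeholder URL
    REQUESTS = 200
    CONCURRENCY = 20

    def call_once(_):
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = list(pool.map(call_once, range(REQUESTS)))

    timings.sort()
    print(f"median {timings[len(timings) // 2]:.3f}s, "
          f"p95 {timings[int(len(timings) * 0.95)]:.3f}s over {REQUESTS} requests")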

Step 5: Implement

When migrated applications have been tested, verified, and optimized, the process of deploying those applications can begin. In reality, many deployment activities are initiated in parallel with earlier phases—things like creating and configuring Azure instances, installing and configuring mainframe emulation software (e.g. Micro Focus Enterprise Server), migrating static data, and other infrastructure or framework activities.

In some cases, environments may be replicated to achieve this, or existing environments may be repurposed. Such replication is typically facilitated by automation tooling such as Azure Resource Manager (ARM) templates and Azure Automation. The specifics depend on application and data characteristics and on any company standards or preferences you might have. After dynamic data is migrated and validated, the cutover to production can be completed.
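As a simple illustration of that kind of automation, replicating an environment from an ARM template can be scripted as follows; the resource group and file names are placeholders, and the same command works equally well from a pipeline.

    # provision.py - hypothetical sketch: replicate an environment from an ARM template via the Azure CLI.
    import subprocess

    RESOURCE_GROUP = "rg-mainframe-preprod"           # placeholder names
    TEMPLATE_FILE = "mainframe-env.template.json"
    PARAMETERS_FILE = "preprod.parameters.json"

    subprocess.run(
        ["az", "deployment", "group", "create",
         "--resource-group", RESOURCE_GROUP,
         "--template-file", TEMPLATE_FILE,
         "--parameters", f"@{PARAMETERS_FILE}"],
        check=True,
    )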
