Bob Ellsworth: I'm Bob Ellsworth, the Director of Mainframe Transformation at Microsoft.
Steve Steuart: I'm Steve Steuart, CTO of Astadia.
Bob: And I've been with Microsoft for just 18 years. Prior to that, I spent 30 years working on mainframes both for IBM and for Amdahl. I started this mainframe transformation initiative about 10 years ago, helping customers migrate their existing mainframes to take advantage of the latest technology.
Steve: Yes. And I'm Steve Steuart, CTO of Astadia. We've been doing legacy modernization strategies for the last 20 plus years. Looking forward to explaining how we do these things today.
Bob: Great. Thank you, Steve. When we think of mainframe transformation, it really is about taking advantage of the latest technologies, as I've mentioned. And we think of Azure as the new mainframe; as we go through this presentation, you'll learn why Azure is the new mainframe. We've been experiencing an amazing time in the industry, in technology, and it's a lot like the transformations and revolutions that we've seen in the past. You think of revolutions as those that drive a huge impact on society, on economics, on transportation. That really started with the steam engine and the industrial revolution. The second revolution was the electrical revolution with the advent of electricity, followed by electronics and IT. And this is when I started programming, back in 1968, 1969. Today, we're in another huge impact on society, which is the digital revolution, or digital transformation.
When you think of the digital transformation, the technology that's really driving it is the advent of cloud capabilities: the ability to deploy IT solutions and technologies outside of the data center. This spans the gamut from productivity to business apps, new applications, data intelligence, and security and management. But when we think of these cloud capabilities, it's so important to have the key functionality, key capabilities, and support of the cloud. How Microsoft differentiates itself from our competitors is that we have a global presence - we'll talk about each one of these - that we're highly trusted, and that you have the opportunity to take advantage of the cloud when it makes good business sense, through hybrid technologies. Think of the investment Microsoft has made in technology, in data centers worldwide. We have more data centers than our competitors: 54 regions worldwide, available in 140 countries. And most importantly, when we think of Azure as the mainframe, it's about the high level of availability that we've created through availability zones. We continue to expand our investments and provide data centers where customers need them on a global basis.
When we think of trust: back in the mainframe days, when we were building customer applications in the data center, it was really up to every customer to ensure that they abided by the standards for their particular industries. The same applies today, in this digital transformation, using the cloud. It's so important that the cloud provides the highest level of trust and has the accreditations required to support each industry that wants to use it. Our cloud is the most trusted and compliant cloud available. As I mentioned, hybrid is one of the areas where Microsoft really differentiates itself. The reason we invested so much in providing a hybrid environment is that we want to be sure you take advantage of the cloud when it makes good business sense, and that we make it easy to have a hybrid environment supporting both your on-premise use of technology and your cloud use of technology. We've done this through consistent identity, applications, and data management. A great example is Active Directory. Everyone uses Active Directory to authenticate users within the data center, and we created Azure Active Directory so you can have a common identity. We replicate between your on-premise Active Directory and Azure Active Directory, so no matter where you run your applications, you can authenticate users against the same information. We make it easy to manage both the on-premise environment and the Azure environment through integrated management and security technologies.
We also have consistency in data, so with your on-premise SQL Server, you're able to replicate that up, use Azure data services, and have a consistent data platform. And lastly, in the DevOps environment, we have consistency where you can build applications to run in the cloud and, through Azure Stack, run those very same advanced-services applications within your data center. Now, to illustrate this hybrid platform strategy, imagine that today you're running the majority of the work in your own data center with your own infrastructure: you've got racks of x86 running Windows and Linux, and perhaps you have mainframe or mid-range environments. What's important is your ability to go beyond the data center and take advantage of new technologies and capabilities in the cloud. That's where the Azure public cloud comes in. With the Azure global data centers, we provide infrastructure as a service, platform as a service, and advanced workloads, including software as a service. And again, this way you can take advantage of the cloud when it makes the best business sense, on an application-by-application basis. One of the most mature ways of using cloud services is software as a service. For Microsoft, this includes things like Office 365, Dynamics CRM, and AX, and also the global software-as-a-service marketplace. So if an application is one you can simply consume and don't have to maintain or support yourself, it makes good sense to leave that burden on the cloud provider and use software-as-a-service technology such as Office 365 and these others.
In addition, if you do need to run your advanced applications, consuming advanced services, within your data center - say you have a data sovereignty challenge where you need to keep your data within the walls of your data center - you now have the ability to use Azure Stack. You can build those same applications and run them in the public cloud when it's appropriate, or take those same advanced-services applications, such as machine learning and artificial intelligence, and run them within your data center itself. Now, when you think of the mainframe and how it fits into this new world of digital transformation: for most customers, the mainframe environment - IBM Z series or Unisys mainframes - is a very expensive platform to be running today. And that's just the beginning. As we have engaged with customers around the world, the number one issue historically has been cost. By doing a mainframe transformation - moving off those legacy mainframes and running those workloads on an x86 platform, and now in the cloud - you're able to substantially reduce cost. But that's just the beginning of the journey. Mainframe transformation is about reducing cost with x86, but it's also about going beyond that and being able to consume advanced services.
Think of the application model back in the early 2000s. It was all about web services and service-oriented architecture. That was one of the first ways of building a custom application that went beyond a single platform to other platforms within your data center. Well, today, the cloud is really the next step in the evolution of application development, where you build applications that consume services rather than having custom applications provide those services. A great example is cognitive recognition. No one would want to start from scratch and build their own cognitive recognition application, such as facial, text, or voice recognition. Instead, in modern application development, you consume those services from the cloud, from Azure. The same thing with artificial intelligence and machine learning: you wouldn't want to build those systems from scratch when you can consume those services from the cloud. When you transform your mainframe and take your existing workloads, through technologies like rehosting, and move them into the cloud, you're then able to extend those applications and consume those advanced services. It's a way of continuing the journey of your mainframe transformation. Now, for those of you that have worked with mainframes for a number of years and that's all your history, I really share that history with you. As I mentioned, I started in IT back in 1971. I spent 30 years working at IBM and at Amdahl, a competitor of IBM - 24 years at Amdahl, working on mainframes. When I came to Microsoft, I was in the Windows Server team, responsible for enterprise credibility: reliability, availability, serviceability. I came in with a list of 150 enhancements we needed to make to Windows and SQL to be more enterprise-capable.
I had the opportunity of going in front of Bill Gates and Steve Ballmer and asking for an investment to allow us to improve the reliability, availability, and serviceability of Windows and SQL. I got that investment. And then I drove tremendous product changes in the Windows Server team. Several of those product changes continue to grow and be enhanced today. And let me walk you through some of those.
When we think of reliability, a big key area is being able to capture and recover from hardware errors, as on the mainframe. On the IBM System/370, which I started working on in 1972 when it was first released, there's a capability called alternate CPU recovery. What that did was capture CPU failures and allow the processor to be taken offline while the system continued to run. Later on, in 1986, IBM came out with the processor availability feature, where we actually recovered the application that was running at the time the processor failed. When I came to Microsoft, I worked with Engineering and asked them, "What happens if you lose a processor?" And I was told, "Well, you get a blue screen, of course." Well, that's not very modern; it doesn't deliver the reliability you need. So I worked with Engineering and Intel to create the Machine Check Architecture for Itanium, which we then implemented on Nehalem. This allows us to capture those hardware errors, just like the processor availability feature, and recover from them.
Think of availability and avoiding system outages: those of you working on the mainframe are very familiar with technologies such as Parallel Sysplex, using the coupling facility, and then Geographically Dispersed Parallel Sysplex. Well, we've replicated that same functionality in Azure. Initially with SQL Always On - high availability for SQL databases, recovering across systems - then Azure Site Recovery and concurrent system patching, and also the Azure implementation of geo-replication services. You're able to go across large geographies and ensure that if there's a catastrophe in one location, you can fail over to another Azure data center.
We think about serviceability as the ability to maintain your system externally. On the IBM mainframe, you have the independent service processor. In Azure, we've got the Azure Management Console, which is separate from the VMs or the services you're utilizing. We also have the Virtual Machine Manager.
We think of virtualization, which everyone's familiar with on the mainframe: VM, which started with VM/370 in 1972 - the first system I worked on - and PR/SM logical partitioning. Well, when I came to Microsoft, we didn't have a virtualization technology. I have three patents in virtualization on the mainframe, and so I led the charge to acquire Connectix to become our virtualization technology. We first released Virtual Server, which evolved into Hyper-V. And Hyper-V also provides the virtualization capabilities of Azure.
We think about security. On the mainframe, you use RACF, Top Secret, or ACF2. In Azure, we use Azure Active Directory, which allows you to authenticate users as they sign into the system or access applications, against the same information. During a mainframe transformation, we replicate information out of the mainframe security systems into Azure Active Directory.
Now, on the mainframe, you typically don't run at 100% utilization every day; you've got peaks and valleys in utilization. At month-end, you may turn on additional processors, called Capacity Upgrade on Demand. This again is one of the patents that I achieved at Amdahl and that IBM cross-licensed for their Capacity Upgrade on Demand. In Azure, we have a very similar capability, which is elastic computing. The idea is, in the cloud - with the new mainframe - you only pay for what you use. By only using additional capacity when you need it, you're not keeping that capacity in reserve and paying for it when you don't need it.
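The pay-for-what-you-use point can be sketched with some simple arithmetic. All of the rates and the utilization profile below are invented, illustrative numbers, not actual Azure or mainframe pricing:

```python
# Illustrative comparison of reserved peak capacity vs. elastic (pay-per-use)
# capacity. The hourly rate, core counts, and peak window are made-up example
# numbers, not actual Azure or mainframe pricing.

HOURLY_RATE_PER_CORE = 0.50   # assumed cost of one core-hour
BASELINE_CORES = 8            # capacity needed on a normal day
PEAK_CORES = 32               # capacity needed at month-end
PEAK_HOURS = 72               # hours per month spent at peak
HOURS_PER_MONTH = 720

def reserved_cost():
    """Own enough capacity for the peak, all month long (the mainframe model)."""
    return PEAK_CORES * HOURS_PER_MONTH * HOURLY_RATE_PER_CORE

def elastic_cost():
    """Run the baseline, and scale out only for the peak hours (the cloud model)."""
    baseline = BASELINE_CORES * HOURS_PER_MONTH * HOURLY_RATE_PER_CORE
    burst = (PEAK_CORES - BASELINE_CORES) * PEAK_HOURS * HOURLY_RATE_PER_CORE
    return baseline + burst

print(f"Reserved for peak: ${reserved_cost():,.2f}/month")
print(f"Elastic scale-out: ${elastic_cost():,.2f}/month")
```

Even with made-up numbers, the shape of the saving is the point: you stop paying all month for capacity you only need at month-end.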
When we think about turning data into action, it really is about new functionality, new capabilities to do machine learning and AI, which is provided by Azure. On the mainframe, of course, you may be using Cognos, SAS, Watson, and others. But in Azure, you've got all kinds of new technologies that are coming to the market every day to turn data into action.
When we think about the development environment - and hopefully you've moved beyond using TSO/ISPF for application development and are using technologies like IBM Developer for z, which runs on Eclipse - well, in Azure we also have support for Eclipse and Visual Studio, through Azure DevOps. By using Azure DevOps, you're able to attract new graduates out of college who know Eclipse or Visual Studio, and they can use their knowledge of these development environments to maintain existing legacy applications, such as COBOL applications.
And lastly, we think of modern application development going beyond the developer tools to deploying technologies in new ways, through Azure Kubernetes Service and Azure Service Fabric. Those capabilities are not available on the mainframe. So hopefully, this gives you some idea not only of how Azure delivers capabilities similar to a mainframe in reliability, availability, and serviceability, but also of how Azure goes beyond what you can do on the mainframe, allowing you to embrace new advanced workloads, new modern ways of doing DevOps, and new ways of deploying applications and solutions.
So with that, let me pass it on to Steve to talk much more about the implementations of Azure in the new mainframe.
Steve: Thank you very much, Bob. One of the exciting things for us as a company is being involved in legacy modernization. It's probably the most exciting time there is, mainly because a lot of times we were just doing an infrastructure play - just moving to a lower-cost platform. But by leveraging the cloud and all of the services and frameworks that are contained within Azure, you're able to take a lot of your existing data and leverage it with Power BI, AI, and machine learning. So I'm going to walk you through a little of how we do these things. But before we start, Astadia has been in this business for well over 25 years doing legacy modernization strategies. We started off offloading development to XENIX and Santa Cruz Operation UNIX boxes way back in the day. Since then, we had clients that were offloading development and test, and they'd say, "Hey, since I'm testing here, can I run it here?" In 1994, we did our first mainframe migration, and we've done about 200 since then. In the last 18 months, we've seen exponential growth in moving workloads to the cloud. With the investments that Microsoft has made in Azure - Azure, to me, is the new mainframe. It has all the capabilities. It can scale. It can meet the performance requirements. We're talking to clients right now that are in the 300,000 MIPS range, going to the cloud. So the cloud is ready to go. Astadia was awarded top performer for mainframe to Azure - that's our CEO, Scott Silk, and Bob is right there.
Let's talk a little bit about how we actually do these things. If you look at what's happening, there's a transformational change happening in government and also in the private sector. The baby boomers are retiring - research says 10,000 people a day turn 65. And so there's what I call the passing-of-the-torch scenario. You have applications that work and run, but they need to be rejuvenated. Your options are to rewrite everything - which is really complex and prone to failure - or to reuse what you currently have. If you're able to take the existing applications you have, rejuvenate them by moving them to the new mainframe, which is Azure, and then expose that data and all of those things, that's where we're seeing the biggest bang for the buck.
All right, so let's talk about the mainframe. This is your standard IBM mainframe or your Unisys mainframe. This is the original cloud. Think about what the mainframe did: I was an operator, and I would get a phone call from some business user going, "Where is my report? Where are my files? What's taking so long?" And I was the one doing the elastic computing, by updating the priority for that program and making sure he got his report. What's happened is that the mainframe has now transformed into the cloud. In Azure, instead of the phone call, you have automated ways to expand and contract, to set priorities. If you're queued up, it spins up another VM, those kinds of things. You're able to scale out and scale down as necessary. The mainframe was the original cloud, and now we're going to talk about how we move it to the new cloud, which is Azure.
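The queue-up-and-spin-up behavior described above can be sketched as a small scaling rule. The thresholds, instance limits, and sample load profile here are illustrative assumptions, not Azure autoscale defaults; in Azure, the equivalent would be configured as autoscale rules rather than hand-written:

```python
# Minimal sketch of queue-depth-based scaling: if the queue backs up past a
# threshold, add an instance; if it drains, shed one. All values illustrative.

SCALE_OUT_DEPTH = 100   # queue depth that triggers adding an instance
SCALE_IN_DEPTH = 10     # queue depth below which we can shed an instance
MIN_INSTANCES = 2       # keep at least two instances for availability
MAX_INSTANCES = 10

def desired_instances(queue_depth: int, current: int) -> int:
    """Return how many instances we should be running for this queue depth."""
    if queue_depth > SCALE_OUT_DEPTH and current < MAX_INSTANCES:
        return current + 1
    if queue_depth < SCALE_IN_DEPTH and current > MIN_INSTANCES:
        return current - 1
    return current

# Walk a sample load profile: ramp up at month-end, then drain back down.
instances = MIN_INSTANCES
for depth in [5, 150, 300, 250, 80, 4, 3]:
    instances = desired_instances(depth, instances)
    print(f"queue depth {depth:>3} -> {instances} instance(s)")
```

This is the automated replacement for the operator's phone call: the rule reacts to load instead of a person reprioritizing jobs by hand.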
These are all the components that exist on the mainframe, and each one we can map to Azure. If you have Assembler, COBOL, PL/I, Natural, Fortran, all of those things, there is a corresponding target - not everything can migrate over, there are some things you're going to have to rework - but there is a corresponding thing. If you look at your security, you can definitely map RACF, ACF2, and Top Secret over to AD, and also leverage SQL Server security for those types of mappings. There is a corresponding mapping for each component, and this is what we do, working with our partners at Microsoft: a complete mapping of what you currently have and where it needs to go.
So we start off with discovery. What do you have? What is your journey? Where do you want to go? Then we start laying out the stepping stones to get you to the end place you want to reach. We design the architecture. We start modernizing your code and your applications and moving them over. Testing is a key component. One of the excellent by-products of migrating these applications is the test scripts that we're going to need. This is a great opportunity to start leveraging some of the automated test tools that Azure DevOps has, which can also interface with the open-source arena - using Jenkins and things of that nature - for automated testing. You have to create test scripts anyway, so you might as well leverage that. And that is a return on your investment, because the test script doesn't get thrown away; you continue using it. So you can start moving to more of a DevOps-style culture.
Then implementing and deploying it, and also mapping how you operate: the way you operate your mainframe today is going to change when you operate on Azure, and we'll walk you through that. And then you're running on that platform. One of the things we've seen is that we have to do that mapping of everything - basically, for everything you currently have on the mainframe, there is a corresponding thing. We've talked about RACF to AD. But things are now available to you like Azure OMS, where you're able to monitor your applications and react if something occurs. Start leveraging all the different things that exist in the Azure framework. To me, where the real juice comes in is being able to expose your dormant data to AI and machine learning, and being able to identify trends and react to those kinds of things. We always talk about this: people have a mainframe, and they've gone into the cloud or even onto [inaudible 21:50] services, but they never really designed it to be mainframe class. If you're going to embrace the cloud and you have a mainframe today, treat it like a mainframe. Create these high-availability architectures. Leverage what you currently have.
Here is an example, at a very high level, of how we provide that fault-tolerant, fault-resilient framework. We usually create multiple regions and use Azure Traffic Manager. Bob was just talking about SQL replication; we do replication services so the data is synchronized between multiple regions. Load balancers: if your queue depth gets to a certain amount, spin up another instance. You meet your requirements, but you expand and contract as necessary. So here is a classic example, from a very high level, of how you can architect and deploy mainframe class within Azure. One of the things we do is ask not only "how do we deploy my mainframe in there?" but "what is in my mainframe?" What you think you know about your applications ain't so. That's because a lot of times, the last great application portfolio assessment project you did was Y2K. You need to look at what you currently have, understand the relationships and everything in there, and start identifying the groups of programs you want to bring down together. We use tools to identify those things - going through your JCL or your WFL [inaudible 23:23] and looking at all these different things. If you look at this series, we have three distinct - what I call star systems - here. If you take one and bring it on down, what are the dependencies when I slice that out? Can I update a file 30 seconds later? If I update a record in the cloud, do I need to update a record that exists on-prem in my hybrid solution? What are the SLAs for that? Do I need to do two-phase commit?
These are the forks in the road. When you're doing your modernization strategy, we will work with you to identify them. In some cases you may not be able to take one star system; you may have to take the whole thing. It's a case-by-case scenario. But we leverage tools to identify what you currently have. These are examples of some of the tools we use. We can actually drill down into a program, see where you are in relationship to your star system - everyone's got their own star system - and walk through some of these things. Then we can identify your roadmap based on what you currently have. And there may be a scenario where - well, I always liken the mainframe to my parents' garage when we had to move into a smaller house. You've got to go through and start purging what you currently have. What are the applications? Do an application rationalization initiative to identify the programs and applications that actually meet the demands of your business and are aligned to it, and start reducing your dependencies on apps. You've got to clean out your garage - or in this case, clean out your mainframe. We can help you with that portfolio assessment. These are just some more screenshots around that.
But let's talk about the art of the possible after you've taken your application and moved it on down. I want to address one quick thing. A lot of times people ask, "I've got this mainframe, it's super powerful, it runs really fast, it has the I/O - are you telling me you can run this in the cloud?" I'm telling you, you can. There are a couple of things that have happened over the years that allow us to do that. One: the mainframe, no doubt, has a superhighway for I/O. But with the advent of SSD - where the optimization strategies on the mainframe were to partition data and do things like that, because basically you're dealing with the spindle in the disk up there - we're leveraging SSD on this side, and you're able to meet those I/O demands. Also, because of AI and machine learning, the demand for faster chips from Intel and AMD - which are the chips in the cloud infrastructure that Azure has - means those chipsets now exceed the capacity needed for COBOL transactions. That, with cheap memory and SSD, means you can move massive workloads in there. Like I said, we're talking to clients with 300,000 MIPS today. But they're not taking 300,000 MIPS right out of the gate; they're taking an LPAR at a time. So you can take as big a mainframe as you currently have, configure it, and run it on there. So that means you have the processing capacity. Now we're able to extend. These are just some of the things Azure currently has in cognitive services: vision, speech, language, knowledge, and search. We can then expand what you currently have on your mainframe to other things. I'll use some examples, like AI chatbots - chatbots that you might want internally for password resets, or interfacing with your mainframe for people who just want a quick chat session to inquire about their account and things of that nature.
Those things you can open up pretty easily, without needing additional resources or personnel to answer the phones, since you can answer a lot of the questions via a simple chatbot. So that's an example. You create those interfaces - the tools allow you to consume web services - and that's how you're able to expand.
Vision is another one. We talked about facial recognition - Bob was just talking about some of this. We have clients that augment their RACF interface, now going to AD, so that your mainframe application can authenticate you based on facial recognition. Again, a very simple, easy way of expanding your application - like Windows Hello, but put in front of your existing mainframe. So I'll talk a little bit about some of the things we've seen out there. The Air Force is a project we're currently working on today, and their journey is: we want to get to predictive maintenance. Today they're very reactive, right? Something breaks, I fix it. I flew 500 hours, I need to do XYZ. Maybe we identify a defect in the aircraft as we're looking at something, and we're proactive and fix that. But how do I get to predictive maintenance? If you look at what the Air Force currently has: they know what aircraft they're maintaining, they know who the crews are, they know the start and end time for a mechanic to do an activity. After a flight of 500 hours, I've got to change the gasket, or whatever the maintenance activity is. They know the cost of the parts, they know the start and end times, all of those things. That's dormant data that I can take and feed into machine learning to teach it. If I fly a C-130 for 500 hours and I need to make this maintenance activity, machine learning will know what the cost will be. Add to that the telemetry data - the information contained within those black boxes that you're able to pull down and cross-reference with the aircraft - and you're able to predict maintenance activities and what parts you'll need. Right now, they're buying parts for the worst-case scenario.
And then if you're able to predict what my maintenance activity is based on the aircraft and how they're flying, that's how you're able to optimize your inventory structures, your staffing levels. And we're talking about millions and millions of dollars in savings by having that type of information.
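The Air Force example boils down to fitting a model on historical maintenance records and using it to predict upcoming costs. Here's a minimal sketch of that idea, with invented sample data and a plain least-squares fit; the real system would use Azure Machine Learning against actual maintenance history and telemetry:

```python
# Hypothetical historical records: (flight_hours, maintenance_cost_usd).
# These numbers are invented for illustration only.
HISTORY = [
    (100, 2_000),
    (250, 4_900),
    (500, 10_100),
    (750, 15_000),
    (1000, 20_200),
]

def fit_linear(points):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    numer = sum((x - mean_x) * (y - mean_y) for x, y in points)
    denom = sum((x - mean_x) ** 2 for x, _ in points)
    slope = numer / denom
    return slope, mean_y - slope * mean_x

slope, intercept = fit_linear(HISTORY)

def predict_cost(flight_hours: float) -> float:
    """Predicted maintenance cost for an upcoming activity."""
    return slope * flight_hours + intercept

print(f"Predicted cost after 600 flight hours: ${predict_cost(600):,.0f}")
```

With predictions like this per aircraft type and activity, you can size parts inventory and staffing to the expected case instead of the worst case, which is where the savings Steve describes come from.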
So look within your organization. What would you like to have? How can I expose my dormant data to take advantage of AI and machine learning for my business? Power BI, things like that. That's a really great example of how the Air Force today is looking at Azure to expose and analyze that dormant data to make smart business decisions - again, a byproduct of being able to move your mainframe-based applications into the new mainframe called Azure. COBOL and AI is real. We use Micro Focus; they've made a great investment in that. Leverage the assets, learn from others, rejuvenate the applications. A lot of times the CIO or the CEO says, "I want to use AI. I want to do that." And the perception is, "I need to rewrite everything from scratch." But a lot of these systems aren't documented. How are you going to rewrite something that's not documented? You could bring in an SI that doesn't know anything about your business to try to document it, and see how that goes. The best thing to do is take the existing applications you have and rejuvenate them on the new platform. So, with that, just think about what you currently have and how we can start leveraging a lot of those things. Bob, I don't know if there's anything you want to add, but this is a really exciting time. I'm really excited about the projects we're seeing today, leveraging the Azure framework. And I want to thank you for all the efforts you made many years ago, because they're really coming to fruition today.
Bob: Well, thank you, Steve, I really do appreciate it. Great job walking through how to take advantage of the cloud, and of these advanced services and new functionality that you just don't have available in your data center and your current systems. What you really demonstrate is that it's an amazing time, with these kinds of technologies available to supplement the legacy systems you had on the mainframe. And the idea is not to throw away what's been done. That great investment customers have made in their legacy systems continues to be the crown jewels of a lot of data centers, in the functionality and applications they provide to their users. So the idea is: don't throw away what you've got. Take that legacy investment and take it to the next level, by implementing functionality in Azure and by embracing new ways of developing functionality through these services. So Steve, when you think of the customers you're working with - you mentioned a number of examples, like the Air Force, where they're embracing machine learning and artificial intelligence, analyzing data in new ways - are you seeing that with other customers as well?
Steve: Yes, it's across the board. And it's interesting - this is a global problem, so we're seeing it globally, and we're seeing it in different areas as well. The key thing right now is that the cloud isn't a new technology; it's been around, I want to say, 10 years at least, probably more. And it's now evolved to where it's actually able to take on mainframe workloads. So we're seeing it in the commercial space and in government - a lot of mainframes in the government space - and in the insurance and financial industries. It's across the board.
Bob: One thing you pointed out also, Steve, is that the capacity of a mainframe - you can achieve that by deploying solutions in Azure. Let me give you an example. We did a performance study back in 2012, so this was on older Intel technology: a Dell 980 and a 580 from HP. We took the very same COBOL CICS workload off the mainframe, ran it on a Z series in the ETF's data center, and also ran the very same workload on the Dell 980s and 580s. We found, back in those days on older technology, that on average we got about 200 MIPS per core. And I always wanted to know: how do you relate an Intel core to a mainframe processor? IBM Z series processors running at full speed deliver about 1,000 MIPS, so it takes about five Intel cores to equal one IBM processor. But the cost of five Intel cores is about 5% of an IBM processor. So it's a very cost-effective platform. And in Azure, with the new M-series virtual machines, you can have 128 cores in one virtual machine. On workloads that we've benchmarked with our tools partners, we're seeing 20,000-plus MIPS in one virtual machine in the cloud. So the cloud, and the technology used in the Azure data centers, delivers the highest performance possible. And 20,000 MIPS in one virtual machine is pretty incredible - it really equals or exceeds what most customers are using today in a single mainframe. Are you seeing some hesitancy from customers in considering a mainframe migration because of concerns about the capacity and performance it can deliver?
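Bob's capacity arithmetic can be checked directly. The per-core and per-processor MIPS figures below are the approximate numbers quoted in this conversation, not official benchmarks:

```python
# Approximate figures quoted in the discussion (not official benchmarks).
MIPS_PER_INTEL_CORE = 200       # ~200 MIPS per x86 core, per the 2012 study
MIPS_PER_IBM_PROCESSOR = 1000   # a full-speed IBM Z series processor
CORES_PER_M_SERIES_VM = 128     # the large Azure M-series VM size mentioned

# How many Intel cores does one mainframe processor equal?
cores_per_ibm_processor = MIPS_PER_IBM_PROCESSOR / MIPS_PER_INTEL_CORE

# MIPS-equivalent capacity of one 128-core virtual machine.
vm_mips = CORES_PER_M_SERIES_VM * MIPS_PER_INTEL_CORE

print(f"{cores_per_ibm_processor:.0f} Intel cores per IBM processor")
print(f"One 128-core VM is roughly {vm_mips:,} MIPS equivalent")
```

At 200 MIPS per core, 128 cores works out to 25,600 MIPS, consistent with the "20,000-plus MIPS" benchmark figure cited above.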
Steve: Well, yeah, the problem is how they view it. You talk to some of these customers and you get the classic, "Well, I have 300,000 MIPS. There's no way 300,000 MIPS is going to map to an instance." But there's no such thing as a 300,000 MIPS LPAR; you take it one part at a time. How do you eat an elephant? One bite at a time. And that's how you do mainframes: one LPAR at a time. And the reality is, the capacity to map those LPARs to Azure definitely exists.
Bob: And you mentioned treating the Azure environment like a mainframe, but there is one key difference. In an LPAR, you typically run everything together: your online, your batch, your database, all in the same LPAR, with multiple LPARs for different business systems. But in a distributed system, when you take that mainframe workload, you don't put it all in one virtual machine. You have one or more virtual machines for the database layer, one or more for the application layer, and a different virtual machine for the tools. So you're breaking out the capacity of an LPAR into multiple virtual machines to really map to the capabilities of a distributed system. Are you seeing the same thing in configuring the replacement for a mainframe: a pool of servers, or virtual machines, able to do that?
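[The tiering Bob describes, splitting what ran together in a single LPAR into per-tier pools of virtual machines, can be sketched roughly as below. The tier names and VM counts are purely illustrative, not a prescribed Azure topology.]

```python
# Illustrative only: one mainframe LPAR's mixed workload, split into
# distributed tiers, each backed by one or more virtual machines.
lpar_workload = ["online (CICS)", "batch", "database", "tools"]

azure_tiers = {
    "application": {"runs": ["online (CICS)", "batch"], "vms": 2},  # one or more app VMs
    "database":    {"runs": ["database"],               "vms": 2},  # separate DB server VMs
    "tools":       {"runs": ["tools"],                  "vms": 1},  # tooling/monitoring VM
}

# Every piece of the LPAR's workload lands in exactly one tier.
placed = [w for tier in azure_tiers.values() for w in tier["runs"]]
assert sorted(placed) == sorted(lpar_workload)
print(sum(tier["vms"] for tier in azure_tiers.values()))  # 5 VMs for what was one LPAR
```

The point of the sketch is the shape of the mapping: one LPAR fans out into several smaller, independently scalable VM pools rather than one big virtual machine.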
Steve: Yes, definitely. And when you separate the database out onto its own database server and dedicate the application server to the application workload, that separation is necessary, because that whole workload is moving over. So we're definitely seeing exactly what you just said.
Bob: Well, the technologies from our tools partners are also improving. Micro Focus is a great example: with the next version of their Enterprise Server technology, you scale out so that each CICS region is deployed in a separate virtual machine.
Steve: Sysplex on Azure. Who would have thought?
Bob: Right? Sysplex on Azure. Even with CICS, the operator console looks a lot like CICS on the mainframe. It's really distributed CICS, CICS in the cloud.
Steve: Yeah. So it's really exciting.
Bob: It is. Now, one last area: you mentioned developers. One of the things I pointed out was that you can hire new kids out of college who know Eclipse or Visual Studio and teach them the COBOL dialect, so they can support and extend those COBOL applications. With people retiring, especially in the federal sector, we're seeing a lot of people at the end of their careers, getting ready to retire. The challenge is, how do you continue to maintain these systems when those people retire? Are you seeing the same thing as you engage customers?
Steve: Oh, yeah. Here's the reality right now: somebody has been working in TSO/ISPF for the last 20 years, and you put them in front of Visual Studio and it's, "What's going on?" They're not accustomed to that IDE, and the IDE is something really personal to a developer when they're coding, right? The kids are coming out of [inaudible 37:48] with Java or C#, using Visual Studio or Eclipse. And if you look at today's programmers, they program in C#, they're [inaudible 38:42], they're learning Python, they're learning all these different languages because that's what the business demands. But you have the IDE and the plugins to help you with a lot of those things, and COBOL has that in Visual Studio. So when you bring in a C# developer, I'm telling you right now, a good C# developer with good debugging skills can pick up COBOL, because you can read it. That's one of the beautiful things about COBOL. So we're seeing people pick up the language a lot quicker, because it's in the IDE with all the help they currently need.
Bob: Yeah. Well, Steve, it's been a pleasure presenting with you today. I've enjoyed the relationship we've had for the last 10 years, and I look forward to helping more and more customers transform to the cloud.
Steve: Yeah, same here. Thank you for the opportunity.
Bob: Okay. Thank you.