Good evening, everyone. Welcome to our newest episode of Walter's World. My name is Walter Sweat. I'm the CTO here at Astadia, and I am delighted today to be joined by one of my good friends, Bob Ellsworth, who's the Worldwide Director for Mainframe Transformation at Microsoft. Bob, thank you so much for taking the time to be with us here today.
Thanks for having me, Walter.
I got to tell you, as long as I've known you, and I probably shouldn't tell you this, but you've always kind of been my hero, because you actually were an assembler programmer early on in your career. That's something that not many people ever get to do. Can you tell me, from those days to now, what has changed?
Oh, I tell you, it is pretty amazing. So I started in 1971. I worked my way through Virginia Tech maintaining the IBM operating system, MVT at the time, and migrating to MVS. So of course assembler was my favorite language as a systems programmer. But you think, over the last, what, 49 years, what a change there's been in the industry.
It's crazy, isn't it?
It sure is.
Well, Bob, as the Worldwide Director for Mainframe Transformation, what exactly do you and your team do for Microsoft?
Yeah, you always think, you know, Microsoft, mainframes, what the heck do they have in common? But in fact, this has been a long-standing capability and offering we've had here at Microsoft. I started our mainframe migration practice in 2008, and we built an ecosystem of partners with tools and services to help customers transform off the mainframe. And so our goal and our responsibility is to work with customers, help them understand what the options are, what the tools are, and what the services are, to ensure they have the right solutions to be successful in moving workloads off the mainframe.
Makes perfect sense. So in your tenure, have you seen the pace change for mainframe companies looking for alternatives, companies who are, you know, now talking about the cloud? Has that changed a lot over the years?
Yeah, it's absolutely accelerated. Just in the last 12 months, maybe 18 months, we've seen a huge increase in customer interest in mainframe transformation. Part of it is the maturity of the cloud, the maturity of the technologies and the services from our partners, where we've got a lot of great experience and successful migrations, and more and more customers are questioning: do they really need to stay on the mainframe for the workloads they're supporting today, or are there options? Especially in the last 12 months. I look back, and in the last 12 months I've had 350 customer engagements, which is crazy on the level of interest. And the other thing that's driving it is, you know, IBM's requirement that you keep your mainframe within a generation minus two. A lot of customers with z13s are evaluating what their options are, or whether they need to plan to upgrade the z13 by August of next year.
Well, I would think, with the associated costs that go along with upgrading a mainframe, not just the cost to IBM but the third-party products, that can make a huge difference just going to a different size machine. You may not necessarily be getting a whole lot of extra "oomph" from those third-party products, but you can expect a much bigger cost.
Well, definitely, if you go to a larger machine because your workloads are growing, of course your software costs are going to go up commensurately, both the IBM and third-party costs. And what we find typically is that your software costs are about half of your total between hardware and software. So if you continue to grow, your budget needs to continue to grow.
Makes sense. But one of the things that, you know, when we first met, the cloud really was just being looked at back then. So it wasn't necessarily a part of what everybody was looking to do. I'm curious about over the last eight years or so, how have companies started to look at Azure specifically as a part of their mainframe migration strategy?
You know, it is interesting. The cloud has matured pretty dramatically, and whether it be Azure, AWS, or GCP, we've seen some huge improvements in reliability, availability, serviceability, those characteristics that you expect on the mainframe. Because of those continued advancements, we're seeing more and more customers setting a cloud-first strategy, where they'll select the cloud for building new applications. Then they start to evaluate what's left in the data center and how they can consider migrating some of those workloads as well. It's pretty easy to move Windows and Linux virtual machines up to the cloud, and much more difficult, much more challenging, to move mainframe workloads. But you don't want those to be the last thing left in the data center, and having a strategy to move those mission-critical business workloads is part of what most customers are putting together today.
That certainly makes sense. Thank you. One of the things that I've always found interesting when people talk to me about the cloud is elastic scalability, and where that really seems to come into play is for organizations who, I don't know, maybe if it's just that they're more seasonal in nature where they need to be able to respond more to their changing business needs. Is that something that people talk to you a lot about being able to leverage cloud as a way of doing business in different ways that they've been able to do so before?
Yeah, it really does. You think about some of the benefits of the cloud, like elasticity: the ability to grow and shrink your capacity based on need without having to turn on additional hardware, simply by reactivating virtual machines. Even disaster recovery and failover: on premise, you typically have a backup data center or another location you can fail over to, and with the cloud, you can set up the environment to dynamically fail over from one environment to another. And getting back to capacity upgrades on the mainframe, it's sort of interesting. I'm actually a patent holder on the mainframe; I have three patents, and one of them is called dynamic CPU add and remove. I got that patent while working at Amdahl, competing with IBM, and IBM cross-licensed it for their capability called Capacity Upgrade on Demand. This is where, on the mainframe, you can turn on spare processors and use them when you need them, though you do have to contact IBM to enable that and then pay the cost of it. It's so much easier to do in the cloud. Simply adding or removing processors on your virtual machines is a much easier way to get additional capacity when you need it, and you only pay for what you need.
Such a huge savings, and such a great benefit for organizations who, for whatever reason, may exceed expected peak demand, to be able to respond to that. That's something that didn't exist prior to the cloud, certainly.
Yeah, that's absolutely the case: that ability to only pay for what you consume. Especially when you think of licensing software on the mainframe, most customers pay for the peak rolling four-hour period. If you happen to have a month-end process that drives utilization higher than the rest of the month, you're paying a premium for the entire month for your use of software. The beauty is, up in the cloud, again, you only pay for what you consume, so you can grow your utilization as you need and only pay for the consumption that you have.
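To make that pricing contrast concrete, here is a minimal sketch with made-up utilization numbers: under peak rolling-four-hour-average pricing, a short month-end spike sets the billed level for the whole month, while a consumption-based model tracks what you actually use.

```python
# Illustrative sketch (hypothetical numbers, not real IBM or Azure pricing):
# mainframe software is often billed on the peak rolling four-hour average
# (R4HA) of utilization, while cloud billing follows actual consumption.

def rolling_4h_peak(hourly_usage):
    """Peak of the rolling four-hour average over a series of hourly samples."""
    window = 4
    return max(
        sum(hourly_usage[i:i + window]) / window
        for i in range(len(hourly_usage) - window + 1)
    )

# A month where utilization sits at 100 units except for a month-end spike.
hours_in_month = 30 * 24
usage = [100] * hours_in_month
usage[-8:] = [400] * 8  # eight-hour month-end batch spike

peak = rolling_4h_peak(usage)      # the level billed for the WHOLE month
average = sum(usage) / len(usage)  # what consumption-based billing tracks

print(f"R4HA peak billed on mainframe: {peak:.0f} units")
print(f"Average actually consumed:     {average:.1f} units")
```

The gap between the two numbers is exactly the "premium for the entire month" Bob describes.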
That's perfect. Thank you, Bob. As you talk to so many customers, can you just kind of lay out what you think the top three major factors are that drive companies to consider alternatives to the mainframe, to look at the cloud?
Yeah, we've found it's actually four business drivers that we've seen, and the top one historically has been cost. As you've talked about, the elasticity or the increased capacity needs that you have drive up your costs on the mainframe. But compare the hardware costs: you acquire the mainframe hardware and depreciate it over three or four years, and running the same workload in Azure comparably is about 10% of the mainframe cost. x86 hardware is much less expensive than mainframe hardware, and that's what reduces those costs. So cost reduction is something customers are always looking for. The second one is skill shortage, and that's a big one: not just assembler programmers like me, but COBOL programmers, or Natural, or whatever you happen to be running. We've seen that more and more as the gray tsunami comes in and more and more people retire. The third is business agility: it's difficult to upgrade applications to address the needs of the business. And the fourth is cloud choice, being able to decide to move out of the data center and move these workloads to the cloud. Those are really the four drivers that we see day in and day out.
That makes sense. One of the things that I've found, and I wonder if you feel the same, is interesting: there are organizations now, huge organizations, that have never had a mainframe, who've come up, you know, in the last 10 years and have been able to develop their platforms without one. I think about FinTech, where they drive change so rapidly that organizations really have to be able to respond in ways they never had to before, just from a competitive balance. Would you agree?
Absolutely. And you find that cloud-native companies are making a huge impact on legacy brick-and-mortar companies. The legacy companies have a disadvantage because they have a legacy system that they have to maintain, manage, and utilize, where cloud-native companies can simply build new solutions in the cloud. But the legacy companies also have an advantage, because they've got the customer install base that they can resell to and engage with. So it's a double-edged sword, but those legacy companies are at a disadvantage when it comes to addressing new business opportunities.
I'm sure that's a reason that a lot of them do start to look for that.
Bob, you know, here at Astadia, we for years have offered a replatforming solution where we're able to take people off of the mainframe; before the cloud it was to on-premise systems, and now with the cloud, that's where everybody wants to go. But we also offer the ability for people to refactor. You talk to so many organizations, so I'm just curious as to your opinion: what do you hear people wanting to do? Do they want to kind of keep the platform they're in, or are people really starting to look for other ways they can run their environment?
It really varies from one company to another, or even from one application team to another within the same company. Typically, what we find is that if a customer has the skilled resources to maintain and support their applications, people who know things like COBOL, then rehosting is the right path. Customers that are having challenges finding the right skills to support and maintain their workloads or applications are the ones more interested in refactoring to another language. And you think of rehosting, that's typically been the most mature technology, but the refactoring technologies have also improved over the years and are much more automated. As the technologies improve, they become much more acceptable to customers, and as customers face that skill shortage, they may look to refactoring over rehosting. So when we engage with customers, we really help them understand the pluses and minuses of each one and then ensure they pick the right solution, whether it's rehosting, refactoring, or other.
Yeah. I think you would probably agree: there is not one perfect answer for every organization, nor is there one perfect technology. So the fact that the industry has matured so much that people have options, I think, really puts everyone in a great position now.
Yeah, it really does. Any time the tools mature and you get new entries into the market, because the market opportunity continues to grow, that drives existing vendors to improve the tools they have to be much more automated, exacting, and capable. We're seeing those advancements with existing companies, and also new companies entering the market with their tools. It's all driven by that customer demand. And we have seen a shift over the last two years, where a larger percentage than in previous years is thinking about refactoring. Previously, maybe 75 to 80% wanted to rehost and up to maybe 20% wanted to refactor. That's shifting a little bit, to maybe up to 25% wanting to refactor.
Okay, great. Thanks, Bob. So I'd be interested: can you tell me anything about what Microsoft specifically is doing in terms of looking at new tools? I know you work with partners and the partner ecosystem extensively, but internally, are y'all doing things to help organizations consider moving to the cloud more easily?
Yeah. Here at Microsoft, as I mentioned, my team handles the customer engagements, manages the partner ecosystem, and drives opportunities together. But we also have other teams across Microsoft that assist in these efforts and are key contributors and collaborators. One is a team called Azure Global Engineering, part of the Azure team. The beauty is they've got deep mainframe migration experts on that team, and we bring them in to help architect solutions for the biggest problems. Since they have direct access into the Azure engineering team, they're able to take those big challenges back to Azure engineering to make improvements and advancements to our technology. A couple of examples of that. One is we have several customers who use VSAM on the mainframe, VSAM being very high-performance data access, even more so for particular usages than DB2. Those customers needed high performance for data access in the cloud. Our Azure Global Engineering team worked with the Cosmos DB team and created an emulation of VSAM on Cosmos DB. That way we were able to take it back to a couple of our high-performance customers and show them how they could get the same, if not better, performance in the cloud, in addition to the reliability and availability and the distributed database capabilities, really taking advantage of that cloud database system to replace the functionality of VSAM on the mainframe. So that's one example. Another example I like to share: of course, in Azure you can select what database you want to use, whether it be SQL Server or DB2 or Oracle, all fully supported by ourselves and those vendors as well. We had a customer that needed to be able to share databases, and they wanted to stay on DB2 because of the complexity of their stored procedures. They wanted to go from DB2 on z/OS to DB2 on Linux running in the cloud.
And so our Global Engineering team worked with IBM to get DB2 pureScale working in Azure, fully sharing the DB2 database between virtual machines in Azure. This was a huge collaboration, and it's unique on our side to be able to support those unique customer needs, like DB2 in the cloud with database sharing the way you do with Parallel Sysplex on the mainframe. So we continually look for challenges like that and ensure that we can satisfy them and deliver new functionality in the cloud to support those challenging mainframe opportunities.
And we both get the opportunity to work with what I think are different sizes of potential customers today. You know, back in the day, if we talked to someone who was at a thousand MIPS, we would always get kind of giddy; that was a milestone. Now we're looking at organizations that are, you know, 100,000 to 200,000 MIPS, who have different needs. So it's exciting that y'all get to provide that kind of collaboration and effort to help these kinds of customers really be able to duplicate what they're doing on the mainframe.
Well, that's a big shift, Walter. You're absolutely right, that's been a big shift that we've seen. When we started this practice back in 2008, you know, up to 5,000 MIPS was stretching it, and then it became 10, and then 20. And now, as you mentioned, we're working with customers with 450,000 MIPS, you know, 300,000 MIPS, which is crazy. And part of that also is you need to support these workloads in a hybrid way, to be able to continue running some workload on the mainframe and interoperate with workload running in the cloud, and with advancements in technology, we can architect solutions to do that.
Wonderful, thanks. So I'm going to put you on the spot here, Bob; put your swami hat on for me. If you were to look out five years from now, do you think there's going to continue to be increased acceptance from mainframe customers making the move to the cloud? Is there any reason people would not go to the cloud?
You know, there are some blockers today, in particular data sovereignty. Take a financial customer: within financial services, banking, capital markets, and insurance, 39% of all mainframes are in that space. Those FinServ customers may be in a country where they have to keep the data in country, and if there's not an Azure data center within that country, they're pretty much stuck; they need to keep the workloads in the country where they reside. So data sovereignty continues to be a blocker, as is, perhaps, acceptance by an industry and the certifications of the cloud for a specific industry. And that applies to both mainframe workloads going to the cloud and other workloads as well. Those are some of the key blockers that we see. As I mentioned, just in the last year and a half, we've seen a huge acceptance of the cloud as a reliable, available, scalable solution, and it's increased the percentage of customers that choose to go to the cloud instead of on premise. I would say today we're seeing 60% plus of customers choose cloud first, and that's where they want to land, and we see that increase month over month. In five years, I'd easily predict that 95% mark; it might be higher than that, but all the barriers will be broken down by then. So, you know, it'll continue growing each year and reach that 95% within five years.
I fully agree with that. And I think that each success, every customer who is successfully able to move to the cloud just makes it that much easier for the next company down the road to consider it to know that it absolutely can work for them.
Absolutely. Yeah, those customer references really go a long way to breaking down the concerns of a prospect if they're considering the cloud.
I've always likened it to this: no one ever wants to be the first to a technology, and no one ever wants to be the last one using an old technology. So you have to find the right time to do it.
Well, another term we like to use is "bleeding edge": you don't want to be the first one. But you really need to look at it, because making no decision is a bad decision. And you don't want to be the last one on a platform, either; you pay a premium when you're the last one on a platform.
Indeed, indeed. Bob, for the organizations that you've talked about, the companies who have actually made this transition to Azure, what has performance been like for them?
Yeah, it's so important. You can configure a system so that you get the same or better performance than you have on the mainframe, but you have to be careful in how you configure it. First of all, you need to set up the environment just like you would the mainframe, in the way you manage it and support it. You don't treat it like a distributed platform; you treat it like a mission-critical mainframe environment and architect the solution to deliver the performance that you require. When you think of mainframe processors, a mainframe processor delivers 1,768 MIPS, I think, for the z14, z15. High-performance mainframes are an amazing platform, and IBM continues to do an incredible job on the mainframe. An equivalent x86 processor, an equivalent core, is about 200 MIPS, so it takes a lot more of them, of course, to equal the performance of a single processor on the mainframe. But with technology being able to distribute the workload across multiple cores, you can get the same level of transaction responsiveness, throughput, and capacity as you do on a mainframe, again, properly configured. Our partners have done workload tests running the very same workload on the mainframe and also up in Azure. They did the latest test with a 128-core virtual machine and were able to get 28,000 MIPS in a single-image environment, so the capacity is there to handle the size of the partitions of almost any customer. Then, when you think of things like batch: most customers in their data center don't have solid-state drives. When you go to Azure, you can configure premium storage, which is solid state. Typically, batch jobs are delayed waiting for I/O operations, for the spinning device to get back to the right cylinder. Because of that, a lot of times you can reduce the batch cycle, the batch window, because you can get the batch done faster.
Because storage is more accessible. And for onlines, as long as you distribute the transactions across the right number of, let's say, CICS regions running in the cloud, you can get the transaction rate that you need for your onlines as well. So again, it's so important to architect the solution correctly and deliver the performance that you're looking for.
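The sizing arithmetic behind this can be sketched roughly as follows. The 200-MIPS-per-core figure comes from the conversation above; the 20% headroom is an assumed illustration for spare capacity, not a vendor sizing rule.

```python
# Rough core-count sizing sketch. MIPS_PER_X86_CORE is the approximate figure
# quoted in the conversation; headroom is a hypothetical safety margin.
import math

MIPS_PER_X86_CORE = 200

def cores_needed(mainframe_mips, headroom=0.2):
    """Estimate x86 cores to match a MIPS rating, with spare headroom."""
    return math.ceil(mainframe_mips * (1 + headroom) / MIPS_PER_X86_CORE)

for mips in (1768, 5000, 28000):
    print(f"{mips:>6} MIPS -> ~{cores_needed(mips)} cores (20% headroom)")
```

In practice the work would be spread across multiple VMs and regions, as Bob describes, rather than packed onto one machine.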
That makes sense. One of the things everyone is always interested in when they're considering moving off of the mainframe, if they've never worked in an environment off the mainframe, is: what's this new world going to look like, and what's it going to cost me? How easy is it to kind of guesstimate what the Azure consumption would be to match what the mainframe workload is today?
You know, it's pretty darn easy, as long as you know, for each partition, each LPAR, how many MIPS that partition is using at the peak, and as long as you know how much zIIP processing is used, or how many IFLs are used for Linux, or zIIPs for DB2 offload. As long as you have the sub-capacity report that you typically submit to IBM to be charged for your software costs, you can determine what the MIPS and the zIIPs and the IFLs are. Knowing that, it's pretty easy to set up an Azure architecture and configuration, and using our regular Azure pricing calculators, you can determine what the cost is going to be. This is a service that the Azure Global Engineering team provides for us. As we engage with customers, we take a look at their partition configuration, set up the VM configuration in the cloud, and provide the cost estimates for what the Azure costs would be: processing capacity, storage, connectivity like ExpressRoute, ingress and egress. So there are a lot of components you have to take into account, but that's really easy to set up. Once you have that defined, you can easily tell what your monthly bill is going to be running that configuration.
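As a rough illustration of that estimating process, here is a sketch that converts per-LPAR peak MIPS into a core count and a monthly figure. All the rates below are made-up placeholders, not real Azure prices; actual numbers would come from the Azure pricing calculator.

```python
# Minimal cost-estimation sketch. MIPS_PER_CORE echoes the rough conversion
# quoted earlier; every rate constant is a HYPOTHETICAL placeholder.
import math

MIPS_PER_CORE = 200          # rough MIPS-per-x86-core conversion
CORE_MONTH = 120.0           # hypothetical $/core-month for compute
STORAGE_GB_MONTH = 0.15      # hypothetical $/GB-month for premium storage
EXPRESSROUTE_MONTH = 500.0   # hypothetical flat connectivity charge

def estimate_monthly_cost(lpars, storage_gb):
    """lpars: dict of LPAR name -> peak MIPS. Returns (cores, monthly dollars)."""
    cores = sum(math.ceil(mips / MIPS_PER_CORE) for mips in lpars.values())
    cost = cores * CORE_MONTH + storage_gb * STORAGE_GB_MONTH + EXPRESSROUTE_MONTH
    return cores, cost

cores, cost = estimate_monthly_cost({"PROD": 4000, "TEST": 800}, storage_gb=2000)
print(f"{cores} cores, estimated ${cost:,.2f}/month")
```

The structure mirrors what Bob lists: processing capacity, storage, and connectivity each contribute a term to the monthly bill.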
Obviously, a very important consideration as people are considering this, it has to make economic and technical sense. And you know, to me, it's exciting that there are enough wins out there that people can recognize that technically, absolutely this is possible. And from a cost perspective, the savings are just phenomenal from everything that I see.
You know, that's the key. Time and time again, as we build these configurations, we find that the Azure costs are about 10% of what the mainframe costs when you depreciate that cost over three or four years. So that's a huge driver that helps offset the cost of doing the migration. If you can pay for the migration in, let's say, 12 or 18 months, and from then on you get continual cost savings from having gone through that process, it makes really good economic sense.
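That payback argument is simple arithmetic. Here is a sketch using the roughly-10%-of-mainframe-cost figure from the conversation; the dollar inputs are hypothetical examples.

```python
# Back-of-the-envelope payback sketch. azure_fraction reflects the ~10% figure
# quoted in the conversation; the dollar amounts are made-up examples.
def payback_months(mainframe_monthly, migration_cost, azure_fraction=0.10):
    """Months until migration cost is recovered from monthly savings."""
    monthly_savings = mainframe_monthly * (1 - azure_fraction)
    return migration_cost / monthly_savings

months = payback_months(mainframe_monthly=500_000, migration_cost=6_000_000)
print(f"~{months:.1f} months to pay back the migration")
```

After the payback point, the difference between the two run rates becomes ongoing savings, which is the economic case Bob outlines.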
Absolutely. So, Bob, I think that leads to probably the most important question that I'll ask you today, and that's: if people are interested in learning more, if they want to reach out to Microsoft and say, let me see how the cloud can help my organization, what's the best way for that to occur?
Well, the best way to reach me and my team is by email, of course: firstname.lastname@example.org. So if anyone is interested in having a conversation, getting assistance, or just having questions they need answered, they can reach me and the team at email@example.com.
Perfect. And I'll throw in a link as well. If anybody has questions about Astadia and our capabilities and helping people move to the cloud, www.astadia.com. We have references and case studies and ways that you can reach out, to ask us any questions you might have. Well, Bob, it has been a delight talking with you. Thank you so very much for taking time out of what I know is your very busy day. And I hope that this was a really informative and helpful session for everybody. I think it had to have been.
Great. Thank you so much, Walter. I appreciate the time.
It was my pleasure and everyone, thank you for taking the time to join us today. This podcast has been recorded. So if you want to share it with others, you know, please just visit us on our website and you'll be able to see it and access it from there as well. Thanks again. And we're looking forward to you joining us on our next podcast. Thanks everyone. Have a great day.