Virtual CICS User Group Sponsors

Broadcom Mainframe Software
DataKinetics

Virtual CICS User Group | December 2024

The 7 Deadly Sins of CICS Integrations

Russell Teubner
Distinguished Engineer
Broadcom

Scott Brod
Product Manager
Broadcom

Horror is not confined to the movies; it happens every day in your CICS environment! We will share common examples of poor, even horrific, CICS integrations that drive up MIPS and degrade response times. Just when we thought the insanity of screen scraping was over, along comes Robotic Process Automation (RPA). What if your cloud-first, API, or microservice strategy increases MIPS by cementing your legacy architecture into your hybrid cloud architecture instead of optimizing it? Join our fun-filled recap of past and current sins and let the healing begin.

Russ Teubner

Distinguished Engineer
Broadcom

Russell Teubner is a Distinguished Engineer at Broadcom focusing on mainframe application modernization and integration. A seasoned inventor and entrepreneur, over the last 40 years Russ has applied his creative energies to solving difficult problems associated with integrating IBM mainframes and emerging technologies. As the CEO and co-founder of HostBridge Technology, acquired by Broadcom in August 2022, Russ positioned its flagship integration platform, HB.js, as a solution for large global enterprises to bridge the gap between hybrid/cloud applications and the mainframe.

 

Read the Transcription

[00:00:00] – Amanda Hendley
Hi, my name is Amanda Hendley. If we haven’t met, welcome to today’s CICS User Group session. Excited to have you all here. This is our last User Group session of the year. When I see you again, it’ll be 2025. Our next session is actually just a month away, so it won’t be a long wait between our events. Excited for today’s session. As you know, we always have a couple of quick things to talk about before we get to our presentation today. We got a great presentation. I’ve been assured that we will not take a full 2 hours, but I think it’ll be really good stuff. Obviously, there’s a lot of interest. Our registration numbers were fantastic for this one. I think I’m just really excited to see how this goes. You can ask questions throughout, through chat, but we’ll make sure we also dedicate a little bit of time in the end. Since Russ and Scott are both on, I think we’ll be able to answer anything that comes up today. Our partners for this series are Broadcom and DataKinetics. They’re great supporters of the program, and I encourage you to check out any of their products and solutions to see if it’s a good fit for you and your company.

[00:01:21] – Amanda Hendley
In a little bit of Planet Mainframe news, we’re going to be at Share 25 in DC at the end of February. I hope I will see you there. I don’t have my booth listed on here, but we’ll be exhibiting. It’ll be the launch of our Influencer Mainframe program for 2025. It’ll also be a part of the launch or release of our mainframe navigator. You should have received an email yesterday, if you’re on our mailing list, for the Arcadi Mainframe User Survey. I’m going to drop it in chat, too. This is the QR to complete the survey. If you have not done that yet, we would love to get your… That is not the link, so give me one second and I’ll get you the link. But I’d love to get your feedback on this user survey. It’s highly referenced and the more people we get to give us their feedback, the more robust and encompassing the report is. Broadcom has actually helped us with some of the development, and we’re also going to be incorporating some external data this year as well. I’ll make sure to drop that link in chat for you to complete that survey sometime after today’s session.

[00:02:40] – Amanda Hendley
Also, when you’re exiting today, we’re going to ask you two questions on a survey, and it’s just, did you learn anything today, and is there anything else we could show for you? Take a look as you’re leaving, and it takes two seconds.

[00:02:54] – Amanda Hendley
Now for our session today. We are talking about the seven deadly sins of CICS integrations. I’m going to stop my share so that our presenters can start their share. Let me tell you who they are. Russ is a distinguished engineer, focusing on mainframe application modernization. He is an inventor, an entrepreneur, and the founder and CEO of HostBridge, and he has spent 40 years applying his creative energies to solving difficult problems. Scott Brod has had numerous roles on the buy and sell side of mainframe at Broadcom. He’s the product manager for HostBridge. We’re excited to have this really interesting session for you today. With that, I’m going to turn you over to Russ and Scott.

[00:03:43] – Scott Brod
Thanks. Let’s see. All right. My name is Scott Brod. I am the product manager for the HostBridge products here at Broadcom. I’ve got 24 years of experience with mainframe and everything on mainframe. Mostly it has been mainframe infrastructure; now it’s all software development, application modernization, and integrations. I’ve been with Broadcom for about 5 years. Before that, I was on the customer side. Today, Russ, do you have anything that you want to add on yourself?

[00:04:19] – Russ Teubner
Sure. Well, first of all, thank you all very much for taking a little bit of your day to spend with us. We hope this is going to be really informative, but also a dialog, so don’t be shy about putting questions in chat. Again, my name is Russ Teubner. I came to Broadcom a few years ago as a result of Broadcom acquiring my software company, HostBridge Technology. I’ve been in the mainframe and CICS integration space for, dare I say, 4 decades. Sorry, I guess I’m old. But nonetheless, over that period of time, I and a brilliant team of engineers have had the privilege of working with many of you and your organizations, designing and delivering integration products and solutions for large organizations around the world. A few years ago, Broadcom acquired HostBridge, and our entire team came over, and now we’re much larger. And so we are having the time of our life, growing and building this platform for bigger and better benefit with our customers. Scott, back to you.

[00:05:32] – Scott Brod
Sure. I can go ahead for a couple of slides. All right. So today we’re going to have a little bit of fun. We’re going to talk about the integrations that you’re using now and how you’re integrating your mainframe into your current strategies. Because the real background, what’s going on today, is that no one is thinking about how they’ve integrated their CICS environment and their applications and data. A lot of these integrations were built 10 or 20 years ago by people who are no longer with the company or no longer supporting them. Today, we’re going to talk a little humorously about those past integrations and how they’re being used today as sins, or, as Russ likes to call them, anti-patterns. We’re going to get into the seven deadly sins of CICS integrations. Now, there could be more than that, and there likely are, or there could be fewer. But these are the seven most common ones that we’ve seen in real customer data, and hands down the biggest ones that you should look at first.

[00:06:44] – Scott Brod
Next slide. This is just a quick slide to show what we’ve seen in a lot of presentations. Basically, what it comes down to is that the footprint of the mainframe is growing. For every MIP that leaves the mainframe, 10 more are showing up. You can’t walk into any conference today without seeing the hottest topic in the tech industry, and that’s AI. To some extent, it was always inevitable that AI would work its way into the mainframe arena, but not in the ways a lot of people think; more in the way of driving transactional workload. We live in a mobile-centric world, and you’re integrating your mainframe into your current strategy. In some cases, a lot of these AI front ends are driving transactional workload. We really have to think about our quality of service, and whether the platform we built to integrate 20 years ago is still able to handle the workload that we’re seeing today.

[00:07:51] – Scott Brod
Next slide. Here’s what we think the most hard-hitting business cases are for optimizing your integrations. On the left side, we’re going to focus on the vitality of your applications and data. Now, these are the applications and programs that are your business; they’re the crown jewels of your business. We want them to be around going forward into the future, accessed in the most efficient way, getting the best quality of service, and able to handle the new workload that’s coming down the pipe. We want the applications to be scalable and resilient. Now, the mainframe is already a resilient platform; we all know that. However, we need to be scalable and handle today’s volumes without sacrificing quality of service. Think about when you tap your credit card and millions of others are doing it in the same second. Do you want to have to wait 30 seconds for that approval? Or do you want it approved instantly? Add that 30 seconds up across millions of taps and it’s a lot of wasted time. My favorite is the freedom to evolve, becoming future-proof. Now, that can be taken in a lot of different ways, whether it’s plans to integrate your current CICS environment into your current hybrid strategies, or if you’re one of those customers that have been getting off the mainframe for the last 7-10 years. We’re all mainframers here, but there are definitely shops out there trying to get off the mainframe.

[00:09:25] – Scott Brod
When you ask the question of when they will, they have no end in sight. By having a good integration, you can actually help in that direction, too. You create business process APIs that allow you to point your new front ends, potentially off the platform, at the new endpoint. On the negative side, if you have, or continue to use, core integrations that were built 20 years ago, you’re not getting the full value out of your mainframe. They’re more costly to scale. If they’re doing screen scraping, they break really easily, and they’re going to perform in a much more inefficient way as we move forward. It’s really harder to evolve. Those integrations that worked for a long time are going to be really hard to move forward. How do you integrate them into your new strategies and all the new things that are coming down the line? Next. Now, many organizations have integrated in the past, and you’re likely doing it now. We talk to a lot of customers, and when we ask them, “Are you using any green screens? Are you doing any screen scraping?”, sometimes they think they’re not. But in the back end, those integrations that were built years ago still are, and we do have those findings.

[00:10:50] – Scott Brod
Now, the technology that they used 15 or 20 years ago to build out these integrations, or some of the microservice APIs, was great. Fantastic at that time. However, technology and the needs of the business change. What was awesome technology back then is dated and hasn’t been updated. What I’ll be talking about 5 to 10 years from now is going to be different. It’s an ever-evolving process. You’re upgrading the hardware every couple of years; why are we not looking at the actual integrations to it? Think about your phone. The mobile phone that you had 10 years ago still makes calls. Sometimes it still works, even the one that you had a few years ago. But do you want to use it as opposed to the latest and greatest? It applies to all technology, including mainframes. Whereas quality of service back then was fine, it’s not so much now. We really need to look at how we’re integrating the platform into our hybrid strategy. Now, Russ is going to talk about it in a bit more detail. But in the end, we’re going to see why and talk about why we aren’t changing. What are the reasons?

[00:11:55] – Scott Brod
Is it senior management? Is it us? The sins of integration that we’re going to talk about today, some of them you’ll recognize and you’ll nod your head: yeah, we know we’re doing that. Now, if you know you’re doing it, it’s very important to admit you’re committing these sins, but how do we fix them? We’re going to give you some pointers and recommendations on how to get there. Now, take a look at this slide and put it in the back of your head for later. If we were presenting this at a conference, I’d print it out and put it in front of you. As we’re going through the sins, you can plot each sin where it would show up on this chart. Think about the technologies of integration that Russ is going to go over and where they would fit in this chart. Depending on what it is, what’s the cost of implementing it? And what’s the operational cost of maintaining it? Optimally, we want to be in that top right quadrant: low implementation risk and low operational cost. Russ, I’m going to hand it off to you if you have anything to add here before we get into the fun stuff.

[00:13:05] – Russ Teubner
Yeah, that sounds great. Thanks for that great intro, Scott. I think this really is a good scorecard slide for everyone to think through. As we review and talk about different techniques and technologies for integration with CICS, just ask yourself or think about what you’re doing in terms of implementation cost and risk, and then operational cost and risk. We’re going to come back to this slide and actually talk about some different solutions and where we think they might fit in that two-dimensional graph. But as Scott said, what we’re going to do now is launch into talking about these so-called sins, and I certainly hope that word’s not offensive to anyone; we really just offer it humorously to catch everyone’s attention. As Scott said, at any organization, there may not be seven. There might be one, there might be three. Heck, there might even be a dozen. But nonetheless, we’re just offering this as a playful way to think about, well, what are we doing right, what are we doing wrong, and where are our opportunities? Now, one of the things that we are really committed to, and the reason why we at HostBridge originally did a lot of this work on integration, is that so many organizations approach integration or mainframe or CICS optimization as if they have to boil the ocean, as if they have to solve all problems at once.

[00:14:41] – Russ Teubner
And we don’t think that’s true at all. But in order to form an informed opinion about that, we actually created a tool. We refer to it as HTAC, but it’s really our microscope. It identifies problematic integrations. Now, the story behind this is fun, simply because the only reason it exists is that a customer like yourself literally called me one day. It was someone I knew at an organization based in New York City who had since transitioned to another, larger organization on the West Coast. And he called me one day and said, Russ, we’ve got a problem. We believe we’re burning massive amounts of CPU time due to inefficient integration between the mainframe and the outside world, but we can’t find it, we can’t see it. We began to play around with that idea and ask ourselves, a few years ago, what would a tool like that look like? We created this thing that we call HTAC. We also refer to it internally at Broadcom as our CICS Integration Health Check offering, because that’s really why we did it. We want you to be able to assess the health of that boundary from the mainframe to the outside world.

[00:16:13] – Russ Teubner
Now, we did this based upon SMF data, because we asked: what do you have that we could start with? And so what we do is invite you, our customers, to send us a set of SMF data. We take that data and load it into a very sophisticated analytics platform to do some very unique analyses. And those analyses let us really shine a flashlight on what’s going right and what’s going wrong. We’ll show you a couple of examples of what we can see. The purpose of this presentation is not to focus on this product or capability. It’s really to let you know that as we talk about these so-called sins, we’re not making this up. Scott and I didn’t just sit down in a room one day and say, well, let’s come up with seven evil things. No, we actually see this in the data that we analyze for customers every month. We see a lot of data, and so this is a reflection of the data that we’re seeing. Hope that’s clear. Okay, let’s jump in.

[00:17:35] – Russ Teubner
The number one sin, or anti-pattern, is that we keep seeing organizations using non-mainframe platforms and services to orchestrate mainframe assets, programs, and transactions. They’re running something out in a mediation layer, and I’ll actually try to get my laser pen out here. I’ll put a star right there. We’re calling it a mediation layer. And these take lots of different forms. They can be homegrown web apps, they can be MuleSoft, they can be BusinessWorks, they can be RPA platforms, but they exist outside the mainframe. And someone has written a script, so to speak, or a program, and what it is doing is driving a whole series of intricate interactions, full round trips across the network, through and into CICS. We will show you some of the problems associated with this. But let’s just focus on the most obvious. When we do the math on this style of integration, it’s very common that the cumulative CPU overhead for processing these requests, just the overhead of doing all the ins and outs, can be greater than the CPU expended to actually do productive work. In other words, each one of those interactions is meant to run a piece of logic, access a screen, hit some data, whatever. But when the orchestration is done outside the mainframe, the problem is that the overhead of all the ins and outs starts to really add up, and people lose track of it. Not to mention the fact that there’s a tremendous accumulation of latency in that process.
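The math Russ mentions can be sketched back-of-the-envelope. Every number below is an illustrative assumption, not measured customer data; the point is only that when an off-mainframe mediation layer drives many small interactions, the fixed per-request cost can exceed the productive CPU, and the latency compounds.

```python
# Sketch of the "overhead exceeds productive work" math for a chatty
# off-mainframe orchestration. All figures are hypothetical assumptions.

INTERACTIONS_PER_BUSINESS_REQUEST = 40  # assumed number of round trips the script drives
OVERHEAD_MS_PER_INTERACTION = 1.2       # assumed fixed CPU cost of each in/out (ms)
WORK_MS_PER_INTERACTION = 0.8           # assumed productive CPU per interaction (ms)
NETWORK_LATENCY_MS = 45                 # assumed WAN round-trip time per interaction (ms)

overhead_cpu = INTERACTIONS_PER_BUSINESS_REQUEST * OVERHEAD_MS_PER_INTERACTION
productive_cpu = INTERACTIONS_PER_BUSINESS_REQUEST * WORK_MS_PER_INTERACTION
latency = INTERACTIONS_PER_BUSINESS_REQUEST * NETWORK_LATENCY_MS

# Under these assumptions, overhead CPU (48 ms) beats productive CPU (32 ms),
# and the user waits 1.8 s of pure accumulated network latency.
print(f"overhead CPU:        {overhead_cpu:.0f} ms")
print(f"productive CPU:      {productive_cpu:.0f} ms")
print(f"accumulated latency: {latency / 1000:.1f} s")
```

Moving the orchestration next to CICS collapses the 40 round trips into one, which shrinks both the in/out overhead and the accumulated latency.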

[00:19:39] – Russ Teubner
Let’s go on and look at sin number 2, because it’s actually a bit of a variation on sin 1, just slightly more specific. It’s this: using non-mainframe platforms and services to orchestrate screen-oriented transactions. Now, we would all like to think that the application base running on CICS is no longer screen-oriented. We’d like to think that BMS maps and 3270 screens have gone the way of… But when we look at the data, nothing is further from the truth. Now, there are some exceptions out there. We did a recent study for a very large bank and brokerage, and sure enough, they had done tremendous work over the last decade to essentially migrate all of their applications to be purely bits of business logic. Now, they were still committing sin number one; they were orchestrating that business logic from outside the mainframe, but they had by now eradicated all of their terminal-oriented transactions. I will tell you, though, from looking at the data, they’re the exception and not the rule. Many, many organizations still have lots of screen-oriented applications, applications that still have commingled business logic and presentation logic. And what’s worse, people outside your organization, without your permission, maybe unbeknownst to you, have written all sorts of interesting programs and services somewhere outside the mainframe, and they are orchestrating these interactions all the way across a global network. Nothing is more inefficient, bar none. We’ll look at a couple of examples of how screen scraping can really go bad a little later in the presentation. But suffice it to say, sin number 2 is about using something outside the mainframe to orchestrate screen-oriented applications.

[00:21:56] – Russ Teubner
Now, we’re going to talk a little later about how orchestration is an important function in the modern world, but whether you do it off the mainframe versus on the mainframe really makes a difference. Just to remind everyone, and I know everyone on this call is a CICS professional, but we speak to a lot of people who are not mainframe people, and they forget all the components that are in that path. Say it’s an RPA bot over here. My pen went crazy here. If there’s an RPA bot over here, let’s say that’s a UiPath bot, and it’s orchestrating a whole string of dozens, hundreds, even thousands of interactions back and forth, we are not only accumulating latency, but we are making all those full round trips through all these network components, and quite likely driving a higher volume of transactions. So again, in an audience like this, I know we all get that. But when you’re sitting outside the mainframe and you think you’ve come up with the idea for a great new bot or spreadsheet macro, sometimes you’re not thinking about all the factors that go into servicing each and every request.

[00:23:30] – Russ Teubner
Sin number 3. Now, track with me here on this one because this is really, really important. We believe that sin number 3 is defining APIs based upon a bottom-up approach, or based upon existing artifacts. Let’s imagine that you begin your API journey with one of two questions. Question number one: what are the business APIs that I need? What are the APIs that the business requires in order to flourish? Well, that’s one way to start your API journey. Another way to start your journey is by asking a very different question: what are the artifacts that I have that I can expose? Very different question. And the nature and composition of your APIs will be radically different. When you begin with the question of what do I have that I can expose, you end up thinking about your existing programs. What are my artifacts? Well, I have some comm area programs over here, and maybe I can expose each one of those as an API. In fact, that’s what this diagram is illustrating. This is, in fact, somewhat of a real-world example that we see quite frequently, where an organization had, let’s say, a program whose purpose was to perform a particular function. And what did they do? Maybe a decade ago, they might have used a CICS capability like CWS to create an API for a comm area program. Today, they might use z/OS Connect to do that. But again, they’re creating one API to invoke one program, one to one. Well, what happens in the modern world is that organizations need to accomplish business functions, business features. Let’s imagine that what the client really needs is an API that gets a customer manifest of information. Well, there’s no one API to do that. We’ve got a bunch of APIs for each of the artifacts, the comm area programs. What do they do? Well, they write something down here outside the mainframe, and then they essentially orchestrate and drive those interactions.

[00:26:02] – Russ Teubner
What happens? Same thing we saw before. Latency goes up, and CPU overhead goes up on the mainframe. So sin number 3 is really subtle. You have to think about it. When you define mainframe APIs based upon artifacts, not business objectives or functional purpose, you end up creating a lot of APIs that are good, but they don’t solve the business problem. As a result, something outside the mainframe has to compose those APIs into something that the business really needs. Now, again, we’ve seen this in the data many, many times. This, in fact, is that same diagram, but more specific, because this really was a bank. They had programs. Apologies for the penmanship there. They have one program that will get the customer details, say, if I’m a customer of the bank, another program to get my checking balance, another program to get savings, another program to get mortgage. They then used z/OS Connect to expose each one of those. But what did they end up with? They ended up with an API for every one. Well, that’s good. But is it great? Not really. Because when you log on to their mobile app, what the server that’s servicing the app needs is the full dossier on me as a customer. What happens, again, is that there’s an app server over here, and what the app server does is drive all those API requests in. What does that do? Again, latency goes up, overhead goes up. You have to be really careful with this. Now, when we first saw this pattern at this bank, I asked the executive, the director over the mainframe, to talk me through why they chose to do this. One of the things they said was, well, we know it’s not optimal, but I viewed it as consistent with our organization’s approach around microservices. Now, if you want to make my head explode, that’s a good topic to start with.
Because whereas 5-10 years ago we had a very precise understanding of what microservices was, is, does, or should have been, today, as a term, it’s thrown around and used in some very ineffective ways. In fact, even if you opt for a more classical or rigorous definition, we are all, as an industry, beginning to experience the limitations of granularity.
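The bank pattern can be sketched in a few lines. The function names, the dossier shape, and the 45 ms round-trip figure are all hypothetical; what the sketch shows is that an artifact-based design forces the app server to pay one round trip per comm area program, while a business-level "customer dossier" API does the same composition host-side for a single round trip.

```python
# Each function stands in for one artifact-based API wrapping a single
# comm area program (all names and values are hypothetical).
def get_customer_details(cust_id): return {"name": "A. Customer"}
def get_checking(cust_id):         return {"checking": 1200.50}
def get_savings(cust_id):          return {"savings": 800.00}
def get_mortgage(cust_id):         return {"mortgage": 250000.00}

ROUND_TRIP_MS = 45  # assumed WAN latency per API request

# Sin 3 pattern: the app server composes the dossier off-host,
# paying one round trip per artifact API.
def dossier_off_host(cust_id):
    calls = [get_customer_details, get_checking, get_savings, get_mortgage]
    dossier = {}
    for call in calls:
        dossier.update(call(cust_id))
    return dossier, len(calls) * ROUND_TRIP_MS

# Business-level alternative: one "get customer dossier" API composes the
# same four programs next to CICS, so the client pays a single round trip.
def dossier_business_api(cust_id):
    dossier, _ = dossier_off_host(cust_id)  # same composition, done host-side
    return dossier, 1 * ROUND_TRIP_MS

_, off_host_latency = dossier_off_host("C123")
_, business_latency = dossier_business_api("C123")
print(off_host_latency, business_latency)  # 180 45
```

Same data returned either way; the difference is where the orchestration runs and how many network traversals the client pays for.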

[00:29:13] – Russ Teubner
In other words, an approach toward microservices tempts us to break things up into their most granular form and then orchestrate them together to create bigger building blocks. Well, there’s some value in that. We all have to admit that, but it can be taken to an absurd level. Perhaps no better case study have I seen than this one that came out of Amazon. The Amazon Prime Video team runs a great technical blog where they chronicle their experience. I don’t know if you can read this, so I highlighted their words over here, but they had originally designed a lot of their monitoring infrastructure to be heavily reliant upon microservices. They realized it was killing them. Not only were the costs really high, but the overall performance was quite low. And embedded in this article was a statement I thought I would never read, and that is the move from a distributed to a monolithic architecture. I’m sorry, I get a little excited on this point. It’s the move from a distributed microservice architecture to a monolithic architecture that achieved scale, resilience, and reduced cost.

[00:30:49] – Russ Teubner
I just get chills when I think about that. How long have we waited for people to see and realize that when you divide everything into these tiny, tiny little microservice units and then try to orchestrate them, what you end up with is an architecture that doesn’t scale very well and costs more? We just put this forward as a so-called sin by saying, please find the balance. The balance is different for every organization, but please don’t tolerate inefficiency in the name of some abstract, theoretical approach or architecture that many who adopted it en masse are now beginning to take a more rational view of.

[00:31:46] – Russ Teubner
Okay, sin number 5. This one doesn’t take much time, because we see it, but I think it’s so intuitive we can all understand it. When was it? About a decade-plus ago, the CICS team came out with some really great capabilities under the covers of CICS with the Web Services Assistant. We could now take a single comm area program, wrap it, and allow it to be invoked across a SOAP request using the Simple Object Access Protocol. And many organizations did that. They ran with it, and they’re still doing it today. But what have we learned over the past decade-plus? We’ve learned that heavyweight protocols like SOAP that really add no value just represent an unnecessary cost. So suffice it to say on this one: if you are one of those organizations still driving work to CICS via CICS Web Support or other means of implementing SOAP or other heavyweight protocols, we encourage you to reconsider that, or better yet, stop. If this is, in fact, still what you’re doing, I would encourage you: the easiest fix is to go run a POC with either HB.js, our product, or z/OS Connect. Experience what it’s like to use a more modern style of invocation, such as a RESTful invocation of a service, and a more modern style of data expression, such as JSON. You will not only probably be more delighted with the experience, you’ll also save CPU, because it takes a fair amount of horsepower to continually, millions of times every day, unwrap those SOAP packets and extract the data, just for the privilege of then taking the output from a single comm area program and re-wrapping it in an XML document inside a SOAP envelope. A lot of horsepower, and typically none of that is zIIP-enabled, which is not where you want to be. In other words, sin 5: keep an eye on your protocols. If you’re still using some of these heavyweight protocols, create a plan to remediate that and move forward.
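The envelope overhead is easy to see side by side. The SOAP message below is a minimal, made-up illustration (not any real service's schema) carrying the same three fields as the JSON payload; the structural wrapping alone makes the XML several times larger, and every byte of it has to be parsed and generated on each request.

```python
import json

# The same single comm area response expressed two ways.
# The account/balance values and element names are hypothetical.
payload = {"account": "12345678", "balance": 1200.50, "currency": "USD"}

soap = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><GetBalanceResponse>"
    f"<Account>{payload['account']}</Account>"
    f"<Balance>{payload['balance']}</Balance>"
    f"<Currency>{payload['currency']}</Currency>"
    "</GetBalanceResponse></soap:Body></soap:Envelope>"
)
rest = json.dumps(payload)

# The SOAP envelope is several times the size of the equivalent JSON,
# before any WSDL validation or XML parsing cost is counted.
print(len(soap), len(rest))
```

Multiply that per-message difference by millions of requests a day, none of it zIIP-eligible, and the CPU case for REST plus JSON writes itself.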

[00:34:29] – Russ Teubner
Let me move on to sin 6. This is not so much a technical thing, like don’t use SOAP or don’t do screen scraping or whatever. It needs to be, we think, a shift in people’s perception. The way we describe it is: when all you do is optimize your silos, your technology silos, you don’t necessarily optimize the overall integration. For example, the first version of this slide I actually created years ago, when that customer called me and said, hey, Russ, I think we have an integration problem here that is costing us a lot of CPU. I said, what have you done? What have you explored already? They said, well, I’ve had a number of meetings. First of all, I went to the application owners, and we had a meeting about the applications. Have they been tuned? Are they using the latest version of the COBOL compiler? Are they compiling with the correct architecture and all that? And apparently they were. Then he went to the CICS sysprogs and began asking questions: are we tuned for maximum efficiency? Then he went to the z/OS sysprogs. Then he went to the capacity planning team. He went to the networking team, and then he went to the end users. Everyone thought they were doing it as optimally as they could. In other words, they really believed that their silo was optimal. But this individual, the guy running the project, was left with an astoundingly great question. I just love it when customers ask these sorts of simple, brilliant questions. He said, here’s what I know about our business. They’re a global concern. We book about 200,000, no more than 200,000, extremely high-value orders every day. But we typically execute over 20 million transactions. Why? Great question. Very simple, very to the point. And he really wanted to know why. If everything is running so optimally, why do I run 100X the number of transactions relative to the actual business units of work, in his mind, those high-value orders?

[00:37:17] – Russ Teubner
And so that really became the journey that we went on, not only to develop part of the HTAC product, but specifically with this customer. Now, I’m going to show you just a glimpse of what we discovered at that particular customer. This is a global technology logistics company. One of the things that the HTAC product does is analyze the flow of SMF 110 data looking for signatures of robotic process automation. This is an example of what we found. We found thousands of these things crawling their system. But this one, when you read it: this RPA bot took 22 minutes and 30 seconds. It ran a total of 10,503 transactions, running at a sustained 7.8 transactions per second. This is occurring across a global network. The transactions themselves only took 1 minute and 33 seconds, so a rational person would ask, what happened to the other 21 minutes? That would be latency across a global network. How much CPU time did those transactions take? 27 seconds. In 22 minutes, they burned 27 seconds of CPU by doing this automation outside the mainframe. What did they do? They ran a transaction called O-R-R.
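The numbers from that HTAC finding are worth recomputing, because they make the point on their own: almost all of the bot's elapsed time is network latency, and the per-transaction CPU cost is tiny compared with the overhead of driving it from outside.

```python
# Recomputing the HTAC example from the talk's own figures.
elapsed_s = 22 * 60 + 30   # bot ran for 22 minutes 30 seconds
txn_time_s = 1 * 60 + 33   # transactions themselves took 1 minute 33 seconds
cpu_s = 27                 # total CPU consumed
txns = 10_503              # transactions executed

print(f"latency: {elapsed_s - txn_time_s} s of {elapsed_s} s")  # 1257 s of 1350 s
print(f"rate:    {txns / elapsed_s:.1f} transactions/s")        # ~7.8
print(f"CPU/txn: {cpu_s / txns * 1000:.2f} ms")                 # ~2.57 ms
```

So roughly 93% of the bot's wall-clock time was spent crossing the network, and each Enter press cost only a couple of milliseconds of CPU; the waste is in the sheer volume, not in any single transaction.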

[00:39:04] – Russ Teubner
And how many times did they run it? 10,503 times. And what did each of those transactions do? All the bot did was press the Enter key. Now, once we began to see these patterns in their data, the customer clearly began to figure out, and do the math on, why they were running so many transactions relative to the high-value orders they were creating. Just this bit of insight from the HTAC analysis caused them to go back to the end customer and say, hey, you have some piece of code running on your workstations that looks like it’s pressing the Enter key on this transaction thousands of times. Why are you doing that? And the answer was, well, about eight years ago, we needed to automate something related to order status. There wasn’t a clean way to do it, so someone wrote an Excel macro that logged on via TN3270, got to the ORR transaction, and pressed the Enter key as fast as it could, waiting for a 20-character string to change, reflecting the change in the order status. On average, that takes anywhere from a low of a thousand to a high of 30,000 presses of the Enter key to catch the update.
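To make the anti-pattern concrete, here is a minimal sketch of what such a macro effectively does. The `screen` object is a stand-in for a TN3270 emulator session, not a real emulator API, and the status strings are illustrative:

```javascript
// A fake 3270 "screen" whose 20-character status field changes only after
// many polls, the way an order-status update eventually would.
function makeScreen(updatesAfter) {
  let presses = 0;
  return {
    pressEnter() { presses += 1; },  // each press re-drives the ORR transaction in CICS
    readStatus() {
      return presses < updatesAfter ? "IN PROCESS          " : "SHIPPED             ";
    },
    get presses() { return presses; },
  };
}

// The macro's entire logic: hammer Enter until the status string changes.
function pollUntilChanged(screen) {
  const initial = screen.readStatus();
  while (screen.readStatus() === initial) {
    screen.pressEnter();  // one full terminal-oriented transaction per press
  }
  return screen.presses;
}

const screen = makeScreen(10503);
console.log(pollUntilChanged(screen)); // 10503 transactions for one status change
```

Every iteration of that loop is a billable CICS transaction plus a global network round trip; a purpose-built order-status API would replace the whole loop with one call.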

[00:40:40] – Russ Teubner
Suddenly, it became clear not only why they might be running 100X the number of transactions they really needed to, but also what their next to-do should be: they needed to create an API specifically for inquiring on order status. Now, we could go on and on. We see these things almost week in and week out with large organizations. This is another one we’ve seen. Again, we’re still focused on these terminal-oriented transactions because, quite honestly, it would be funny if it weren’t so sad. But this is still going on. I will say that every now and then it takes a very creative human, particularly an experienced CICS programmer or system programmer like many of you on this call, to be able to understand a bot. This is the DNA of a bot we detected. If you’re a CICS person, the way you read that is: this bot ran a transaction called SMID and pressed the Clear key, and those were the BMS map and mapset. But isn’t it rather curious, if you’re a CICS person, why anyone would ever need to press the Clear key back to back?
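Spotting such a signature doesn’t require anything exotic: it is essentially looking for long runs of the same transaction and attention key from one terminal. A sketch of the idea, with illustrative field names rather than the actual SMF 110 record layout:

```javascript
// Flag bot-like "signatures": long runs of identical transaction + attention
// key from a single terminal in a stream of SMF 110-style records.
function findBotRuns(records, threshold) {
  const runs = [];
  let run = null;
  for (const r of records) {
    const key = `${r.terminal}:${r.tranid}:${r.aid}`;
    if (run && run.key === key) {
      run.count += 1;            // same terminal, same tranid, same key: extend the run
    } else {
      if (run && run.count >= threshold) runs.push(run);
      run = { key, count: 1 };   // start a new run
    }
  }
  if (run && run.count >= threshold) runs.push(run);
  return runs;
}

const records = [
  ...Array(500).fill({ terminal: "T001", tranid: "SMID", aid: "CLEAR" }),
  { terminal: "T002", tranid: "CEMT", aid: "ENTER" },  // a lone human interaction
];
console.log(findBotRuns(records, 100)); // one run: T001:SMID:CLEAR x 500
```

No human presses Clear on the same transaction hundreds of times in a row at a steady rate, which is what makes these runs stand out in the data.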

[00:42:14] – Russ Teubner
That’s rather odd. When we saw that as a team, we looked at it and I was like, wow, what is going on with that? Well, it turned out this macro was running on an IBM product called HATS; I don’t believe IBM currently markets it, I think it’s marketed by a different company now, but it’s still out there. This macro was populating the home screen of a web application, and the people who wrote it really didn’t understand the way CICS or the application worked. And so, net-net, what they discovered as a result of this analysis was an astounding fact: 50%, 5-0, no kidding, no joke, of all transactions driven by HATS on this macro had no functional value, no business value. They represented 100% overhead and cost. This is a company where every day between 3:00 and 5:00, their mainframe’s back was against the wall. This was a big part of the reason. I always like to cover these. Maybe you don’t have these problems in your organizations, but if you do, and we have the privilege of analyzing some data on your behalf, we’ll be sure to bring them to your attention.

[00:43:45] – Russ Teubner
Let me talk about a topic we picked on earlier in a little more depth. We mentioned orchestration. Orchestration in the modern world is a reality; it needs to happen, many times, in order to create the most relevant business-oriented API. But the question is: where? Where should it occur? We’ve already beat this drum a lot: orchestration off the mainframe has some implications. It occurs at network speed at best. Network latency and traffic are high. Mainframe overhead is high. Most interesting and problematic, most solutions that end up with a lot of orchestration running off the mainframe end up exposing far more data and implementation details than they ever should. In other words, something out here should never be concerned with the fact that the business logic over here happens to be organized as a series of COBOL programs, or COMMAREA this, or channel-and-container that, or transactions. They should have no knowledge of that. The API boundary should be functional. But when you’re doing this orchestration off the mainframe, one reason is usually that the APIs that have been exposed are more technology-oriented.

[00:45:35] – Russ Teubner
But anyway, that’s an attribute of doing orchestration off the mainframe. But what if? I mean, just what if you could do that orchestration on the mainframe? Then it would occur at machine speed. You end up with far less traffic, far lower latency, lower mainframe overhead, and all of the implementation details are hidden. The theory here is very simple. What if I could send a request in to some integration layer running on platform, have it interact with all the necessary apps and data at machine speed, and then let a single response flow back? Well, that would be cool. And it is possible; there are a couple of different approaches, so I’ll go through them. Yes, it can be done, and there are two basic approaches. One, you can always write your own. There are customers who do this. They tend to be some of the largest and most sophisticated on the planet, because they can afford to. For example, there’s one very large, globally systemic bank with a large team of Java programmers. Whenever a new mainframe API needs to be exposed, that team creates a new Java program running under CICS. It’s a custom Java program that exposes a business-oriented API out, and then the Java program itself interacts with all the necessary artifacts. Well, I take that back: it can interact with programs and data, but when you’re working at that level, it’s really difficult, if not impossible, for most people to figure out how to interact with screen-oriented applications. So option one for doing orchestration on platform is write your own. It works. You can do it. It takes effort, though, and we’d like to believe there’s a better solution. We do, in fact, offer a product to do this. We’re not going to dwell on it; that’s a whole different topic. But we offer a product called HB.js. It came from HostBridge. And what it is, is an on-platform orchestration engine. It runs on the mainframe, under CICS, and on the zIIP engine. Its purpose is very simple: receive an API request and then orchestrate whatever needs to occur, using a script written in the most widely known programming language on planet Earth today, JavaScript. This is a server-side JavaScript engine that scales at mainframe levels and runs on the zIIP.
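To give a flavor of the pattern, here is a hypothetical orchestration script in that style. To be clear, the helper names (`cics.link`, the program names, the stub) are our illustrative assumptions for this sketch, not the actual HB.js API:

```javascript
// One inbound business request fans out to several fine-grained CICS programs
// at machine speed; a single business-level response flows back out.
async function getOrderStatus(request, cics) {
  const customer = await cics.link("CUSTINQ", { custId: request.customerId });
  const order = await cics.link("ORDINQ", { orderId: request.orderId });
  const shipping = await cics.link("SHIPINQ", { orderId: request.orderId });

  // Only the functional answer crosses the API boundary; no COMMAREAs,
  // program names, or screens are exposed to the caller.
  return { customer: customer.name, status: order.status, eta: shipping.eta };
}

// Stub "CICS" so the sketch runs anywhere; on platform, these would be
// real program invocations happening at machine speed.
const stubCics = {
  async link(program) {
    const canned = {
      CUSTINQ: { name: "ACME GLOBAL" },
      ORDINQ: { status: "SHIPPED" },
      SHIPINQ: { eta: "2024-12-20" },
    };
    return canned[program];
  },
};

getOrderStatus({ customerId: 1, orderId: 42 }, stubCics)
  .then((r) => console.log(r));
```

The design point is that the three fine-grained interactions never leave the mainframe; the caller sees one business-oriented request and one response.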

[00:48:50] – Russ Teubner
Now, we’re not going to talk about that anymore, other than to say there are many ways to achieve orchestration on platform, and however you need your orchestration to occur, we really encourage you to consider doing it on the mainframe. Let me put this very objectively. When Scott and I are involved in these conversations about orchestration and where it should occur, the most rational thing is to have a basic premise. And ours is this: we think you should perform the orchestration closest to the apps and data with the highest volume of interaction. Let’s take this case study. Let’s imagine that, for the APIs you need to expose, the highest volume of interaction is with mainframe apps and data. Then by all means, you should be researching and exploring how you can do that orchestration on platform. If, however, the converse is true, and mainframe apps and data play a very minimal role in the provision of an API, then no big deal. Use non-mainframe tools and techniques to do that orchestration off the mainframe, because you’re not going to be driving a lot of volume to the mainframe.

[00:50:42] – Russ Teubner
When you need to go to the mainframe, fine, do it. But if all your volume is off the mainframe, great. Do the orchestration off. No harm, no foul. We’re starting to see a number of customers for whom it’s not one or the other: they’re doing substantial amounts of work on the mainframe and substantial amounts off. What some of them are choosing to do is use an orchestration or integration platform like ours, HB.js, on platform, and use something else off platform. And then, I’ll be generous and assume the on-mainframe product is HB.js: if HB.js needs to invoke a service off the mainframe, that’s fine, make an outbound API call. Likewise, if the integration platform off the mainframe needs to access the mainframe, fine, just make the API call into the mainframe and get it serviced. There are a number of different ways to achieve this. But again, since we see non-mainframe services orchestrating fine-grained artifacts on the mainframe as the number one sin or anti-pattern, we encourage you to really think this through long and hard.

[00:52:23] – Russ Teubner
In summary, again, we encourage you to do the orchestration as close as possible to the apps and data. And, I can’t stress this enough, you do not want to expose technology-driven or implementation-oriented APIs from the mainframe. You want to expose functional, business-oriented APIs. Exposing technology-oriented APIs, APIs defined from a given COBOL program and its COMMAREA, was cool a decade ago. Today, it’s not. It’s not the way to do it; it is not the state of the art anymore. If there are any API mismatches, resolve them on the platform. It’s the smart way to do it and the least expensive way to do it.

[00:53:23] – Russ Teubner
With that, let’s begin to wrap up. We put up that little chart with the microscope on it quite a few minutes ago. I’m going to lay in a couple of solutions here that we can play with as a thought experiment. One: I think we can all agree that for any solution involving things like screen scraping via Excel macros, the implementation cost and risk may be very low, but the operational cost and risk is very high. That is just not the way you want to build your infrastructure. Now, screen scraping via RPA platforms, I actually had to make that even lower on the implementation cost and risk axis, because frankly, it’s drop-dead simple. If you’ve never seen someone write a UiPath bot to screen scrape your mainframe, you should, because it’s so simple it’ll begin to make sense why people are doing this. It’s not in the best interest of the organization, in our humble opinion, but it’s dreadfully easy. And it does, in fact, we think, carry a lot of operational risk and cost. Now, we looked at a couple of other approaches here, one being on-mainframe orchestration. Forgive us, we have our own opinions here, but we think, based upon our work with customers over the last 20 years, that is the low-cost, low-risk approach on both axes, implementation and operational. It’s getting really great reviews from a lot of organizations. Now, to be clear, doing that on the mainframe via custom apps is certainly low risk operationally, but it is higher cost from an implementation standpoint. Because what are you doing?

[00:55:30] – Russ Teubner
You’re writing a custom app as opposed to using an off-the-shelf product. Now, let’s ask the $64,000 question: where would you put z/OS Connect? By the way, z/OS Connect does a lot of things. It talks to a lot of different subsystems; it’s a very broad product in its suite of capabilities. But when it comes to CICS, it does one thing: it lets you create an API on a single business logic program or artifact, be it a COMMAREA program or a program using channels and containers. It’s one to one, one API to one invocation of a program. And so I put it right there in the middle. Everyone’s experience might be a little different, but I’m going to propose that it sits there. Even more important, what we see in real organizations is that when z/OS Connect is used as a solution, it rarely occurs in a vacuum. In other words, since it’s exposing technology-oriented APIs, not business-oriented APIs, organizations tend to end up using off-mainframe orchestration tools. We’ve seen this time and time again in the data, where an organization uses z/OS Connect to expose CICS business logic programs.

[00:57:23] – Russ Teubner
And then, because there’s an API mismatch between the APIs exposed by z/OS Connect and the APIs the business requires, they stand up something outside the mainframe to orchestrate all of the necessary z/OS Connect API calls. What we would say, then, is that the true operational and implementation cost and risk actually get worse, because they get tangled up in that off-mainframe platform doing the orchestration. Now, compare that with the approach of coupling z/OS Connect with on-mainframe orchestration. That actually could be quite lovely; in fact, we think it’s pretty darn smart. You can still use z/OS Connect to expose an API outside the mainframe, but what if that API were actually being fulfilled by something like HB.js that can touch any artifact? Then we overcome the one-to-one limitation, and z/OS Connect’s risk and cost profile changes: it moves up and to the right, and its cost and risk get lower. We encourage you to look at many of these technologies, not as if z/OS Connect is your one and only solution, but holistically: what do I really need to expose business APIs, not technology-based APIs, in a way that creates the lowest cost and risk profile in terms of implementation and ongoing operational…
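A back-of-the-envelope way to see why this coupling helps: an off-mainframe orchestrator stitching together N one-to-one APIs pays N network round trips per business request, while one business API fulfilled on platform pays one. The call count and round-trip figure below are assumed for illustration, not measurements:

```javascript
// Total network latency per business request is round trips times
// round-trip time; hops between programs on platform are effectively
// free by comparison.
function networkLatencyMs(roundTrips, roundTripMs) {
  return roundTrips * roundTripMs;
}

const roundTripMs = 120;    // assumed WAN round trip
const fineGrainedCalls = 5; // assumed one-to-one APIs behind one business function

const offPlatform = networkLatencyMs(fineGrainedCalls, roundTripMs); // 600 ms
const onPlatform = networkLatencyMs(1, roundTripMs);                 // 120 ms
console.log(offPlatform, onPlatform);
```

The gap grows linearly with the number of fine-grained calls, which is why the fan-out pattern shows up so clearly in transaction volumes.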

[00:59:31] – Russ Teubner
Scott, I think I’m going to wrap it up there and pass the mic to you.

[00:59:38] – Scott Brod
Thanks, Russ. I hope everyone learned a lot today. If you were counting, you’d be right around six sins, which brings us to the seventh. Now, the seventh sin is not a technical sin at all. It’s really about your thinking process and your best practices: the sin is thinking that CICS applications can’t be first-class citizens in API and cloud initiatives. Hopefully none of us mainframers think that way, but a lot of people do, including a lot of people in leadership. Your cloud-first initiative shouldn’t trump good integration architecture. You really have to think about how you’re doing things now. And as Russ said earlier, we’re not trying to boil the ocean. We’re trying to make small, powerful changes that can show you immediate results. It could be something as simple as making a couple of changes to an application, or to the ways you’re accessing a certain application: changing one application that might be doing screen scraping into a more API-based solution with business process APIs.

[01:00:56] – Scott Brod
You’ll see not only your quality of service go up, but your cost can go down. Let’s remember, any network traffic is driving CPU as well. But really, the goal here is just to get to a best practice, a more future-proof way of looking at things. You want to perform that orchestration as close as possible to the applications and data themselves. If your applications are off platform, orchestrate off. But if we’re talking CICS and the programs and data within it, the best way is to orchestrate on the mainframe. And for your APIs: whereas 10 or 15 years ago those technically-based microservice APIs were the best way of doing it, it was new, it was cool, these days there are better, more efficient solutions out there.

[01:01:48] – Scott Brod
With that, I just want to go to the next slide. I think we’re good there. But we have a health check that you can run. For the health check, we take a look at SMF data and give you recommendations. We show you exactly everything that’s going on in your system, all your integrations. It’s all about getting the best efficiency out of your mainframe. Think about it: what’s standing in the way of change? What’s stopping you from making those changes, from looking at those things? Sure, it could mean some extra work. But these are the applications and data your business runs on. If we want them to continue to get the best quality of service going forward, we really need to keep this in the back of our minds. When things come up and changes need to be looked at, this is a great place to start. Are there any questions? There were a few questions in the chat over the last hour or so. Are there any more questions out there that anyone wants to raise their hand for?

[01:02:55] – Russ Teubner
Hey, Scott, I might just chime in. My friend Lillian asked a great question in chat, and I just wanted to go back to that slide to make sure I address it. If I can get all this erased here. She was making the point in her chat question that the operational cost of terminal emulation and screen scraping is high. I just want to chime in and say: amen, you are absolutely right. I’ll note that this is a really weird diagram, and maybe I need to change it. Notice that the axes are a little different here: high is down here. So yes, screen scraping has very high operational costs. When I put that chart together, I may not have thought it through; it might have been a little confusing. But absolutely: we say it’s low in terms of implementation cost, because frankly, it’s simple to do. The trouble is that the ongoing cost of driving high-volume transaction loads through emulation and screen scraping is in fact sky high. You just wouldn’t imagine, when we do the math. Lillian drove home that point very well, perhaps better than my chart did.

[01:04:41] – Russ Teubner
I’m just seeing Lillian opined in chat: yes, low implementation. Lillian knows firsthand that the history in her organization is that implementation costs are so low that all sorts of people across the world have built lots of macros that go against the mainframe. It has created a tremendous amount of mischief in the organization, at a heck of a lot of cost. So, yeah. Deborah, I see that. I think I was too cute by half in putting the axes like that, so we may have to go back. Thank you for that input.

[01:05:29] – Scott Brod
I think we’re so used to seeing the magic quadrant, so we’re trying to get everything to that top right mentality.

[01:05:38] – Russ Teubner
Okay. Lesson learned. We’re going to go amend the chart. This is why these Q&A’s are great.

[01:05:49] – Scott Brod
All right, I think I’ll pass it back over to Amanda.

[01:05:53] – Amanda Hendley
Thank you. I’m just noticing that on your chart, too, and I agree with Deborah. At the time you were explaining it, it didn’t even occur to me that it was slightly off. Well, great. Thank you so much for this presentation. I can’t wait to recap it. For everyone on the session today: we do a recap and video, and everything will be ready in a couple of weeks; we’ll send it all out in a newsletter. To wrap up for today, I don’t have any new news, but I did want to drop in, before I lose it, the link for the Arcati mainframe user survey; that is the correct link that will get you there. It tells me it’s a 20-minute survey, but everyone that’s finished it has done so in about nine minutes on average. I hope you’ll take a few minutes today and give us your feedback on how you’re using the mainframe and what you’re planning for the future. If you complete the survey, you’ll be entered in our roughly weekly drawings, so the sooner you do it, the more chances you have to win.

[01:07:20] – Amanda Hendley
Then if you also complete the survey, you’ll get the full survey results ahead of everyone else. That’s your incentive to fill it out, other than, I’ve been told, that it’s a fun survey to complete. In lieu of news related to CICS, because there’s never a lot of brand-new CICS news articles out there (we get a lot of great presentations, user groups, and conference sessions, but we’re not making a lot of news, so to speak), I did want to bring your attention to Planet Mainframe. We are just overflowing with articles this time of year, and there’s some great content on the website right now, especially if you didn’t catch us during COBOL month. I think it was at the end of October; we had some really great articles about COBOL, and then Security Month also. I want to again thank our partners, Broadcom Mainframe Software and DataKinetics. I encourage you to connect with us. We’re just about everywhere except TikTok; we will not be getting on TikTok if I have my way. Connect with us on social. I’d love to see you on LinkedIn; you can connect with me as well. Our next meeting, January 14th, we’ll be talking Ansible automation.

[01:08:46] – Amanda Hendley
We’ll see you, everyone, in the new year. Like I said, about a month from now. Scott, Russ, I want to thank you both for presenting today. As I said, we’ll have the recap out shortly.

Upcoming Virtual CICS Meetings

March 11, 2025

Virtual CICS User Group Meeting

 

May 13, 2025

Virtual CICS User Group Meeting

Ezriel Gross, Rocket Software

Register Here