Virtual IMS User Group Sponsors

BMC Software

Virtual IMS User Group | June 2023

Simplifying IMS Performance Problem Identification and Determination

James Martin
Senior Solutions Advisor
Rocket Software

Transcription

[00:00:36.690] – Trevor Eddolls

Okay. Well, welcome everyone to this meeting, the Virtual IMS user group. I’m Trevor Eddolls, and I’m CEO of iTech-Ed Ltd. We’re a mainframe consultancy, analysis, and technical authoring organization. I’ve been chairing these user group meetings for IMS, as well as CICS and Db2, since 2007. Some of you may even have been at that very first meeting. And I’ve decided that it would be a good idea to share the hosting of the groups to introduce some new, fresh faces. So first of all, I’d like to welcome my co-host, and that’s Amanda Hendley, who you may know from her time at Computer Measurement Group. So welcome, Amanda. And thank you for being here today.

[00:01:27.350] – Amanda Hendley

Thank you, Trevor. I’m happy to be here.

[00:01:30.230] – Trevor Eddolls

Great. And on another positive note, you’ll see, we have a new sponsor. The user groups have always been about sharing mainframe education and spreading knowledge in general. And we felt there was a certain synergy between what we’re doing and what Planet Mainframe is doing. So, we’re pleased to announce that we’re joining marketing forces to help co-promote the user groups and our respective resources and reach a wider audience. Planet Mainframe will help us both promote this user group and help us continue to support and build this mainframe community. Okay, Amanda, let me hand over to you now.

[00:02:17.190] – Amanda Hendley

Thanks Trevor. And I do want to thank Planet Mainframe, but also recognize our other partner, BMC. If you are not familiar with them, BMC works with 86% of the Forbes Global 50 and has customers around the world. So, I hope you’ll check them out.

[00:04:45.480] – Amanda Hendley

Our presentation today is Simplifying IMS Performance Problem Identification and Determination. James Martin is a Senior Solutions Advisor for Rocket Software – an IBM Preferred Development Business Partner. James has been working with IBM IMS performance tools for 16 years, and his work includes three years with IBM as a client technical professional in the IMS tools space and 10 years at Fundi Software working in technical sales. He’s held several testing roles and likes to travel to a lot of conferences and is a frequent speaker – both for this user group and at conferences like SHARE. James is also a 2023 IBM Champion. So, thank you for joining us today.

[00:05:39.450] – James Martin, Senior zSolutions Advisor, Rocket Software

Thank you, Amanda. And thank you to everybody else too, who’s joining the presentation today. I know your time is valuable, so I will definitely try not to waste it. And I guess we’ll just go ahead and get started with the presentation.

[00:05:57.450] – James Martin, Senior zSolutions Advisor, Rocket Software

As Amanda said, we’re here today to talk about simplifying IMS performance problem identification and determination. And that can mean a lot of different things in the world today. As we all know, the mainframe platform has been here for years and years. It’s not going anywhere. It’s part of our core business technology that we work with today. And most of us on this call work with IMS and/or CICS or Db2 on a daily basis. And when we talk about looking at simplifying performance issues in IMS, there are really several different ways you can do this. There are several different products out there that will be very helpful to you around this area. Today, in this particular presentation, I’m going to focus on the offerings from IBM’s IMS tools because that’s my wheelhouse. And when I get into the agenda slide (Slide 2), it’s kind of funny the way I built this screen, because I kind of built this screen backwards.

[00:07:07.350] – James Martin, Senior zSolutions Advisor, Rocket Software

And, when we talk about performance issue identification and determination, I like to talk about working from the top down. In the IBM suite (that we’ll call it for this presentation), that usually starts with our monitoring tool. And the IBM offering is OMEGAMON XE for IMS. And that’s really the top-down view when we talk about performance in IMS. And it doesn’t really matter whether we’re talking about IMS system performance, transaction performance, application performance, or even database performance, that’s where we kind of start that journey into identifying performance issues in IMS – using OMEGAMON for IMS. Once we identify performance issues using our monitoring tools, whatever they may be, we can look at that problem in many, many different ways. We can dive into that problem by doing a deep dive, possibly cross-system subsystem analysis with a tool like IBM Transaction Analysis Workbench. That tool, as well as OMEGAMON, also now offers us the ability to forward data into off-host analytics platforms such as Splunk, Elastic, Kafka, all those types of things. And by forwarding that information into these off-host platforms, we can really break down the performance, whether it be at the transactional level, database level, whatever the case may be, into those off-host analytic platforms.

[00:08:51.610] – James Martin, Senior zSolutions Advisor, Rocket Software

And we can graph them and do all kinds of cool things with them today. But if we are talking about deep-dive analysis on the mainframe, then we start to get into the tools you see at the top of this particular slide – the IBM IMS Performance Solution Pack, where we have batch reporting capabilities for historical reporting in 1) IMS Performance Analyzer, deep-dive log analysis capabilities in 2) IMS Problem Investigator, and, if we are IMS Connect users, the ability to also merge that IMS Connect data using 3) the IMS Connect Extensions journals for operations and monitoring.

[00:10:22.450] – James Martin, Senior zSolutions Advisor, Rocket Software

So (Slide 3), the first thing when we talk about identifying performance issues in IMS is understanding what it is that we’re looking at, right? Whether we have IMS alone, or whether we have IMS and IMS Connect together. We need to understand what parts of the performance issue that we’re looking at can be helped by different aspects and integration between the tooling that I mentioned in the previous slide. So, when we talk about IMS, you have the obvious input and output side of time in IMS Connect, and in IMS-TM in this particular case. And that coverage you can see at the bottom of the slide can be covered start to finish using IMS Connect Extensions, IMS Performance Analyzer, and/or IMS Problem Investigator or Transaction Analysis Workbench. Now, if we are looking at specific parts of the IMS transaction, whether that be the input queue, the actual application processing itself, or sync point, then there are other avenues or other aspects that we can look at. And you can see that I’ve given you very detailed coverage ability of the tools we’re speaking about for different aspects of what we might be looking at.

[00:12:01.890] – James Martin, Senior zSolutions Advisor, Rocket Software

So if we’re talking about the input queue, application processing, and sync point, and we want to do a deep-dive analysis into that, we can use a tool like the IMS Performance Analyzer and its batch reporting capabilities to look at historical data to determine, or try to determine, what the problem we’re dealing with might be. But if we want to get down to the very granular detail – the DLI calls, the Db2 calls, the MQ calls, the IMS database calls, for instance – then we’re now looking at two different ways to do this. One of them is on the screen, one of them is not. There’s a facility in OMEGAMON XE for IMS called the Application Trace Facility (ATF) that will allow you to get the granular detail of calls made by IMS or made into IMS, whether those are DLI calls, Db2 calls, MQ calls, whatever the case may be. And we can get very detailed performance metrics in those ATF records, very similar to the other thing you could use to find this information, which would be to run the IMS monitor. Now, OMEGAMON for IMS ATF is unique in the way that it uses that data and writes that data into the IMS log.

[00:13:26.210] – James Martin, Senior zSolutions Advisor, Rocket Software

If we were to use the IMS monitor for a similar type of information, the monitor collects that data into its own data set, and we have to figure out how to format it and merge it into our problem analysis. But with OMEGAMON’s ATF facility, we do not have to do that. It’s written directly into the IMS log, which makes it much easier to deal with. But what you can see on the screen here is that by combining and integrating these different tools that are available to us, we can get complete coverage for the entire time that a transaction spends within IMS, whether it uses IMS Connect or not.

[00:14:10.910] – James Martin, Senior zSolutions Advisor, Rocket Software

Now, similarly, if we’re in a CICS DBCTL environment (Slide 4 – CICS-DBCTL: Complete Coverage), which means CICS is our transaction manager but IMS is our back-end database, then we can also get a lot of insight into those types of environments as well. And very similar to the way I did on the previous slide, I showed you the different coverages that you can get from different tooling. And if we’re talking about the very granular side of it again – the PCB / DLI calls, the EXEC CICS calls, the VSAM file calls – we can get a lot of detailed information out of OMEGAMON for IMS and IMS Performance Analyzer for that. But if we want to get the overall picture, we will need to incorporate other tooling. And on the screen here, you can see the CICS Performance Analyzer, very similar to the IMS Performance Analyzer in the way that it looks at CICS transactions, could be used in this particular case. But what’s not on the screen here that could also be used in a case like this would be the IBM Transaction Analysis Workbench, which allows us to do deep-dive analysis / transaction tracking across multiple subsystems or systems of record in z/OS. So, a couple of different ways to look at CICS DBCTL environments to get the type of information that we’re looking for to do problem determination.

[00:15:50.590] – James Martin, Senior zSolutions Advisor, Rocket Software

So, moving to Slide 5 (IMS PA form-based reporting), the next tool that we’ll talk about when we’re looking at IMS and identifying problems in IMS is the IMS Performance Analyzer. And there’s a couple of aspects that I really want to point out today when it comes to simplifying problem determination in IMS.

[00:16:09.850] – James Martin, Senior zSolutions Advisor, Rocket Software

And the focus of this particular aspect of the presentation is on an IMS Performance Analyzer feature called form-based reporting. Now, form-based reporting, for those of you that are not familiar with it, is customizable reporting, which means you build your own reports. Performance Analyzer comes with a wealth of reporting capabilities, both at the IMS system level and at the transactional level in IMS. And the ability to build customizable reports is something that’s very unique in the Performance Analyzer. It allows us, the users, to determine what metrics are important to us. The example that I have here on the screen would be analysis for what we call CPU “heavy hitters”, which is always something that we’re investigating in IMS performance: CPU usage, because that equals dollars, right? So, when we’re looking at performance in IMS, the CPU usage and the way that transactions are using CPU is always a very important metric to understand. And you can see over on the right-hand side of this particular screen the actual form that is being built for this particular report. And you can see that I have Program, Transaction Count (Tran Count), and then I have multiple instances of CPU time views.

[00:17:37.360] – James Martin, Senior zSolutions Advisor, Rocket Software

And that’s because if I’m doing a CPU analysis in IMS, I want to look at various different aspects of CPU usage here. So, the Performance Analyzer gives me the ability to look at CPU times multiple different ways by building a report like this. And in this particular instance, you can see that I have CPU time listed, and I want to see the Avg CPU Time usage by transaction schedule. And then I have the ability to enter things like ranges to show me different aspects inside of transaction scheduling, which could have 100, 1,000, or 3,000 different transaction schedules inside of it. I also want to see a breakdown of the range of different CPU usages. So again, I have greater than two milliseconds, which would be not good; greater than half a second, which is bad; and then greater than 2 seconds, which is ugly, right? Anytime we’re using 2 seconds of CPU inside of an individual transaction schedule, it’s not a good thing in IMS. But by building a report like this, you can see that the output over on the right side is broken down by Program – NAME, BANKING, FINANCE, MOBILE, whatever the case may be.
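To make the range idea concrete, here is a minimal Python sketch of the same good/bad/ugly bucketing, not IMS PA itself; the program names, CPU values, and thresholds are invented for illustration only.

    # Illustrative only: bucket per-schedule CPU times the way the form-based
    # report does. Program names, times, and thresholds are made up.
    from collections import defaultdict

    schedules = [            # (program, CPU seconds for one transaction schedule)
        ("BANKING", 0.0009), ("BANKING", 0.0031), ("FINANCE", 0.6200),
        ("MOBILE", 2.4000), ("FINANCE", 0.0015), ("MOBILE", 0.0008),
    ]

    ranges = [("> 2 sec (ugly)", 2.0), ("> 0.5 sec (bad)", 0.5), ("> 2 ms (not good)", 0.002)]

    summary = defaultdict(lambda: {"count": 0, "total": 0.0,
                                   **{label: 0 for label, _ in ranges}})

    for program, cpu in schedules:
        row = summary[program]
        row["count"] += 1
        row["total"] += cpu
        for label, threshold in ranges:      # count the worst range it falls into
            if cpu > threshold:
                row[label] += 1
                break

    for program, row in sorted(summary.items()):
        avg = row["total"] / row["count"]
        print(f"{program:<8} count={row['count']:>3} avg_cpu={avg:.4f}s "
              + " ".join(f"{label}={row[label]}" for label, _ in ranges))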

[00:19:04.490] – James Martin, Senior zSolutions Advisor, Rocket Software

I have a Tran Count, which shows me how many transactions were run inside that particular schedule. I have some smaller ones on here and some larger ones on here as well. And then across to the right, you can see the Tot CPU Time, the Avg CPU Time, and then, broken down by the ranges that I specified, the CPU usage inside of a particular schedule. Now, the key point that I want to point out here is that this is a summary-type forms-based report, which means the data is summarized. And summarized data is great for pinpointing certain aspects that I might want to look into deeper.

[00:19:46.720] – James Martin, Senior zSolutions Advisor, Rocket Software

In Texas, we have a saying: “Summarized data is kind of like putting one hand on the stove and one hand in the freezer. In the middle, or on average, you’re comfortable, but on one end you’re burning up, and on the other end you’re freezing.” So, while averages and summarized data are great, the other option for forms-based reporting, which is the LIST-type forms-based report, would actually break down, by individual transaction, the timings that we’re looking at here. So once I did this particular analysis, I obviously would probably want to identify any further analysis that I needed and then break that data down, or cut that data down, to just show me those instances of transactions that are outside of either my service level agreements or outside of anything that I consider to be an exception.

[00:20:44.890] – James Martin, Senior zSolutions Advisor, Rocket Software

Another unique feature of the IMS Performance Analyzer (Slide 6 – IMS Transaction Index) is its ability to build what we call an IMS Transaction Index. Now, the IMS Transaction Index is a very unique record. It’s not a record that is natively created by IMS. All of the other systems of record (CICS, Db2, MQ) have really good performance accounting records in them. The closest thing we have in IMS is the X’56FA’ record. But we don’t really have a single record in IMS that shows us all of the performance metrics associated with a transaction. So, the IMS Performance Analyzer can use the IMS log to build a single record that contains all of the performance metrics for a transaction. And that single record can then be used in a variety of different ways to help you do performance analysis and problem determination in IMS. It’s very nice because it can take an SLDS that contains 300,000 transactions with over a million log records, and we can cut that down to one single record for each transaction within that SLDS or that extract of the SLDS. And that’s very nice from the aspect of identifying performance issues, because I can then use the transaction index as re-input into Performance Analyzer to run my transit or performance reports. But I can also plug that index record into a tool like the Problem Investigator and/or the Transaction Analysis Workbench. And then I can take that record, merge the different instrumentation data sources around it, and track a transaction start to finish just by using a single record. And the single record is just so nice when it comes to identifying performance issues because I don’t have to dig through a ton of log records to do it. So, many uses for the transaction index record – it’s a very useful record.
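As a rough illustration of what the index buys you (this is only the concept, not the actual transaction index record layout), a short Python sketch can collapse many per-transaction log events into one summary record per transaction instance; the event names, keys, and timestamps below are invented.

    # Conceptual sketch only: condense many log events per transaction into one
    # "index"-style summary record. Field names and events are invented.
    from collections import defaultdict

    log_events = [  # (tran_instance, event_type, timestamp_seconds)
        ("TRN00001", "01 input",     10.000),
        ("TRN00001", "08 app start", 10.020),
        ("TRN00001", "31 DLI GU",    10.021),
        ("TRN00001", "sync point",   10.450),
        ("TRN00002", "01 input",     10.100),
        ("TRN00002", "08 app start", 11.900),
        ("TRN00002", "sync point",   12.300),
    ]

    grouped = defaultdict(list)
    for tran, event, ts in log_events:
        grouped[tran].append((ts, event))

    index = {}
    for tran, events in grouped.items():
        events.sort()
        first_ts, last_ts = events[0][0], events[-1][0]
        index[tran] = {
            "events": len(events),               # how many raw records were condensed
            "response_time": last_ts - first_ts  # one number per transaction
        }

    for tran, rec in index.items():
        print(tran, rec)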

[00:22:55.430] – James Martin, Senior zSolutions Advisor, Rocket Software

And if we proceed on with the presentation to Slide 7, we will see just exactly how that index record can be used in a tool like Transaction Analysis Workbench to track a transaction from start to finish. And I think the key thing to identify, as we work through this example, is that this is a transaction that uses multiple subsystems. And that’s where the power of the Transaction Analysis Workbench really comes in. Because in today’s world, we’re not just using IMS. Most shops have IMS, they have Db2, they have CICS in most cases. They’re using MQ or WebSphere or IMS Connect, whatever the case may be.

[00:23:43.030] – James Martin, Senior zSolutions Advisor, Rocket Software

And the ability to merge all of that instrumentation together and put it into one place and see a transaction start to finish is just so powerful, because when we talk about performance in IMS, it’s always about where did the time go? So, when we’re talking about a long-running transaction or a performance issue, it’s about, “Well, where did we lose time?” So, in this particular example, I have a couple of IMS transaction index records on the screen here, as you can see, and I’ve identified one, obviously, at the top that has a processing time of 72.6 seconds. Now, in anybody’s IMS shop, that would be just terrible, right? That’s terrible performance. The one below it is not much better, it’s 18 seconds, but we’re going to use the top one for the purposes of the example today. So, in that particular case, I would be looking at the transaction index here, and I will have merged the other instrumentation data sources with it, whether that be the Db2 log. In the Transaction Analysis Workbench, we can also merge SMF data around that. And we all know that in the Db2 world and in the CICS world, all of the pertinent performance metrics are in SMF. In IMS, we’re lucky: most of our performance stuff is all in the IMS log. But in the other systems of record, we don’t get that luxury. So being able to merge SMF data makes the Transaction Analysis Workbench a very powerful tool.

[00:25:20.880] – James Martin, Senior zSolutions Advisor, Rocket Software

Moving to Slide 8 – Tracking A Transaction Across Subsystems. When we track that particular transaction, we then merge all of the instrumentation data records into one place, and you can see at the top of the screen here we have the CA01, which is the IMS transaction index record. But we also have other instrumentation data sources in here as well. The green identifies IMS log records. The blue identifies SMF records from Db2. The red identifies the Db2 log, which is mostly used for recovery. And so, when we start to build this picture and track a transaction start to finish, we can start to identify some things in this particular transaction just by looking here on the screen. And the Transaction Analysis Workbench, as well as the Problem Investigator, gives us the ability to look at this tracked transaction set in either ELAPSED or RELATIVE time. And you can see that using the E line command here, I’ve switched this particular view to ELAPSED time.

[00:26:30.090] – James Martin, Senior zSolutions Advisor, Rocket Software

So, the ELAPSED time is going to show me the delta difference between each event that happens within this tracked transaction set. And as we move down through the transaction, we obviously see the 01 input, the 08 application start, the GET UNIQUE for the PCB (the 31 DLI GU). But then you can see, down about the end of the green records, the 5600 CREATE thread for external subsystem. So that’s the ESAF sign-on record. And then we start to see these blue records occur. And those are the SMF records from Db2. And as we step down through there, we can see a couple of things. I want to point out here the 233 3AD pair inside of that, which is a stored procedure entry. So that’s a stored procedure that’s being executed by Db2. Then we see there’s a package allocation, a SQL update. Then Db2 begins its unit of recovery with the Db2 log, the SAVEPOINT, the update in the data page. And then inside of that stored procedure (SP), you can see there’s a SQL UPDATE. And then there’s an OPEN C1, which is opening a cursor inside of this particular SP. Then there’s another OPEN statement SQL. And then you see that, after the 499 SP STATEMENT EXECUTION DETAIL record, the SP EXIT.

[00:28:00.390] – James Martin, Senior zSolutions Advisor, Rocket Software

And then once that stored procedure exits, Db2 starts to perform its work. You can see the SQL requests. And then there’s this SQL FETCH C1, which is fetching information from that cursor that was opened in the SP. And after that 59 SQL FETCH C1, you can see that we had a 1.4-second jump in just that particular time. So just by looking at this merged instrumentation data together, I’ve now identified a performance issue. And if I was to scroll down a little bit, where there were more records here, there would be a matching 58 record (SMF record) in Db2 that would give me details about what was being done in this particular FETCH from the cursor. And in this particular instance, this was an index scan in Db2 that scanned 3 million rows of data.
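The mechanics behind spotting that 1.4-second gap are just timestamp deltas across a merged timeline. As a hedged Python sketch of the idea (the events, sources, and times below are invented, not real log or SMF record types):

    # Illustrative sketch: merge events from different sources by timestamp and
    # show the elapsed (delta) time between consecutive events, flagging big gaps.
    ims_log = [(10.000, "IMS", "01 input"), (10.020, "IMS", "08 app start"),
               (10.021, "IMS", "31 DLI GU"), (11.950, "IMS", "sync point")]
    db2_smf = [(10.030, "Db2", "create thread"), (10.120, "Db2", "SQL FETCH C1"),
               (11.520, "Db2", "SQL FETCH C1 complete")]

    merged = sorted(ims_log + db2_smf)           # one timeline across subsystems

    prev_ts = merged[0][0]
    for ts, source, event in merged:
        delta = ts - prev_ts
        flag = "  <-- where did the time go?" if delta > 1.0 else ""
        print(f"{ts:8.3f}  +{delta:6.3f}s  {source:<4} {event}{flag}")
        prev_ts = ts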

[00:28:57.930] – James Martin, Senior zSolutions Advisor, Rocket Software

Now, that might be the norm in Db2, but I’m an IMS expert, so obviously I don’t know. But just having this type of information at my fingertips would allow me to go to my Db2 counterpart and ask him: “Hey, is this normal? Is this something that happens all the time in Db2?” And he could teach me something about Db2, and I could possibly learn whether that’s normal. And if it’s not normal, I can then turn this problem over to the Db2 folks so that they can do some problem determination using their own tooling to find out why this index scan is happening and whether this index scan needs to happen. So maybe some application changes, who knows? But again, you can see, just by merging that instrumentation data together and tracking a transaction start to finish, I can already start to identify performance issues in IMS and/or Db2.

[00:29:58.190] – James Martin, Senior zSolutions Advisor, Rocket Software

Now, I mentioned OMEGAMON for IMS ATF records, or application trace records, earlier. So, here’s an example of a report (Slide 9 – OMEGAMON For IMS ATF: IMS PA Report) from IMS Performance Analyzer. Performance Analyzer has an entire reporting section that deals just with the Application Trace Facility. And some of the neat things that are inside of these ATF records: as I said, this is monitor-level detail in IMS. So, you can see very distinctive RELATIVE duration and CPU times for all the DLI calls that are happening in this particular transaction.

[00:30:36.200] – James Martin, Senior zSolutions Advisor, Rocket Software

And if there is external subsystem activity, you can see that information in here, too. You see we have some SQL calls in here as well. Now, the neat thing about this is we also get DLI CPU time and elapsed time in DLI, and Db2 CPU times and elapsed times in Db2. Again, database performance is not something that we can determine out of the IMS log and/or out of the Db2 log, in most cases. It takes a combination of multiple different records and monitor-level detail to get that type of information. But you can see in this particular report, ATF gives you that information, which is very valuable, especially if we’re looking at database performance, whether it’s in Db2 or IMS, and trying to understand if the database performance is what we need it to be to execute our transaction workloads.
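Purely to illustrate the kind of rollup such a report gives you (the call list and numbers here are invented, not ATF output), you could aggregate per-call elapsed and CPU time by call type along these lines:

    # Illustrative rollup: aggregate elapsed and CPU time by call type, similar
    # in spirit to an ATF-based report. The trace entries below are made up.
    calls = [  # (call_type, elapsed_seconds, cpu_seconds)
        ("DLI GU",     0.0004, 0.0001),
        ("DLI REPL",   0.0012, 0.0003),
        ("SQL SELECT", 0.0300, 0.0040),
        ("SQL UPDATE", 0.0150, 0.0020),
    ]

    totals = {}
    for call_type, elapsed, cpu in calls:
        bucket = "DLI" if call_type.startswith("DLI") else "SQL"
        t = totals.setdefault(bucket, {"count": 0, "elapsed": 0.0, "cpu": 0.0})
        t["count"] += 1
        t["elapsed"] += elapsed
        t["cpu"] += cpu

    for bucket, t in totals.items():
        print(f"{bucket}: calls={t['count']} elapsed={t['elapsed']:.4f}s cpu={t['cpu']:.4f}s")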

[00:31:40.810] – James Martin, Senior zSolutions Advisor, Rocket Software

Now, one thing I haven’t mentioned a lot is here on Slide 10 (AIOPS for IMS: Live Feed and Log Forwarding) – forwarding instrumentation data to off-host platforms. And they don’t necessarily have to be off-host platforms either, because I know there are a lot of customers out there today that still do their performance analysis and their analytics analysis on Z.

[00:32:05.150] – James Martin, Senior zSolutions Advisor, Rocket Software

So, there are several different flavors of AIOps for IMS. And it kind of starts with the OMEGAMONs, as I mentioned earlier in the presentation. And in the OMEGAMON world, we have a new feature which just came out with the latest release of the OMEGAMONs. It’s called the OMEGAMON Data Provider. And the OMEGAMON Data Provider creates JSON streams into off-host analytics platforms and gives you live data. So you will be looking at live data in these ODP feeds coming out of OMEGAMON for IMS. And that allows us to look at things as they are going on in our system right now. Live data is always a good thing. Most of the other tools that we’re talking about here are all post-processing or historical data. So being able to stream that data as it’s happening is kind of a unique feature. The other tooling that we have, IMS Connect Extensions, can give you near-live data; it is also streaming JSON lines into analytics platforms like Splunk, Elastic, whatever the case may be.
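For anyone who has never handled a JSON-lines feed, a minimal consumer can be very small. The sketch below assumes a plain TCP socket delivering one JSON document per line; the host, port, and attribute names ("ims_id", "tran_rate") are made up for illustration and are not the real OMEGAMON Data Provider configuration or field names.

    # Minimal sketch of consuming a JSON-lines feed over TCP. The endpoint and
    # field names are assumptions for illustration only.
    import json
    import socket

    HOST, PORT = "analytics-gw.example.com", 5140   # hypothetical endpoint

    with socket.create_connection((HOST, PORT)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buffer += chunk
            while b"\n" in buffer:                  # one JSON document per line
                line, buffer = buffer.split(b"\n", 1)
                if not line.strip():
                    continue
                record = json.loads(line)
                # Pick out whichever attributes you want to chart or alert on.
                print(record.get("ims_id"), record.get("tran_rate"))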

[00:33:33.470] – James Martin, Senior zSolutions Advisor, Rocket Software

And this allows us to see the aspects of IMS Connect in our performance metrics. Now, as we walk down through these, also remember that by merging these – or mashing them up, as they call it in today’s world – we can merge this instrumentation data together and try to get the complete picture of what’s going on. So OMEGAMON and IMS Connect Extensions are near real-time. Log forwarding in batch can be done by several different tools. The IBM Transaction Analysis Workbench can forward IMS-related logs via batch jobs. It can forward OMEGAMON ATF data; SMF for CICS and Db2 and some SMF records for z/OS are also supported. It actually has a feature inside of it, the log forwarding feature, that allows us to pick and choose the types of data that we want to stream into these analytics platforms. Again, post-processing in this particular case. But a lot of times when we’re working on performance issues in IMS, we’re not always looking at real-time, today’s time. We had a problem yesterday at 03:00. We need to go back now and figure out why we had that problem, so we don’t have it again today at 03:00.

[00:34:58.950] – James Martin, Senior zSolutions Advisor, Rocket Software

The IMS Problem Investigator can create CSVs. All right, so we’re not talking about JSON forwarding here; these are CSVs for IMS log data. So, CSVs are very useful if we’re using tooling like Excel, things like that from Microsoft. But again, most analytics engines today are using JSON or something similar to JSON to get this data. Now, one of the relatively newer players in the game is the IMS Performance Analyzer. It’s always been able to create CSVs, but we are in the process of getting this tooling to actually stream JSON lines. Right now we have two or three reports, I think, that are available to stream JSON to Elastic and Splunk or any of the platforms that take JSON. And I’ll show you some examples of the actual resource usage, or IRUR, report streaming into Splunk. And that’s really useful for looking at aspects of IMS system performance. We’re working on getting a lot of JSON streaming into the forms-based reporting so that we can also look at transit reports from the Performance Analyzer as well.
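CSV output is easy to post-process with everyday tooling. As a sketch only (the file name and column names such as "TRANCODE" and "RESPONSE_SEC" are hypothetical, not actual product headings), you might load an exported file and rank transactions by average response time:

    # Sketch of post-processing a CSV export. Column names are hypothetical.
    import csv
    from collections import defaultdict

    totals = defaultdict(lambda: [0, 0.0])   # trancode -> [count, total response]

    with open("ims_pa_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            trancode = row["TRANCODE"]
            totals[trancode][0] += 1
            totals[trancode][1] += float(row["RESPONSE_SEC"])

    for trancode, (count, total) in sorted(totals.items(),
                                           key=lambda kv: kv[1][1] / kv[1][0],
                                           reverse=True):
        print(f"{trancode}: {count} transactions, avg response {total / count:.3f}s")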

[00:36:19.330] – James Martin, Senior zSolutions Advisor, Rocket Software

Now, at the enterprise level, these have been around for a while.

[00:36:23.380] – James Martin, Senior zSolutions Advisor, Rocket Software

The zCDP offering from IBM and the IBM Z Operational Log and Data Analytics platform are the analytics platforms from IBM at the enterprise level, and IBM zWIC and zWIN give us the ability to look into interdependencies across different z/OS workloads as well. But that’s really at the enterprise level.

[00:36:50.900] – James Martin, Senior zSolutions Advisor, Rocket Software

So, as I said in the previous slide, “Different tools for different things, right?” (Slide 12 – AIOPS for IMS: Operational Analytics Done Two Ways). If we’re looking at historical data and IMS system log reporting, those are going to be IMS Performance Analyzer or Transaction Analysis Workbench. And this is for historical problem determination; it reduces the complexity of relying on what we call batch and green-screen tools. As much as we hate to admit it, the batch and green-screen tools are much less used in today’s world than they were 20 years ago. 20 years ago, we were all doing this stuff on Z. So again, this is kind of growing with the times and growing with the people that are coming into our profession and need some assistance in working with an old system of record like IMS. So, if we’re looking at application performance problems, we can look at minimal types of configuration and no prerequisites for streams; we don’t have to worry about prerequisites because we’re streaming JSON, and most of these analytics platforms handle JSON.

[00:38:07.220] – James Martin, Senior zSolutions Advisor, Rocket Software

And then the near real-time monitoring over on the right: we’re talking about the OMEGAMON suite and IMS Connect Extensions. And the advantages of those are that we’re just looking at real-time. It’s simple, it’s concise. We can get a wide view when we’re talking about mainframe across multiple subsystems, especially if we have multiple OMEGAMONs running. If we have OMEGAMON for Db2 and OMEGAMON for IMS, then that data can all be mashed up together and put into an analytics platform. And it’s ideal for system monitoring. Some people like to look at the TEP in OMEGAMON, some people like to look at the 3270 Classic, and then there’s some people that don’t want to see either one of those. They want to have an analytics platform that’s being fed by ODP, that they can just jump back and forth to, or keep open on their screen with nice graphs and details, so that when they get an alert, they can be alerted right away to some problem that might be going on. And this really helps us identify trends and spikes in performance and things like that, because it’s going on as we speak.

[00:39:22.020] – James Martin, Senior zSolutions Advisor, Rocket Software

Slide 13 (OMEGAMON: Open Data Provider) shows a little bit of infrastructure for the Open Data Provider and OMEGAMON for IMS. You can see up at the top on the left we have the OMEGAMON agents, whether it’s z/OS, Db2, CICS, or IMS, and they are writing their data to a Data Interceptor, which again is writing that data to a PDS or Persistent Data Store. And that Persistent Data Store is where you would view the data in the TEP or the e3270ui or the Classic view in OMEGAMON. But if we were enabling and/or using the Open Data Provider feature in OMEGAMON, we would then use that Data Interceptor to write native records to CPT using a YAML configuration. That data would then be forwarded to ODP, or OMEGAMON Data Connect, for transforming and forwarding. And then it’s written via TCP as JSON to Splunk or Elastic. And you can also have some of these time series platforms like Grafana and Prometheus being fed this data as well, through the API that’s provided for the OMEGAMON Data Provider. But again, just a quick picture of how all this stuff really works.

[00:40:36.060] – James Martin, Senior zSolutions Advisor, Rocket Software

Slide 14 (OMEGAMON for IMS: Easy To Roll Your Own). And I think the nice thing about JSON in these cases is that it’s pretty easy to “roll your own”. The JSON payload fields that are defined match identically with the OMEGAMON for IMS documentation. So, there’s not really a learning curve. If you can identify the fields from the docs for OMEGAMON, then you can easily create your own feed using JSON, because the field names are identical.
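In practice, “rolling your own” often just means keeping the attributes you care about from each JSON record. Here is a hedged example; the field names below are placeholders only, and the real attribute names would come from the OMEGAMON for IMS documentation.

    # Sketch only: keep just the attributes you want in your own feed. Field
    # names are placeholders; substitute the documented attribute names.
    import json

    WANTED = ("ims_id", "region_name", "cpu_pct", "response_avg")  # placeholders

    def slim(json_line: str) -> str:
        """Reduce a full JSON record to just the fields a dashboard needs."""
        record = json.loads(json_line)
        return json.dumps({k: record[k] for k in WANTED if k in record})

    sample = '{"ims_id": "IMSA", "region_name": "MPP01", "cpu_pct": 12.4, "response_avg": 0.08, "other": 1}'
    print(slim(sample))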

[00:41:11.880] – James Martin, Senior zSolutions Advisor, Rocket Software

Slide 15 (OMEGAMON for IMS: System Health). And so when we start to look at some of these aspects of the Open Data Provider and we look at OMEGAMON for IMS – in this case, this particular dashboard was built to look at IMS system health – you can see it provides the individual IMS subsystem view showing various different aspects. In this case we’re looking at CPU and I/O, message queue statistics, transaction processing rates, response times, locking activity, dependent regions, buffer pools, subpools. This is a chart, or a dashboard, that I built. But you could really build these dashboards out to be anything that you wanted them to be. If this was not sufficient for what you were looking for, well, you can always add aspects around that as well. So, it’s pretty much whatever you want in these dashboards. And I think that’s kind of a unique feature of these off-host analytics platforms, and it can be very useful when we’re not wanting to look at green-screen, we’re just wanting to get a quick picture.

[00:42:26.000] – James Martin, Senior zSolutions Advisor, Rocket Software

Slide 15 (IMS Performance Analyzer: IRUR). And here’s an example of the IMS Performance Analyzer streaming its data. In this particular case we’re using Elastic, but this is the IMS Performance Analyzer Internal Resource Usage and Availability report. And you can see that it has multiple reports inside of it. We can get information about buffer pools, application scheduling statistics, virtual storage, logger statistics. I mean, there’s just a wealth of reports inside this particular report. And you can see that you can also view that report in SDSF or on the green screen, just like you have for years. But what’s new in this particular feature is the ability to stream it to JSON and see that data all kind of put together on the same screen, all in one place. And that’s really nice, especially if you are executing multiple reports in the IRUR.

[00:43:26.980] – James Martin, Senior zSolutions Advisor, Rocket Software

Sometimes you want to see that information by itself. But what happens when you want to see all that information on one screen together? Well, when you need to do something like that, then the streaming really helps you, because as you can see on the screen, there’s multiple types of information in these charts here, but they’re all coming out of the same report. So, again, it’s all on the screen together. I don’t have to jump from report to report to see it. So very helpful from a standpoint of finding all of my data in one place.

[00:44:08.360] – James Martin, Senior zSolutions Advisor, Rocket Software

Slide 16 is an example of a Transaction Analysis Workbench / Elastic mashup (when I say mashup, I mean merging multiple sources of data together in one place). In this particular case, we’re looking at IMS and Db2, and you can see that this is actually broken down to show me IMS on one side and Db2 on the other side. And if I wanted to mash this data up and actually see it overlaid, with Db2 and IMS together, I could do that as well.

[00:44:40.850] – James Martin, Senior zSolutions Advisor, Rocket Software

Again, it’s all about how you customize these dashboards and what type of information that you’re looking at. But just for instance, here in IMS, I can see the average transaction response times right here. So, I don’t need a PA report to go and look at this. I’ve streamed it from TAW, and I can see that information right here on the screen. And then if I want to see the Db2 aspects, again, they’re on the right-hand side. So mashups are very nice, especially when the mashup work is done for me and I don’t have to go out and try to get instrumentation data from Db2 and get it from IMS and then figure out how to put it together and figure out how to get it on these dashboards together. That’s the nice thing about the tooling we have today, is it does all the heavy lifting for us, and it mashes up the data for us and just streams it into these analytics platforms and lets us focus on building the dashboards the way that we want to see them.

[00:45:47.860] – James Martin, Senior zSolutions Advisor, Rocket Software

Here’s an example of OMEGAMON for IMS ATF data (Slide 17) being streamed by the Open Data Provider, and as you can see here, I have IMS and Db2 once again. So very similar information to what you saw in TAW. Now, the biggest difference is that this data that we would be looking at from OMEGAMON is live. It’s real-time data. So, this is data as it’s happening, as opposed to what the TAW would show us, which would be historical data. And you can see here, again, I have average response time information. I have elapsed time and class 3 suspend times, and I also have other types of data in there as well. Now, I mentioned IMS Connect, and IMS Connect Extensions is the provider of the instrumentation data created by IMS Connect and kept in the Connect Extensions journals. Now what’s unique about IMS Connect Extensions is that it’s near real-time, which means it’s not exactly at the time that it’s happening. But I do have customers out in the world that are 5 seconds latent when it comes to this. So, they’ve actually kind of built their own IMS Connect monitors using this particular Connect Extensions feed. But IMS Connect Extensions is not just a performance reporting tool. It also has multiple operational capabilities that allow you to manage IMS Connect in your enterprise.

[00:47:22.640] – James Martin, Senior zSolutions Advisor, Rocket Software

On this particular screen, there is a feature in IMS Connect Extensions that’s called Data Store Drain, which allows us to drain a data store so that we can efficiently – and without potentially losing in-flight information or messages flowing back to the front-end – shut down an IMS system for maintenance or whatever the case may be. So, in a Data Store Drain, what happens is you issue the Data Store Drain command, and it immediately takes that data store offline. It won’t allow it to accept any new messages, but it will allow messages that are flowing back to the application on the front-end to go ahead and process and be sent back, so we don’t have to worry about losing in-flight messages. And in the graph on the bottom, you can actually see that operation taking place. So, we have those lines going across here, and then we see the spike up, which is where we issued the DRAIN command. And then you see the line go flat around 6:08. Well, that’s when the last message that was waiting to go back to the front-end application is sent back to that application.

[00:48:39.240] – James Martin, Senior zSolutions Advisor, Rocket Software

So, we’re now down to zero messages and it flatlines. So, we can now safely shut down IMS, perform our maintenance, and then IMS comes back up 4 minutes later, and we resume normal processing. So, it’s really cool to be able to see performance metrics in these off-host analytics platforms, but it’s also really neat to be able to see things that we do operationally happening. That’s kind of cool, too, because with the capabilities of Connect Extensions to do workload routing, data store drains, and other operational capabilities in IMS Connect, sometimes it’s very useful to get a visualization of whether the rules that you have in place for IMS Connect are being performed like they should be.

[00:49:29.900] – James Martin, Senior zSolutions Advisor, Rocket Software

A really quick one on CICS Performance Analyzer (Slide 18). CICS PA has been streaming JSON lines to analytics platforms for over a year now, maybe even two years now. But again, you get the same type of information you would in IMS, but you get it for CICS. You can see different dashboards can be built. This is actually a Splunk example for this particular one, but you can see all the workloads, including the IMS DBCTL workloads and Db2, from CICS PA, and get it in this nice graphical representation as well.

[00:50:09.460] – James Martin, Senior zSolutions Advisor, Rocket Software

And in summary (Slide 19), I think 1) modernizing our IMS performance tools is something that is really needed in the world today. We hear people all the time telling these young kids that are coming out of college, you don’t want to mess with a system like IMS or Db2. It’s an old system of record. It’s really hard to learn, really hard to understand. I mean, I’ve heard all types of different things over the years, but the reality of it is that of these professionals coming out of college today, very few of them are being taught green-screen mainframe. I don’t know of a ton of colleges – there are some out there that teach this type of information, but they’re very few and far between. So, we have to start giving the younger generation tools that they can use to support the systems that we’ve worked on and loved for years and years. And I think that 2) some of these analytics streaming functions for performance and things like that are a step in the right direction, because they’re all comfortable working with point-and-click, and Microsoft, and graphing, and bar charts and things like that. That’s something that they do get from school. So, the ability to modernize those tools is something that’s been needed for years, and it is happening. So that’s always good to see. And 3) performance information is now available to everyone at your organization with these off-host analytics platforms, all the way up to your managers, who can look at these graphs and bar charts, and they kind of make sense. It’s much easier to explain than trying to bring your boss over to your desk, put him in front of a green screen, and say, “Hey, look at this PA report.” And he looks at you and says, well, I don’t even have a clue what this means.

[00:52:02.150] – James Martin, Senior zSolutions Advisor, Rocket Software

What does this mean? Whereas if you have it in a nice pie chart or a bar chart or a line chart or whatever the case may be, it’s really easy to point to a peak in one of those line charts and say, yeah boss, look here, right here, we had this big spike, and we have it every day. So, we need to do the work to try to figure out what’s going on here so we can stop it from happening, because this is putting us outside of our SLAs. It could be a multitude of reasons. And I think the big message I wanted to send about Performance Analyzer is that it has a lot of old fixed reports in it – it came from the IMS PARS project, which some of you may or may not know from many, many years ago. Fixed reports that give you all the metrics you could ever want. The problem is it also gives you a lot of metrics that you don’t want. And that’s why I always encourage customers to 4) use the form-based reporting in Performance Analyzer, because it cuts out all the clutter. Why should we include metrics in a report that aren’t pertinent to our performance investigation? It just doesn’t make sense to me. It’s easier to do, yes, but once you get up and running with forms-based reporting and the wealth of sample reports that are provided in Performance Analyzer, I think you’ll find, and agree with me, that it’s much nicer to just look at the things that you’re interested in.

[00:53:26.320] – James Martin, Senior zSolutions Advisor, Rocket Software

And 5) Transaction Analysis Workbench gives you the power of looking across multiple subsystems, which I can’t overstate. Being able to see all the systems of record and merge them all together. When you’re tracking a transaction that uses IMS, IMS Connect, Db2, MQ, all those types of subsystems, the more instrumentation data we can have, the better, when we’re doing a performance investigation. Now, granted, we will eventually get down to the granularity of just looking at one piece of instrumentation data, but when we’re looking at the big picture, if you don’t have the big picture, there will always be some system of record that gets the blame. And most of the time, that system of record is not to blame. And that’s the irony of it: it’s the one that you can’t see that you always blame. And again, the more information we have, the more power we have.

[00:54:27.380] – James Martin, Senior zSolutions Advisor, Rocket Software

If you’d like more information or more detailed presentations on any of the things I’ve spoken about today, please don’t hesitate to reach out to either Tracy Dean, who’s the offering manager from IBM, or you can reach out directly to me, and we would definitely be happy to set something up for you and give you a much deeper, more in-depth presentation on just one of the products that you’ve seen here today. And thank you very much for attending today. And again, I appreciate everybody’s time today.

[00:55:11.580] – Amanda Hendley

Thank you so much, James. My pleasure. We’ve got a few comments coming in. Someone wanted to know, “How does OMEGAMON recording impact logging performance and capacity?”

[00:55:44.660] – James Martin, Senior zSolutions Advisor, Rocket Software

Yeah. I’m assuming, probably incorrectly, that you’re talking about the ATF or the Application Trace Facility in OMEGAMON and what type of logging impact that’s going to have on you, correct?

[00:55:58.380] – Amanda Hendley

Yes.

[00:55:59.160] – James Martin, Senior zSolutions Advisor, Rocket Software

Okay. So, you are going to obviously see an increase in logging. If you turn on ATF, we’re writing more records, so you’re going to see more logging. But will you see the same impact as the IMS monitor? No, you won’t. A couple of reasons why. 1) OMEGAMON writes to the IMS log, where the IMS monitor writes to its own data set. 2) OMEGAMON also, we’ll call it, picks and chooses the aspects and condenses them. And they’ve done a ton of work in OMEGAMON to make it much more “log efficient”, I will call it, than using the IMS monitor. Now, I do have customers that have ATF turned on all the time, but I also have a lot of customers that turn ATF on and off as they need it, just like they do when they run the monitor. So when we talk about impacting logging, that’s always the suggestion I make, because when your boss comes to you and says, hey, we got this problem and we need to fix it, he will usually let you get any data you want to fix that problem if it’s serious. So, in those cases, they will turn on ATF. Some of them still turn on the monitor, but it’s getting fewer and fewer today.

[00:57:26.840] – Amanda Hendley

Thank you, James. Excellent presentation. Interestingly enough, you mentioned Tracy Dean. Our next event, on August 8th, features Tracy. She’s going to be covering: “What Does IBM Z Cyber Vault Mean for an IMS Environment?”

Upcoming Virtual IMS Meetings

August 13, 2024

Virtual IMS User Group Meeting

IMS Catalog and Managed ACBs Encryption for IMS

Dennis Eichelberger
IBM


February 11, 2025

Virtual IMS User Group Meeting

IMS Catalog implementation using Ansible Playbooks
Sahil Gupta and Santosh Belgaonkar
BMC
