[00:00:00] – Amanda Hendley
Welcome. Thanks for coming to today's virtual user group. Today, we are talking about DB2. We've got a pretty easy agenda today. We're going to do this introduction. Then we'll have our presentation, then there'll be some time for Q&A. I will share with you some news and articles, and we'll talk about what's coming up next. If that sounds like a plan, let's get to step two. Before we move on, though, I do want to thank our partner IntelliMagic, I guess, IBM IntelliMagic now, for their sponsorship of this user group. There are partnership opportunities available if you are interested in promoting within this user group. Just reach out to me. If we haven't met, I'm Amanda, Amanda@planetmainframe.com. Let's see. After today's session, if you would, there's going to be a two-second survey on your way out. Just real quick: did you learn anything from today's session? We'd just love to get your quick feedback there. I promise it's not a big, intense survey. I'm holding that for when the Arcati survey goes out in a couple of weeks, because I want everyone to fill out the Arcati mainframe user survey for me. Take a look and just hit those on your way out the door today.
[00:01:32] – Amanda Hendley
Now we are ready for our session. Today's session is 'I REST My Case! Exploit APIs for Productivity'. Toine Michielse, I'm so sorry, tripping over it, is going to present for us today. This is a great session. I actually got to see it at IDUG a couple of weeks ago, so I think you'll really enjoy it. A little bit about Toine's expertise. He's been working in DB2 for z/OS since version 1 and has a wealth of knowledge: COBOL, IMS, DB2 programmer, systems engineer, spent a lot of time in the DB2 lab. I don't think we could find a better expert to talk about this very important topic, because we know APIs are really one of the best, strongest ways to modernize and to improve your productivity. With that, I'm going to turn it over to you and let you take it away.
[00:02:38] – Toine Michielse
Hey, thanks a lot, Amanda. I hope everyone can hear me. I‘m going to start sharing my screen here. You see my screen?
[00:02:53] – Amanda Hendley
I see inside the program.
[00:02:56] – Toine Michielse
Yeah. You don‘t see the starter slide of the presentation?
[00:03:05] – Amanda Hendley
I do now.
[00:03:06] – Toine Michielse
Oh, yeah. Excellent. Okay. So, yes, my name is Toine Michielse and, Amanda, I understand that you trip over my name. It's a horrible name. I'm Dutch by origin, and the Dutch language is somewhat of a throat disease, so it's almost impossible to pronounce. Don't worry about it. Last time I was here was actually nine months ago today, on the 19th of March. I was working as a solutions architect then. Meanwhile, I've taken a management position in Broadcom. But my passion is still the same. Let me tell you what my passion is. My passion is, of course, DB2 data, but most of all, mainframe modernization. You hit it right on the spot, Amanda. As far as I'm concerned, RESTful APIs play an enormously important role when it comes to mainframe modernization. I hope in this presentation I can make that clear. When I'm not working, I enjoy paragliding, and I also like making a lot of noise: I play drums in a rock band. If you do like rock, there are some CDs out on Spotify. But let's get to the meat of the presentation. APIs. They are so important when it comes to integrating services or data in whatever process that you're looking at.
[00:04:53] – Toine Michielse
There's a wealth of APIs. There are the regular, normal application programming interfaces through which you include the service of some other module into your program. But there are also those phenomenal RESTful APIs that allow you to invoke services independently of where they are. RESTful APIs follow a very popular industry standard. For a few years now, pretty much all the mainframe vendors, at least the mainframe vendors that I'm aware of, have been adopting access to their products or services through RESTful APIs. Now, I want to start with a little bit of an additional message. As far as I'm concerned, when you start working with RESTful APIs, one of the things that you really want to look at is something called the API mediation layer. The API mediation layer is provided by Zowe, the Open Mainframe Project. The benefits that you get from using the API mediation layer are, first of all, a strengthened security posture. I have some more slides on this, so I'll leave that out. But what's important is the second bullet. There are a few concerns that people typically have, people that have grown up on the mainframe like myself.
[00:06:38] – Toine Michielse
That is, on the mainframe, we've been working so hard for many, many decades to come to the point where we have unrivaled resilience, availability, security, robustness, et cetera. The fear is that when we start adding open-source components like Zowe, when we start adding other access paths to the mainframe, other gateways to access mainframe services or data, that will come at a cost. The API mediation layer, the Zowe API mediation layer, will actually be critical in overcoming those issues. What you will see in RESTful APIs that are provided by vendors is that there's more and more a move from basic authentication, or authentication at the service level, to using authentication requests managed by an authentication endpoint, a login endpoint, provided by a Zowe-compatible RESTful API provider. That will, of course, be verified against your ESM, your RACF, your Top Secret or ACF2, it doesn't really matter. From that point on, you just use access tokens. That is much more secure and robust. However, it does more. Once you start playing around with RESTful APIs, you start with maybe one product. I apologize, I work for Broadcom, so I have just a bunch of Broadcom products here. But maybe you start with SYSVIEW for Db2 for performance management.
[00:08:31] – Toine Michielse
You have a bunch of RESTful APIs provided by that product. That means you're going to have to open a port for that RESTful API provider. The provider is typically a Tomcat server somewhere that requires its own port on your IP stack. Now, maybe the next product that you want to play around with would be Mainframe Application Tuner. Again, you have RESTful APIs, and again, a port that you need to open. Lo and behold, this tends to spread very, very quickly, and that means that you have to open more and more and more ports. Of course, that adds more risk, because all those ports need to be protected well. If we put something like the API mediation layer in between, then all you have to do is expose the port of the API mediation layer to the outside world. There's only one point that you need to protect, monitor, et cetera. Through the mediation layer, you will reach all of the RESTful API providers that are within its control. Apart from that, you get some nice features as well, but that's not so important for this message. That's what I wanted to tell you about the mediation layer, or how to get started with RESTful APIs if you haven't done so already.
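To make that concrete, here is a hedged sketch of the token pattern the mediation layer enables: authenticate once against the Zowe API ML login endpoint, then reuse the returned token for calls routed through the gateway. The gateway URL is a placeholder, and the exact login path and cookie name can vary by Zowe release, so treat this as an outline rather than a recipe.

```python
# Sketch: log in via the API ML authentication endpoint, then call a service
# through the gateway with the returned token instead of per-call credentials.
import requests

GATEWAY = "https://apiml.example.com:7554"      # placeholder API ML gateway

def login(user, password):
    resp = requests.post(f"{GATEWAY}/gateway/api/v1/auth/login",
                         json={"username": user, "password": password},
                         timeout=30)
    resp.raise_for_status()
    return resp.cookies.get("apimlAuthenticationToken")    # token/cookie name may differ by release

def call_service(token, path):
    resp = requests.get(f"{GATEWAY}{path}",
                        cookies={"apimlAuthenticationToken": token},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()
```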
[00:10:02] – Toine Michielse
What I do in the rest of this presentation is present three use cases. Three very different use cases, where I have used RESTful APIs to create a situation where I increase the productivity of the people using the data or the services. The first use case is dealing with performance monitoring. Why is that? Every subsystem on z/OS, and z/OS itself, requires monitoring. Typically, what we want to do nowadays is monitor for trends. We want to monitor for trends because we want to see problems coming. In this use case, what I do is use a RESTful API that exposes performance metrics. In this case, it's a Broadcom RESTful API for SYSVIEW for Db2, but it could be really any RESTful API that has performance metrics. I use an open-source component to collect the data that I can use for analysis or, as I will do in this use case, to visualize the metrics in a dashboard. Again, that's an open-source component, and I don't know why this thing insists on moving forward, I'm not done yet. But once that's done, once you have those dashboards up and running, then operations or DBAs can use those dashboards to monitor and analyze the performance.
[00:11:40] – Toine Michielse
That's the idea. Now, I said I use open-source components. The first open-source component that I use is Prometheus. Prometheus is a piece of software that you can download. In essence, it doesn't cost you anything. If you like, you can get a service contract for it, but everyone is free to download it and use it. It runs everywhere you would like to run it. It runs on your Windows station, it runs on a MacBook. It also runs on Linux servers. Anywhere you like, you can run Prometheus. It's extremely easy to install. The second component that I use is Grafana. Again, open-source software, and by now it's very well known for its capabilities to visualize data. Of course, data visualization is extremely important when you think about doing analytics and monitoring in a proactive manner. Grafana itself supports a wealth of data sources, Prometheus being only one of them. What will happen is, Grafana will issue queries against Prometheus, queries that, with a little bit of imagination, look like SQL. They're called PromQL. And Prometheus will then take the data out of its database and provide it back to Grafana, which then makes wonderful graphical reports.
[00:13:09] – Toine Michielse
You can make them active, meaning you can have a clickable interface on your Grafana dashboards that shows, for instance, I don't know, data about the metric from a manual, or allows you to click through to more detailed dashboards, et cetera. Those two components are free for you to use, so download them. Installation took me about 20 minutes. To tell the truth, I'm not an ops guy or a Linux guy, I'm a mainframe guy. But it was really easy to set up. Now, before I go into a little bit of the architecture that is behind it, I have to explain one thing, and that is that I will speak about compliant and non-compliant data. What I mean by that is a data format that Prometheus likes; that's what I call compliant. The Prometheus data format is something that you see on the left. I have here the response of a RESTful API that I invoke, which gets me back metrics, and those metrics have labels, call them identifying data, for instance the subsystem ID, the data sharing group, the function that provided the metric. They have the metric name, in this case the year, month, day, and down below you see SRB preemptible, et cetera, et cetera.
[00:14:37] – Toine Michielse
Then finally, the value. You see that in red on the slide. However, that is not how RESTful APIs by default return data. Let's say the de facto standard for RESTful APIs is to return data in JSON format. That's what I call non-compliant. It doesn't really matter for the function. I have both of these data formats in my environment. It does, however, have a little bit of impact on the architecture. If I have a compliant data provider, meaning the RESTful APIs that Prometheus invokes return data in the format that Prometheus likes, then it's very simple. You install Prometheus and you hook Prometheus up to the RESTful API, meaning there's a configuration file in which you provide it with the hostname where it'll find the RESTful API, the port, the whole API address if you will, and perhaps some parameters for the RESTful API. And off you go. 'Off you go' means that in Prometheus you have something called a scrape definition that defines the names and the parameters, and it also defines how frequently you want Prometheus to call. In the samples that I will show later in this presentation, I let Prometheus call the RESTful API every minute.
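To make the compliant format concrete, here is a minimal sketch, with invented metric and label names, of what one line of the Prometheus text exposition format looks like: the metric name, a set of identifying labels, and the value at the end.

```python
# Hypothetical sketch: rendering one metric in the Prometheus text exposition
# format ("compliant" data). Metric name, labels and value are made up.
def format_metric(name, labels, value):
    # Render labels as key="value" pairs, e.g. ssid="DB2A",group="DSNGRP1"
    label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
    return f"{name}{{{label_str}}} {value}"

print(format_metric(
    "srb_preemptible_seconds",                                   # hypothetical metric name
    {"ssid": "DB2A", "group": "DSNGRP1", "function": "stats"},   # identifying labels
    337.5,                                                       # the metric value
))
# -> srb_preemptible_seconds{ssid="DB2A",group="DSNGRP1",function="stats"} 337.5
```

And as a rough illustration of the scrape definition itself, here is a minimal sketch written as a Python dict and dumped to prometheus.yml; the job name, host, port, and path are placeholders, not the real product endpoints.

```python
# A minimal, hypothetical scrape definition: call the API once a minute
# through the API ML gateway. Names, host, port and path are placeholders.
import yaml  # pip install pyyaml

scrape_config = {
    "scrape_configs": [
        {
            "job_name": "db2_metrics",                        # made-up job name
            "scrape_interval": "1m",                          # call the RESTful API every minute
            "scheme": "https",
            "metrics_path": "/someproduct/api/v1/metrics",    # placeholder API path
            "static_configs": [
                {"targets": ["apiml.example.com:7554"]}       # placeholder gateway host:port
            ],
        }
    ]
}

with open("prometheus.yml", "w") as f:
    yaml.safe_dump(scrape_config, f, sort_keys=False)
```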
[00:16:08] – Toine Michielse
That gives me enough granularity of the metrics for my purpose. When Prometheus invokes the API based on the scrape frequency, it will get the data back and simply store it in its database. Now, you saw the API mediation layer appear here really quickly. That is my situation: I don't like Prometheus running directly against the API. Please always try to go through the API mediation layer. Now the data is in the Prometheus database, a time-series database, an extremely efficient way to store large quantities of these metrics. That means that with a small amount of disk space, you can actually store even years of data if you want to. Then Grafana is there at the end. As soon as an operator or a user wants to have a look at the metrics, he opens up Grafana and goes to the dashboard that he is interested in. At that point in time, Grafana will set off one or more PromQL queries, over a RESTful API by the way, to Prometheus. Prometheus will get the data and feed it back to Grafana for formatting. That's pretty much the architecture. Now, that is for Prometheus-compliant data.
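For what happens at query time, here is a sketch of what Grafana effectively does behind the scenes: it sends a PromQL range query to Prometheus' own REST endpoint and gets back a time series to draw. The host and the metric name below are invented for illustration.

```python
# Sketch of a PromQL range query against the Prometheus HTTP API,
# the same kind of call Grafana issues for a dashboard panel.
import requests
from datetime import datetime, timedelta

end = datetime.utcnow()
start = end - timedelta(hours=6)

resp = requests.get(
    "http://prometheus.example.com:9090/api/v1/query_range",
    params={
        "query": 'sum(rate(getpages_total{ssid="DB2A"}[5m]))',  # hypothetical metric/label
        "start": start.timestamp(),
        "end": end.timestamp(),
        "step": "60s",
    },
    timeout=30,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][:3], "...")   # values are [timestamp, value] pairs
```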
[00:17:28] – Toine Michielse
If you have an API that is not written to return data in Prometheus format, there's no harm done. Prometheus can speak to any endpoint. Prometheus doesn't even know what it's talking to. One of the things you can do, and this is an example that I've used, is have a Python script that works as a proxy. It just opens up the port that Prometheus wants to invoke, the port of the API. Maybe you have to tweak it a little bit and give it another port, otherwise things get confusing. But Prometheus thinks it's invoking the RESTful API, when in fact it's the Python script that does it. The Python script then invokes, again through the API mediation layer, the true API and gets the non-compliant data, for instance in JSON format. It parses that data and then feeds it back to Prometheus. From that point onwards, everything stays the same. Grafana just uses Prometheus data for its views, for its dashboards and reports. Now, if you have a non-compliant data provider going through a Python script, it also means Python can do anything. You can also hook a non-REST data provider into the same architecture, if you like.
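A minimal sketch of that proxy idea is below: Prometheus scrapes the script on its own port, the script calls the real, non-compliant API through the mediation layer, parses the JSON, and re-emits the Prometheus text format. The upstream URL, the JSON field names, and the port are all assumptions.

```python
# Hypothetical JSON-to-Prometheus proxy. Prometheus scrapes /metrics on port 9101;
# the script fetches the "real" API and translates the JSON into exposition format.
from flask import Flask, Response
import requests

app = Flask(__name__)
UPSTREAM = "https://apiml.example.com:7554/someproduct/api/v1/metrics"  # placeholder

@app.route("/metrics")
def metrics():
    data = requests.get(UPSTREAM, timeout=30).json()    # non-compliant (JSON) response
    lines = []
    for m in data.get("metrics", []):                    # assumed JSON layout
        labels = f'ssid="{m["ssid"]}",function="{m["function"]}"'
        lines.append(f'{m["name"]}{{{labels}}} {m["value"]}')
    return Response("\n".join(lines) + "\n", mimetype="text/plain")

if __name__ == "__main__":
    app.run(port=9101)   # the port Prometheus is configured to scrape instead of the real API
```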
[00:19:02] – Toine Michielse
With that, there's a wealth of data that can be fed into the Prometheus database and then be made available to Grafana. As I said, or I touched upon it, Grafana itself doesn't necessarily have to go through Prometheus. It's easy, but it doesn't have to go through Prometheus. There are all kinds of other plugins that allow you to include CSV data or Postgres data or what have you. It's a really nice ecosystem. That's the architecture. Now, what can I do with that architecture? Why, for instance, would I want to use this? The whole presentation is built around optimizing productivity. Why do we need to optimize productivity? Because there is in the world nowadays a need for speed. I think most of you will recognize it. At least when I speak to a live audience and I ask the question, do you have the need for speed, a very short problem-resolution turnaround time, quick detection, then people will raise their hands and say, yes, I recognize that. If you think about that, if you think about problem-resolution turnaround time, the fastest problem resolution is the one for a problem you never had. If you can somehow avoid problems from occurring, then that gives you zero problem-resolution turnaround time.
[00:20:33] – Toine Michielse
In fact, the problem never existed. That's optimal speed as far as I'm concerned. However, there are a few other drivers. One of the most important other drivers that I encounter at my customers is the need to shift left. What we mean by that is that the people that understand all those metrics have a lot of experience and are also very hard to find. These guys, the guys that ultimately have to resolve the problem if it exists, need time to do so. What we mean with shift left is to try to free up the time of those scarce and highly skilled resources by pushing some of the activities, in fact the monitoring activities, to users that are not so high in demand. I don't know, maybe operators, maybe an operating center that you outsource to. The thing is that these people are typically not educated to the depth of your DBAs or your system engineers, so they don't really know what they're looking at. You have to make it easy for them to be able to do their tasks without having a thorough understanding of what it is that they're seeing. Finally, if you want to be able to avoid problems, you have a need for context.
[00:22:10] – Toine Michielse
You need the context of time. To optimize productivity, you also need the context of the persona. A dashboard for a DBA will probably look very different from a dashboard for a z/OS person or an operator whose only task is to detect problems. Of course, time as a context is needed to understand very quickly when an activity happened. If I look at this, this is a traditional monitor screen. It has a lot of information that the average operator will have no idea what to do with. While this has a very large collection of metrics that somehow have a relationship with each other, what is missing is the context of time. If you want to find a point in time where, let's pick one, the get page count in the top half, the center of the screen, has a value of 337,000. Now, maybe that's normal, I don't know. But let's assume you want to find a moment in time where that value really went through the roof and you all of a sudden had a billion get pages in the interval. Then you would have to scroll forwards and backwards in time.
[00:23:40] – Toine Michielse
You would maybe have to write down all those numbers. You would have to build that picture yourself. That costs a lot of time, and a lot of time means that you lose productivity. Now, if you, on the other hand, have a graphical representation, and this is not get pages, but let's have a look at accounting by type, the in-Db2 elapsed time, that purple bar somewhat to the top right of the dashboard. With that visualization, I think everyone will see in a split second, in the blink of an eye, that there is a very high peak out of the ordinary at around 3:00. That is what I'm seeking. Just by scanning it visually, it's very easy to detect these anomalies, if you will, those points of interest, and to alert an SME: hey, there's something that you need to look at. The other thing is, if you have a graph like this, let's assume that you see the value creep up over time. You see it creeping up, creeping up, creeping up, creeping up. Maybe you know that once it reaches a certain level, you're going to have a problem.
[00:24:57] – Toine Michielse
Then again, in the blink of an eye, you can see how long you have before things become problematic. Maybe you say, well, everything stays flat, so it's never going to be problematic. That is what I mean by the context of time adding to productivity. I hope that makes sense. That was use case one. I hope that helps. I've been working with quite a few customers now, setting up Grafana dashboards, and I've had some extremely interesting customer quotes. I don't think I have them in the presentation, but if you really want to know about them, send me an email or contact me. Now, a completely different use case, but also very important, and it's another type of productivity improvement, and that is augmenting product... it does it again, I just have to move quicker, I have to speak quicker. Augmenting product capabilities. What I mean with that is this: it was triggered by a customer that I ran into at one of the IDUG conferences. I can't remember which one. Well, it was Prague. He said, hey, listen, I really love the Broadcom Detector product, which is an SQL monitor. There's one thing that I'm missing, and that is that I would like to be able to automate or trigger actions when I see a certain SQL code.
[00:26:48] – Toine Michielse
For them, there were two SQL codes that were really, really important. What he wanted was to send an email on one code, and on the other code, he wanted to make sure that automation they already had in place was triggered. I spoke to our product owner and said, hey, do we have the capability? He said, no, we've wanted to build it for a long time, but we never got around to it. Then I thought to myself, but hey, we do have RESTful APIs that actually give me back those SQL codes. If the product doesn't provide it, maybe I can. I started working on this and I created this little architecture. I have a Python script here. I do love Python. You saw it in the previous use case as well. I love Python, maybe because it's Dutch, like me, or maybe it's just a wonderful language. I don't know, maybe it's both. I wrote a little Python script, and it's really, literally, no more than probably 100 lines of code that does the following.
[00:28:03] – Toine Michielse
First of all, it invokes the RESTful APIs. As you could see in the previous use case, from Python it's easy to invoke a RESTful API. In the first use case, I would transform whatever Python received into Prometheus data. In this case, I want to use the result of the RESTful API to drive actions. I invoke the SQL monitor, get the SQL codes back from the monitor, and see if I need to take action as defined in a small configuration file. From Python, it's extremely easy to send an email. If you're, let's say, on your Linux server, it's extremely easy to send an email to your DBA group saying, hang on, this SQL statement or this SQL code appeared, do something. And the other thing that you can do in such a construct is actually use a Zowe console command, which is, again, very easy to invoke from a Linux or Windows environment, provided you have the Zowe CLI installed, which is a matter of minutes. The reason why I wanted to put this in here as well is because, if you think about existing automation procedures, in 99% of the cases they are just looking at the log and waiting for a message to appear. If you have existing automation and you know the format of the messages that it's looking for, well, you can just trigger that message.
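Putting those pieces together, here is a hedged sketch of the kind of loop such a script could run. The endpoint path, the JSON field names, the mail relay, and the console command text are all assumptions for illustration, not the actual product interface.

```python
# Hypothetical SQL-code monitor: poll a REST endpoint, then act per a small
# configuration (email for one code, console message for another).
import time, smtplib, subprocess, requests
from email.message import EmailMessage

ACTIONS = {-206: "email", -204: "console"}      # SQL code -> action (example configuration)

def send_mail(body):
    msg = EmailMessage()
    msg["Subject"] = "SQL code alert"
    msg["From"] = "monitor@example.com"          # placeholder addresses and mail relay
    msg["To"] = "dba-team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("mailhost.example.com") as s:
        s.send_message(msg)

def write_to_console(text):
    # Issue an operator command via the Zowe CLI; which command best surfaces the
    # text to your syslog-watching automation is site-specific (placeholder here).
    subprocess.run(["zowe", "zos-console", "issue", "command", f"SEND '{text}'"], check=True)

while True:
    resp = requests.get("https://apiml.example.com:7554/sqlmon/api/v1/sqlcodes", timeout=30)  # hypothetical endpoint
    for event in resp.json().get("events", []):                 # assumed JSON layout
        action = ACTIONS.get(event["sqlcode"])
        if action == "email":
            send_mail(f"SQL code {event['sqlcode']} on plan {event['plan']}: {event['stmt']}")
        elif action == "console":
            write_to_console(f"RESTAPI DEMO SQLCODE {event['sqlcode']} OCCURRED")
    time.sleep(30)   # polling interval, like the 30-second startup parameter mentioned later
```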
[00:29:51] – Toine Michielse
You can, from the Python script, use the Zowe console command to put a message on the syslog. The syslog is scraped by the existing automation. And voila, now all of a sudden you have the capability to either send an email, invoke existing automation, or perform whatever action you want. Even though the product itself didn't have the capability, with the RESTful APIs that I can use to invoke certain parts of the product, I can build my own action. I can augment the capabilities of the product. And this is what it looks like. This is also a little bit of code to show you how easy it really is. I have some code snippets here. The first code snippet is the endpoint function, here in Python. Really, that is all you need to invoke a RESTful API. No more than that. Note, by the way, that it also loads the JSON that gets returned into a response structure. This is really all you need. Of course, there are some parameters, like the path that you need to your API endpoint. That's the combination of the URI, your host, as well as the base path. You need the API port, and hopefully that's your API ML port, and then you need credentials.
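The slide itself is not reproduced here, but a helper of roughly that shape could look like this minimal sketch; host, port, base path, and credentials are supplied by the caller, and it uses basic authentication only because the original demo did (see the caveat that follows).

```python
# Hypothetical "endpoint" helper: call a REST API through the API ML gateway
# and return the parsed JSON response.
import requests

def endpoint(host, port, path, user, password, params=None):
    url = f"https://{host}:{port}{path}"        # gateway host + base path + endpoint
    resp = requests.get(url, params=params, auth=(user, password), timeout=30)
    resp.raise_for_status()
    return resp.json()                          # the returned JSON as a Python structure
```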
[00:31:23] – Toine Michielse
Now, I used basic authorization in this sample. Please do not use basic authentication. That is wrong. I just did it because, well, to be honest, I didn't have much time to spend on this, and this was the quickest way to do it. You really want to use something like digital certificates, for instance. However, once you have that, doing things like sending a message to the console, again, this is all the code that you require to format the message. In this case, it's going to be something like REST API, blank, demo, blank, SQL code monitor, and then there's some other stuff that I chunk into the message. That's going to be sent to the syslog, and automation will be kicked off. This is really all you need, again. I already touched upon this: please do not use basic authentication. This is the way to get it going, but you want to speak to your security guys and see how you can use better methods than basic authentication. What does it look like? It would look like this. This is actually code that was running on my VM. I could show it live, but the risk of a live demo is always a bit tricky.
[00:32:55] – Toine Michielse
I just cut and pasted the output. It's a monitor that runs permanently in a session. What it will do is, at the specified interval, a startup parameter, in this case 30 seconds, excuse me, 30 seconds, it will invoke the RESTful APIs, see if there are new SQL codes, and process them according to whatever has been configured in terms of actions. Note, by the way, there's this flag, Generate SQL. If you think about it, if this helps your productivity, you might as well maximize the productivity. Let's assume that you send the email. Why not put in all the information, all the context that the DBA, or whoever is working from the email, needs to solve the issue? Might as well just get the whole SQL statement. Might as well get the context, like the time, the connection, the plan, the correlation ID, et cetera. Get everything to your heart's content and include it in that same email. Once you're automating stuff, make sure that it's to the best productivity. Here are some SQL codes that I've generated. One is just reporting with the SQL text. The other one, a negative 206, will just send an email to the DBA team with a subject, a body, whatever you want.
[00:34:33] – Toine Michielse
In the case of a negative 204, I actually trigger a WTO. I send a message to the log, and you can see here that it actually did it. At 7:23:51, there's this message here that says REST API demo SQL monitor, SQL code occurred, blah, blah, blah, the context, and that is enough to trigger any automation that is looking for some keywords like REST API or SQL monitor, stuff like that. That's another use case. I hope you like that one as well. The third use case, that is an interesting one. This is not a DB2 use case, it's IDMS, but it really doesn't matter, because the focus is on optimizing an activity. Whether that activity, that process, takes place on IDMS or somewhere else, it really doesn't matter. Mind you, by the way, I'm not trying to get you to use our products, even though I wrote all of this against our products, because I happen to work at Broadcom and it was for Broadcom customers. The real idea is to trigger your imagination and see what the art of the possible is with RESTful APIs. In this case, what happened? I was at a customer in Belgium, and this customer had outsourced their operations.
[00:36:15] – Toine Michielse
Their operators were not very skilled. They were unskilled to the point where logging on to an IDMS subsystem, what we call a CV, in itself required them to use runbooks to find the appropriate system, et cetera. One of the tasks that they were asked on occasion to do, and I have a timing on these slides that is very annoying, so I just have to go back and forth, I apologize for that, one of the things that they were asked to do on a regular basis is cancel a task. Now, to cancel a task, they have to follow a few steps and do some verification. Again, that means logging on to the right CV, the right IDMS system, and then finding the task and cancelling it. The customer I was talking to said that, on average, that takes them about half an hour, 30 minutes. Now, that's extremely long. Because let's assume that they need to cancel the task because it's filling up the log or consuming CPU like mad because it's in a loop. If that task keeps running for half an hour just because they are unable to log on quickly enough and perform that cancel, then that's half an hour of CPU down the drain.
[00:37:50] – Toine Michielse
If they're unlucky, it might even cost additional fees at the end of the month when you have to pay your software bill. They were very keen on providing a productivity boost for their operators. While we were working on this, they identified some other tasks that they would find handy as well. Now, this was more elaborate. This is, let's say, a diagram of the implementation that I created for them. Again, it all starts with a Python script. You can use anything you like. However, in this case, I needed to have an interface for the operator, this smiley guy over there. I used another open-source component called HoloViz Panel to very quickly, from within Python, be able to provide a nice graphical interface. Why graphical interfaces? Because graphical interfaces by themselves are a great way to optimize productivity. The idea was that through the Panel front-end, they would be able to reach all those CVs, all those IDMS subsystems that they would ever want to touch. That means that the Python script would have to have the capability to interact with all those CVs. For one, it would have to understand what CVs are there and provide the names, it would have to provide a means to log in to those CVs, and it would have to provide a means to get the statistics or information out of the CV, as well as ultimately cancel a task.
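For a feel of how little code such a front-end needs, here is a very small, hypothetical Panel sketch: two pull-downs for environment and CV, a task list, and a terminate button. The option lists and the wiring to the IDMS REST APIs are placeholders.

```python
# Minimal HoloViz Panel sketch of the operator front-end. In the real script
# the option lists and the button callback would be fed by REST API calls.
import panel as pn

pn.extension()

environment = pn.widgets.Select(name="Environment", options=["DEV", "TEST", "PROD"])
system = pn.widgets.Select(name="System (CV)", options=[])
tasks = pn.widgets.MultiSelect(name="Active tasks", options=[], size=8)
terminate = pn.widgets.Button(name="Terminate selected task", button_type="danger")

def refresh_systems(event):
    # Placeholder: a REST call would list the CVs available in the chosen environment.
    system.options = {"DEV": ["CVDEV1"], "TEST": ["CVTST1"], "PROD": ["CVPRD1", "CVPRD2"]}[event.new]

environment.param.watch(refresh_systems, "value")

pn.Column(environment, system, tasks, terminate).servable()
# Run with:  panel serve operator_app.py
```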
[00:39:40] – Toine Michielse
Luckily, our IDMS development team has provided RESTful APIs, so I was able to use those and surface them through the Panel front-end. Now, while I was working on this, I had discussions with the customer about what they would typically be looking at when determining whether a task really should be canceled, as part of their job. They said, well, you know what? One of the things that they would be looking at is how quickly the log files are filling up. I thought to myself, hang on, I can include that as well, because the log statistics, I have them in Prometheus. Prometheus itself also provides a RESTful API. I might as well, from the Python script, invoke that RESTful API, issue a PromQL query, this time not from Grafana but from my own Python script, and interrogate the Prometheus database (see the sketch below). Then finally, I thought to myself, these operators, as the customer communicated to me, are not deeply skilled. They need runbooks for everything that they do. Now, an operator typically would execute commands.
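A sketch of that idea, querying Prometheus directly from the script instead of through Grafana, could look like this; the metric name and the cv label are invented for illustration.

```python
# Instant PromQL query: how fast have the journal/log files been filling
# over the last 15 minutes for one CV? (hypothetical metric and label)
import requests

resp = requests.get(
    "http://prometheus.example.com:9090/api/v1/query",
    params={"query": 'rate(journal_bytes_written_total{cv="CVPRD1"}[15m])'},
    timeout=30,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    ts, value = series["value"]                 # an instant query returns one sample per series
    print(series["metric"].get("cv"), f"{float(value):.0f} bytes/s")
```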
[00:41:21] – Toine Michielse
I'll show you later that the cancel command is actually automated in this application. However, there are tons of other commands that the operator would have to issue. Now, if you're an untrained operator and you know that you need to issue some commands, then there are two things that happen. First of all, you typically have to provide parameters to the command. Then the command will give output, and you have to somehow understand what the output tells you. Now, these things are in the mind of a very skilled operator, but not in the mind of an unskilled one. They need to look them up. If you need to look them up, typically what you would do is interrogate the manual. Now, if you want to interrogate the manual, you can quickly go and search through your web browser and hopefully find the manual you need very quickly. Then you open up the manual, you go to the section that describes the command, and then you start reading. But hang on, we're talking about optimizing productivity, so why not automate that? Manuals nowadays are online, and they're typically accessible through HTTP. With HTTP, and with something called Beautiful Soup, it's very easy to scrape the data that you get back, the document that you get back.
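As a rough illustration, and assuming a documentation URL and page structure that will differ from the real ones, pulling the syntax section out of a command page could look like this.

```python
# Hypothetical manual scrape: fetch the HTML page for a command and pull out
# the syntax block with Beautiful Soup. Selectors depend on the real page layout.
import requests
from bs4 import BeautifulSoup

url = "https://techdocs.example.com/idms/commands/display-active-tasks"   # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

# Assume the syntax diagram lives in the first <pre> block on the page.
syntax = soup.find("pre")
print(syntax.get_text() if syntax else "syntax block not found")
```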
[00:42:53] – Toine Michielse
With a little bit of extra coding, you can make it an even better and more productive environment for the operator by having all those actions that he is very likely to do already automated right there. This is what it looks like. Here's the front-end that the operator is confronted with. Whenever he starts the application, he will get a screen that looks like this. What you'll see is that there are two pull-downs here: a pull-down that says Environment in the upper left corner, and below that a pull-down for the system. That's the CV; in this case, it's that IDMS system. Now, those two pull-downs replace the runbook that the operator needs to log in. He doesn't have to go through 3270 anymore, find the system, et cetera. He just says, okay, I'm looking for an IDMS system in production, or an IDMS system in development, or whatever the environment is, he clicks on the name, and he gets the data right there and then. Rather than probably spending five minutes finding the runbook and following it, it goes down to two clicks, which probably sets them back no more than 2 to 3 seconds.
[00:44:15] – Toine Michielse
That's a productivity gain right there and then. Then once the system is selected, he'll automatically be presented with these five pull-downs. These pull-downs on the right-hand side with the red bar are labeled transactions, user tasks, journals, and system tasks. They provide data that came back from that particular CV. That's data that's supportive for the operator. Now, the way that I've set this up, it's very easy to configure, but out of the box the operator gets all kinds of facilities, for instance for filtering. If you look at, let's say, the transaction pull-down or list box, you'll see that there's a header: task, task code, current program, et cetera. There's an empty box below it. That empty box is an entry field where they can type. While they type, it automatically filters. If they start typing, let's say, numbers under the DB calls header, then the way this works is that everything that is bigger or smaller than the number currently in the entry box will be filtered out. It's of no interest. If he would enter something like 400 in the DB calls, then only the second line will remain.
[00:45:41] – Toine Michielse
That, again, is productivity. He doesn't have to go scrolling through the 3270 interface, which doesn't provide this. The other thing, and remember, one of the things that I mentioned in the first use case, is context. Context that makes that particular individual that's driving it more productive. In order to be more productive, maybe some users, some SMEs perhaps that would use the same thing, like to see a lot of fields. They would like to see the number of database calls, log status, et cetera. But the operator is probably not interested in that. In a user interface like this, you can make it very easy to configure all this data. In this case, you can see that the config button here has been clicked. That opens up a radio button list for all of those list boxes that are out there. The transaction one is clicked, and you'll see which fields are shown. Well, if you want to customize it to your own liking, if you want to make it your own context, then you just unclick what you don't want to see, you click what you do want to see, you apply it, you save it, and that is enough to…
[00:47:03] – Toine Michielse
You create your own user interface. That's how easy it is. No difficult things in 3270. You just point and click, and that's it. The fact that you can just point and click and invoke all these actions, that is what creates productivity. It's all thanks to the RESTful APIs that help in doing this. If we follow the scenario that the customer wanted to focus on, the most common activity was to cancel a task. In this case, let's assume it was this user task, and I've created the situation. Here on the old, traditional 3270 screen, you'll see the task that's active, and that's the task that is to be selected by the operator. Once the operator selects it, there's some additional checking that goes on. The verification that the operator would have to do has been automated. There's feedback that says, yes, you've selected this user task, and it is cancelable. You can go ahead if you want; it's safe to do so. He'll click terminate, and once you click terminate, well, in reality there's a confirmation, but then that will result in, again, the RESTful API being invoked, and on the mainframe that task is abended, and this will be the result that you will see on the mainframe screen.
[00:48:37] – Toine Michielse
But again, the operator doesn't see this. The operator just executes his task, and he will see that the line is gone. That was the original use case. But then you start thinking about what a typical operator does, and this will apply to operators in DB2 as well as other subsystems. One of the tasks that they typically do is alert SMEs that something is wrong. Maybe they also optimize the SME's time by doing a little bit of research for them. It's not uncommon to look in the syslog for certain events that took place around the time of the described incident. If you have ever looked at a syslog, or any log for that matter, that can be quite a long data set with a wealth of information. Sometimes, as we say in Dutch, you don't see the forest through the trees. Maybe you say it in English as well, I don't know. If you are in an environment where you can use the capabilities of modern languages, then you might as well start using them. In this case, I added a panel where the operator can say, okay, I'll look for a keyword. In this case, 37%.
[00:50:07] – Toine Michielse
It's just an example. He puts this keyword in the highlight box, and you can add more keywords. The response to pressing the mark button is that those found strings will be highlighted. There's visual feedback for the operator. And once he clicks condense, then everything that is not relevant, everything that does not have a highlighted piece, will be taken out. It will be taken out, not from the log, of course, but from this view. Now, all of a sudden, all that the operator has to do is copy and paste from this data box, the data field. All the stuff that is not relevant is stripped out, and all the relevant messages that he has selected, he puts in the Jira ticket or a ServiceNow ticket or an email to the SME, whatever you use for communication with the SME. That will allow the SME to have a very direct starting point. In this case, the messages are all the same. That's because in the sample, I had only those messages. But you can imagine that you would have, let's say, the program name and the terminal and maybe some other stuff that is relevant to the case.
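The mark-and-condense behavior itself is only a few lines of Python; here is a minimal sketch with made-up keywords and a placeholder log source.

```python
# Keep only the syslog lines that contain any of the operator's keywords;
# the log itself is untouched, only the view is condensed.
keywords = ["S37", "IEC030I"]                   # example keywords typed by the operator

with open("syslog_extract.txt") as f:           # placeholder: log data fetched via a REST API
    lines = f.read().splitlines()

condensed = [line for line in lines if any(k in line for k in keywords)]
print("\n".join(condensed))                     # only the relevant messages remain
```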
[00:51:35] – Toine Michielse
Once you get rid of all the other messages that are irrelevant, you optimize the SME's time, but by having it in an interface that promotes productivity like this, where you just have to type the keywords, you also optimize the productivity of the person that's gathering the information. That's another example of how you can use a few lines of code and a RESTful API to get the log data into the environment and, again, boost another part of operator productivity. Finally, well, not finally, I also mentioned that among the things that they would look at is data in Prometheus, in this case to find log data or, in IDMS terms, journal data. This is not so spectacular. I mean, you have a slider bar where you can select the time range, the date-time range. You have a button to click to get the Prometheus data, and you have a graphical interface. You don't need to go out to a dashboard if this happens frequently enough. But more, this is the last part that I want to show you. This is an interface where the operator can click together his command.
[00:53:05] – Toine Michielse
In this case, he wants to display something. Another option would be to vary something. Then there's the command itself. Once he selects the command, the syntax help will automatically be displayed. The syntax help will be here on the right-hand side, taken straight from the online manuals, so it's always accurate. Then once the operator clicks execute, he will get the output from the command. Again, a RESTful API is used to execute the command and provide the output. Another API is invoked to get the relevant data from the online manual and parse it so that it matches what the operator sees. You have the syntax there, display active tasks. Well, he already knew that. In this case, there are no parameters added. But if there were parameters, he would have a description of what those parameters are. What's more, down below, I cannot scroll the slide, unfortunately, there would also be an explanation of what the output is. The content of this is not so important. What is important, what I tried to express to you, is that using RESTful APIs and a little bit of Python coding, you can gain so much productivity in so many different use cases.
[00:54:40] – Toine Michielse
I hope that I was able to express that. We're almost at the end of the hour. I'm not sure if you guys can unmute yourselves, but if you can, I would be willing to take any questions you have. The first question that always comes up is, who is that? That's my daughter when she was three months old. No, she was five, six months old. She's now 18, studying at university, but that's beside the point. That question is answered. Is there anyone who has questions?
[00:55:21] – Amanda Hendley
Feel free to come off mute or type your question in. I know someone asked me directly about the deck, and it‘ll be available
[00:55:32] – Toine Michielse
Yes. Immediately after the presentation, I will send you the PDF, Amanda, and you'll have that. But what's more, this is a live thing. I would really love to hear from you guys: any feedback you have, questions that came up. Most of all, I'm sure that these use cases that I worked on, I have the feedback from others that they're really nice and that they illustrate the capabilities and how you can use them for your productivity. But surely you have other ideas where you could use RESTful APIs, or maybe you think you might be able to use RESTful APIs. If you do have other ideas or you want to discuss, just let me know. Here's my email address. You can send me an email. I'm more than willing to jump on a Zoom call with you if you want to discuss. It's up to you. With that, I conclude the presentation and give it back to Amanda.
[00:56:32] – Amanda Hendley
I just realized I wasn't sharing my screen when I thought I was. Let me give it a share. Well, thank you so much for that session. The chat is still open. If there are any questions, or if you want to raise your hand, I think I'll be able to see that, to ask a question. But I'll go through the rest of my slides for today. So thanks for joining us. I want to say thank you to our partner IntelliMagic for sponsoring this session. A couple of articles and news items for you. I don't know if any of you are subscribers, but the most recent release of the Cheryl Watson Tuning Letter was today. An email went out just before we launched, which is why I couldn't share this before we launched. But there are a couple of pieces in there that you might find interesting if you are a subscriber, or, obviously, you could become a subscriber as well. In particular, the analysis of the Telum II announcement is in there. Then IBM unveils DB2 12.1, so a lot of details in there. You've probably read that, I'm guessing, if you're on here, but just in case you haven't, take a look.
[00:58:01] – Amanda Hendley
As always, a reminder to check out jobs.planetmainframe.com if you are looking for something new. Get involved. Hit us up on social media. We're on LinkedIn primarily, but also Facebook and X. These videos get posted on the virtualusergroups.com page as well as YouTube, so you'll have a chance to check us out in both places. I'll have the speaker announcement in just a week or so, but save the date for our next session on 1/21/25. With that, I guess I won't see you all until the new year. I hope you have a wonderful holiday season, and see you all next year.