Virtual CICS User Group | January 2025
Ansible Automation Platform in action, provisioning z/OS middleware with the latest CICS TS collection
Drew Hughes
Software Engineer – CICS Modernization
IBM
Andrew Twydell
Software Engineer – CICS Modernization
IBM
Discover the power of Ansible Automation Platform as it transforms z/OS middleware provisioning with the latest CICS TS collection. This session dives into practical use cases and demonstrations, showcasing how Ansible simplifies and accelerates middleware management on z/OS. Attendees will gain insights into leveraging the CICS collection in Ansible Automation Platform for streamlined automation workflows, enabling operational efficiency and consistency. Join us for an engaging exploration of automation best practices tailored for modern mainframe environments.
Read the Transcription
[00:00:00] – Amanda Hendley
Welcome to today's session, everyone. Thanks for joining us. I assume you are all getting settled. My name is Amanda Hendley. Glad to have you here for today's virtual CICS user group session. We've got the chat open. If at any time I become garbled from my internet connection, I'll cut my video, but I want to go ahead and get us started so that Drew and Andrew have plenty of time to present for us today. Again, thank you for coming. This is our user group session for CICS. We also have Db2 and IMS, and those are available in alternating months. So check those out at virtualusergroups.com. Today, we have our presentation. We're going to do some Q&A afterwards. We'll talk about any news or articles that have come out that you should keep an eye on, and I'll give you the date for our next session. With that, I want to thank our partners; Broadcom and DataKinetics are our sponsors for this user group series. We have availability, so if you know of a company that should partner, that might want to occasionally share some of their own expert insights as well, you can reach out to me. I'm easy to find: amanda@planetmainframe.com. We will be at SHARE next month. I'm really excited to be there. SHARE is a fun show, and we've got a lot of really fun stuff that Penny and I are planning for our next booth there. I hope to see you. That's where we'll kick off our influential mainframer nominations, so keep an eye out for that. We'll also release the 2025 Arcati Mainframe Yearbook. There'll be lots of fun for our audiences, too. After today's session, we have a very quick exit survey. It's two questions; as you're leaving, you can just press the answers. It's basically what you thought about today's session. We'd love your feedback. Then for Q&A today, you can leave your questions in the chat and we'll be answering them in real-time. Drew and Andrew both are going to be presenting today.
One will be able to monitor our chat while the other is presenting. It’ll be a great opportunity for you to get your questions answered. Then I’m sure we’ll have a little bit of time at the end if anyone needs to come off mute or has anything more in-depth they’d like to talk about.
[00:02:45] – Amanda Hendley
Now, I'm going to stop my screen share so that our presenters can start theirs. I'll tell you a little bit about our presenters, if I can pull up my notes. Our session today is by Drew Hughes and Andrew Twydell. Drew is a developer on the CICS Modernization team, working on a variety of products in the CICS portfolio, including the CICS Ansible collection, CICS Explorer, and the CICS bundle plugins. Andrew is a software engineer on the CICS Modernization team at IBM. He focuses on z/OS and CICS technologies and works on other innovative projects within the mainframe ecosystem. With that, I'm going to turn it over to you and let you take it away.
[00:03:30] – Andrew Twydell
Awesome. Thank you very much. That introduction makes my job sound very exciting, so thanks for that. Yes, Drew and I are going to talk through Ansible Automation Platform in action, and how we can use some existing collections to provision CICS and interact with it using Ansible playbooks. But first, we're going to talk a little bit about the Ansible basics. There are people on the call that might know what Ansible is, and there are people that might not, so we're just going to do a very quick dive into what it is. Ansible is an automation technology; it's an industry-standard configuration-as-code tool used for automation. Their tagline, which I've put on the screen here, is turning tough tasks into repeatable playbooks. Anything that is tedious that you're doing manually, or tasks that you have to do over and over again, Ansible is a great option to automate. It allows for configuration or management of systems, and you can actually target multiple systems at once as well. We're going to do a demo using…
[00:04:52] – Andrew Twydell
We're going to do a demo that targets… no, we're not, sorry. But yes, Ansible allows you, if you're making changes to multiple systems and those changes are going to be the same, to perform those changes in parallel, which is a big time saver. Some common uses for Ansible, although by no means is it confined to these tasks: system provisioning is a great example, and we're going to look at that shortly, bringing up systems and taking systems back down in an automated way. Installing applications pairs nicely with the first example there as well: installing applications into systems so that they're ready to be used. This could be in a self-service manner, so a developer could provision their own system with applications already installed, and then bring it back down again when they're finished, all automated. That means it's reliable and repeatable, so you don't get a clash of environments, or get the config slightly wrong so that the environment isn't brought up in the same way. Managing users and updating certificates, tasks like that, can be tedious and repetitive. A lot of the time you're doing the same thing: if you're managing a user, it's going to be the same action regardless of the user. Automating that with a playbook is a great example of where Ansible can be used. And Ansible doesn't have to be a manual process either. CI/CD pipelines are a great way to use Ansible as part of your automation. You can pick specific parts of your pipelines and use Ansible there. It's a good way to gradually adopt Ansible rather than going all in.
[00:06:56] – Andrew Twydell
So what makes Ansible so popular? It's a really good way to standardize your automation tooling across your organization. It's very likely that somewhere in your organization Ansible is already being used, because it's easy to pick up. And as I said, you can do little pieces of automation with it and build up your collection as you go. It can also be used on lots of different platforms. We're going to be targeting z/OS today, but it works against Linux, and it works on your local machine, which means it's one tool to be used across your entire infrastructure. It can also be used against network switches, which I always think is a nice fact, if your network switch allows SSH.
[00:07:43] – Andrew Twydell
There's a lot of existing expertise when it comes to Ansible because of its popularity, which means finding people that know how to use it is easy, and the documentation and the help in the community are easy to find. And it's extensible. So if you're trying to do something that Ansible can't do easily or nicely, you can create something yourself. You can develop your own modules or build up your own playbooks to do a task, and share them amongst your organization as well. But the biggest winner is that it's configuration as code, actually realized. If you want to store your infrastructure as code and your configuration as code in some source control like Git, Ansible allows you to do that. That means you can reliably bring your infrastructure up and down, and manage it with history. So, it works via SSH. That's not always true, but the majority of the time it is an SSH connection that you need. Your control node is where you're running your Ansible from; today, that's going to be my laptop and Drew's laptop, or Ansible Automation Platform, which we will come on to. That is your controller, and that is where you're triggering the automation. And then there's your managed node: the system, or systems, you're automating. In our case, that's going to be either your laptop again or a z/OS system that we're pointing to, and it's SSH that handles that connection. It ships little packets of Python code over to your managed node, executes them, and gets a response back.
[00:09:47] – Andrew Twydell
So we're going to jump straight into a demo showing how you can build up an Ansible playbook. It's going to be a very, very simple playbook to start off with. But I'm hoping you can now see VS Code. Cool. So a playbook is the list of tasks that Ansible is going to perform. It's written in YAML, so a .yml or .yaml file extension. And you start by giving your playbook a name: so, Ansible 101. You then need to say where this piece of automation is going to run. For this first example, we're going to run it on localhost, so this piece of automation is going to run on my laptop. And finally, what we'll do is put in a list of tasks, so things to perform. Each task also has a name, so we're going to do 'create a file', nice and simple.
[00:10:56] – Andrew Twydell
And you can see, if I type ansible.builtin., you can see all of these tasks that are pre-built into Ansible. All of these do a specific thing, and I'm not going to go through each and every one. We're going to pick ansible.builtin.file, because we are creating a file. It gives me some nice options that I can use here. We're going to start by giving a path, so where we want to create the file, and we're going to put it alongside the playbook, playbook_dir being the directory the playbook lives in. We are going to give it a mode, so the permissions that we want set. You can also hover over these and it gives you nice descriptions of what those are. And we're finally going to give it a state. Oh, this just gives you all the options. So this is the state in which the file will be: absent will make sure the file does not exist; we're going to use touch, so we're going to create the file. I'm now going to go down to the terminal, type ansible-playbook, and give it my playbook name.
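Pieced together, the playbook being typed on screen looks roughly like this sketch, reconstructed from the narration; the file name and mode below are illustrative rather than verbatim from the demo:

```yaml
# ansible-101.yml: the first playbook built up in the demo (file name illustrative)
- name: Ansible 101
  hosts: localhost
  tasks:
    - name: Create a file
      ansible.builtin.file:
        # playbook_dir is a built-in variable: the directory this playbook lives in
        path: "{{ playbook_dir }}/hello.txt"
        mode: "0644"
        state: touch # change to 'absent' to delete the file again
```

It would be run from the same directory with `ansible-playbook ansible-101.yml`.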
[00:12:18] – Andrew Twydell
And it's going to run through, create a file, and there it is. We've created a file, we've run a playbook. We can now change the state to absent and run it again, and we will delete it. So we have just built our first playbook. Now, what if you want to do this on z/OS? You have to tell it somewhere to run that's not localhost, and that's where inventories come in. I've got an inventory file here. An inventory is a list of hosts that you want to target, and this is where you could specify multiple hosts if you wanted to run this against multiple systems at the same time. I've got one host in here called zOS, and I've given it some standard information like the host, my user, and then some environment variables that are required to run some of the z/OS collections. They're not too important though; we can touch on those a bit later. This is important, though: the name of the host. In our playbook, rather than run it against localhost, we'll run it on zOS, and we will create a file under my user… usergroup… We also need to give it those environment variables I mentioned.
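The inventory file on screen has roughly this shape; the host address, user, and paths below are illustrative, since the exact environment variables (Python and ZOAU install locations) are site-specific:

```yaml
# inventory.yml: one z/OS host entry (all values illustrative)
all:
  hosts:
    zOS:
      ansible_host: myhost.example.com
      ansible_user: myuser
      # The z/OS collections need Python and ZOAU on the target; paths like
      # these are typically supplied as variables and fed into 'environment:'
      pyz: /usr/lpp/IBM/cyp/v3r9/pyz
      zoau: /usr/lpp/IBM/zoau
      ansible_python_interpreter: "{{ pyz }}/bin/python3"
```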
[00:13:42] – Andrew Twydell
The environment… and provide the inventory in our CLI command. Now, presuming I've not missed anything, which is possible, we will create a file. Now, I'm going to switch over to this Zowe Explorer view here to list files under my usergroup directory. Aha, state is absent. Well done to anybody who spotted that. I will run it again with state as touch. And if I refresh, there is our file. Fantastic. Cool. So now we're running on z/OS. However, Ansible doesn't have built-in support for creating data sets. So what if we want to create a data set? We're going to do 'create a data set', and we're going to use a collection. Collections are the way you extend Ansible. And there is a collection called… this is loading very slowly. Excuse me, I'm going to reload the window. Apologies. There we go. So it's ibm.ibm_zos_core, and it has a lot of functionality for interacting with things on z/OS. We're going to use ibm.ibm_zos_core.zos_data_set. It creates, deletes, and sets attributes of data sets. We're going to use it to create one. So we're going to do twydell.usergroup.something.hello, and we're going to give it a data set type of sequential.
[00:15:50] – Andrew Twydell
And we're going to give it a state of present. So we want this data set to be present and we want it to be sequential. Let's run it again. And we have passed; I refresh the view, and there is our data set. Again, one more: I'm going to change the state to absent, run it, and it will delete it. So using this collection, you can interact with z/OS resources. They have a bunch that I'm not going to go through. I could populate the data set, but I don't want to go over time, so we will switch back. So there is a growing set of collections for z/OS on Ansible Galaxy. And Ansible Galaxy is the repository where Ansible collections live. I like to compare it to Java having Maven Central, or Node.js having npm, the Node package manager; Ansible has Ansible Galaxy. So this is the IBM z/OS Core collection. It gives you foundational experience with data sets, APF, transferring data, operator commands, that kind of thing. They're also part of the Red Hat Ansible Certified Content for IBM Z. So that means they're also available on Ansible Automation Platform, which we're about to look at, as well as some other collections on the right-hand side here, including IBM z/OS CICS.
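The data set task demoed a moment ago looks roughly like this sketch, with the DSN as read out in the demo:

```yaml
# Create (or delete) a sequential data set with the z/OS core collection
- name: Create a data set
  ibm.ibm_zos_core.zos_data_set:
    name: TWYDELL.USERGROUP.SOMETHING.HELLO
    type: seq
    state: present # 'absent' deletes the data set again, as shown in the demo
```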
[00:17:30] – Andrew Twydell
A couple of key terms that we've just spoken about. So Ansible, that's the tool that we've just used. It's the runtime that executes the automation, driven by the ansible-playbook CLI. And the playbook is the list of tasks, written in YAML, that you want to implement. A collection is the extension to Ansible, and you install it from Ansible Galaxy, which is that online repository of those extensions and collections. So talking about CICS, what can we use Ansible for in a CICS world? The CICS collection for Ansible is split into two, really: it gives you the ability to interact with CMCI to get CSD resources and manage those, as well as CICS provisioning. We're going to talk about it in two separate worlds. It is being developed in the open, in open source. So that is the link on GitHub there, and it's available to download on Ansible Galaxy there. The CICS collection is also part of that Ansible Certified Content for IBM Z. The first half is the CICS CMCI tasks. They're available in all versions less than 2.1, which will be clarified later when Drew talks about versions greater than or equal to 2.1. And they use CMCI on the target node, so on z/OS. So picture this: in a land far, far away, a system programmer needs to capture the state of some CICS resources, maybe to compile into a report, or to take a daily snapshot of the regions, or something like that. This is possible through CICS Explorer, but it would be a dreadfully slow manual process of searching and getting the right data out. So let's try to automate this using Ansible, and this is actually one of the samples in our samples repository. So we're going to jump over there very quickly. This is our samples repository, which gives samples on how to do certain things with CMCI. I'm going to go through this reporting sample, but there are plenty of others about deploying programs, restarting bundles, that kind of thing. But the reporting sample looks something like this. So again, this is a playbook.
We've given it a name. We're telling it to run on localhost. We're then prompting for variables to use. So this is like an interactive playbook, where it'll ask you for the host and port and things like that that you want to run it against, rather than using an inventory for variables.
[00:20:43] – Andrew Twydell
These are the attributes that we want to pull from CMCI, and as we go down, our tasks include installing some dependencies so that the playbook can run, actually calling this IBM z/OS CICS cmci_get task to get some information, and then we provide all of the values supplied by the user. And the resource type we're pulling is CICS region. Then we format it into a template and we output a report.csv file. So I'm going to run this report.yml. It's going to prompt me for my information. I've defaulted everything so I don't have to type it live and embarrass myself. But: host, port, scheme, so information about the host; the CPSM context, so in this case I'm querying a specific plex; and then username and password. What that will do is install dependencies, pull from CMCI, and create this CSV report, which has now appeared. And this is information about our CICS regions with the attributes that we specified. So think about doing that with CICS Explorer, as an example of another tool we could use. That would take a little while, maybe not hours, but longer than running that playbook.
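Stripped down, the core of that reporting sample looks something like this sketch; the port and context values are illustrative, and the real sample in the repository goes on to template the result into report.csv:

```yaml
# Reduced sketch of the CMCI reporting sample
- name: CICS region report
  hosts: localhost
  vars_prompt:
    - name: cmci_host
      prompt: CMCI host
      private: false
    - name: cmci_user
      prompt: User ID
      private: false
    - name: cmci_password
      prompt: Password
      private: true
  tasks:
    - name: Get CICS region details over CMCI
      ibm.ibm_zos_cics.cmci_get:
        cmci_host: "{{ cmci_host }}"
        cmci_port: 1490 # illustrative port
        cmci_user: "{{ cmci_user }}"
        cmci_password: "{{ cmci_password }}"
        context: MYPLEX # the CPSM context (plex) being queried
        type: CICSRegion
      register: regions
    # the sample then formats regions.records into a report.csv via a template
```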
[00:22:07] – Andrew Twydell
We can go and say, okay, I also want the SYSID for those regions as well. Run it again. Our report CSV will update with this. Oh, sorry, I need to provide all my information. And the SYSID will also be pulled for those regions. There we go. As I said, this is one of many samples, so go check them out. But this is a good way of automating pulling information from live systems. Let's find where we were. Right. Drew, should I hand over to you?
[00:22:50] – Drew Hughes
Yeah, sounds good. Just got to find the buttons. I've chucked the links to that sample that Andrew just ran into the chat as well, so if you do want to go check those out, have a look in the chat. And hopefully I can start sharing. Okay, hopefully you can see my slides. Yeah, so Andrew's just done a great overview of what Ansible is and some of the content that we've had out in the CICS collection, which we both work on. We've had that CMCI content out for, I think it's coming up to four years now. So it's been around for a while, it's had some good user testing, and we've had some good results from it. So, some of the new stuff that we released: back in June last year, we released all these modules here. I won't give you too many guesses as to what each of those modules is probably doing; they've got some quite descriptive names. But basically, the new content we released is focused around provisioning different CICS regions, whether that's multiple regions with a shared CSD or just singular regions by themselves. So no CICSplex, just single regions, essentially.
[00:24:01] – Drew Hughes
I've got a couple of other modules there for creating some startup JCL for the regions, and another one for stopping regions as well. So a couple of automated actions, as well as creating CICS-specific data sets, was the main focus of this content. And this was all released in 2.1.0. So any earlier releases than that were purely CMCI content. With this new version of the CICS collection release, we have both the provisioning content and the CMCI content in the same collection alongside each other, so you can use both interchangeably. For example, you could provision a new region, add some CSD resources, and then, if that region has CMCI enabled through SMSS or SMSSJ, you could use the CMCI content on top to interact with the region as well. A quick point about the version number, just in case you have seen the CICS collection before: we did do a major bump around, I think it was May last year, where we deprecated Python 2 support on the controller. So for our demos, that's mine and Andrew's laptops or AAP, but it could be a build system; it could be quite a few different things.
[00:25:15] – Drew Hughes
It's basically anything that's not z/OS; that's the best way of describing it. On that platform, we've dropped support for Python 2. We've also introduced a new dependency purely for the provisioning content. So this doesn't affect the CMCI content, just the provisioning content, the new stuff. We now have a dependency on IBM z/OS Core, and on ZOAU and Python for z/OS actually on the z/OS target node. A little bit about that: with the latest version of z/OS, that's now available under a no-cost license, which makes it a lot easier for customers to get. But definitely check out this announcement if you haven't seen it; I think it came out quite a few months back. But yes, all these new modules now have that dependency on Python for z/OS and Z Open Automation Utilities, and it's in the documentation site. So if you have a look at installing this, if you want to use it, it's fully documented there: which versions you need, which compatibility you need, and what we support at the moment. So definitely take a look at that if you're interested in using these. So I think I'm going to jump into the CSD module as a good example of one of these data set modules.
[00:26:29] – Drew Hughes
So lots of these data set modules, of which there are quite a few, like aux temp storage, aux trace, CSD, global catalog, local catalog, local request queue, TD intrapartition, transaction dump, are all around creating CICS data sets. We'll look at the CSD one, but lots of the things I'm about to show you apply across all of them. Just bear that in mind as I do the demo. Let me make it a bit bigger. Okay, hopefully you can see that. Similar to Andrew's playbooks previously, we have one here. I've called it csd.yml. Pretty simple. I've given it a name at the top that talks about provisioning CICS data sets; I'm going to reword that in the end, because that doesn't actually all happen here. All this is doing is a singular task to come along and create a new CICS data set at this DSN. We have a few parameters under this CSD task. As Andrew previously explained, this is a collection, an Ansible collection; this is actually the CICS collection. And then .csd is a module within the CICS collection. The collection is a shippable, installable unit of Ansible, and the module is just one of the things inside that shippable unit.
[00:27:55] – Drew Hughes
So yeah, just a CSD module. All this does is come along and create me a new initial CSD. That doesn't mean just an empty VSAM data set: we come along, create a VSAM data set with the right attributes to be a CSD, and also run a CICS program to initialize that CSD to make it ready for use with a CICS region. Let's go ahead and run it. I was going to turn off all the verbosity, but actually I'm going to stick one on, because I want to show the output. With the ansible-playbook command, these -v's at the end are verbosity levels, really useful for debugging and just for seeing what's actually going on on your host, because you're running code remotely over an SSH connection. So if anything goes wrong, it's really easy to see the debug logging if you're running this with a higher verbosity. I normally recommend running it with at least one -v, for example. Again, just like Andrew, I'm going to run this over this inventory. So exactly the same LPAR, actually, I think, just a different username for my user.
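The csd.yml being run is roughly this shape; the DSN and SDFHLOAD library below are illustrative, and parameter spellings are worth checking against the collection documentation:

```yaml
# csd.yml: allocate a VSAM CSD and initialize it with DFHCSDUP
- name: Provision CICS data sets
  hosts: zos
  tasks:
    - name: Create CSD
      ibm.ibm_zos_cics.csd:
        state: initial # allocate the data set and run the initialization step
        region_data_sets:
          dfhcsd:
            dsn: HUGHES.USERGROUP.DFHCSD
        cics_data_sets:
          sdfhload: CICSTS61.CICS.SDFHLOAD
```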
[00:29:07] – Drew Hughes
That took a little bit more time than the CMCI content, because we've got to do a few more different things. But we've gone ahead, and we should have created a CICS 6.1 CSD, because that's based on this SDFHLOAD, which is a path on my managed node, on my z/OS machine. That should be an actual data set, and it must be, otherwise this wouldn't have worked. But yeah, we get this big long yellow output here. I'm not going to go into this, because if you're familiar with CICS you'll probably recognize what it is. Basically, we create this consistent output across all the data set modules, where we have this big list of executions. There are two items in it; one of them is massive, so it's taken up all my terminal space, and the other one is a bit smaller. Basically, we go ahead and run quite a few different actions on the managed node. We use the IDCAMS program to create the VSAM data set. We then run a CICS program called DFHCSDUP, which gave all this output, to initialize the CSD. Then we go ahead and do some checks using TSO commands, just to check that the data set has actually been created and is there.
[00:30:25] – Drew Hughes
Otherwise, something's gone wrong. We also return these nice… we should have an end state as well, right at the top. But we have start state and end state, so you can code against these to make sure that each task in your playbook succeeds before you then go and do another task. So, for example, I could have another task here. I'm going to have to actually get this right off the top of my head. Let's just do 'output result' or something. And then we can use one of those nice built-in Ansible tasks. There's one called debug, which is really handy: it gives you a message. And then in here we can go… Let's actually save this.
[00:31:12] – Andrew Twydell
So… what is it? Register. Yeah, here we go. There we go.
[00:31:18] – Andrew Twydell
So we can take this variable here, result, and then we can chuck that into a debug like this. And that will pretty much do the same thing as having the -v, where it will just output that return structure. It'll take a second. Oh yeah, we should probably also have mentioned: Ansible has this thing called idempotency, which basically means you can continuously run this thing with a desired state. Our desired state here is initial. It could have been absent, to delete the data sets, but basically the playbooks are repeatable. If you run them twice, you should get the same output. You would expect something to not change if your data is already in the state that you're trying to achieve. The CSD is a little bit of a weird one, where we do a couple of actions even if it does already exist, so we won't see changed not be true. But with something as simple as a simple file or a simple data set, you should be able to run a task twice and first see it changed, and then the second time see it not changed, because nothing's actually happened; Ansible didn't need to do anything.
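The register-and-debug pattern being typed looks something like this sketch; the data set names are carried over from the demo and illustrative, and the exact keys in the return structure are best checked by running with -v:

```yaml
- name: Create CSD
  ibm.ibm_zos_cics.csd:
    state: initial
    region_data_sets:
      dfhcsd:
        dsn: HUGHES.USERGROUP.DFHCSD
    cics_data_sets:
      sdfhload: CICSTS61.CICS.SDFHLOAD
  register: result # capture the module's return structure

- name: Output result
  ansible.builtin.debug:
    msg: "{{ result }}" # or drill into a key such as the start state
```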
[00:32:27] – Drew Hughes
You can see I've got this output this time, and I should be able to access one piece of it by doing something like result.result. And then, because it's a Python dictionary, you should be able to just do something like start state, for example, and access just that piece of information. Yeah, really handy, and completely built into Ansible. Let's move on a little bit and talk about a couple of other features. Something else you can do with the CSD module, which is quite cool, is that although you can just allocate and create an initial CSD, we also give you the ability to run arbitrary CSDUP commands directly against the CSD, so you can extend it with your own definitions. This one, for example, is adding the terminal definitions supplied in the CICS defaults (I forget which group it is) that you get with your CICS installation, but which aren't added to DFHLIST, so they're not auto-installed for your region. So if you did want to use the sample console definitions, you could add them to a list automatically, and then just add that to your group list to auto-install them when you start your region.
[00:33:47] – Drew Hughes
But you could also do other stuff here, like actually defining resources, for example. So let's get out of PowerPoint and go back to VS Code. I've got a CSD part 2 here, where I've made a couple of other changes. I roughly have the same CSD task; the only thing I've done this time is use a Jinja template to specify the APPLID, so you can do that inline, doing stuff like this. Jinja support is fully built into Python, which is fully built into Ansible, which is really handy. So then I can just have a top-level vars to reference my APPLID everywhere. And then I've got an inline CSDUP update here where I'm adding those console terminals, and I've added a couple of definitions to this DFHLIST1, so I can easily install them by putting them in my group list for my region startup. I've also got a couple of journal model dummy definitions in here, so this will actually create those definitions. So I can go ahead and run this one in a very similar way, just changing the name of the playbook. We'll run CSD part 2 this time.
[00:34:53] – Drew Hughes
It'll actually go ahead and create the CSD as the first task, and then in the second task it will add these resources to that newly initialized CSD. Which means, in theory, if you run this twice, what we'll do is create a fresh CSD, then add those definitions; and then if you run it a second time, we'll revert that CSD back to an initial state, then add the definitions again. So you wouldn't end up with duplicate definitions. You would just end up with this state: a CSD with these definitions. Yeah, that's run nicely. You can see we've also inlined Jinja directly into the CSDUP scripts, which is really handy for doing stuff like referencing everything in the same group. So I've got this group up here called MYGROUP, and then I'm just adding this group to DFHLIST1 as well, to simplify adding it to my group list. It's really easy to do stuff like this. Previously, you would have had to do this with, what, a JCL template maybe, and have that JCL actually on z/OS. This way I can have it in an Ansible YAML playbook.
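A sketch of the 'part 2' playbook, with the APPLID templated via Jinja and an inline DFHCSDUP script; the script parameter names (input_location, input_content) are from memory and worth verifying against the module documentation, and all data set names are illustrative:

```yaml
# csd-part2.yml: templated APPLID plus an inline DFHCSDUP update
- name: Provision CICS data sets
  hosts: zos
  vars:
    applid: MYAPPL
  tasks:
    - name: Create CSD
      ibm.ibm_zos_cics.csd:
        state: initial
        region_data_sets:
          dfhcsd:
            dsn: HUGHES.{{ applid }}.DFHCSD
        cics_data_sets:
          sdfhload: CICSTS61.CICS.SDFHLOAD

    - name: Add definitions with DFHCSDUP
      ibm.ibm_zos_cics.csd:
        state: changed # apply a CSDUP script to the existing CSD
        region_data_sets:
          dfhcsd:
            dsn: HUGHES.{{ applid }}.DFHCSD
        cics_data_sets:
          sdfhload: CICSTS61.CICS.SDFHLOAD
        input_location: INLINE
        input_content: |
          DEFINE JOURNALMODEL(DUMMY) GROUP(MYGROUP) TYPE(DUMMY)
          ADD GROUP(MYGROUP) LIST(DFHLIST1)
```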
[00:35:57] – Drew Hughes
I can still template it, but template it in probably a lot easier way, and then check it into a source code management system like Git, do code reviews on it, and store all my infrastructure in one place. Really handy. One thing I will point out: we did a couple of optimizations within the CICS collection which are probably worth talking about, because they're in all our samples. You might notice that, for example, the CSD in both of these is exactly the same. That whole region data sets key is the same, and the CICS data sets key is also the same, because I'm pointing to the same CICS install. So what we added support for was this thing called a custom module defaults group. Ansible has this thing called module defaults, which essentially applies those parameters across all the modules that opt into that defaults group. So what we can do is reference the group. I've got to remember it off the top of my head. I might actually have to look at the slides. Luckily, I do have them in here. Yeah, there we go. It's group slash… I always get the syntax for this wrong.
[00:37:09] – Drew Hughes
Yeah, group slash ibm… yeah, cool. Okay. So you reference the namespace, ibm, then the collection, then region. And by default, all of our data set modules opt into this group. The CMCI ones don't; the CMCI ones actually have their own group, which is really handy as well, so you can share all the CMCI parameters across the CMCI group. So this is the region group. And then we can literally just come along, grab these, remove them from here, and paste them over here. And I think that one needs tabbing back. And then again, likewise, I can remove the duplicate definitions from over here. Yeah, that's simplified my playbook nicely. That's not tabbed right. Okay, it's potentially still running. Interesting. All right, okay. Ansible is smart enough to work it out, even with the bad tabbing. Nice. It's handy. Run that again, just to make sure. Cool. Anyway, the only downside to this currently is that the Ansible VS Code extension is not quite as up to date as the Ansible Core features. This was a new thing in Ansible Core 2.15, I believe, which is the lowest end of the supported versions these days, which is good.
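With the custom module defaults group, the shared parameters move up to the play level, something like this sketch (group name as shown on the slide; data set names illustrative):

```yaml
# Shared parameters applied to every module that opts into the 'region' group
- name: Provision CICS data sets
  hosts: zos
  module_defaults:
    group/ibm.ibm_zos_cics.region:
      region_data_sets:
        dfhcsd:
          dsn: HUGHES.USERGROUP.DFHCSD
      cics_data_sets:
        sdfhload: CICSTS61.CICS.SDFHLOAD
  tasks:
    - name: Create CSD
      ibm.ibm_zos_cics.csd:
        state: initial # picks up the defaults above
```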
[00:38:39] – Drew Hughes
So on a supported version of Ansible, you should be able to use custom module defaults like this if collections have created them for you. And yeah, it massively simplifies this. Where that leads us is, if we started introducing more data sets... If you can think of it, what are some other data sets we need to create a CICS region? I'll start with one obvious one: you need a CICS global catalog, for example. Let's make the global catalog data set. Again, we have a module for this in the CICS collection, and you can actually see all of them here. So let's go ahead and use the global catalog module. We'll give it state initial, because we want to create an initial global catalog. And what we can do is go DFHGCD up here, give it a DSN, and we'll give it a very similar DSN, but I'll just change the end: instead of CSD, we'll go GCD, for the global catalog. And then I can go ahead and run this, and it will now use those region data sets, but only the ones that apply to it. So it only uses this one. It will ignore the CSD one because it doesn't care.
[00:39:54] – Drew Hughes
The global catalog module doesn't need to know where the CSD is. But it will use the same CICS install to run the global catalog program. That runs DFHRMUTL to make sure the global catalog is in an initial state. You can change this to make a cold-start global catalog or a warm global catalog as well. There are a couple more states for the global catalog, and it's a bit heavier on what those states actually represent, because it's quite important to how your CICS region is going to start up. If you need to do a cold start or warm start of a region, you can use the state control in the global catalog module to change a lot of that information. So yeah, it's gone ahead, created the global catalog, created the CSD, and added the resources — which we should probably actually prove it has done. So I'll open up a terminal session quickly and just make that a bit bigger. It's taking its time to log on. There we go. So I'm going to use dls, which is part of ZOAU. The dls command is used for listing data sets; it's something built into Z Open Automation Utilities.
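A sketch of the global catalog task being described, again assuming the parameter names from the collection docs (`region_data_sets.dfhgcd.dsn`, `cics_data_sets.sdfhload`) and hypothetical data set names:

```yaml
# Create an initial global catalog. Under the covers this runs DFHRMUTL
# from the SDFHLOAD of the CICS install. Other supported states include
# cold, warm, and absent, controlling how the region will start up.
- name: Create an initial CICS global catalog
  ibm.ibm_zos_cics.global_catalog:
    state: initial
    region_data_sets:
      dfhgcd:
        dsn: "IYK2ZPZ7.DFHGCD"          # hypothetical region HLQ
    cics_data_sets:
      sdfhload: "CICSTS.CICS.SDFHLOAD"  # hypothetical install HLQ
```

With the module defaults group in place, the `region_data_sets` and `cics_data_sets` blocks can be omitted from the task entirely, as shown in the demo.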
[00:41:09] – Drew Hughes
But yeah, it's really handy; it's what the CICS collection is using under the covers, and what the IBM z/OS core collection uses under the covers, to actually run all the interactions with z/OS. So it should just be this plus the APPLID, which is IYK2ZPZ... I think it's seven this time. I should stick a star on that. There you go. Yeah, cool. So you can see I've got a global catalog and a CSD on here. So let's switch back. We're talking about these tasks, talking about this optimization. The last optimization we did: you might notice, if you've ever used the sample CICS programs that come with the CICS install to create region data sets, they all use the same terminology when referring to global catalog data sets or CSD data sets. We always default them as DFHGCD or DFHCSD. And we normally find that customers tend to use these, because really lots of people don't actually care what the actual data set is called. They'll care about the high-level qualifier that your region is going to be created under, but you don't normally care if it's called DFHGCD or GLOBALCAT or something like that.
[00:42:32] – Drew Hughes
It doesn't normally matter to someone. If you're one of those people, you can use another handy optimization and simply replace this with a custom template we have. What we'll do is put them under the same high-level qualifier, so IYK2ZPZ... here we go, the APPLID. Actually, use the variable, don't use the value. And then this is different: it's similar to Jinja syntax, but it's a different form of syntax so it doesn't clash with the Jinja templating. But yeah, we do this where you can just go data, set, name. I think that's right; it's going to moan at me if it's not. But yeah. Yeah, cool. Nice. This will go along and auto-fill those values. So this is equivalent to using DFHGCD or DFHCSD. It's a nice little optimization. Okay, and we have the last one for CICS data sets as well, which is handy when you need to refer to more data sets within your CICS install. For example, if you're talking about your CICS license file, SDFHLIC, or something like the SDFHAUTH one, because you need to use those data sets as well as SDFHLOAD.
[00:44:03] – Drew Hughes
Again, we support a template here, which is really handy. So we can simply come along and go like this. And this one is lib_name, I believe. So for SDFHLOAD, that'll get templated as this path: SDFHLOAD. Another nice handy little optimization to keep those playbooks really small. So whenever you refer to data sets — if you only care about the state of the data set you're trying to make, if you're just trying to make initial ones or trying to delete ones — you can use this really handy module defaults group and just refer to them like this. So, going back to the slides: yeah, I spoke about this templating that we built in. This is all documented really well, too. If you got a bit lost in any of that, or I didn't explain any of it particularly well, check out the documentation. We've also got lots of samples that use it, which is the main point. The last module that's a bit different — well, one of the other modules that's a bit different — is this region JCL module. This one actually generates a sample piece of JCL that you can use to start up your region.
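Both template styles mentioned here, sketched together. The `<< data_set_name >>` and `<< lib_name >>` placeholders are the collection's own syntax (deliberately distinct from Jinja2's `{{ }}` so the two don't clash); the `applid` variable and HLQs are hypothetical:

```yaml
# Template form of the module defaults: the collection fills in each
# module's default data set name (DFHGCD, DFHCSD, ...) or library name
# (SDFHLOAD, SDFHAUTH, SDFHLIC, ...) at the placeholder.
module_defaults:
  group/ibm.ibm_zos_cics.region:
    region_data_sets:
      template: "{{ applid }}.<< data_set_name >>"  # e.g. IYK2ZPZ7.DFHGCD
    cics_data_sets:
      template: "CICSTS.CICS.<< lib_name >>"        # e.g. CICSTS.CICS.SDFHLOAD
```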
[00:45:21] – Drew Hughes
In the sample, we actually go ahead and generate a piece of JCL, add it to a data set, and then submit that data set as a job, which then starts the region. It's a full "create every single data set I need in my region, then actually start the thing" sample. It's quite full-on. But you can see in this one we've got those extra CICS data sets, like SDFHLIC, and we've also got support for LE data sets as well. The one that's missing is the CPSM data sets; you can specify a high-level qualifier just for your CPSM path as well, which is handy. This one also supports you putting in SIT parameters, which get passed as SIT overrides, so you can override the values in your SIT table for your region. There are job parameters as well for your generated JCL, which you can use. Again, it's a bit like a data set module, so it has that state option where you can create one with all your JCL content using initial. You can also set it to absent to delete it if you need to do deprovision or cleanup. This is a sample of what that generated JCL output looks like.
[00:46:32] – Drew Hughes
It's quite concise, but it has, for example, all your CICS data sets. You might override any of these paths and choose not to have these at this path — maybe you're using a shared CSD, for example, and you have it at a completely different high-level qualifier. You can provide your high-level qualifier for your CSD without actually creating the CSD. In your generated JCL you can refer to a different CSD, use that shared CSD with this region, and it will still generate you the correct JCL, which you can then use to start your regions. Using it this way means you no longer have to maintain the JCL at all; you just have to maintain the YAML parameters that go into the JCL. It's a lot easier to think about it like that. Where your actual change is happening for your input parameters is in the playbook and in the YAML, as opposed to on platform in the actual JCL itself. A lot easier to manage. If you make any changes to the input parameters, you can simply regenerate this JCL and not have to care too much about JCL syntax anymore. You don't have to care about editing JCL on platform.
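A hedged sketch of the region JCL module being described. Parameter names follow the collection documentation (`applid`, `region_data_sets`, `cics_data_sets`, `le_data_sets`, `sit_parameters`, `job_parameters`); the APPLID, HLQs, and SIT values are hypothetical:

```yaml
# Generate start-up JCL for the region into a data set. state: initial
# creates the JCL; state: absent deletes it during deprovision/cleanup.
- name: Generate region start-up JCL
  ibm.ibm_zos_cics.region_jcl:
    state: initial
    applid: IYK2ZPZ7                       # hypothetical APPLID
    region_data_sets:
      template: "IYK2ZPZ7.<< data_set_name >>"
    cics_data_sets:
      template: "CICSTS.CICS.<< lib_name >>"
    le_data_sets:
      template: "CEE.<< lib_name >>"       # Language Environment libraries
    sit_parameters:                        # passed through as SIT overrides
      start: INITIAL
```

Pointing `region_data_sets` at a shared CSD's qualifier, without creating that CSD, is how the generated JCL can refer to an existing shared CSD as described above.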
[00:47:38] – Drew Hughes
You can just change a playbook and store it in Git. We have this provisioning sample; it's in a similar place to where those other ones are. If the CMCI one was under this CMCI directory, we have a provisioning directory here, and we have one called full provision, and then there's a full provision SMSS, if you want to make a region with SMSS. It has CMCI in it, so you can connect to it with CICS Explorer or the Zowe CICS plugin, as I think it's called. But yeah, this one, again, has got that module defaults thing that I tried to explain. It's worth going through that, because our samples make use of it everywhere, and they look a bit confusing otherwise the first time. They also use top-level vars like this for an APPLID, for example, which then gets templated into both the region high-level qualifier, but also down into the generated JCL as well, so it ends up in the right place in the JCL. And then that very last command is actually doing a submit on the region JCL. So we go ahead and use this jsub command, which is another Z Open Automation Utilities command, for submitting a piece of JCL as a job.
[00:48:53] – Drew Hughes
So we go ahead and actually submit this generated DFHSTART JCL data set, which is the one created earlier. This then actually starts a job on platform, so you could then log on to your region, for example, at that APPLID. Okay, hopefully that seems all right to everyone and I haven't lost you all. I'm not going to do a demo of this sample; we'll come on to how you can use it — we've got an example of that in a minute instead, and it's probably a bit of a better one. So the last main thing we wanted to talk to you about today — and it was in the title, luckily — is Ansible Automation Platform. In case you haven't heard of this, it's a thing from Red Hat. Ansible is a project that, even though it's open source, is mostly sponsored by Red Hat and mostly worked on by Red Hatters. Ansible Automation Platform is their Ansible-as-a-service offering. It allows you to run Ansible playbooks from a hosted environment — not on laptops, for example — and it gives you the extra enterprise features that you need to be able to run this as an actual enterprise strategy across your organization.
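One way the final submit step might look as a playbook task, assuming ZOAU's `jsub` is on the path of the z/OS target. The data set name is a hypothetical placeholder, and the exact `jsub` invocation should be checked against the ZOAU documentation for your level:

```yaml
# Submit the generated start-up JCL as a job, starting the region.
# jsub is a Z Open Automation Utilities command for submitting JCL.
- name: Start the CICS region by submitting the generated JCL
  ansible.builtin.command:
    cmd: jsub "//'IYK2ZPZ7.DFHSTART'"   # hypothetical generated JCL data set
  register: submit_result

- name: Show the submitted job ID
  ansible.builtin.debug:
    var: submit_result.stdout
```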
[00:50:08] – Drew Hughes
You could use this instance, for example, to manage both off-platform x86 hosts and your z/OS hosts, all in the same way, with the same access controls, the same functional identities and functional credentials to access and run automation against the machines, and also store your playbooks in a Git SCM and pull those into Ansible Automation Platform. So, for example, you could have all your playbooks stored in something like GitHub — whether that's an Enterprise edition or even public GitHub — and then pull that content into Ansible Automation Platform and run it in an automated manner using timers, triggers, all sorts of different ways. You'll see that scenario in a second, actually. I'll give you a quick tour of AAP. We maintain an instance for our team that we use mostly for demos, but also just to play around on and do some of our own automation. This one is actually hosted on an OpenShift cluster — if you've ever heard of OpenShift; if you haven't, don't worry, I won't explain it, it's a bit more complicated. But on here we have a variety of jobs that have been running on the platform: whether they ran successfully, what type of jobs they are.
[00:51:31] – Drew Hughes
Are they source control updates, fetching the latest information from an SCM like GitHub to load onto AAP? Are they actual playbook runs, where we run a playbook in the same way we'd run one locally? We have these things called job templates, which link to the actual different playbooks. I think each one of these directly corresponds to a playbook. And then you have a project, which you make the template from, and this refers to an actual — what's it called — an actual Git repository in the SCM. So you could have just one repository with about five different playbooks in it, for example, and then have five different job templates to run at different times, all using just one project. Quite handy. We'll show you a bit of that in a second as we go through this. This whole thing also has role-based access control, so you can set up users — for example, locked-down users that are only allowed to run a couple of pieces of automation. And you can also limit their view of all the different types of things on the platform.
[00:52:40] – Drew Hughes
So you can stop them looking at any credentials or having any access to any host environment data. They might not even be able to see the path of where Python is on z/OS, but they still have the ability to go ahead and run a curated piece of automation. This works really well for that persona split: having a developer that needs, for example, self-service access to region provisioning, and needs to be able to spin up a CICS region with their application in it so they can actually run tests against it and make sure their application works; but then also a sysadmin who's written all this automation content for them and has fully locked down the access controls, so they can't come along and provision anything they want. They can only provision this exact type of CICS region with these exact parameters applied to it, and they can't edit any of that. They can only run a provision or a deprovision, for example. So we do have a bit of an example of that. I'm going to pass over to Andrew to show how that roughly works. So yeah, over to you to get that going.
[00:53:47] – Andrew Twydell
Absolutely. So please. Here we go. Can you see AAP, Drew?
[00:53:56] – Drew Hughes
Hang on, I'll stop sharing mine. Start sharing yours.
[00:54:02] – Andrew Twydell
There we go. There you go. Cool. So, AAP. I am playing the developer, so I will log in as our developer. And because of role-based access, when I go to Templates, all I see are the five templates that we need to run. So let's just go through and kick them off. I'll launch template number one. This is going to build the COBOL source, which looks like this. We're paying particular attention to this "Hello World!" text that is going to be beautifully displayed here. Once we click launch, we get a run-through very similar to what you saw in our terminals, but we see it on the dashboard instead. We're creating a data set for the source code; creating one for the load library that we're going to compile into; copying the source code over; running some JCL to compile it; submitting that compilation — and we have built our COBOL from source into a data set. Fantastic. I'm going to look at that data set in a second. But first things first, I'm going to kick off our provision, because it does take a minute or two. So I'll launch that template now. And what that template does, if I go in and look at it — the APPLID is provided here by my system programmer, so I don't have to change that.
[00:55:43] – Andrew Twydell
I don't have to control that; it's already there. And if I switch to Jobs, we should see an active running job. There we go. Any key points here, Drew, that I should touch on?
[00:56:05] – Drew Hughes
I think one thing to point out is that Andrew is logged in as the developer user, which is slightly different from my user. So it might be worth pointing out — going over to credentials or something — that this is a sample user we have that's fully locked down, to show this split in persona. This developer user, although he's capable of launching those jobs, like I was saying, has no access to anything else. They can't even look at where these jobs are running, for example; that's all pre-set up for them. All they have the ability to do is press the Run button, and that's it. They can't even edit the APPLID of that job, for example.
[00:56:42] – Andrew Twydell
Yeah, and presumably that’s configurable. There would be different levels of access for different people.
[00:56:48] – Drew Hughes
Yeah, absolutely. This is a fully locked-down user, but if your users, for example, were trusted enough to know what APPLIDs they should be using, you could give them that to control. The other quick thing: the playbook this is running through, which is going through each data set, is actually that sample playbook for the full provision — the full provision with a very minor update in it to add the COBOL load libraries into the DFHRPL for the region. But otherwise, it's about 95% exactly the same as the sample in the z/OS Ansible collection samples. Yeah, there you go. We've got two definitions in there as well. That's about it: two definitions and something in your DFHRPL.
[00:57:41] – Andrew Twydell
Great. Yeah. So we are creating the CSD and then updating it with these two. We're creating a program, and our transaction is called DREW, which is fitting because we're both called Andrew — so only fair. It's a handy four-character name. There we go. So I reload the output and we are good to go. I will, for better or for worse, connect to it. That's the APPLID that was provided for me, and we are in a region provisioned at our local time. And if we run that transaction that was installed: "Hello, world!" — that's the text I said we were looking out for, so that's brilliant. It has worked. Cool. So now, what if I want to make a change and redeploy? That "Hello, world!" is the bit we're going to change. I've checked out this code locally from GitHub, and I want to go through and change this: "user group". There we go. And I need to update the length, so COBOL is happy. I'm going to commit this change — "updating text" — commit, and push those changes back up to GitHub. Then I'll push. What we can do is go to our third template, beautifully provided to me — update the app in the region — and launch the template.
[00:59:27] – Andrew Twydell
While that's running, we'll go through the playbook so you can see what it's doing. The new source code that I've just changed — we're updating that in the source data set. We're then going to run that compile job again, to recompile it into the load library. And then we're going to run a NEWCOPY on the program so that the changes take effect. So that's still running. Once again, if I run DREW, we get "Hello, World!". This one's a lot quicker, isn't it?
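One way the NEWCOPY step in that update playbook might be expressed is with the collection's `cmci_action` module, which issues actions against CICS resources over CMCI. The host, port, context, and program name below are all hypothetical placeholders:

```yaml
# Phase in the recompiled program so the running region picks up the
# new load module. Credentials would normally come from AAP-managed
# credentials rather than being hard-coded.
- name: NEWCOPY the updated program
  ibm.ibm_zos_cics.cmci_action:
    cmci_host: "example.mainframe.host"  # hypothetical CMCI endpoint
    cmci_port: 10080
    context: IYK2ZPZ7                    # hypothetical region APPLID
    type: CICSProgram
    action_name: NEWCOPY
    resources:
      filter:
        program: DREWPGM                 # hypothetical program name
```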
[01:00:06] – Drew Hughes
Slightly quicker. Slightly quicker.
[01:00:11] – Andrew Twydell
I suppose we could add a step to run the transaction from an operator command at the bottom of the playbook, but that's perhaps for next time. So we've done a NEWCOPY of the program, we've switched back, and we'll run through again — and we get our updated text. So we've changed the source code on my laptop, committed it to GitHub, run our template, and the changes have been reflected in our running region. There are templates here for deprovisioning and cleaning up, but unless you feel it's necessary, I don't think we need to go into those.
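The operator-command step suggested here could be sketched with the z/OS core collection's `zos_operator` module, which runs a console command and returns its response. The job name and transaction ID are hypothetical, and running a transaction this way assumes the transaction is defined to accept console initiation:

```yaml
# Drive the transaction from the console by modifying the region's job.
- name: Run the DREW transaction via an operator command
  ibm.ibm_zos_core.zos_operator:
    cmd: "F IYK2ZPZ7,DREW"   # hypothetical job name and tran ID
  register: console_response

- name: Show the console response
  ansible.builtin.debug:
    var: console_response.content
```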
[01:00:54] – Drew Hughes
Do you want to just show the return results of one of those tasks quickly? If you go back to the previous job runs — yeah, a really handy thing with AAP, since it's more of an actual service compared to just running the commands on your laptop, is that you can go and look at past runs. You can also click on, for example, line 8, where we're copying JCL to a data set — you can actually click on that change and then click the JSON tab to see the full output returned from that task. So this is the output that the zos_copy task gave us. You can build automation off this programmatically, but you can also just look at it and make sure everything ran successfully. So yeah, that's one of those. And that'd be the full operator command one as well, showing you exactly the response to those operator commands.
[01:01:44] – Andrew Twydell
I will click the deprovision, but I don't think we should sit and wait for it, unless you think so?
[01:01:59] – Drew Hughes
No. I think it’s done.
[01:02:02] – Andrew Twydell
Cool. I'll go back to the slides. Have you done this one, or are we ready to wrap up and take questions?
[01:02:15] – Drew Hughes
Yeah, let’s… Might as well wrap up, I guess.
[01:02:19] – Andrew Twydell
Brilliant. Thank you, everybody. I answered a question in chat. I’ve not seen any more, but do unmute if you’re able to.
[01:02:31] – Drew Hughes
Yeah. If you were interested in what we've shown today, that CMCI reporting sample is probably the easiest thing to get started with. It has pretty low requirements: you don't have to set up anything on z/OS, and you can connect to it directly from your laptop. Really handy to just go and have a look at. There's a link in the chat to that one if you want to have a look. But yeah.
[01:03:00] – Amanda Hendley
Great. Any other questions from anyone?
[01:03:06] – Andrew Twydell
People asking for the recording.
[01:03:08] – Amanda Hendley
Yes. We will have the recording, transcript, and everything up in probably two weeks or less, so you'll be able to get that. Also make sure that you get the newsletter for the user groups, because we announce meetings and send out the presentations and the recordings — so make sure you're subscribed. I wanted to pull up my slides once again, if I can find the Share button. I have our elusive Share button. Here we go. Well, Drew and Andrew, thank you again for presenting. I appreciate the demo and the opportunity for people to learn how to take advantage of this. I'm going to copy the links and notes out of our chat so they don't get lost, and we'll share those back with you all as well; we'll make sure those get posted. Before we go today, I wanted to remind all of you that the Arcati Mainframe User Survey is available online. You can find it linked at planetmainframe.com — you don't have to try to follow the short link here, and I'll make sure you get it in other ways. We're already on track for the most responses we've ever seen from the survey, so we definitely want your help to make it the biggest, best survey response we can have.
[01:04:52] – Amanda Hendley
That will be open for another two weeks, I believe. To connect with us — let me speak to that — you can check us out on X, Facebook, LinkedIn, and YouTube, where we post content frequently. Planet Mainframe and the virtual user groups are very active on LinkedIn; it's a great place to find resources and connect with other people participating in the sessions. I want to thank our partners again, Broadcom and DataKinetics, for their continued support of the user groups and these sessions. With that, please mark your calendars for March 11th. The topic is to be determined — a surprise to you — but we'll announce it in about five weeks, when you get the official announcement for our next event. Drew, Andrew, anything else from you all before we depart today?
[01:05:53] – Drew Hughes
Nothing for me.
[01:05:54] – Andrew Twydell
No, thanks so much. I’m sure we’ll be back.
[01:05:59] – Amanda Hendley
Yes, definitely. Thank you all for participating. Thank you all for joining us today. You have a great rest of the week.
Upcoming Virtual CICS Meetings
March 11, 2025
Virtual CICS User Group Meeting