Virtual IMS User Group Sponsors

BMC Software

Virtual IMS User Group | August 2024

Pervasive Encryption and IMS

Dennis Eichelberger
IBM’s Washington Systems Center

Regulatory and industry standards press for more and more safety around customer and company data. The pervasive encryption model with IMS datasets requires some planning. IMS is unique in many ways, and keeping the IMS infrastructure safe is as important as keeping its databases safe. This session discusses IMS data, logging, and utility use from an encryption point of view: Which datasets can be encrypted (VSAM? OSAM?), and why should they be encrypted? It closes with a brief rundown of implementation steps for the levels of encryption used in an IMS world.

Dennis Eichelberger, IBM

Dennis has over 45 years of experience with ‘mainframe’ operating and database systems. He has worked as a developer, a consultant, a business partner, and a vendor to IBM. Dennis is currently a member of the IMS support team of the Washington Systems Center. His spare time is spent practicing the Gentle Way.

Read the Transcription

[00:00:00] – Dennis Eichelberger (Presenter)
Great.

[00:00:01] – Amanda Hendley (Host)
Well, I’ve disabled the waiting room, so anyone that still joins can just hop right in. But I want to introduce myself and welcome you. My name is Amanda Hendley. I am the Managing Editor at Planet Mainframe, and your host for this Virtual User Group series. We’re here today to talk about IMS, and happy to have you join us. If this is your first time, welcome. If you’ve been around, welcome back. And we meet every other month, generally on this particular Tuesday. So I hope you’ll bookmark us and plan to join us for future events. I’ve got a couple of things to go over before we get started. So I just did a pretty brief introduction, and I do want to thank… I’ll thank a sponsor. We’ll do a presentation. We’ll have time for Q&A. We’ll talk about any news or articles and then we’ll announce the next event.

[00:01:03] – Amanda Hendley (Host)
So with that, I’m not going to take much more time on this, but I do want to thank our partner, BMC, for sponsoring this virtual user group. All of our user groups are supported by sponsors. They help us keep it running, and we appreciate them. So give them a shout out if you’re a customer or if you are a soon-to-be customer.

[00:01:28] – Amanda Hendley (Host)
And then I want to make an announcement, too, if you haven’t seen it. Planet Mainframe acquired Cheryl Watson’s Tuning Letter recently. We’re really excited about this partnership. And I think the thing that I took away from having conversations about it at SHARE last week was this general feeling that I agree with. It’s a sentiment that I totally get: when Cheryl announced she was going to retire, we didn’t want to lose this. So that’s why at Planet Mainframe we went out and talked to Cheryl, and I see a terrible typo there that I’m going to blame on copy-paste. But we wanted to talk to Cheryl and wanted to be able to make sure this continued. So Cheryl and Tom have agreed to be strategic advisors for us, and Frank is going to stay on board for a little while as the author. We are hiring for a new author, so if you know anyone that’s got a desire to write, you can reach out to me. I’m just amanda@planetmainframe[dot]com, or shoot something through our website. But we’re also running a special on subscriptions until September first, where you get six issues for, I guess, the price of four: on what would generally be a one-year subscription, we’re throwing in two additional issues. So with that, that’s all I’ve got before we get started. After today’s session, there is an exit survey that I hope you’ll fill out. It’s really quick on your way through. And if you just fill that out and let us know how we did, that would be much appreciated.

[00:03:22] – Amanda Hendley (Host)
So I’m going to stop my share and let Dennis take over while I introduce him. Dennis, let me know if you have any challenges with that. You should have the ability to start your share. Dennis has over 45 years of mainframe operating and database systems experience. He’s been a developer, consultant, business partner, and vendor to IBM, and he’s a member of the IMS support team at the Washington Systems Center. Dennis?

[00:03:59] – Dennis Eichelberger (Presenter)
Okay, how’s that? It should say IMS dataset level encryption.

[00:04:03] – Amanda Hendley (Host)
It looks great. If anyone is having trouble seeing it, just give us a shout. A lot of issues can be solved by just leaving and coming back in.

[00:04:14] – Dennis Eichelberger (Presenter)
Okay. Actually, I wanted to say something about that. Leaving and coming back in is a little bit like rebooting. We don’t do that on the mainframe.

[00:04:24] – Amanda Hendley (Host)
So it’s a new practice. I also meant to interject and say that Dennis will take Q&A during his presentation today. So if you do have any questions along the way, you’re welcome to throw them in chat or raise your hand, come off mute, and ask them.

[00:04:42] – Attendee
Amanda.

[00:04:46] – Dennis Eichelberger (Presenter)
It looks like it’s due for a computer refresh.

[00:04:53] – Amanda Hendley (Host)
Is there a trick to a computer refresh?

[00:04:57] – Attendee
Yeah, I’m looking at your computer in the background. It looks like you refresh.

[00:05:03] – Dennis Eichelberger (Presenter)
That’s funny. I was actually going to send her another slide, which was the console for Apollo, which we all know IMS was rather instrumental in making happen. Anyway, let’s go on. We could do trivia for three hours here, and we only have two. I’m going to talk about IMS dataset level encryption. I have this in parentheses as pervasive, and I’ll explain that as we go through. There’s a lot of, not quite misinformation, but a little bit of confusion about what pervasive actually means versus dataset-level encryption. I wanted to clarify that for you, and that’s really what we’re going to talk about today. Feel free to jump in with questions.

[00:05:50] – Dennis Eichelberger (Presenter)
I’m going to talk about my agenda here, starting with pervasive encryption itself. All right, so it wasn’t all that long ago, I’m going to say less than a decade, maybe five or six years, when I got involved in encryption at its different levels. There were some things running around at the time that were very important, because the European Union came out with its GDPR regulation that it wanted everyone to follow. I actually ended up reading that, and it actually was considerably pervasive. It didn’t really matter if you lived in the European Union or not; if you did business there, then you were affected by it. So in the sense of what encryption is for, it’s not a matter of if, but when. When are you going to start doing this? You may have already started. You may have been in the process for quite a while, or you may be planning it. At any point, why do we do it? Well, there’s regulatory compliance. After GDPR came out, the State of California and the State of New York went into competing legislation to make it worse, or more strict. Then there’s industry standards. That’s really a question of what your competition is doing. Let’s take a bank as an example. One bank says, I have encryption, your data is safe. The other doesn’t. Which way are you leaning when you decide where you want to have your account? So there’s a balance there. If one does it, everyone does it, and it becomes pervasive under the covers. It’s probably happening and you don’t even know about it. Customer satisfaction. Okay, we regularly get these email notices, and if you’re on one of those credit watch services, you get them that way, about a data breach. I got one the other day. It happens all the time because I’m in this area of the world. I read it very carefully, went through it, and said, okay, that one’s a little more hype than what really happened. But I would rather know than be surprised. And I think most of you folks would, too.

[00:08:04] – Dennis Eichelberger (Presenter)
So these data breaches that we talk about are actually data theft. Someone copies the data, moves it off, and that usually happens under the covers. It could be a bad actor, it could be someone who is disgruntled, but that information gets copied, and we really don’t know about it until later. Our current average time from breach to breach discovery is on the order of 30 days. Now, it’s not always going to be exactly 30 days. It could be a few days more, it could be a few days less.

[00:08:37] – Dennis Eichelberger (Presenter)
However, there are cases when it’s much later. I recall one case early on where they didn’t discover the breach for 18 months. And even when they did a forensic analysis, they found out it was actually breached prior to that. So what we’re trying to do here is create a situation where, so what? If you’re breached, it’s unreadable. It will not help you. That’s why we encrypt data.

[00:09:04] – Dennis Eichelberger (Presenter)
And so far, outside of a couple of movies, we’ve never seen a DeLorean doing 88 miles an hour to go back in time to fix it. There’s no do-overs. Let’s get ahead of the curve.

[00:09:19] – Dennis Eichelberger (Presenter)
So briefly, my agenda is: I’m going to talk about the pervasive encryption umbrella. Then I’m going to talk about IMS OSAM datasets, how that fits into this umbrella, and then what we’re going to do about it, which is essentially a VSAM linear dataset. I’ve got some implementation notes specific to IMS. IMS is its own entity and has some very interesting things that you need to be aware of when you implement encryption for it. And then things to look for. I’ve got a couple of examples of aberrations that came up due to a misunderstanding of what was being done. So let’s do some clarification on this.

[00:09:57] – Dennis Eichelberger (Presenter)
To start with, we talk about pervasive encryption, and I have been in communication with folks who believe that compression is encryption. Stop laughing out there. Who knows better? So compression, that’s an algorithm that exploits redundancy in the data, basically bytes. If you’ve got a series of blanks, we compress that and represent it in a smaller space so that we can save storage and have a higher I/O rate, because we are moving something smaller than what it really is. The algorithms might be secret, but essentially compression works with non-random or consistent data. We’ll talk about the whys of that a little bit later. Encoding, that’s a process of changing the data representation. The same data, place to place, might be represented in different encodings: binary, hex, decimal, Base64, et cetera, which are not usually meant to change the data’s meaning. However, it does filter it a little bit. They’re not meant to conceal data, but they might do so a little. It’s not a secret algorithm; somebody can reverse engineer it fairly simply. The algorithms are not kept secret with encoding, and the encoding must be non-random or consistent, just as with compression. So when we get into encryption, that’s the process of concealing the information, and it’s based upon a separate value. We call that a key. Modern schemes provide this functionality even for unknown data and guarantee the data’s integrity. The algorithms are not secret with encryption, and I’ll show you one. The keys are secret. And the encryption must be random. There we go. What the heck?

[00:12:01] – Amanda Hendley (Host)
Dennis, there’s a quick question about how long does it take? Why does it take so long for these data breaches to be discovered?

[00:12:08] – Dennis Eichelberger (Presenter)
Okay. So that’s actually a curiosity, because this is something that we refer to on the mainframe as someone either disgruntled taking the data with them, or someone being compromised to get to the data, a phishing expedition, perhaps. So some of those are well hidden. Some of them are obvious. One of the things that I want to talk about on encryption has to do with… I’ve lost my screen. Sorry. Has to do with, if you rotate the keys in a regular fashion, that’s another level of keeping it safe. So essentially, if you have an SQL system and it’s open to the Internet, someone can keep running SQL queries against that information to try to get something. And maybe they will, maybe they won’t. There are other security features involved for people coming in from the outside. So why does it take so long? There’s no real answer to that. There’s a couple of things; like I said, someone involved in the company who gets compromised is probably one of the better illustrations of how that happens. How do we get back here? Help. I am not very good with Zoom.

[00:13:51] – Amanda Hendley (Host)
We can see your presentation. I’m guessing that somewhere floating on your screen is a toolbar. Do you want to stop the share and start it again?

[00:14:11] – Dennis Eichelberger (Presenter)
I could do that, I suppose. Let’s try it that way. All right. Am I back?

[00:14:43] – Attendee
You are.

[00:14:44] – Dennis Eichelberger (Presenter)
All right. Thank you very much. I am so used to Webex that this is brand new.

[00:14:51] – Attendee
And I just posted a note about the compression, because it’s only compressed on DASD.

[00:14:58] – Dennis Eichelberger (Presenter)
Okay. Thank you.

[00:15:00] – Attendee
It’s expanded when it comes into memory.

[00:15:04] – Dennis Eichelberger (Presenter)
All right. So I wanted to just talk about that. I’ll give you a couple of examples, and I have to compliment the guy that I got permission from to do this, because I spent several hours negotiating it. So we’re going to take a name in a field, Dennis Eichelberger. And you can see that there’s a lot of blanks in there. For compression, we’re just going to use an asterisk. Now, this is logical; there’s going to be some other things under the covers for this. But logically, we just shorten it. However, as you can see, Dennis Eichelberger is still in the clear. In this example, compression does no obfuscation at all. In the encoding example, you’ll note that my last name and my first name both have a couple of repeating characters that are encodable. For instance, the E-R and E-R in ‘berger’ are encodable. We can shorten those by one byte each with a hashtag. The double N’s can be represented by a different character, and it shortens it even more, and it is somewhat unclear, but someone could probably figure out that’s the name if they’d heard it pronounced. Now, the encryption line is a bunch of random stuff. If you just saw the bottom line on that, you wouldn’t know what it meant, other than the fact that I’ve just said it was coming from Dennis Eichelberger. It’s a bunch of random stuff.

[00:16:26] – Dennis Eichelberger (Presenter)
So yes, the negotiation process went many hours over the usual. I got overtime for it, too. All right, pervasive encryption. Initially, when this was created by the distinguished engineer, this was our pyramid. At the bottom, we have hardware encryption, disk and tape. That’s actually turned on at the hardware level, and it’s essentially there to protect the data if someone walks off with a piece of DASD, or there’s a CE having to do some repair. The data itself is not visible to them. Next up above that is file or dataset level encryption, which is what we’re going to talk about today. That’s at an I/O level. DFSMS, when it processes through Media Manager, will encrypt on the way out and decrypt on the way back into the mainframe. The next level above that is database level encryption. And this was created specifically for DB2 and IMS so that the data could be encrypted at the buffer level, so that if a dump was taken, the data in the buffer would be unreadable because it’s encrypted. Then at the very top, and this is a very specific type of application, although I do know companies that chose this first, the encryption is embedded in the application program. This is the most granular, but it’s also the most difficult to maintain, because it is in the application, which has to include the information on the passwords and the key labels, which means every time a key is rotated, it may affect the application. Not always, if it’s done in some fashion, but it could affect the application and what it’s trying to do. So that means the application might have to be rewritten, rebound, and retested at whatever key rotation period is involved: three months, six months, one year. All of that affects how the application is coded.

[00:18:36] – Dennis Eichelberger (Presenter)
So dataset level, this is where we’re focusing today. This is transparent to the application. Access types: VSAM, DB2, IMS, and other middleware can use file and dataset level encryption. The access is controlled by your security access facility: RACF, ACF2, Top Secret, whichever one you’re using, they’re going to have control over whether the dataset is encrypted or not. It’s good for bulk encryption with low overhead. So batch jobs can clearly run quickly. IMS transactions can run quickly. There are some things involved in the particular type of key, which I’ll talk about, that will make it much easier to implement in the long run.

[00:19:30] – Dennis Eichelberger (Presenter)
Now, I put this in here since we’re all IMS folks and probably overlap into DB2. Database level encryption provides the same level of encryption, the same algorithms, the same key lengths. However, the difference is that the data in use within the memory buffers is encrypted. And whether you need that is probably up to your management or auditors or all of those people up there; the stuff rolls downhill and you have to do the work.

[00:20:04] – Dennis Eichelberger (Presenter)
So I’m going to talk mostly about dataset level encryption, and we’re going to start here with the algorithms. Now, there are several algorithms that are available on the mainframe. The one that is used for DFSMS and the hardware is AES, the Advanced Encryption Standard, with a 256-bit key. And there’s a picture of how that algorithm works. So that’s advertised. We can see that it’s a published algorithm. We’re not going to tell you the key that gets used; it will be assigned, and it will be created randomly. So that’s what it does. It does all of the stuff on the left: block ciphers, rounds, four steps per round with the byte substitution, shift row, mix column, and add round key. And then it reverses that on the way back out. So that’s the picture. You can hold that in your mind, but the key is what you’re never going to see. And there’s a couple of reasons why we don’t see it, as we’ll go through.

[00:21:07] – Dennis Eichelberger (Presenter)
All right, so we have an algorithm. That’s the one that’s used on the mainframe, primarily. We have keys. Keys are what define how that algorithm is going to encrypt and decrypt. We have a master key, and that’s used to encrypt and store keys on the cryptographic key dataset, the CKDS. And that master key is loaded onto what we refer to as the CEX hardware, stored nowhere else. That’s where the key is. So number one, don’t lose that key. There are ways of keeping track of it through key management. But the idea is that this ensures you have a key to decrypt the keys when you need it. We’ve got user keys, and those are generated via ICSF services. ICSF has an ISPF-based tool to create keys and provide a label for each key, and they’re stored in the CKDS under the master key. And we have public or private, clear or secure. User keys are also used with the GDEz tool for database-level encryption. Now, clear or secure has to do with Z on the mainframe, and I’m going to talk about those specifically. But public and private are also involved if we’re doing a handshake across TCP/IP, across the web to someplace. For instance, if you’re logging into a website and you need to create an account to buy something, make a purchase or an order on this website, a public and private combination of keys will be used. They will send you a key to log in with, one that they just set up. That key is transient. Once you log off, it goes away. That’s one of the reasons that every time this comes up, people say, clear your browser cache, because it might be retained based upon your own parameters on your computer. So clear or secure keys is what we’re going to talk about on Z.
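
For orientation, creating one of those user keys is often done with the ICSF Key Generator Utility Program (KGUP) or through the ICSF ISPF panels Dennis mentions. The control statement below is only a hedged sketch: the label is the illustrative DSE label from this talk, the surrounding KGUP JCL and DD statements are installation-specific and omitted, and your site's syntax and key type conventions may differ.

    ADD LABEL(DSE.LABEL) TYPE(DATA) ALGORITHM(AES) LENGTH(256)

The key material itself is generated randomly and stored in the CKDS wrapped under the master key; all anyone ever handles afterward is the label.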

[00:23:21] – Dennis Eichelberger (Presenter)
And here’s a two-column comparison. A clear key is exposed in storage. If you have a dump of that storage, you might see the key, and because of that, you might be able to decrypt the data. The odds are it will take a while, but it’s not impossible. The clear key is used mostly in software-based cryptography. One of the advantages is that since there’s no CEXn hardware required, you can use this anywhere. And one of the things we talk about using it for is testing. If you’ve never used encryption before, this is a good way of testing your implementation, making sure that the information really does become encrypted. Now, a secure key is never exposed in memory. You don’t see it in storage; a dump doesn’t reveal the key. It’s held encrypted under the master key on the Crypto Express cards. And that might be, based upon your level of hardware, up to CEX6, maybe even CEX7 right now, I think. We do have APIs that can talk to it via ICSF so that we can access it by creating a key that’s going to be stored there.

[00:24:37] – Dennis Eichelberger (Presenter)
Now, performance-wise, clear key elapsed times are much better than secure key. Based upon the explanation of the secure key, the secure key information is on the CEX card; that means we’ve got to go get it from that card every time we use it. So from a speed perspective, the secure key work is dispatched to run over there on the coprocessor, and the clear key is run on a GP. That makes it somewhat faster. In early measurements, and we’re way past that, this is probably two versions of hardware back, but I kept it here so you can see that there is a true difference based upon workload. SQL and DL/I type work can be 10 to 40 times faster with a clear key. So we want to be aware of that. It’s probably reduced, not quite going to be that high, on a z16, because there are some significant hardware changes in the z14, z15, and z16 that allow GPs to handle this in a much faster fashion. So at this point, even though that’s true, we still want to think about it. It might not be as appropriate as it should be for online processing. Each customer needs to make their own decision based upon their auditors and security requirements. But this is something to keep in mind. And to paraphrase a Ginsu knife commercial: wait, there’s more.

[00:26:12] – Dennis Eichelberger (Presenter)
Because of this difference in performance, we have created something called a protected key. Essentially, it is a secure key that is wrapped in another key copied from the hardware, which is built based upon the store clock time at the time that it’s needed. What happens is that the protected key, which is wrapping the secure key, gets moved into memory, and it stays there for the life of that job. That means that I don’t have to go get it every time. So those quasi-I/O type calls go away. It’s loaded once, and I get the same performance now as I did with a clear key previously. Major improvement. This is really good for batch jobs, something that’s scanning a DB2 table or an IMS database, pulling data out and needing that key. If you’re scanning 12 million records, you don’t want to go back and forth to the hardware to get the key every time we make a call. Now it’s in memory, and we speed up the process quite a bit. Okay, so we have algorithms, we have keys, we have some reasoning behind it.

[00:27:24] – Dennis Eichelberger (Presenter)
So let’s go on to IMS specifics. DFSMS encryption is supported for VSAM. This means that anything IMS has as a KSDS or an ESDS is already able to be encrypted. That does not need to be changed. So any indexes can be encrypted now. KSDSs containing indexes and ESDSs containing data may be encrypted now, and have been able to be encrypted for quite a while. Now we come across OSAM. OSAM is an access method that is owned by the IMS labs. It’s not owned by DFSMS; IMS owns it. IMS created it to be a very high-speed access method to get information. And yes, it has been shown to be faster than VSAM. It’s an EXCP channel program that does its own thing under the covers. Therefore, it does not go through DFSMS, so dataset level encryption can’t be used. OSAM does give some optimization for IMS, sequential buffering, et cetera. It’s goodness in IMS from the I/O standpoint, but it can’t be encrypted that way. So IMS 15.2, and this is the trivia question: when was IMS 15.2 generally available? Someone might have an answer for that.

[00:28:59] – Dennis Eichelberger (Presenter)
If not, I will be glad to give it: March 2020. At IMS 15.2, OSAM may be allocated as a VSAM extended format linear dataset, an LDS. It does give some advantages that previous OSAM didn’t have. Because it’s an extended format dataset now, high-performance FICON becomes available, and the improvement there is in the high single digits. Okay, that may not sound like much, but any improvement is always an improvement. If you’re using mirroring, zHyperWrite capability will reduce the latency of that mirroring. And dataset level encryption of OSAM: hey, suddenly we have that. Since we’re going through VSAM, even though we’re still calling it OSAM and it’s not an ESDS or a KSDS, we can now use DFSMS to get there, and we still have sequential buffering for performance.

[00:30:00] – Dennis Eichelberger (Presenter)
If anyone’s on IMS, they’ve probably gone past 15.2 by now, but in case you need to know when it was available and what PTFs are involved, here’s the slide that shows that. Now, you don’t just enable the OSAM LDS function by installing these APARs and PTFs. You actually have to implement it in a certain fashion.

[00:30:27] – Dennis Eichelberger (Presenter)
The implication of that is we’ve got to change some dataset allocations. We get zHyperWrite and HPF and can implement dataset level encryption. Those are the things that can come out of this. But remember, we’re changing the dataset. And that means we’ve got to reallocate the dataset. Following along that path, in order for it to be reallocated and encrypted, we have to unload the dataset and reload the data into the new dataset. This is likely going to require a database outage. And there are a couple of caveats around that that you may be able to utilize so that you don’t have to take the database offline. And I want to point out now, because this question comes up: sometimes there are these little notes at the bottom of the slides. They’re on purpose, because during some of these presentations, someone will say, well, do I need to change all of my KSDSs and ESDSs in IMS to an LDS? No, they’re already encryptable. So it’s only the OSAM that gets changed to an LDS. The KSDSs and ESDSs do not need to be changed; they’re encryptable as is already. So, the dataset attributes. Remember, this is now an extended format linear dataset. It has to have a minimum CI size of 4K, or any multiple of 4K up to 32K. So that might change some allocation attributes in ways that would surprise you, because some of these datasets in IMS have been around for maybe even decades, and they’re not 4K.

[00:32:17] – Dennis Eichelberger (Presenter)
And we’ll talk about how that affects some things in the DBD information and allocations. So, to allocate a linear dataset, there are three ways of doing this. You can do it in JCL. You can have SMS create a storage class or a data class or a management class that automatically becomes encrypted. And that’s frankly how most people do it. They get the idea of how it’s going to work, and then they turn it over to SMS. That’s goodness for the IMS folks: they don’t have to worry about thinking about it other than their DBD information, which I’ll get to. Or IDCAMS input. Now, I’m going to run with the IDCAMS input here because it most easily explains what’s happening when we do this. But it’s for your information.

[00:33:06] – Dennis Eichelberger (Presenter)
Talk to your SMS folks. Here is my OSAM cluster definition. I’ve got a control interval size of 4K, my usual share options, data class, storage class, whatever it is. But at the bottom of it is LINEAR.
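
As a point of reference, here is a minimal IDCAMS sketch of the kind of definition Dennis is describing. The dataset name, space, and class names are invented for illustration; the essential pieces are LINEAR, the 4K control interval size, and a data class or storage class that gives you extended format.

    //DEFLDS   EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      /* Define the OSAM database dataset as an extended format   */
      /* linear dataset with a 4K CI; names are illustrative only */
      DEFINE CLUSTER(                      -
             NAME(IMSP.CUSTDB.DB01)        -
             LINEAR                        -
             CONTROLINTERVALSIZE(4096)     -
             SHAREOPTIONS(3 3)             -
             DATACLASS(DCEXTEF)            -
             STORAGECLASS(SCIMS)           -
             CYLINDERS(500 50))
    /*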

[00:33:25] – Dennis Eichelberger (Presenter)
This is a function that wasn’t commonly used for a while, but this defines that it is going to be a linear dataset, and it must have the attributes of an extended format dataset. And not only that, it has to be on a volume that will accept that. Another note here, because someone did ask me about this: the background write option is not applicable to a linear dataset, only to a KSDS or ESDS. Someone asked about that, so I put it in there just in case it’s a concern for you. So top left, first arrow pointing to the left: I’ve got a cluster, and I’m basing it upon a dataset that I had previously.

[00:34:14] – Dennis Eichelberger (Presenter)
Automatically, it was 2K. However, I did put LINEAR on it. So VSAM, in its wisdom, seeing LINEAR, says: you don’t know what you’re doing, it has to be 4K, and it allocated it at 4K. So that’s something you need to be aware of: if something like this comes in and the control interval size doesn’t match a multiple of 4K, it will allocate it at the next highest 4K increment. So as an example, 6144 would become 8192 in the allocation. Now, for allocating a dataset, this probably isn’t a big deal, but it is important to remember as we move forward. Database implications. So say I run into a situation where I’ve got a DBD and I end up expanding it; I’m increasing a 1K block that’s now going to 4K. Do my root anchor points become less efficient when I do this? Maybe I need to increase them, because now I’ve got a bigger CI size. And in the OSAM vernacular, that’s going to be a block, but since it’s under VSAM, you’ve got to translate that in your head. So maybe my root anchor points need to be adjusted. Maybe my RBN, the maximum relative block number, needs to be adjusted, because I’m now working with a fewer number of blocks to get to my maximum byte allocation. So all these things need to be taken into consideration when you move forward with IMS. These are all DBD considerations. It’s an example. On my test system, this worked fine, but you need to test it on your own system. Please. When we get through this, later on, I say test, test, test. Yes, when you’re going to encryption, do test it in a test environment.
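
As a rough illustration of the DBD parameters Dennis is talking about revisiting, here is a hedged HDAM/OSAM fragment. Every name and number is made up; the point is that the second, third, and fourth RMNAME values (root anchor points per block, maximum relative block number, and the byte limit) and the SIZE on the DATASET statement are the knobs that may deserve another look once the block size moves to 4K or a multiple of it.

    *  Illustrative HDAM/OSAM fragment; names and numbers are invented
    DBD      NAME=CUSTDBD,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,2,100,4000)
    DATASET  DD1=CUSTDD1,SIZE=4096,SCAN=3
    SEGM     NAME=CUSTROOT,PARENT=0,BYTES=200
    DBDGEN
    FINISH
    END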

[00:36:13] – Dennis Eichelberger (Presenter)
And what are we going to look for once we allocate it? Database buffer usage; that’s the DC Monitor. And this means that once it’s up and running, it’s the DC Monitor or, after the fact, IMS Performance Analyzer. There are a couple of other tools out there that are also available to do similar functions. And take a look at how that database buffer pool is used. Now, why do we bring in database buffers here? Note my first example. My first example had a 2K block size. So clearly, I had to have 2K buffers, or larger, but I probably did have 2K buffers so I could assign them to the particular buffer pool that the database was using. Now that I’ve allocated it as a linear dataset at 4K, I’m going to be using 4K buffers for it. That means all those 2K buffers that database just had are no longer in use. So we need to reanalyze the buffer usage at this point and take a look at what reallocations may be needed, if any, to increase buffers at 4K and maybe even decrease them at 2K. And if no one is using 2K at this point, we could roll them up into 4K and increase that pool appropriately.
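
A hedged sketch of what that buffer reassessment can look like in the DFSVSMxx member: the subpool counts here are invented, only the first two IOBF positional operands (buffer length and number of buffers) are shown, and your own pool definitions will be more elaborate. The first line is the 2K OSAM subpool that may now sit largely idle; the second is the 4K subpool you would grow for the new LDS block size.

    IOBF=(2048,20)
    IOBF=(4096,200)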

[00:37:33] – Dennis Eichelberger (Presenter)
The other thing we might want to look at is the database usage itself. Are we getting longer RAP chains? Are we getting longer twin chains? A pointer checker would be appropriate to see if that’s the case. And we may need to readjust the RAPs, and perhaps even some other information in the DBD definition, to get a more usable database without long chains, so that access does not get locked up going down a chain. And then space monitoring. This is actually more informational, because we will probably have a larger dataset at this point with the LDS. So we may not overrun it, but we might want to take a look at whether we have hotspots where we’re going after particular segments within the database. So again, this is why we do this in testing, because the DBD changes are probably going to require an unload and reload of that database, a reorganization, to pick up the new DBD information, and that’s an outage to the user. So, recommendation: perform and test any of those adjustments in a test environment, and that’s going to reduce any potential outage to your end customers.

[00:38:55] – Dennis Eichelberger (Presenter)
So, general implementation notes. General performance can be affected because of chaining. It may impact buffer usage, because if the buffers are not allocated to an increased level, we may end up impacting locking time by holding a buffer too long. The buffer gets stuck and it can’t be reused until that lock is released. If we have a long twin chain, that just exacerbates the performance issue for the database. So, in essence, we may allocate it as a VSAM extended format linear dataset. Now, the PTFs I pointed out just say the function is available; you actually have to allocate the dataset in order for this to happen. Once we do, we get 4K CI sizes and multiples, high performance FICON (zHPF) capability, zHyperWrite capability to reduce the mirroring latency, and potentially dataset level encryption. Now, remember, buffers are not encrypted in memory when we’re using dataset level encryption. IMS does use Media Manager here and offloads some of this processing to the zIIP, so there’s a slight improvement there. Now, we just talked about allocating it as linear. Let’s add encryption on top of that. Sometimes people like going one step at a time.

[00:40:28] – Dennis Eichelberger (Presenter)
They can understand linear, make sure that works, and then add the encryption on top of it. When we encrypt something, we’re going to encrypt it using a key label. That key label is the name of the key. You don’t actually get to see the key, but you do get to see the key label. And once you put that key label into the allocation parameters, then the dataset gets encrypted. Now, this can also be added by JCL, dynamic allocation, or TSO, or IDCAMS with its new KEYLABEL parameter. And again, the SMS data class, which tends to be where everyone is moving to. Data is encrypted or decrypted via the access methods. The buffers remain clear, but if I write it out, it gets encrypted; if I read it in, it gets decrypted. Access to this key label is controlled through security access permissions. Programs accessing these keys must also have the authority to access the dataset. So now there are two authorities involved, one for the dataset and one for the key label. Sometimes they will be put together, but I’m going to leave that up to your security people to take care of, which is good.
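
For the JCL route, a minimal sketch of a DD statement that both allocates the linear dataset through SMS and supplies the key label. The dataset name, classes, and space are illustrative, the data class is assumed to provide extended format, and DSE.LABEL is the example label from this talk.

    //* Allocate the new OSAM LDS through SMS and name its key label
    //* (illustrative names; data class assumed to give extended format)
    //NEWDB    DD DSN=IMSP.CUSTDB.DB01,DISP=(NEW,CATLG),
    //            RECORG=LS,DATACLAS=DCEXTEF,STORCLAS=SCIMS,
    //            SPACE=(CYL,(500,50)),
    //            DSKEYLBL='DSE.LABEL'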

[00:42:07] – Dennis Eichelberger (Presenter)
They’ll take care of it. They’re already doing the access to the dataset; now they just add a different grouping for the key labels. Here’s how I’m going to allocate it. It’s my same IDCAMS cluster. I have now added KEYLABEL. This is a key label that’s going to be given to me. I’m not going to create it myself. Someone from security, someone from auditing, someone from encryption will provide it. They were very careful to make sure that I did not get too confused and gave me a key label based on my initials, DSE label. And that is going to enable encryption for this dataset. Now, remember that that just says encryption is enabled. Nothing’s encrypted yet, because I haven’t written anything to the dataset. Once I do write to the dataset, it will become encrypted. If you do a LISTCAT, you’ll see in the encryption data that dataset encryption is YES, meaning it will be encrypted. So here are some basic steps. They may not be exactly what you’re using or what you’re going to use, but this is a baseline. It gives you an idea, and you can add or subtract things as you need.
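
Putting that together, here is the earlier IDCAMS sketch with the key label added (same caveats: names are illustrative, and the label itself comes from your security team, not from you):

      /* Same illustrative definition as before, now with the key label */
      DEFINE CLUSTER(                      -
             NAME(IMSP.CUSTDB.DB01)        -
             LINEAR                        -
             CONTROLINTERVALSIZE(4096)     -
             SHAREOPTIONS(3 3)             -
             DATACLASS(DCEXTEF)            -
             STORAGECLASS(SCIMS)           -
             CYLINDERS(500 50)             -
             KEYLABEL(DSE.LABEL))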

[00:43:23] – Dennis Eichelberger (Presenter)
Prepare an encryption key and key label. Security, RACF, the auditors, they’ll take care of that, and they’ll give you the name. Then we’re going to unload the database dataset. We’re going to define the new database dataset that it’s going to be written back to, reloaded to, as a VSAM linear with the key label definition that was supplied by security. Now, if you have any updates to your DBDs, now is a good time to do those. Implement the updated DBD, regen it (it’s probably pre-genned already), and make your ACB the active one. Now, DBRC does need an update if you are using the skeleton JCL supplied by IMS, or if you have a product that uses the RECON to generate its JCL. Because the allocation parameters are changing from OSAM to LDS, there is a new keyword in the database dataset record in the RECON that tells it the dataset is now allocated as an LDS instead of standard OSAM. That’s a change, and it’s highly recommended, because if you’re going to test a new tool later, you’d want this information, since the tool probably reads the RECON in order to make those allocations.

[00:44:50] – Dennis Eichelberger (Presenter)
Now, once it’s done, it’s likely you have a PDS out there with all this information, and that’s going to remain static. But initially, if you’re using the skeleton JCL, it reads DBRC. Now, another thing that came up, and this was mostly one user’s implementation, is that they had a different high-level qualifier between OSAM and VSAM. And that meant all of the database datasets registered in DBRC needed to be updated with a new high-level qualifier. That was their choice. It’s up to you. Whether or not you have that kind of high-level qualifier convention, it might be a time to consider making a change for consistency. And then we’re going to load the database dataset. So there are some steps in between here. The load job must have security access to the key label and the dataset. Otherwise, there could be difficulties, which I will address in a few slides. So does everyone have that? Are we all set on that one? Okay, unless someone yells, I’m just going to keep going. So let’s talk about the last bullet point on the previous slide. You’ve got to have access to the dataset and the key label, both. And this means anything that’s under RACF: any job, the IMS tasks, the DLI and DBB batch jobs, File Manager, and that’s actually sometimes under a TSO ID.

[00:46:29] – Dennis Eichelberger (Presenter)
So the TSO ID needs access to the key label also. Batch backout, log analysis (because the records are going to be encrypted in there), forward recovery, pointer analysis, and any other tools that you’ve got, or user programs. Now, that’s just a small list. There may be more. You’re going to watch for IEC161I messages with new condition and reason codes and explanations. So, by policy, we’ve got two steps here. This is for you IMS folks who are not RACF or ACF2 people, to give you an idea of what they are going to be doing to secure this. Likely, they’re going to create a dataset profile, say for the project A data, with user access of none. And then they’re going to allow certain people to use it. In this case, the security admin creates the project A profile with user access none, and then allows certain access to certain people. Alice can read and update the dataset; she has UPDATE, so she’s allowed to read and see what’s inside the dataset. Eve, who has ALTER access, can read, update, delete, rename, move, or scratch the dataset. And Bob has READ access; he can only read it. So we’ve got the dataset level taken care of here via security access.

[00:48:11] – Dennis Eichelberger (Presenter)
Now, when we move on to adding encryption with the key label: any user who needs access to the dataset content in the clear, meaning they get to read the data, must have access to the key label also. Now we go back: the security admin defines the key label profile, in the CSFKEYS class, with user access of none. That means no one can use it at this point. He creates the key label profile with none, and then allows certain users access to that key label. Alice and Bob have access to the key label. Bob had read before, so now he can read the data. Alice had access before, so she can read it. Now, that’s good; they’re supposed to be able to read. However, Eve has no access to the key label, but she can still move the dataset around. She’ll never see what’s inside it. For instance, if a DASD string goes out of service and they have to move the dataset somewhere, Eve, as a storage administrator, can come in and move it. She’ll never see what’s inside it. So that’s secured from her. The data owners can still see it, and it doesn’t matter where it is for them.
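
A hedged sketch of the two layers of protection Dennis walks through, using the Alice, Bob, and Eve example: a dataset profile for the project data, then a CSFKEYS profile for the key label. The profile names are illustrative, and the conditional WHEN clause on the key label permits is the pattern commonly shown for dataset encryption; your security team's standards may differ.

    ADDSD   'PROJECTA.DATA.*' UACC(NONE)
    PERMIT  'PROJECTA.DATA.*' ID(ALICE) ACCESS(UPDATE)
    PERMIT  'PROJECTA.DATA.*' ID(EVE)   ACCESS(ALTER)
    PERMIT  'PROJECTA.DATA.*' ID(BOB)   ACCESS(READ)

    RDEFINE CSFKEYS DSE.LABEL UACC(NONE)
    PERMIT  DSE.LABEL CLASS(CSFKEYS) ID(ALICE) ACCESS(READ) +
            WHEN(CRITERIA(SMS(DSENCRYPTION)))
    PERMIT  DSE.LABEL CLASS(CSFKEYS) ID(BOB)   ACCESS(READ) +
            WHEN(CRITERIA(SMS(DSENCRYPTION)))
    SETROPTS RACLIST(CSFKEYS) REFRESH

With something like that in place, Alice and Bob see the data in the clear, while Eve can still manage and move the dataset without ever being able to read its contents.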

[00:49:29] – Dennis Eichelberger (Presenter)
So these are the restrictions that we can apply, dataset-wise and key-label-wise, in combination to make the data more secure. Now, that’s mostly for your information, so that if a RACF person or ACF2 person starts explaining what they need to do, you’ll have an idea and not be totally lost in what they’re saying. Now, what happens if you don’t have that access? This is somewhat entertaining. We talked about the IEC161I messages. Yes, if you are as old as I am and remember hardcopy manuals and Messages and Codes, the IEC161I section was approximately three quarters of an inch thick just by itself. IBM, in its infinite wisdom, decided that was a really good message to put all the encryption codes into as well. So now, if it were a hard copy, it would probably be an inch and a quarter thick. However, it’s online and much easier to search without carrying around six pounds of paper. So this is actually a true story. I got a call on this, and the call was, My reload doesn’t work. And I said, What reload are you using? She said, The IMS reload, DFSURGL0. Okay, so the last time that I heard that that particular utility didn’t work was in the middle ’70s.

[00:51:08] – Dennis Eichelberger (Presenter)
Again, I’m dating myself. However, it’s true: that particular utility has been pretty ironclad for a long time. So I said, Could you send me the JCL? Send me the job log of what happened? So it came across, and the first thing that jumps out is Insufficient Access Authority. Now, this person stated that she was able to unload and reload that dataset the previous week, and had done so as the baseline for testing some timings for us. That made sense. Now she gets Insufficient Access Authority: access intent read, access allowed none. All right. So when this happened, below that, you see a couple of IEC161I messages with some reason codes for reload-type jobs and so forth. So I searched for this message, as any good problem solver should do, and I could not find it in the IBM Knowledge Center. That was a bit disconcerting because, hey, they’re issuing this message, it should be out there, right? However, I did find it through Google. Essentially what it said was: yes, you do have access to the dataset; you do not have access to the key label. That was easily corrected by their security people.

[00:52:34] – Dennis Eichelberger (Presenter)
So the next time it ran, it ran successfully. But that indicated that if you’ve not seen this before, look at the messages carefully to see what’s going on. And you can probably solve it in most cases just by going back to your security person and saying, Hey, this didn’t quite work as it was suggested. All right, that’s the first anecdote. Compress first, encrypt second. Why? All right. Compression relies on a consistent set of characters, often repeating characters. If I encrypt first, the data becomes random. I could have 18 blanks turned into 18 different characters, and it will not compress anymore. So if I encrypt first, my compression fails. I sat next to a guy trying to resolve one of those, and it finally worked out. But that was the real key: compress first, encrypt second. So we said repeating characters, words, or snippets are compressible. Once they are compressed, then encryption can run and create your random characters. Essentially, encrypting first will nullify any attempt to compress afterwards. I wanted to go back here for just one thing. If you are already using compression on some of your databases, there is no reason not to encrypt them as well if you would like.

[00:54:11] – Dennis Eichelberger (Presenter)
And the Tools Base for IMS has a little product in there that allows you to create a driver that will include both compression and encryption in the correct order. Just as an aside, I’m sure someone else does it, someone else has figured out how to do it, but I’m just throwing out that that’s an easy way to do it. Pros and cons. All right, con: yes, OSAM access as a VSAM LDS may use more CPU than native OSAM. We’re in the low single digits once we’re on the z16. It’s not noticeable for an online transaction and negligible on any long-running BMP or batch job, but it is significantly noticeable in your flower boxes. We do now have OSAM encryptable. OSAM buffers in memory are still in the clear, as are our VSAM KSDSs and ESDSs. So using dataset level encryption still keeps the memory open for people to see. We can still use sequential buffering for VSAM LDSs. We can also add in the improvements for HPF and zHyperWrite. And yes, we do have numbers out there, some from customers, showing that a VSAM LDS still outperforms native VSAM KSDSs and ESDSs, performance-wise, whether encryption is turned on or not.

[00:55:44] – Dennis Eichelberger (Presenter)
The key authority may require further administration, and that’s okay, because we’re going to let the SAF people do it. And the big pro is that it’s application independent. Applications don’t need to do anything to make this work. The only thing happening in the DBA and system programmer world is that we must be aware of the IMS performance impacts in buffering and allocations of the datasets. But if we fold that into a regular process of maintenance, unload and reload for reorganization, that’s a one-time issue. And then you go on just as you did before. Okay. Before I move on, any questions? Okay. I want to point out an anomaly. This is an anomaly; it is not the typical case. I had someone send me an email the other day that said, You said all of this will be increased by this much. I said, No, this was the anomaly, and went back and explained it. Now they’re happy. We’re happier. So this is a customer case. In their general world, they were converting from Guardium encryption at the database level to dataset level encryption. Seems like a good thing to do, because Guardium encryption is out of marketing and intended to be out of service relatively soon.

[00:57:12] – Dennis Eichelberger (Presenter)
So this was a good move for them, to maintain subscription service and still have the encryption levels that they wanted. So let’s take a look at their customer database. It wasn’t a really large database; well, okay, larger than some. It was a PHIDAM, which means it’s partitioned, a HALDB, with five partitions. After converting that OSAM to LDS and implementing dataset level encryption, they came back and said, We observed a 28% increase in CPU and a 25% increase in BMP run time. Now, this was just running a BMP doing reads. So this became an issue, because people looked through our baselines at the labs, which did not show anything near that, and it was much higher than expected. So they were essentially saying, We got a performance impact, we’re not happy. This actually turned interesting. It went through level two, went through development, and then I got called. So, stepping back to what they were trying to do as their business case and the steps they took to get from one to the other, we looked at some things. Guardium encryption is at the database level, so it’s actually inserted as a compression-exit routine on the SEGM statements of the DBD.

[00:58:42] – Dennis Eichelberger (Presenter)
It’s okay. That’s how it’s supposed to be. Some segments could be encrypted, others not. And in one of the talks we had, they said not all of those segments were encrypted, just the sensitive ones. Okay, that’s fine. That’s how they implemented the product. Now, with a comp routine, compression and encryption are usually defined for the data only. Only the data portion of the segment is encrypted. No key fields or indexes become encrypted. So this became one of those flashing lights. Then we took a look at the Guardium run times versus the dataset level encryption runs. Guardium had a run time of 27 minutes. SMS with LDSs had 37 minutes. All right, that’s a significant increase. And then the EXCPs went from 491,000 to 2.2 million. That was more than a fourfold increase in EXCPs. So, yeah, we saw this and said, whoa, that is not right. Let’s take a look at what was really going on here. So here’s a picture of a PHIDAM with five partitions. When we do it at a partition level, we have part one data, part one index, part two data, part two index, and so forth and so on.

[01:00:21] – Dennis Eichelberger (Presenter)
Plus, in a HALDB, you’re going to have an ILDS. Now, if we’re not encrypting anything, that works just fine. Not a problem. Now, my middle column shows the Guardium level encryption. Only the data portions are being encrypted. One through five are all highlighted in orange, and they get encrypted. The indexes were not encrypted, nor was the ILDS. Now, when we went to dataset level encryption, they just had SMS take everything under that high-level qualifier and put it into the pool to be encrypted. That means that now, in addition to all of my data datasets being encrypted, all of my index datasets are being encrypted, plus the ILDS. That’s more datasets being encrypted. And indexes, traditionally, are not tremendously large, so there are going to be more calls to an index per record written than to a data dataset. What that means is the time it takes me to encrypt an index record is going to be essentially the same as a data portion, even though I’m not encrypting as much in that index. Plus the ILDS, which is a really small record, and that will be called over and over as a reload is being done to rebuild the index entries.

[01:01:45] – Dennis Eichelberger (Presenter)
Now, they could have gotten away with rebuilding without the ILDS and adding the ILDS later, but that’s just shifting around where that resource gets spent. So here’s a picture of what it looks like. This is my Guardium encryption in blue. That’s via the comp routine. And I’m usually not going to do my indexes or ILDSs here. Five datasets will process encryption, even though I have 11 datasets allocated. Now, if I go to dataset level encryption, I’m now encrypting 11 datasets instead of five. We had five datasets processing encryption; now we have 11 datasets processing encryption. So that’s the anomaly. It’s not a problem. It’s a misconception that it should be less than what it was before. We increased our datasets being encrypted by 55%, and our time increase because of that was 28%, which, when we went through the numbers, was actually very appropriate. We said, Congratulations, you have a fast machine. You just weren’t looking at the total picture as this was being done. So, as an anomaly and as a caution, be aware that when you’re doing something like this, it might include more datasets than you expect.

[01:03:15] – Dennis Eichelberger (Presenter)
And it’s a good thing to know about, because someone is going to come along and say, Hey, I’ve now got 2.9 million EXCPs rather than 491,000. So this is a thing to be aware of. It’s not a problem; you just have to be aware of what’s actually going on. So, to reiterate: it’s a VSAM extended format linear dataset. You get zHyperWrite, high performance FICON, and encryption enablement. You lose very little. We still have buffering available, and it’s still faster than the KSDS and the ESDS. A couple of notes just to keep you aware: we do have a new log record, 6204, for OSAM control blocks that are listed as VSAM, and DFS0455I, which is a summary echo of the IEC161I and will include some I/O information. And there’s a lot of IMS documentation at the hyperlink I put below for condition codes, reason codes, and status codes. I didn’t put it all in here because this would go from 50 slides to a couple of hundred. But you can look it up. All right. Some considerations as we go through this. What data needs to be encrypted?

[01:04:48] – Dennis Eichelberger (Presenter)
Well, an IMS DBA would probably look and say, well, you’ve got real data in here; it likely needs to be encrypted. It’s got PCI and PII content. And that may be something that you query the application folks about, too, looking at their copybooks. They can say, here’s an SSN field, we want to protect that; here’s a medical record field, we want to protect that. So likely so, and that’s going to be to meet regulatory requirements. And I want to point out at this point that our regulatory requirements are for data at rest, not in motion yet. That’s not required. Many people are doing it in motion, and that’s great; they’re ahead of the curve. People like that. But the requirement on Z is at rest. An index? That’s a maybe. It depends upon the content. Now, if your index contains sensitive fields, like a Social Security number because you’re looking up people by that field, it might need to be encrypted to meet some requirements. There are other things around that, but that is something that you’re going to have to analyze. The ILDS, likely not. You don’t really need to encrypt that. It’s an internal IMS structure, and it just contains pointers to the other stuff.

[01:06:08] – Dennis Eichelberger (Presenter)
Now, since all of this will probably be under a similar, consistent naming convention, it will probably end up going into an SMS pool and be encrypted anyway. And that actually saves the DBAs a lot of time, and the applications a lot of time, trying to figure out what’s what. They just close their eyes and say, let’s encrypt everything. And we’re at the point where that shouldn’t be a problem for most applications. Most user transactions accessing the data will not notice it. A BMP still runs about as fast as it could. So that may be the way to go; many customers enjoy that. But when it gets down to it, it’s probably not up to us. Yeah, we’re going to end up doing the work, but security, auditing, or management are going to make the decisions and tell you whether it needs to be done. And they’ll probably give you a timeline to do it. So that’s my takeaway: here’s how you do it; someone else is going to decide when and what the timing is. Here’s my list of pros and cons. You’ve seen this before. It hasn’t changed. And thank you. So, questions?

[01:07:28] – Dennis Eichelberger (Presenter)
If you’re on mute, or not, do we need to go to chat to see things?

[01:07:34] – Amanda Hendley (Host)
You might want to check out chat for some interesting back and forth conversation about encryption on indexes and ILDSs. And there’s also a request to put your pros and cons back up.

[01:07:52] – Attendee
He covered his slide number 50. So slide 50.

[01:07:57] – Dennis Eichelberger (Presenter)
Thank you, Karen. I don’t know what number it is. This one or the next one?

[01:08:11] – Attendee
No, I was just responding to her comment. Okay. About why you would encrypt indexes and ILDSs. But slide 50 answered that. Yes. She said everyone wants to see your list; maybe the next slide. Not bad. Yeah.

[01:08:38] – Dennis Eichelberger (Presenter)
I do have some older numbers on performance available if anyone is interested, but they’re not on the current hardware. Yes, I think you’re right. Replying to the index question: yes, if you use it as a database and it has sensitive information, likely someone above you is going to say encrypt it. So, a couple of cautionary tales and a confusing anecdote about messages. And yes, I did talk to the people this happened to. They have no problem with it being out there; I’m just not using names. So thank you for attending. I do appreciate being invited and being able to do this for you. And Amanda said some of you might have been at SHARE. I didn’t see you because they put us in what I refer to as the way-back dark corner.

[01:09:39] – Dennis Eichelberger (Presenter)
I apologize for that, too.

[01:09:44] – Amanda Hendley (Host)
Well, Dennis, thank you so much. I’m going to take over your screen, if I can find it. So just a couple of last things for me. I wanted to let you know where Planet Mainframe will be in the next couple of months.

[01:10:10] – Amanda Hendley (Host)
In September, Planet Mainframe is going to have our Db2 and COBOL month. So you can check out planetmainframe.com for some themed content on those topics. And then the next time to see us in person, we’ll be at IDUG in Valencia, and we’ll be exhibiting at GSE UK this year. So I hope you’ll check that out. Our partner again for this event is BMC. If you’re interested, we also have the Db2 and CICS user groups. These are all open, available, and free for anyone here that wants to join. And then lastly, I’ll share with you how to get involved and stay connected. If you’re not subscribed to our newsletter for the user groups, do that, because in the off months we share a recap letter and announcements about our next events. So here are a few ways to connect. Honestly, you could scan these QR codes or you could just go to LinkedIn.com and find us. We’ve got a pretty robust user group on LinkedIn. I do have one other thing: our next meeting date. We are meeting October 8th, and Kevin Stuart from BMC will be our speaker. So with that, Dennis, is there anything else you need to address in the chat? I saw a comment come in. It doesn’t look like it was a question, so I think we’re all set.

[01:11:49] – Attendee
I have a question if Dennis is still on.

[01:11:55] – Dennis Eichelberger (Presenter)
I am on. I just was muted.

[01:11:58] – Attendee
Is there any reason to consider moving OSAM to VSAM linear, even if you’re not going to do encryption?

[01:12:07] – Dennis Eichelberger (Presenter)
You do gain the performance increase in mirroring if you’re doing that. The latency is reduced considerably. And the other thing is that High Performance FICON will now handle it more efficiently. If it’s just average access, you’re not going to get a benefit there unless you get into encryption and want the transfer rates from zHyperWrite or zHPF. Those are noticeable and helpful. But on the other hand, OSAM is already pretty quick. So if you’re not going to encrypt them, keep it in your back pocket in case someone says you need to.
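
For reference, and only as a rough sketch with hypothetical names, the OSAM-to-VSAM move itself is a DBD change: the access method is declared on the DBD statement, so the conversion is broadly unload, change the DBDGEN, define the VSAM ESDS datasets with IDCAMS, rebuild the ACBs, and reload:

  Before:   DBD   NAME=CUSTDB,ACCESS=(HIDAM,OSAM)
  After:    DBD   NAME=CUSTDB,ACCESS=(HIDAM,VSAM)

Once the data lives in VSAM, the data set encryption and mirroring considerations above apply to those datasets.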

[01:12:48] – Attendee
We’re doing extensive compression, but with custom dictionaries.

[01:12:55] – Dennis Eichelberger (Presenter)
Okay. As I said, people debate whether that’s thorough obfuscation or not, but you can do both, too. One does not negate the other.

[01:13:17] – Attendee
We’re using the compression to extend the life of this application because it was written in around 1974.

[01:13:26] – Dennis Eichelberger (Presenter)
Cool. That’s actually a great comment on the viability of IMS over time. It still works. You might be in a position where you want to take a look at HALDB if you’re not using that, if it’s getting that big.

[01:13:45] – Attendee
I used compression to keep from going to HALDB.

[01:13:49] – Dennis Eichelberger (Presenter)
That is an option. But we all know data grows anyway.

[01:13:59] – Attendee
The extensive use of logical relationships and secondary indexes was the reason they didn’t want to go to HALDB.

[01:14:05] – Dennis Eichelberger (Presenter)
I understand that. Yes. I have worked with many customers over the past few years in getting that taken care of and used by them. It is not an easy process, and I understand the consternation of trying to go there.

[01:14:24] – Attendee
Especially in the middle of the merger project.

[01:14:27] – Dennis Eichelberger (Presenter)
Oh, yeah. You don’t want to do it in the middle of one of those. You don’t want to do it during your peak periods, things like that. Yes. But if you do go there, reach out, and we’ll set something up. We can talk about whether it’s a viable solution or not.

[01:14:50] – Attendee
Thanks.

[01:14:52] – Dennis Eichelberger (Presenter)
No problem. No problem. Thank you for attending.

[01:14:56] – Amanda Hendley (Host)
Dennis, if you have one more minute, there’s a question about what type of compression you’re using. An HDC question.

[01:15:03] – Attendee
That was actually me, and I hope you can hear me. I was asking Jeff. I jumped through hoops to avoid going to HALDB; I use compression. I add dataset groups when I can. I was just curious what type of compression he enabled and what he thought of it.

[01:15:28] – Dennis Eichelberger (Presenter)
Oh, that was for the other user? Yes. Okay.

[01:15:32] – Attendee
I use the native hardware-assisted compression that comes with z/OS.

[01:15:40] – Attendee
So you built a dictionary and everything?

[01:15:44] – Attendee
Yeah, I built a dictionary from an unload file. I did 18 dictionaries for the 18 most important databases, and I’ve since compressed a few more databases by using one of those existing dictionaries, because it frequently works very well. You don’t always have to build a dictionary.

[01:16:03] – Attendee
Yeah. So you decided ultimately... because I’m forever doing this, trying to put the proverbial 10 pounds of stuff in a five-pound bag. So did you ultimately go to a generic dictionary? Is that what you said? Instead of... No, I stuck with the ones I developed for the most part, the 18 databases that were in a large logical relationship. Okay.

[01:16:29] – Attendee
I put a custom dictionary on each one of them. The best result is that, after increasing the amount of data by roughly three and a half times through the merger of two companies, the compressed databases were still smaller than the original data. I’ll go ahead and tell you what business we’re in. The smaller airline bought the larger airline and moved an additional 700 aircraft into databases that had contained 300 aircraft. And at the end, we were still smaller.

[01:17:16] – Dennis Eichelberger (Presenter)
I think we could have guessed that. What business?

[01:17:24] – Attendee
And yes, it’s always the smaller airline with the cash that buys the bigger airline that’s broke.

[01:17:32] – Attendee
Thanks. I’m forever trying to avoid going to HALDB by doing compression and dataset groups. Just recently, I’ve started employing more HDC than the character-based compression routines, stuff like that. It seems to work pretty well.

[01:17:54] – Attendee
Well, I chose to do this. I’ve done it at one place before, and it’s basically the free compression that comes with z/OS and IMS, and you don’t have to buy a tool.

[01:18:05] – Attendee
You just have to build a dictionary. That was the only extra step. It’s a very quick build it, link it, use it.

[01:18:12] – Dennis Eichelberger (Presenter)
Yeah.
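
To tie that back to the DBD side, here is a minimal sketch (the segment and exit names are invented): once the dictionary built from the unload file has been link-edited into a segment edit/compression exit routine, the DBD simply names that routine on the SEGM statement, and a DBDGEN plus reload puts it into effect:

           SEGM  NAME=CUSTSEG,PARENT=0,BYTES=900,COMPRTN=(CUSTCMP1,DATA)

Because the compression happens at the segment level and data set encryption happens underneath at the I/O level, the same database can be both compressed by the exit and encrypted on disk, which is the one-does-not-negate-the-other point from earlier.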

[01:18:13] – Attendee
All right. Thanks.

[01:18:15] – Amanda Hendley (Host)
All right. Well, Dennis, again, thank you so much. And thank you all for attending and participating in the discussion. It’ll probably be a little over a week, but we’ll get that video posted and an article out to you. Thank you.

[01:18:34] – Dennis Eichelberger (Presenter)
Thank you all.

Upcoming Virtual IMS Meetings

December 10, 2024

Virtual IMS User Group Meeting

Understanding What IMS Applications Can Do and How You Can Benefit From REST APIs

Suzette Wendler
IBM

Register Here

February 11, 2025

Virtual IMS User Group Meeting

IMS Catalog implementation using Ansible Playbooks
Sahil Gupta and Santosh Belgaonkar
BMC

March 8, 2024

Virtual IMS User Group Meeting

Dennis Eichelberger
IBM