Virtual Db2 User Group | July 2023
Db2 12+/13 for z/OS Database Design and Application Performance: Features and Usage
Susan Lawson
Db2 z/OS Specialist
YL&A
Read the Transcription
[00:00:00] – Amanda Hendley – Host
Thanks so much for joining us for today’s user group session. This is the virtual Db2 user group presented by iTech-Ed. My name is Amanda Hendley, and I am Managing Editor over at planetmainframe.com, and I’ve been helping Trevor out with these user groups for about a month and a half now while he takes care of some things at home. And today he is letting me take over and present Susan Lawson to you. So I’m excited to have you here. We have a pretty straightforward agenda if you’ve been on these sessions before.
[00:00:41] – Amanda Hendley – Host
(Slide 2- Agenda) We’re going to have our introductions today. We’re going to do a presentation. There’ll be some time for Q&A. We’ll talk about news and articles and talk about things that are coming soon. So you’re welcome to drop your questions in the chat function during the presentation as they come to mind. We’ll get to them once the presentation is over with. So don’t be shy. Be sure to put your questions in the chat.
[00:01:17] – Amanda Hendley – Host
(Slide 3 – Partners) I want to thank our partners. IntelliMagic is one of our sponsors, and I encourage you to check out all their Db2 resources on their blog. And Planet Mainframe – my organization – is also a partner. We’re a community of mainframe content, so if you are interested in reading about Db2 and CICS, we’ve got the articles over at www.planetmainframe.com. I’ll plug it again: since we are a community publication, we’re always looking for contributors. So if you’ve got a great success story you would like to share, we’d love to read it. And with that – pretty straightforward introductions, right? – we’re going to move into our session.
[00:02:06] – Amanda Hendley – Host
(Slide 4 – Db2 12+ and 13 for z/OS Database Design and Application Performance Features and Usage) So let me introduce Susan. Susan Lawson is an internationally recognized consultant and lecturer with a background in Db2 z/OS system and database administration. She works with several large clients on design, development, and implementation of some of the most complex Db2 databases and applications. She performs performance and availability audits and health checks, and you’ve probably run into Susan and her work before because she’s a fantastic speaker and presenter and very active in the community. And she’s an IBM gold consultant and IBM champion. So, Susan, thank you so much for joining us today.
[00:02:49] – Susan Lawson – Db2 z/OS Specialist – YL&A
(Slide 5 – Db2 12+ and 13 for z/OS Database Design and Application Performance Features and Usage) Thank you very much, Amanda. Okay, well, welcome everybody. This presentation is one that’s been evolving. You may even have seen a version of it over the last few years, because I’ve been trying to keep up with all of the continuous delivery function levels since v.12 came out. And of course, now that v.13 is out, we are looking at the features of v.13, but we can’t discount all of the features that have been coming out in v.12 since 2016.
[00:04:08] – Susan Lawson
(Slide 6 – Db2 12 and 13 Function Levels and APARs) Since the announcement of v.12 in 2016, there have been ten different Function Levels and several APARs, and it’s hard to keep up, especially if you’re a day-to-day DBA or programmer. You have your own issues, and keeping up with all of these new features and functions is tough. So this presentation is designed to do just that. I have made it my goal in life to stay up to date on every single APAR and Function Level that comes out and what’s delivered in it that a DBA or an application person can utilize. So not everything is in here, but everything that we can get our hands around is. On the right-hand side of the slide you can see the progression over the years of when each Function Level came out and what’s coming. Right now we’re in 2023, and of course v.13 came out last year, but there’s already been one Function Level delivered – 503, delivered this past February, I believe – and 504 is going to be delivered in October.
[00:05:33] – Susan Lawson
(Slide 7 – Performance and Availability “Opportunities”) So what we’re going to look at is all the different things – the “opportunities,” in other words – that some of these features bring to us. Because with every new release of Db2, not just every new Function Level, there are things you can take advantage of and things you want to stay away from. And not everyone’s going to see the same cost savings or the same availability features. Not everyone’s going to realize the same functionality or the same advantages. You have to do your own testing. You have to look at your environment, what looks right for you, what makes sense for what you’re trying to achieve.
[00:06:12] – Susan Lawson
(Slide 8 – Db2 z/OS 12 Database Design Performance Features and Usage) And hopefully I can give you some insight into some of these features and where they can be used and where you need to actually be careful with some of them.
[00:06:23] – Susan Lawson
(Slide 9 – Relative Page Number PBR Tablespaces) So, first of all, let’s start with some of the features that came out in v.12 that were really key to the progression of v.12, the Function Levels, and on into v.13. The first one is relative page numbering for PBR tablespaces. This was a game changer, because in the past we ran into limitations when building large tablespaces. Right? We want to go over 256 partitions, we want to go into terabytes of data, we want to have large dataset sizes. And there were limitations holding us back, because of absolute page numbering. Well, those barriers were broken in v.12 when they introduced relative page numbering. That means the number of partitions you can have no longer depends on the dataset size (the DSSIZE) or the page size.
[00:07:25] – Susan Lawson
So that means we can comfortably grow beyond 256 partitions and so forth. That was a big change, but we’re not here to talk about that change specifically. In v.13, this relative page numbering becomes the default. There’s of course a zparm, PAGESET_PAGENUM, that will now default to RELATIVE. And keep in mind, if you’re not already defining your datasets with extended addressability enabled, you need to be, because PBR with relative page numbering requires extended addressability on your datasets.
[00:08:22] – Susan Lawson
(Slide 10 – PBR Table Space Creation – RPN Default) So here’s just a quick look at some of the defaults that have changed in v.13. This is a look at an install panel, if you’ve never seen one. This is where all of the defaults for your zparms are set. The one I’m mentioning right now is number three down there (PAGE SET PAGE NUMBERING), which is now RELATIVE by default when you install v.13 – it used, of course, to be ABSOLUTE. There are a number of changes in v.12 and v.13 where defaults are getting set differently. No longer are they being set at the lowest common denominator or the safest value. They’re being set more for the ability to exploit new features, the ability to grow, the ability to have better performance. There’s been a lot of that over the last two releases, and I will mention more of it as we go forward because there were more changes in v.13. But this one here, again, is something to be aware of: when you create a tablespace in v.13, your default SEGSIZE is going to be 32, and it’s going to be partitioned by range using relative page numbering if you’re not specifying PBG. So that’s just something to be aware of.
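As an illustration (not from the slides), here is a minimal sketch of a PBR table space created under the v.13 defaults; the database, table space, and sizing values are hypothetical, and PAGENUM RELATIVE is written out explicitly even though the zparm default now supplies it:

CREATE TABLESPACE TS_ORDERS IN DB_SALES
  NUMPARTS 500          -- more than 254 partitions is fine with relative page numbering
  DSSIZE 64 G           -- partition size no longer constrains the partition count
  PAGENUM RELATIVE      -- the v.13 default
  SEGSIZE 32            -- the v.13 default SEGSIZE
  BUFFERPOOL BP2;

The underlying datasets must be defined with extended addressability for this to work.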
[00:09:45] – Susan Lawson
(Slide 11 – Improved INSERT Performance in Data Sharing with RPN) Now, talking about performance with data sharing and relative page numbering for PBR datasets: this is not really new. It has to do with false contention in data sharing when using row-level locking; it just becomes more prevalent with relative page numbering. It all comes down to the fact that, when you’re using row-level locking, even though you’re taking an L-lock at the row level, if you have concurrent access to that page across Db2s in data sharing, you have to keep in mind that even though each application on a different Db2 subsystem is going after a different row, if those rows are on the same page, page-level P-locking is still there, and you’re still going to have contention on the page P-lock. That causes what they call false contention, which is not a big deal – it’s just overhead. And due to the way the hash values are generated, it happens more with relative page numbering (RPN) than it did with absolute page numbering.
[00:11:08] – Susan Lawson
So in v.13, the bottom line is that this is handled behind the scenes – it’s nothing we have to be overly concerned about because Db2 is taking care of it. It introduces a new hash algorithm for page P-locks, which will probably help us even if we’re not utilizing RPN at this time. The hashing algorithm has never been perfect, so we experienced false contention before RPN too, but this new hash algorithm is going to help across the board so that we don’t have so much false contention. Just keep in mind that if you are using PBR RPN and the tablespace was created prior to v.13, you will need to do a LOAD REPLACE or REORG in order to enable this better algorithm for the hash values.
[00:12:16] – Susan Lawson
(Slide 12 – PBG to PBR Conversion) Now this is a big one. We knew this was coming, right? We’ve had partition-by-growth (PBG) and partition-by-range (PBR) tablespace types for a while. And in my opinion, I still don’t like partition-by-growth. People use them a lot because it’s easy. In my opinion, again, it’s lazy, because you don’t have to have partition range values defined for good partitioning. You simply let Db2 partition and grow your data, and you give no thought to your limit keys. That, in my opinion, can lead to some issues, and that’s what some people have found. So now we’re wanting to say: okay, I need to convert to PBR. I want to give some thought to my partitioning ranges, be able to do better distribution of my data values, have better sizing opportunities, have better free space allocations according to how my data is distributed. Well, how do you convert from PBG to PBR? Prior to v.13 it’s a multi-step process, and of course it comes with an outage because you have to UNLOAD, LOAD, and so forth. Well, in v.13, in Function Level 500, which is fully functioning v.13, you can convert a PBG to PBR with minimal impact.
[00:13:43] – Susan Lawson
There is new DDL that allows for this. You have a new ALTER PARTITIONING TO PARTITION BY RANGE clause. Now you can take your PBG tablespace and start defining limit keys, or partitions: you can specify the number of partitions you want and the values of the partition key ranges that you want, and Db2 will do this through the ALTER TABLE statement. The conversion is immediate if the datasets have not been created yet; if your datasets are already defined, which they probably are, then of course this is a pending change that will be materialized by a REORG. So now you’re converting to a PBR, and of course this will be a tablespace using relative page numbering (RPN) because, remember, that is now the default in v.13.
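As a sketch of what the conversion DDL can look like (the table, column, and limit-key values here are hypothetical), the ALTER is a pending change on a populated table, followed by the REORG that materializes it:

ALTER TABLE PRODSCHM.ORDER_HIST
  ALTER PARTITIONING TO PARTITION BY RANGE (ORDER_DATE)
   (PARTITION 1 ENDING AT ('2021-12-31'),
    PARTITION 2 ENDING AT ('2022-12-31'),
    PARTITION 3 ENDING AT ('2023-12-31'),
    PARTITION 4 ENDING AT (MAXVALUE));
-- pending change on an existing PBG; materialize it with a REORG of the table space, for example:
-- REORG TABLESPACE DB_SALES.TS_ORDHIST SHRLEVEL CHANGE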
[00:14:37] – Susan Lawson
(Slide 13 – Remove Stack Limits for Pending ALTERs) Another really cool feature that actually came out in an APAR for v.13 last year (around December) allows us to stack pending changes. So let’s say we’re converting from partition-by-growth to PBR and we want to do some other ALTERs. One that comes to mind obviously is DSSIZE or BUFFERPOOL, because now I have control over the size of my partitions and I also have control over where my tablespace is going to go, and maybe I want to make some changes during this PBR conversion. Before, you would have had to do that in two steps: you would issue those ALTERs, but then you’d have to run two REORGs to actually materialize those changes. And that can be a lot of work, especially if you’re dealing with large tablespaces. So now this removes the stacking limits for those conversions, and you can stack certain changes. As you can see there on the right-hand side, I’ve listed the changes that are supported when you are doing a conversion from PBG to PBR. And then, as I was saying before, one REORG will materialize them all.
[00:15:55] – Susan Lawson
(Slide 14 – Max Partitions 254 Default Change) In v.13, another limit was lifted – or actually a default was changed, I should say. I mentioned early on that the first thing we talked about today was the limitations prior to relative page numbering: if you had a 4K page size and a 64G DSSIZE, you could only grow to 256 partitions, and then you had problems. Then things changed in v.12, where we could have different DSSIZEs and page sizes and could start to grow, but we could have problems if we went beyond 254 partitions. At 256, we got into problems if our datasets weren’t defined with extended addressability and we wanted to go beyond a 4G DSSIZE. So in v.13, the defaults change again, just to keep some consistency in our lives. That’s really all this feature is about. It’s simply that when you have MAXPARTITIONS 256 specified, your default DSSIZE will now vary from 4G to 32G depending on your page size. So again, a default that’s changing – something to be aware of when you’re creating larger tablespaces.
[00:17:36] – Susan Lawson
(Slide 15 – Increased number of open Datasets) DSMAX is what we set to limit the number of open datasets we can have, and it has been substantially increasing over the last few releases – in v.11 it went from 100,000 to 200,000. And it is increasing again. The reason it can be increased is that we’re able to use more areas of memory to have these open datasets, to have more datasets open concurrently. But the problem is: do you have the memory to support it? This has been a problem even before v.13. Even though I can specify 100,000 or 200,000 open datasets, I may not be able to support it, so a lot of people are still keeping those numbers low. The problem is, when you keep it low and you want to have a lot of open datasets, once you get within 3% of DSMAX, regardless of your close option (CLOSE YES or CLOSE NO), Db2 will start closing your datasets. Okay? So you can still experience some thrashing of opening and closing for highly active datasets supporting your applications. And the problem we’re seeing more and more today is that as we’re getting into all these PBG and PBR datasets, and we’re having larger objects, larger tablespaces, larger indexes, we need more open datasets than ever to support these applications. And we’re still finding we don’t quite have the memory, don’t quite have the support we need, for this large number of open datasets.
[00:19:29] – Susan Lawson
Well, v.13 starts to move some of the control blocks – the memory needed for those open datasets – above the bar. Before, it was still using areas of memory below the bar. Now some of those control blocks have been moved above the bar, opening up the opportunity to have more open datasets. At the same time, DSMAX also increased from 200,000 to 400,000. So Db2 has this extra room now that it can utilize to have more concurrently open datasets. Later on today, we will also see that the same area can be used for other activities, allowing for more concurrent threads, and I’ll explain that later. But this allows us to increase that open dataset limit again, provided we have the memory to support it, we’re on z/OS 2.5, and Db2 has moved those control blocks above the bar.
[00:20:36] – Susan Lawson
(Slide 16 – Improved DATA CAPTURE) Another thing we’ve seen a lot of in the last few years is DATA CAPTURE – a lot of QREP-type or IIDR processing, where we’re replicating changes from our Db2 tables for whatever reason (it might just be to populate a warehouse, it might be to support a migration effort), all sorts of reasons. The problem is, when you want to make changes to DATA CAPTURE – maybe you want to turn it off or turn it on – there are various reasons why you would want to stop your data changes being captured and then turn it back on. The problem with that is it wasn’t very dynamic. You had to quiesce static packages and cached dynamic statements. In a high-availability environment, that’s an issue, right? Anything that causes quiescing of packages and statements can be problematic in a high-availability environment. So in Db2 v.13, DATA CAPTURE no longer waits: it’s no longer going to quiesce your packages and your cached statements. The DDL can run successfully even if you’ve got static or dynamic DML running against the same table that you’re altering. So this allows us to ALTER DATA CAPTURE on and off much more easily and in a less invasive manner.
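For illustration (the table name is made up), the ALTER itself is a one-liner in each direction:

ALTER TABLE PRODSCHM.CUSTOMER DATA CAPTURE CHANGES;  -- start capturing changes for replication
ALTER TABLE PRODSCHM.CUSTOMER DATA CAPTURE NONE;     -- turn capture back off

In v.13 these statements no longer have to wait to quiesce static packages or cached dynamic statements.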
[00:22:16] – Susan Lawson
(Slide 17 – Longer Column Names) We’ve seen this over the years, where Db2 is trying to keep up with other platforms, because other platforms have had longer table names and longer column names forever. That tends to be a problem when we’re trying to be compatible with other platforms, even other Db2s (Db2 on Linux and so forth). So now we’re able to support longer column names: in v.12 you are limited to 30 bytes of EBCDIC, and in v.13 you can support up to 128 bytes. There’s a new zparm – off by default in v.13 – but if you turn it on, you can have column names longer than 30 bytes, up to 128 bytes. You can use them in your CREATEs of tables, indexes and views, in your INSERTs, in any of your DML and so forth – pretty much anywhere you could use a column name. However, that being said, this is still mainframe Db2. We still have our own structures, our own quirks that other platforms don’t, one of them being the SQLDA. If you are using the SQLDA with something like SPUFI, QMF, DSNTEP1, DSNTIAUL, or DCLGEN, none of those will be able to support a column name larger than 30 bytes – they will only be able to recognize and return a column name up to 30 bytes. So be aware of that. It could create some limitations for you if you’re trying to use column names greater than 30 bytes.
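A small illustrative example (table and column names are made up) of a column name longer than 30 bytes, which is allowed once the new zparm is turned on:

CREATE TABLE PRODSCHM.CLAIM_EVENTS
  (CLAIM_ID                                  INTEGER NOT NULL,
   CLAIM_SUBMISSION_ORIGINAL_RECEIPT_CHANNEL VARCHAR(20),   -- 41-byte column name, now legal up to 128 bytes
   CLAIM_STATUS                              CHAR(2));

Remember that SPUFI, QMF, DSNTIAUL, DCLGEN and other SQLDA-based tools will still only recognize and return column names up to 30 bytes.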
[00:24:12] – Susan Lawson
(Slide 18 – Non-Deterministic Expressions for Column Auditing) This is a cool feature that actually came out in Function Level 503 of v.12 – I believe this was right around 2019/2020 – and you can actually retrofit it all the way back to v.11. This is the ability to track who made a change to our data, when, and what kind of change it was, and to do it within Db2 – within the table.
[00:24:47] – Susan Lawson
(Slide 19 – Non-Deterministic Expressions for Column Auditing) And I’m going to show you the example because that’s much easier than reading through that. Here, in this case, I have a system temporal setup where I’m creating active data as well as history data to capture changes as they occur. Well, now I can actually capture who made the change – I can record the CURRENT SQLID of the person that made the change. I can then see what kind of change it was with the data change operation: INSERT, UPDATE or DELETE. And that will be captured not just in my primary table but also in my history table. And then I also have the ability to capture the deleted row: we can say ON DELETE ADD EXTRA ROW, so the row is added to my history table if it is deleted. So I get who changed it and what type of change they made. A very nice feature you can use. It’s very helpful with system temporal setups because it helps you keep an even more detailed audit trail of what’s changed and, in this case, now also what’s been deleted.
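A sketch of the DDL involved, assuming the PRODSCHM.POLICY table already has its SYSTEM_TIME period columns defined and a matching PRODSCHM.POLICY_HIST history table exists (all names here are hypothetical, and the same generated columns would also need to exist in the history table):

ALTER TABLE PRODSCHM.POLICY
  ADD COLUMN CHANGED_BY VARCHAR(128)
      GENERATED ALWAYS AS (CURRENT SQLID);           -- who made the change
ALTER TABLE PRODSCHM.POLICY
  ADD COLUMN CHANGE_TYPE CHAR(1)
      GENERATED ALWAYS AS (DATA CHANGE OPERATION);   -- 'I', 'U' or 'D'
ALTER TABLE PRODSCHM.POLICY
  ADD VERSIONING USE HISTORY TABLE PRODSCHM.POLICY_HIST
      ON DELETE ADD EXTRA ROW;                       -- keep a history row for deletes too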
[00:26:08] – Susan Lawson
(Slide 20 – ROW CHANGE TIMESTAMP) Here’s a default that changed in v.13 Function Level 503, which came out in February. This is kind of a minor default change – I don’t know of a lot of people using this feature – but the ROW CHANGE TIMESTAMP is something we’ve been using for years to do “optimistic locking,” right? You select a row, and then before you go and do an UPDATE, you check that it hasn’t changed by comparing a timestamp, so that you are updating the data you saw. Using a ROW CHANGE TIMESTAMP is helpful for optimistic locking. The problem was that in data sharing, the default value for a ROW CHANGE TIMESTAMP column was based upon an internal mapping between the LRSN (the log record sequence number) and a timestamp, and it could be a bit unpredictable because of the way this internal mapping table worked. However, in v.13 it’s going to be more consistent, because a new default is introduced for ROW CHANGE TIMESTAMP columns. It’s going to be a consistent default in data sharing and non-data sharing – no longer based on that mapping table and the LRSN value, but a consistent, static value for the ROW CHANGE TIMESTAMP. So that’s v.13 again, Function Level 503.
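For context, a ROW CHANGE TIMESTAMP column and its optimistic-locking use look roughly like this (the table, column, and host-variable names are hypothetical):

ALTER TABLE PRODSCHM.ACCOUNT
  ADD COLUMN LAST_CHANGED TIMESTAMP NOT NULL
      GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

-- only update the row if it has not changed since it was read
UPDATE PRODSCHM.ACCOUNT A
   SET BALANCE = :newbal
 WHERE A.ACCT_ID = :acctid
   AND ROW CHANGE TIMESTAMP FOR A = :savedrct;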
[00:27:43] – Susan Lawson
(Slide 21 – Movement from Segmented Multi-Table Tablespaces to UTS) Next, an APAR in v.12 that helps with moving from a multi-table simple or segmented tablespace to a single-table UTS. As you know, we knew this day was coming: support for multi-table tablespaces is deprecated – we can no longer create them. They’re still supported, obviously, but we need to get away from them. And that’s not easy, especially if you’ve been abusing multi-table tablespaces by having hundreds, even thousands, of small tables in them. But now it can be done a lot more easily. In the past, before v.12 Function Level 508, moving to a UTS was difficult because you had to do it one table at a time; it took a while, and you had to run utilities and so forth. Depending on how many tables you had, this could be a whole weekend for you. So in v.12 FL 508, you’ve got an ALTER TABLESPACE MOVE TABLE option. It moves tables from a multi-table tablespace to a PBG tablespace: you have a pending change, then you run the REORG and Db2 moves the table to your new target, single-table tablespace. Now, I’m making it sound easier than it is. It is a multi-step process, all detailed in the Admin Guide; I’ve given you a few details in here though, don’t worry. Some considerations: the recommendation from IBM is that you move to a PBG tablespace with MAXPARTITIONS 1. Also size it accordingly – 64G should be enough, right? Because even in a segmented multi-table tablespace, the limit for the entire tablespace was 64G. And don’t forget, you’re going to have to REBIND, of course.
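A minimal sketch of one move (the database, table space, and table names are hypothetical; the target PBG table space is assumed to exist already with MAXPARTITIONS 1):

ALTER TABLESPACE DBLEGACY.TSMULTI01
  MOVE TABLE PRODSCHM.SMALL_LOOKUP
  TO TABLESPACE DBLEGACY.TSLOOKUP;
-- pending change; materialized by a REORG of the source table space, for example:
-- REORG TABLESPACE DBLEGACY.TSMULTI01 SHRLEVEL REFERENCE
-- and then REBIND the invalidated packages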
[00:29:58] – Susan Lawson
(Slide 22 – Moving Multi-Table Tablespaces to UTS – Steps) Now here are some things that I pulled out of the Admin Guide so that you’re aware and can prep for this. For a MOVE, you first need to identify the tablespaces that are holding multiple tables. If you don’t know them all off the top of your head – and you might – you can find them. The first query, on the left side of the slide, queries SYSTABLESPACE to find the tablespaces that have multiple tables. And if you want to find the tables in a multi-table tablespace, you can use the query on the right-hand side, which lets you identify both the number and the names of the tables that need to be moved.
[00:30:52] – Susan Lawson
(Slide 23 – Moving Multi-Table Tablespaces to UTS – Number of Tables) Now, how many should you move at a time? Well, that depends, right? It depends on a few things. First, you have the time it takes to actually issue the ALTER TABLESPACE MOVE TABLE statements.
[00:31:20] – Susan Lawson
So that’s cool, right, that you can do that – it’s no longer one table at a time like it was prior to this change. But you have to consider, first of all, how long it’s going to take for that ALTER to process. That’s one part of it. Then you have to consider the time it’s going to take to materialize it via REORG – that’s your second part. And then you have to worry about the time it’s going to take for the REORG SWITCH phase for all the shadow datasets. In other words, you still want to do this in a quiet time so that you’re not interfering with or waiting on other processes. But again, plan for this. The recommendation, of course, is don’t do too much at once; do this in small increments. There’s no reason why everything has to go at once, so move a few, maybe 20 or 50 at a time, but plan that out. And then don’t forget your REBINDs.
[00:32:29] – Susan Lawson
(Slide 24 – Multi-Table Tablespaces to UTS – Invalidations/REBINDs) So here are some queries that help you find out who’s affected by these types of MOVEs so that you can organize your REBINDs accordingly. And if you’re afraid of access path changes, you can of course mitigate that risk by taking some steps to plan: get ready to run RUNSTATS before you do your REBINDs, and you might want to REBIND with APREUSE(WARN) to minimize any access path changes. Again, plan for this. Don’t just jump into this blindly, because you are going to have to identify and REBIND the packages affected by the MOVE TABLE.
[00:33:14] – Susan Lawson
(Slide 25 – UTS PBR Tablespaces and Hidden ROWID Support) So, UTS PBR hidden ROWID support. This came out in v.12, actually relatively early – I want to say 2017 or 2018, with the APARs listed. We’ve had ROWID support since v.6. A ROWID is an essentially random generated value that is primarily there to support LOBs, to tie a LOB back to the base table. But in v.12, you can use this ROWID to do partitioning. Because it’s essentially random, people are using it as the partitioning key for PBR tablespaces when they don’t want to have to think about or implement a partitioning key. So if you don’t have a partitioning key, it allows you to distribute your data in a random fashion across partitions even without a key.
[00:34:18] – Susan Lawson
Well, again, I think that’s lazy. If you can define a key, define a key. I personally want to know the distribution of my data. I want to be able to do good free-space analysis. I want to be able to have partition independence, utility independence, data sharing and affinity routing opportunities, all kinds of things. But if you choose to have a random ID distribute your data, go for it. What this feature does is allow you to hide that ROWID so that your applications don’t see it. That’s fine, but still think twice about this. You might get some benefit from randomly distributed data, because maybe you won’t be hitting the same data at the same time, but the flip side of that is maintenance and clustering: your sequential processes are going to suffer, your write I/Os are going to be more expensive, your free space management is going to be more expensive, index look-aside is not going to be as efficient, and so forth. So weigh your options before you do something like this. But it is an option, and as of v.12, you can hide the ROWID.
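If you do go this route, the DDL looks roughly like this; the table and column names and the hexadecimal limit-key values are purely illustrative placeholders:

CREATE TABLE PRODSCHM.ACCT_EVENTS
  (EVENT_RID  ROWID NOT NULL GENERATED ALWAYS IMPLICITLY HIDDEN,
   EVENT_DATA VARCHAR(1000))
  PARTITION BY (EVENT_RID)
   (PARTITION 1 ENDING AT (X'3FFF'),       -- placeholder limit keys
    PARTITION 2 ENDING AT (X'7FFF'),
    PARTITION 3 ENDING AT (X'BFFF'),
    PARTITION 4 ENDING AT (MAXVALUE));

Because the ROWID is IMPLICITLY HIDDEN, a SELECT * or an INSERT without a column list never sees it.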
[00:35:31] – Susan Lawson
(Slide 26 – Faster PBG INSERT) The problem here, prior to this change, is again another issue with PBG. Partition-by-growth tablespaces, if you’ve been using them, come with a lot of their own quirks. This was one of them: when you were trying to do INSERTs, if your INSERT process could not get a lock on the partition where it wanted to do the INSERT, it would just go on to the next partition and the next one. It didn’t wait for the lock to be released, and it also did not go back and retry. So what would happen? It would get to the end of the chain and just fail. Now, in v.13, there’s a better INSERT algorithm. There’s enhanced retry logic, first of all, because these typically aren’t long locks, so there’s no reason why it shouldn’t just retry for that lock. And they’ve also now introduced bi-directional searching for your INSERT. So it introduces retry logic and bi-directional INSERT searching, and that should help your INSERT times for your PBG INSERTs.
[00:36:46] – Susan Lawson
(Slide 27 – Improved Index Look-Aside) Index look-aside is a key to performance in Db2. It’s been around since v.4, but it’s had its share of limitations. Index look-aside means that when you’re doing a lot of INSERTs, Db2 can look at the next leaf page in the index to see if that’s where the INSERT should go, versus going back up the entire tree and coming back down to find a place for the index entry. It worked very well for highly clustered indexes and so forth – that’s where you got the biggest benefit, and it saved you a good number of GETPAGEs. The problem is, up until v.13 it only worked for INSERTs and DELETEs – they were the only ones that could use index look-aside – and it only worked on the clustering index, or on indexes that were highly clustered, which Db2 determined based on catalog statistics. And hopefully your statistics were accurate. It was not supported for UPDATEs. So that left out a few processes that could have benefited from this. Well, now Db2 will use index look-aside for INSERTs, UPDATEs and DELETEs.
[00:38:11] – Susan Lawson
That’s going to be a great benefit now for UPDATEs, and the even better benefit is that it’s regardless of the cluster ratio, regardless of whether or not it’s the clustering index, and regardless of whether your statistics are up to date. Basically, Db2 is going to look, after it does three INSERTs, to see if it can utilize index look-aside, and it will dynamically adjust if your patterns start to get random and it sees that it’s not going to be a benefit. So Db2 has a little more sense and predictability on this to help you gain better use of index look-aside. Now, one thing you might hear advertised about this feature is that it’s going to help with the maintenance cost of your INSERTs, UPDATEs and DELETEs. What that means is that it helps with the overhead you introduce by adding an index. Maintenance is Db2’s work to take your INSERT, UPDATE or DELETE and update the index. The more indexes you have, the more cost is associated with it. So don’t think indexes are free. Every index you add to a table generally adds around 25% overhead to every INSERT, UPDATE and DELETE. The fact that you can have better index look-aside is going to help reduce that overhead – that’s all that is. But here’s the other thing: think about your indexes, and don’t have too many unless you need them.
[00:39:53] – Susan Lawson
(Slide 28 – FTB Support of Non-Unique Indexes) So let’s look at Fast Traverse Blocks. This came out in v.12, and Function Level 508 improved it. V.12 brought FTB into our life, which gives us a sort of index look-aside functionality for random indexes. It has a little area of memory where it saves the non-leaf pages of the index, and it supports random access, so you can reduce your GETPAGEs. But the thing is, it’s had its share of issues. It works well when it works well, but there are a few challenges with it. One, as of v.12, it did not support non-unique indexes. Then with FL 508 there’s a change to that: you can support non-unique indexes via a zparm that you can set to YES. However, you were still limited on the key size – still limited to 64 bytes or less.
[00:40:56] – Susan Lawson
(Slide 29 – Fast Traverse Block – Eligibility Requirements) However, v.13 comes along and enhances this a little more. It says, “Okay, first of all we’re going to make non-unique index support the default” – so that zparm default has changed – “and we’re going to allow unique indexes to be up to 128 bytes.” So the limitations on the length and size of those index keys have somewhat been removed.
[00:41:22] – Susan Lawson
(Slide 30 – FTB Usage Recommendations – Selective Index Usage) However, that being said, you still want to be careful with FTB. I don’t know if you’ve tried using it or not; everyone’s got different experiences with it. But in v.12, when it first came out, FTB was system-wide. It was enabled system-wide and controlled by Db2 through automation. So basically it was on by default in v.12, and Db2 would choose when and where and how to use FTB. You could change that through the INDEX_MEMORY_CONTROL zparm, but a lot of us like to see what Db2 is going to do, so we left it at AUTO. Well, that became not so cool to do anymore. So in v.12, they introduced an APAR that allows us to say: not for everything, but for selected indexes only – turned on with a new zparm setting. INDEX_MEMORY_CONTROL was also enhanced to say whether we want to select the indexes or want Db2 to select the indexes. And of course, we can use the SYSINDEXCONTROL catalog table to control which indexes can use FTB and when. So we can control when Db2 uses FTB or when it does not, and which indexes it tries it on.
[00:42:56] – Susan Lawson
(Slide 31 – In Memory Indexes – Usage Recommendations (cont.)) If you’re going to try to utilize FTB, make sure you’re current on maintenance. At the top here is a list of all the APARs that are out at this time. When this whole thing started to change (to be able to say selective indexes only), a lot of APARs came out, and IBM’s recommendation was to go towards index granularity: choose the indexes that can take advantage of FTB and make sure all these APARs are applied (so it’s not on globally anymore). And you can control it through the SYSINDEXCONTROL catalog table – you can say FORCE FTB creation, DISABLE, or AUTOMATIC per index. So that’s something you might want to look into. Now, whether or not Db2 actually uses FTB is still debatable; even if you say FORCE, my understanding is that’s not really a guarantee.
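As a sketch of what selective control looks like (the subsystem, index, and column values here are assumptions; check the SYSINDEXCONTROL column descriptions before using it):

INSERT INTO SYSIBM.SYSINDEXCONTROL
  (SSID, IXCREATOR, IXNAME, TYPE, ACTION, MONTH_WEEK)
VALUES ('DBP1', 'PRODSCHM', 'XACCT01',
        'F',   -- TYPE:   fast traverse block control
        'A',   -- ACTION: A = automatic, F = force, D = disable
        'W');  -- interpret any day/time columns by week rather than month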
[00:43:56] – Susan Lawson
(Slide 32 – DISPLAY STATS – INDEXTRAVERSECOUNT) But if you want to see whether Db2 got any benefit from it, there’s also an APAR enhancement in v.12 that allows you to do a DISPLAY STATS with INDEXTRAVERSECOUNT, which lets you see the usage of FTB – whether or not Db2 is actually using it. That came out in May of 2021, to give you a better idea of how your FTB is working.
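For example (the database and index space names are hypothetical), the command looks like this:

-DISPLAY STATS(INDEXTRAVERSECOUNT) DBNAME(DB_SALES) SPACENAM(XACCT01)

The output shows the traverse counts, so you can see which indexes Db2 is actually driving through FTB.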
[00:44:29] – Susan Lawson
(Slide 33 – Mirror and Clone Tables – Past Solutions) Another feature that came out in v.12 is the ability to support switching of tables. In the past, let’s say we wanted to have tables that were highly available: I want to be able to make changes or do LOADs into a table, but I want my users reading another table, and then I want to be able to switch between those tables once all the updates are done to the original. In the past, we had two different ways of doing that. I like the mirror table approach, where you use two identical summary tables: you have one process loading or updating one of the summary tables, and then you do a switch behind the scenes. You could do that in a query using a LEFT OUTER JOIN process, as you can see there on the right-hand side, and you can switch and it’s transparent to the user. They don’t know that they’re now reading updated data.
[00:45:35] – Susan Lawson
The other approach that you’re probably more familiar with is clone tables, which we got in v.9. Clone tables are great, right? They do the same thing. We’ve got clones of Db2 tables: we’re updating or loading one table while all the users are reading the other, and then we issue the EXCHANGE, which allows you to exchange between the base table and the clone table. Well, that’s great, except that the EXCHANGE causes an outage. And that outage depends on two things: 1) the number of partitions in your tablespace and 2) the number of users. What that EXCHANGE tries to do is drain all those partitions before it can do the exchange. So if you’ve got a ton of users out there and you’re not able to get a drain, that EXCHANGE could sit and spin for a long time, and that’s not helpful for availability, which is what we’re trying to achieve here.
[00:46:34] – Susan Lawson
(Slide 34 – Online LOAD REPLACE) So here’s a better option, in v.12 and actually retrofitted back to v.11: SHRLEVEL REFERENCE for the LOAD utility. What is it? It gives us the same kind of capability: we can do a LOAD REPLACE SHRLEVEL REFERENCE, which basically gives us an online LOAD – much like an online REORG. It introduces shadow datasets and a SWITCH phase to the LOAD, but it gives us a much more transparent ability to load data into a table and seamlessly switch the readers over to it.
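A minimal sketch of the utility control statement (the table and input DD names are hypothetical):

LOAD DATA INDDN SYSREC
  REPLACE SHRLEVEL REFERENCE      -- readers stay on the current data until the SWITCH phase
  INTO TABLE PRODSCHM.DAILY_SUMMARY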
[00:47:19] – Susan Lawson
(Slide 35 – System Temporal and Archive Enabled Tables – TRIGGER Issue) System temporal and archive-enabled tables are nothing new, but the limitation for both of them was with triggers: you could not reference a system temporal table or an archive-enabled table in your WHEN clause.
[00:47:39] – Susan Lawson
(Slide 36 – TRIGGERS, Temporal and Archive Transparency) In v.12, Function Level 505, you can. It’s a pretty basic enhancement, but a nice usability enhancement if you have a lot of system temporal or archive tables in use. So you can now reference them in a TRIGGER.
[00:48:01] – Susan Lawson
(Slide 37 – Real-Time Statistics Scalability Improvements) Real-time statistics: not much here, except that the data types of the RTS columns have been enhanced to be larger, because we have to account for more INSERTs, more UPDATEs, more DELETEs. Values such as REORGINSERTS have been increased to BIGINT instead of INTEGER so that we can account for the billions of INSERTs that we’re doing.
[00:48:29] – Susan Lawson
(Slide 38 – LOB Compression Improvements) LOB compression came out in v.12. It was based on the IBM zEDC Express card. This is hardware-based compression; it’s not dictionary-based like our typical tablespace compression. It’s a good candidate for some of your documents, text, XML and so forth. The difference in v.13 is that the improvement comes with the hardware: the z15, using the Integrated Accelerator for zEDC. Using the Integrated Accelerator is going to give us better performance for our compression – lower latency, higher bandwidth – and again, it is good for text formats, PDFs, JPEGs, and it can give (potentially) up to 70% faster compression in some cases. Again, it depends on what you’re compressing and so forth. So that’s v.13, but it also requires the z15 Integrated Accelerator.
[00:49:49] – Susan Lawson
(Slide 39 – Transparent Data Encryption) This came out in v.12, Function Level 502. There are various ways to set up the encryption: you’ve got RACF dataset profiles, you’ve got KEYLABELs, you’ve got attributes on your SMS DATACLAS. Your datasets must be extended format in order to be encrypted. So there are all kinds of ways to use encryption, and you will need a crypto processor in order to do this. Basically, you can implement it fairly easily as of FL 502. You’ve got DDL support where you can define the key label as part of your CREATE or ALTER, or on your STOGROUP. If you have any archive or history tables, those have to be done independently. And after encryption is enabled, you of course have to REORG your objects to get them encrypted. But some people have had very good success with this type of encryption. And again, FL 502 of v.12 is when this was introduced.
[00:51:01] – Susan Lawson
(Slide 40 – Functions – ENCRYPT_DATAKEY – DECRYPT_DATAKEY) In v.12, Function Level 505, a couple of built-in functions were added to Db2 allowing for encryption and decryption of column values. You can see in this first example here, we’re updating our customer table: we’re able to specify a KEYLABEL that allows us to encrypt the protected customer value in our data.
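A sketch of how those functions are used (the table, columns, and key label are made up, and the exact argument lists should be checked against the SQL Reference):

UPDATE PRODSCHM.CUSTOMER
   SET PROTECTED_TAXID = ENCRYPT_DATAKEY(TAXID, 'PROD.DB2.ENCRYPT.KEY01')   -- encrypt with a key label
 WHERE CUST_ID = 12345;

SELECT DECRYPT_DATAKEY_VARCHAR(PROTECTED_TAXID)                             -- typed decrypt function
  FROM PRODSCHM.CUSTOMER
 WHERE CUST_ID = 12345;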
[00:51:30] – Susan Lawson
(Slide 41 – IBM z15 Huffman Compression) FL 509 introduced Huffman compression in Db2 v.12. Huffman compression is simply another type of compression – I’m not saying it’s any better or any worse than the typical fixed-length compression we have today. Keep in mind, the fixed-length compression we’ve used forever is pretty basic: it’s the Lempel-Ziv-Welch algorithm, basically PKZIP for the mainframe. Huffman compression gives us a somewhat different option. So if you’re asking me, “Which one should I use – Huffman or traditional fixed-length?” Well, it depends. Keep in mind that Huffman compression does not support partial decompression, a feature that came out a few releases back and that can affect query performance. With selective (partial) decompression, Db2 would not decompress columns you weren’t selecting, rather than operating on the entire row. Huffman doesn’t have the ability to do that, so you could lose a performance benefit there. Just something to think about when you’re comparing them.
[00:52:50] – Susan Lawson
(Slide 42 – DSN1COMP Usage for FIXED vs. Huffman) But if you want to know which one is going to give you the best bang for the buck in terms of compression, use the DSN1COMP utility. It’s still there, still waiting for you to use it, and it will actually tell you what your stats would look like for fixed-length vs. Huffman vs. uncompressed. COMPTYPE(ALL) on DSN1COMP will give you those stats.
[00:53:19] – Susan Lawson
(Slide 43 – Improved Index Page Split Info) Index page splitting is terrible for performance. And the problem is: how do you know when your index pages are splitting? Well, you feel it – your INSERTs start to slow down is basically the best way I can put it. What happens is Db2 wants to split a page: it wants to do your INSERT, but that next page is full, so it wants to split. Well, it needs a new page to split into, and if that doesn’t exist, it goes and starts searching the free chain, and it can get ugly. But knowing is half the battle. IFCID 0359 came out – I think it was v.9 – and it gives us some information about index splits. But you had to have it on: it’s enabled under trace class 3 and performance trace class 6. It would give you the PSID, the member ID, start and stop time, page number, etc. Good information, if you had this stuff turned on. But it also wasn’t recorded all the time: you had to have a total elapsed time greater than 1 second for the split to be recorded. So it wasn’t capturing everything.
[00:54:35] – Susan Lawson
(Slide 44 – Improved Index Page Split Info (cont.)) Well, in v.13 this is pretty cool. Starting in FL 501, RTS is going to have stats on index page splitting. So now, in the index space stats, we can see the number of times we’ve had page splits since the last REORG. It’s not as detailed as IFCID 0359 (if that’s being captured), but it is certainly going to give us an idea of which indexes are experiencing page splitting. We might want to get some REORGs onto those guys or do a better job with our free space. So at least it gives us an idea of which indexes are experiencing a lot of page splitting and when, so you’ll know what processes are influencing those page splits.
[00:55:21] – Susan Lawson
(Slide 45 – Deletion of Old Statistics When Using Profiles) This is a RUNSTATS maintenance kind of thing. If you’re using statistics profiles with RUNSTATS, it’s important to keep your statistics from becoming stale and inconsistent. Over time, executions with different options, or even manual updates, can cause problems with that. So when you’re executing RUNSTATS with a profile, you can opt to delete the existing statistics that are not part of the profile. It gives you a new way to clean up your stats.
[00:56:01] – Susan Lawson
(Slide 46 – SQL and Applications)
[00:56:02] – Susan Lawson
(Slide 47 – REBIND Phase-In) Phase-in REBIND was a cool feature that came out in v.12, Function Level 505, that allows us to phase in REBINDs. So you’ve got a package in use and you need your REBIND to be able to break in; that REBIND is probably going to time out, and it could be disruptive. It’s a problem, right? So now we have a feature that allows us to phase in the REBINDs: new threads get the new package copy immediately, the old threads keep using the old copy, and then there’s a phase-out process, so we’re no longer waiting and we can all get our work done.
[00:56:48] – Susan Lawson
(Slide 48 – REBIND Phase-In – Concurrency Issue) A couple of problems with this, though. One, there was a concurrency problem right off the bat, which has been fixed: timeouts on package locks could result, so transactions starting slightly after the REBIND could experience slower times due to a timeout on the package lock. They fixed this internally by removing the conditional SIX lock, so it’s not really something we need to worry about.
[00:57:24] – Susan Lawson
(Slide 48 – REBIND Phase-In – Storage Issue) The other one is more interesting: a storage issue. When we’re phasing in these package copies with a phase-in REBIND, we’re introducing more copies, which means we need more storage to store them. This could become a problem, especially in data sharing, when you start acquiring these control blocks in ECSA and they’re not necessarily getting freed – you’re getting all these package copies and they’re not getting freed. And the option there was to recycle Db2, which we’re not going to do, right? So in v.12 (in FL 505), they modified the use of existing storage, which was one part of this. And with a subsequent APAR, there’s now a FREE PACKAGE command with a phase-out option: the package copies that were created for a phase-in REBIND can now be freed from the catalog and directory and so forth. So it allows us to free those packages and release that space.
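For example (the collection and package names are hypothetical), freeing just the phased-out copies looks like this:

FREE PACKAGE (COLLA.PGM1) PLANMGMTSCOPE(PHASEOUT)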
[00:58:35] – Susan Lawson
(Slide 49 – Profile Table Enhancement)
[00:58:37] – Susan Lawson
(Slide 50 – Controlling RELEASE of Packages Via Profile Tables) There’s an enhancement to profile tables in v.13 that allows you to better control RELEASE(DEALLOCATE). If you’re using profile tables, you’ll recognize this; if not, I’m going to go over it quickly. But if you’re using profile tables for better management, you now have the ability to control the package RELEASE behavior in a profile table.
[00:59:07] – Susan Lawson
(Slide 51 – Application Granularity for Locking Limits) This is the one I wanted to get to, because more of us are going to have a use for this: application granularity for locking limits. When you are talking about locking, you have your number of locks per user and your number of locks per tablespace, controlled by the two zparms up there at the top (NUMLKUS, NUMLKTS). They control how many locks an application can hold – per user, and per tablespace before lock escalation. Well, these were set at the zparm level, and there was only one setting for the entire subsystem, which may not work for all applications. So now you’ve got two new built-in global variables that allow you to set these values for an execution of an application, overriding what was set with the system zparm. You’ve got MAX_LOCKS_PER_TABLESPACE, which corresponds to NUMLKTS, and MAX_LOCKS_PER_USER, which corresponds of course to NUMLKUS. So you can set these specific to your application’s needs, and they will override what is set in the system zparms.
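A sketch of how an application might use them (the values shown are arbitrary examples, not recommendations):

-- override NUMLKTS and NUMLKUS for this application process only
SET SYSIBMADM.MAX_LOCKS_PER_TABLESPACE = 5000;
SET SYSIBMADM.MAX_LOCKS_PER_USER = 200000;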
[01:00:42] – Susan Lawson
(Slide 52 – Application-Level TIMEOUT Control) In v.13, we also have the ability to do a SET CURRENT LOCK TIMEOUT. There’s a new subsystem parameter that goes with it, and the default value of the special register is negative one, which means take the subsystem default. Here’s an example: in your application you can say SET CURRENT LOCK TIMEOUT = 30 seconds. So this is another way to set the timeout value per application. It overrides the 60-second timeout that is set by the IRLM resource wait time (IRLMRWT) zparm, which has defaulted to 60 seconds since 1983. You may have wanted to change that zparm to begin with, but most people don’t, because it affects all applications in the subsystem. This way we can at least bring that number down per application, because there’s no way you want your application waiting 60 seconds for a lock. You want it to time out a little quicker than that – you don’t want applications waiting.
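For example, in the application:

SET CURRENT LOCK TIMEOUT = 30;   -- this process waits at most 30 seconds for a lock
-- the default of -1 means fall back to the subsystem setting (IRLMRWT)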
[01:01:53] – Susan Lawson
(Slide 53 – Deadlock Priority Control) This one’s kind of interesting, because it allows us to increase our deadlock priority so that we don’t become the victim. Now, this one’s going to be fun, because here I’m saying, “I’m better than you and I’m not going to be the victim of a deadlock.” But here’s why you might want to use it: batch jobs that try to get in during a heavy application period often become the victim. Their SQL statements fail instead of the other application’s SQL, so often batch jobs don’t finish because they become victims of deadlocks. That becomes problematic, especially nowadays when we have competing application and batch windows – more and more, we don’t have isolated windows for batch jobs. We could introduce retry logic, which is not often done. We could do scheduling, which again gets hard because we don’t have those windows anymore to schedule into. Well, as of v.13, I can say: hey, my batch job needs priority, I don’t want to become the victim. There’s a new built-in global variable called DEADLOCK_RESOLUTION_PRIORITY that allows you to set a priority Db2 uses when it’s trying to determine who the victim is going to be in a deadlock. The higher the value, the less likely that application or job will become the victim. So it’s going to be fun competing with everybody to see who is not going to be the victim.
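For example (255 here is just an illustrative high value):

-- higher values make this job less likely to be chosen as the deadlock victim
SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 255;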
[01:03:37] – Susan Lawson
(Slide 54 – Lock Avoidance for Singleton SELECT) This is a feature that came out in v.12. It’s a new lock avoidance zparm setting that allows for lock avoidance on a cursor with a singleton SELECT.
[01:03:52] – Susan Lawson
(Slide 55 – Greater Number of Concurrent Threads – Utilizing ATB Storage) This one’s interesting because it has to do with above-the-bar (ATB) storage. As I mentioned earlier with DSMAX, the movement of the local agent storage for threads to ATB storage is going to free up a lot more space for us, so in v.13 it’s going to allow for a larger number of concurrent threads. Statements with PREPAREs and EXECUTEs are now stored ATB. It also optimizes the interface between DBM1 and the distributed address space (DIST), because that storage is going to be shared storage and will avoid the cross-memory operations between DIST and DBM1, which can be very expensive for high-volume distributed applications. So that movement of storage above the bar is going to help us with concurrent threads as well as DIST processing.
[01:04:49] – Susan Lawson
(Slide 56 – BTB Storage Reduction – Application Impact – Example) And this is just an example of how things build up. Before v.13, when you’re doing these PREPAREs, you’re trying to get storage for your SQL statement, which can be up to 2MB – and it is 2MB when you’re trying to acquire the storage. Then you’ve got stored procedures, which also want to allocate space for dynamic SQL, and prior to v.13 all of that was below the bar, which is a limited amount of space. All of this piles on, one thing after another. But now that all of that is ATB, it’s actually going to allow for more concurrent threads doing these types of processes.
[01:05:33] – Susan Lawson
(Slide 57 – Sort Performance Improvement with Long Varchars) Sort performance with long VARCHARs. You have an ORDER BY. You should be leaving VARCHARs out of ORDER BYs. Period. I can’t stress that enough, but you’re not, right? Well, here’s the thing: a VARCHAR column, even if only one byte is used – and let’s say it’s defined at 255 – will pad to the full length in a sort. So stop putting them in a sort: it makes your sort keys and your sort work files long, and it creates problems for your performance. Bottom line. So in v.13, Db2 is going to check exactly what you are using in that VARCHAR column when your sort is being allocated. It says, “Oh, you’re only using one byte of a 255-byte VARCHAR,” so Db2 is not going to allocate a work file to hold all of that. It’s only going to allocate storage in the work file large enough to hold what you are truly sorting. So hopefully this is going to save you CPU and save you elapsed time. But if things fluctuate and change a lot, it may not. Again, your mileage will vary with this.
[01:06:50] – Susan Lawson
(Slide 58 – List Prefetch for MERGE) List prefetch is now allowed in a MERGE. I’m not going to spend too much time on that.
[01:06:55] – Susan Lawson
(Slide 59 – SELECT INTO Support for OPTIMIZE FOR n Rows) In v.13 FL 503, from February, you have the ability to specify OPTIMIZE FOR n ROWS on a SELECT INTO.
[01:07:05] – Susan Lawson
(Slide 59 – LISTAGG Function) LISTAGG is a function that came out in v.12 Function Level 501 that I find really interesting, because it is a built-in function that lets us do a pretty cool kind of thing. Let me show you the example real quick. I promise you I’m almost done.
[01:07:20] – Susan Lawson
(Slide 60 – Application/SQL vs. LISTAGG Performance – Example) So in FL 501 this LISTAGG function came out, and let’s say I want to build a list of employees for each department. In the past we would have had to do that in an application process: define a cursor to get the employee’s name and department from the employee table, fetch the data under the cursor, use some logic to figure out which department the employee is under, perform the concatenation, and so forth. And any time you do a lot of work like that in an application, it’s going to cost you money, and it’s prone to errors and extended use of resources. Well, LISTAGG is a function that does the same thing. Here I’m saying: SELECT the department, get me all the employees, GROUP them by department, and within that list ORDER them and concatenate them with a semicolon. You can see in the green box there, I’m grouping by department, and in the employee name list I’m listing out all the employees that belong to that department – kind of horizontally listing all the employees in the department. And the IBM numbers down there were talking about a 97% CPU reduction and a 94% elapsed time reduction. That’s simply because you’re not going in and out of Db2 and doing this in an application; you’re doing it all in a built-in function in Db2. So it’s a pretty cool feature.
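The query she describes looks roughly like this (the table and column names follow the classic sample employee table but are assumptions here):

SELECT WORKDEPT,
       LISTAGG(LASTNAME, '; ') WITHIN GROUP (ORDER BY LASTNAME) AS EMPLOYEE_LIST
  FROM PRODSCHM.EMP
 GROUP BY WORKDEPT;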
[01:08:57] – Susan Lawson
(Slide 61 – SUBSTR Performance with LISTAGG Function) And in v.13 you get the ability to substring the LISTAGG result, because it had a little flaw where it would allocate 4000 bytes for the result of the function. Now you can SUBSTR it so that you can minimize the amount of work file storage that Db2 is trying to allocate for that LISTAGG result.
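Continuing the hypothetical example above, wrapping the result in SUBSTR caps what Db2 has to allocate in the work file:

SELECT WORKDEPT,
       SUBSTR(LISTAGG(LASTNAME, '; ') WITHIN GROUP (ORDER BY LASTNAME), 1, 200) AS EMPLOYEE_LIST
  FROM PRODSCHM.EMP
 GROUP BY WORKDEPT;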
[01:09:23] – Susan Lawson
(Slide 62 – Hashing Function) Hashing functions came out in Function Level 501. The only reason I mention them is that we’ve been using hashing functions for decades – the ability to hash your data, maybe for security reasons, all kinds of reasons you hash data. Well, as of FL 501, it’s built into Db2, in case you didn’t know that, and there are various versions of it out there. But Db2 provides built-in functions to do hashing.
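A quick illustrative use (the table and column names are made up):

SELECT HEX(HASH_SHA256(EMAIL_ADDR))   -- HASH_CRC32, HASH_MD5 and HASH_SHA1 also exist
  FROM PRODSCHM.CUSTOMER
 WHERE CUST_ID = 12345;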
[01:09:56] – Susan Lawson
(Slide 63 – Create/Replace for Procedures) In v.12 FL 507 you have the ability to CREATE OR REPLACE stored procedures. You can reuse your original statements and make changes, so it doesn’t require the old way of managing stored procedures. This is a much easier, much less intrusive way of making changes to a stored procedure, using the OR REPLACE option.
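A minimal sketch of a native SQL procedure being replaced in place (the names and logic are hypothetical):

CREATE OR REPLACE PROCEDURE PRODSCHM.SET_CUST_STATUS
  (IN P_CUST_ID INTEGER, IN P_STATUS CHAR(1))
  LANGUAGE SQL
BEGIN
  UPDATE PRODSCHM.CUSTOMER
     SET STATUS = P_STATUS
   WHERE CUST_ID = P_CUST_ID;
END

Running the same statement again with a modified body replaces the existing version instead of failing with a duplicate-object error.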
[01:10:23] – Susan Lawson
(Slide 64 – SQL Data Insights) I’m just mentioning this because I can’t do it justice in this presentation – it is a topic in and of itself. It is the ability to have artificial intelligence functionality in the Db2 engine. And as I’ve mentioned a couple of times already, any time you do something in the engine, versus outside of it or moving the data outside of it, it’s going to perform much better for you. You can see down at the bottom there are three built-in functions that will allow us to identify relationships and allow queries to infer hidden relationships in our data across various tables. Those are three new functions built into SQL Data Insights, and they are Db2 built-in functions. SQL Data Insights is quite a beast. It is something you have to install. It is optional – you don’t have to pay for it, it’s not that – but it’s another whole thing that you have to install and plan for with Db2, so it’s not automatic. It comes with a GUI and all kinds of things. But you no longer need an ETL process to move your data off to do AI types of functionality, and you don’t have to have really complicated machine learning models designed by an expert. Db2 has that capability now. It’s pretty cool stuff. If you’re interested, there’s a whole book on it, as a matter of fact, in the Db2 manuals.
[01:11:54] – Susan Lawson
(Slide 65 – Db2 12+/13 Database/Application Design/Performance) And in summary, like I said, what I’ve tried to do is keep track of the really usable features that have come out since v.12, across the function levels of v.12 and v.13 – the features that can make our lives easier in many cases. But as you can also see, in many cases the warnings are still there: you need to test and plan for some of these features. They’re not as magical as you might like them to be, but it’s good stuff.
[01:12:32] – Susan Lawson
This presentation will be available for you. I’m going to give it to Amanda and she’ll make it available for you and you can take a look at it. And so all of these numbers and function levels will be there for you to reference. I’ll turn it back over to Amanda. Thank you.
[01:12:50] – Amanda Hendley – Host
Thanks Susan. Well, I hope you enjoyed that presentation. I’m going to go ahead and share my screen and we did have a question that came in during the presentation. Susan, I don’t know if you want to read that, or I can read it.
[01:13:24] – Susan Lawson
So Stephen Newton asked, “Regarding PBR with relative page numbering created or converted in Db2 v.12, will a tablespace-level REORG be needed in Db2 v.13?” And yes, that REORG is going to be needed.
[01:14:21] – Susan Lawson
The next question is about where to get resources. This presentation has a list of references and resources, and of course Planet Mainframe also has a list of good resources for the kinds of things we covered today. Start there, and you can also check out the IBM TechXchange community and any of the IBM reference books, which I have listed in the back.
[01:15:04] – Amanda Hendley – Host
So before we depart, we have a couple of articles – they are general news this month. I thought there were some great statistics and numbers in the Mainframe Essentials article, and there is the mainframe market research that was just put out about the mainframe market’s growth. Actually, I believe that might be a typo and it should be billion by 2032. So please do check out those articles. You can snap those QR codes, or we’ll include them in the newsletter that you’ll get next month. I’ve also started featuring a job posting: there is a database administration specialist position at Marshmallow Companies. They are hiring for multiple positions if you are looking for your next endeavor.
[01:16:00] – Amanda Hendley – Host
And again, I mentioned that Planet Mainframe is doing a call for contributors. So if you’ve got a story you want to tell the mainframe community, we’d love to read it. We have everything from highly in-depth feature articles to “getting to know you and your experience” mainframe articles. So if you want to submit, you can submit on that page or you can reach out to me at ahendley@Planetmainframe.com. As far as getting involved and staying involved with Db2: we’ve recently merged our Twitter accounts into one account, @Mainframevug, for the virtual user groups, so you can get all the mainframe user group information in one place, and we’re doing the same with the YouTube channel. On the YouTube channel we post all of our videos, as you know, and we’re reorganizing it right now so you’ll be able to find things a little more easily – they’ll be in buckets with tags and categories. I hope you’ll also come join us on the LinkedIn group; that’s where we’ve got a place for a little bit of conversation and collaboration. And I would be remiss if we departed without me thanking our sponsors again, especially IntelliMagic for their support of the Virtual User Group. So with that, thank you all for attending. Susan, thank you so much for presenting for us today. And I hope you all have a wonderful rest of the week. Bye.