ProVIP Content Collection:
Systems Management
Contents

Chapter 1: Save New Spooled Files
  The QSRSAVO API
  RcdLen and DataLen Fields
  The SPLFDTA (Key #35) Parameter
  Save Your Spooled Files

Chapter 2: IBM i Access for Windows: Silent Install, Selective Install
  How IBM i Access Is Distributed
  Passing Parameters to the Installation Program
  Custom Installations
  Using the setup.ini File
  Many More Options

Chapter 3: IBM Systems Director 6.2.1
  Enhanced Install and Update Experience
  Service and Support Enhancements
  Extending Systems Director
  Active Energy Enhancements
  VMControl Enhancements
  Storage Enhancements
  Performance and Security Enhancements
  Other Enhancements
  For More Information

Chapter 4: How to Personalize IBM Systems Director
  Personalize Startup
  Personalize Resources
  Personalize Task Lists
  Customization

Chapter 5: How to Implement Open-Source Solutions: Laying the Linux Foundation
  Exploring the Capabilities
  Simplifying the Linux Installation
  Operating System Replication
  What About VIOS?
  Storage Management
  Management
  Ready for More?

Chapter 6: Optimize System i Performance Adjuster and Shared Memory Pools
  Some Background
  Buying vs. Tuning
  What Performance Adjuster Does
  Managing Activity Levels
  Rationalizing and Implementing Shared Pools
  Initializing Performance Adjuster
  An Explanation of Parameters
  One Note
  In Conclusion
Chapter 1:
Save New Spooled Files
The QSRSAVO API lets you select spooled files to save
by Peter Levy

As far back as the System/38, I've wanted the O/S to have the functionality to save spooled files. I even created a requirement and defended it in a special session at a COMMON conference. Given the reaction, I think most people there thought I was a little batty. I didn't understand why, though, because at every company I've ever worked, the users considered spooled files (especially the month- and year-end reports) to be just as important as the database. As far as IBM was concerned at that time, the best way to back up spooled files was to print them. (There are, of course, ways to accomplish this outside of the O/S by copying the spooled files to physical files or user spaces and then saving those files, but that kind of thing has always been kludgey and causes problems when restoring the files to other systems that don't have the requisite software.)

When Rochester finally provided the ability to save spooled files in i 5.4, I immediately started making the necessary changes that would enable our backup programs to save them during the daily and weekly saves. Imagine my annoyance when I found that the Spooled file data (SPLFDTA) parameter didn't exist on any of the SAVCHGxxx commands.

Thankfully, IBM did provide a way to accomplish this task using the Save Object List (QSRSAVO) API. It basically mirrors the Save Library (SAVLIB) and Save Object (SAVOBJ) commands, but it has an added feature that lets you select which spooled files to save. Using this feature you can save just the spooled files that have been created after a specified date/time. (Of course, this begs the question: If I can do this in ILE RPG, why couldn't IBM do it in the SAVCHGxxx commands?)

You can download the Save New Spooled Files program from the System iNetwork code website (systeminetwork.com/code) or go to peterlevy.com and click Downloads and then SaveNewSploolFiles.zip. Once you download the file, decompress it and view the ReadMe.html file in your web browser. All instructions for uploading the source and creating the objects using the MAKE CL program are contained within that document.
The QSRSAVO API
The QSRSAVO API has just two parameters: a fully qualified user space name and an error code data structure. Figure 1 shows the prototype for ILE RPG. Any program using this API must store the parameters for performing the save in the user space before calling the API. (These are somewhat different from the SAVxxx command parameters, but more on that later.) The error code parameter will be filled with a message ID and message data if an error occurs during the save.

Figure 1: Prototype for the QSRSAVO API

D SavObjLst       pr                  extpgm('QSRSAVO')
D  UserSpace                    20a   const
D  ErrorCode                 65535a   options(*varsize)

The first thing you need to do is create the user space. This has already been covered (see "Auto-Extending a User Space," SystemiNetwork.com, article ID 19590), and I include the necessary service program in the downloadable code. Suffice it to say that it takes three system APIs to create the user space, change it to automatically extend its size, and retrieve a pointer that is returned to the calling program. This pointer points directly to the data space in the user space and will be used to build the parameters for the save.

The next thing to do is add those parameters to the user space. They consist of a set of variable-length records that correspond, most times exactly but other times loosely, to the parameters of the various SAVxxx commands. Also, instead of parameter names, the API requires "key numbers." For example, the LIB parameter is key number 2, the DEV parameter is key number 3, and the OBJ and OBJTYPE parameters are combined into key number 1.

Figure 2: Basic data structures for most parameters

D SavObj          ds                  qualified based(SplfUsrSpcPtr)
D  RcdNbr                       10i 0
D SavObjParms     ds                  qualified based(SavObjParmsPtr)
D  RcdLen                       10i 0
D  KeyNbr                       10i 0
D  DataLen                      10i 0
D* Data                          ??a
D SavObjParmList  ds                  qualified based(SavObjParmPtr)
D  ElemNbr                      10i 0

Figure 2 shows the two data structures that I use for all parameters. The first four bytes in the user space constitute a binary integer that must contain the total number of records (SavObj.RcdNbr). Following that is the first record. This and every record that follows have the same base data structure (SavObjParms), consisting of three integers and then the data. The first integer contains the length of the entire record (RcdLen), the second houses the parameter key number (KeyNbr), and the third is the length of the data (DataLen). Note that the basing pointer for the SavObj data structure, SplfUsrSpcPtr, is also where the pointer to the user space is stored and is never changed. The pointer to the SavObjParms data structure is incremented for every new record. (I suppose that one could create a giant data structure where all the positions are hard coded, and that would avoid the need to use and understand pointers, but it's also quite inflexible if the program ever needs to be modified in the future.)

Find Out More
"Auto-Extending a User Space," SystemiNetwork.com, article ID 19590
QSRSAVO API documentation, IBM i 7.1 Information Center, tinyurl.com/QSRSAVO

Now you build the first parameter, which will be the list of objects and object types. Figure 3 shows the RPG code that is used, which executes the following steps:
1. Initialize the SavObj.RcdNbr integer to 1 for the first record.
2. Calculate the pointer for the SavObjParms data structure (SavObjParmsPtr) to position it just past the SavObj.RcdNbr integer.
3. Initialize the RcdLen, KeyNbr, and DataLen fields to the record length, the key number for the OBJ/OBJTYPE parameter (1), and the length of the data, respectively.
4. Calculate the pointer for the SavObjParmList data structure (SavObjParmPtr) to position it just past the DataLen integer.
5. In this case the data is a list, so the first item of that data is a four-byte integer that specifies how many elements there are in the list. Here it's initialized to 1 (ElemNbr) as there is only one element in the list.
6. Calculate the pointer for the ObjObjTypeParm data structure shown in Figure 4 (SavObjElemPtr) to position it just past the ElemNbr integer.
7. Finally, initialize the two elements of the ObjObjTypeParm data structure to *ALL objects and *ALL object types.
Figure 3: Free-format ILE RPG code to build the first parameter

SavObj.RcdNbr          = 1;
SavObjParmsPtr         = SplfUsrSpcPtr + %size(SavObj);
SavObjParms.RcdLen     = %size(SavObjParms) + %size(SavObjParmList) +
                         %size(ObjObjTypeParm);
SavObjParms.KeyNbr     = 1;  // OBJ/OBJTYPE parameter.
SavObjParms.DataLen    = %size(SavObjParmList) + %size(ObjObjTypeParm);
SavObjParmPtr          = SavObjParmsPtr + %size(SavObjParms);
SavObjParmList.ElemNbr = 1;
SavObjElemPtr          = SavObjParmPtr + %size(SavObjParmList);
ObjObjTypeParm.Name    = '*ALL';
ObjObjTypeParm.Type    = '*ALL';
Figure 4: Data structure for the OBJ/OBJTYPE parameter

D ObjObjTypeParm  ds                  qualified based(SavObjElemPtr)
D  Name                         10a
D  Type                         10a
Figure 5: Free-format ILE RPG code to build the second parameter

SavObj.RcdNbr          += 1;
SavObjParmsPtr         += SavObjParms.RcdLen;
SavObjParms.RcdLen      = %size(SavObjParms) + %size(SavObjParmList) +
                          %size(LibParm);
SavObjParms.KeyNbr      = 2;  // LIB parameter.
SavObjParms.DataLen     = %size(SavObjParmList) + %size(LibParm);
SavObjParmPtr           = SavObjParmsPtr + %size(SavObjParms);
SavObjParmList.ElemNbr  = 1;
SavObjElemPtr           = SavObjParmPtr + %size(SavObjParmList);
LibParm.Name            = '*SPLF';
Even though the program only saves spooled files, the first parameter specifying *ALL objects and *ALL object types must be specified. The code to build the second parameter, for the list of libraries, is in Figure 5, and it's similar to the code in Figure 3, except for the following:

• The SavObj.RcdNbr integer is incremented by one for the second parameter record.
• The pointer calculation for the SavObjParms data structure (SavObjParmsPtr) uses the RcdLen value from the previous parameter record to set it to a new position just past it.
• The KeyNbr is initialized with the key number for the LIB parameter (2).
• The data portion contains a list of libraries to be saved. Because I'm only saving spooled files, I substitute the special value *SPLF.

Figure 6 shows the data structure for the LIB parameter data.
This way of adding parameters to the user space continues until you're finished. Just about any valid parameter can be used for backing up new spooled files, and you can see them all in the Valid Keys chart (tinyurl.com/ValidKeys) for the QSRSAVO API documentation. Some parameters make more sense than others. For instance, you can use save-while-active to back up spooled files, so the SAVACT and related parameters can be useful. On the other hand, storage for saved spooled files is never freed after they're backed up, so including the STG parameter would be a waste of code.
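To illustrate the pattern with one more record, here is a sketch of my own (it is not one of the article's figures) that adds the DEV parameter, key number 3, the same way Figure 5 adds the LIB parameter. The DevParm data structure and the device name are assumptions for the example; DevParm simply mirrors the single 10-byte name layout used for the library list, and in a real program its D spec would sit with the other definitions.

D DevParm         ds                  qualified based(SavObjElemPtr)
D  Name                         10a

   SavObj.RcdNbr          += 1;
   SavObjParmsPtr         += SavObjParms.RcdLen;
   SavObjParms.RcdLen      = %size(SavObjParms) + %size(SavObjParmList) +
                             %size(DevParm);
   SavObjParms.KeyNbr      = 3;  // DEV parameter.
   SavObjParms.DataLen     = %size(SavObjParmList) + %size(DevParm);
   SavObjParmPtr           = SavObjParmsPtr + %size(SavObjParms);
   SavObjParmList.ElemNbr  = 1;
   SavObjElemPtr           = SavObjParmPtr + %size(SavObjParmList);
   DevParm.Name            = 'TAP01';  // Tape device name for the save.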
Figure 8: Hex dump showing structure with four-byte boundary alignment

Field                     Address   Value in Hex              Human Readable
SavObj.RcdNbr             x'0000'   x'00000002'               2
SavObjParms.RcdLen        x'0004'   x'0000001C'               28
SavObjParms.KeyNbr        x'0008'   x'00000002'               2 (LIB parameter)
SavObjParms.DataLen       x'000C'   x'0000000E'               14
SavObjParmList.ElemNbr    x'0010'   x'00000001'               1
LIB() parameter value     x'0014'   x'6CE2D7D3C64040404040'   '*SPLF     '
SavObjParms.RcdLen        x'0020'   x'0000001C'               28
SavObjParms.KeyNbr        x'0024'   x'00000003'               3 (DEV parameter)
SavObjParms.DataLen       x'0028'   x'0000000E'               14
SavObjParmList.ElemNbr    x'002C'   x'00000001'               1
DEV() parameter value     x'0030'   x'E3C1D7F0F14040404040'   'TAP01     '

RcdLen and DataLen Fields
Why, you might be asking, does the SavObjParms data structure need a size for the data and the whole record? After all, the API knows the size of the RcdLen and KeyNbr fields; it would be easy enough to subtract them from the record length to get the data length, or vice versa.

It's a good question, and it mostly has to do with boundary issues. Four-byte binary integers are handled more efficiently if they're all on a four-byte boundary. For example, if you've got data that is 14 bytes (say, a list of device or library names with a single element), then together with the three integers the record length comes out to 26 bytes, which would put the next record in the middle of a four-byte boundary (Figure 7). At address x'0021' in the dump you can see that the x'1A', which is the end of the integer, is in the middle of the first column. This is because it starts at address x'001E'. To maintain a four-byte boundary for better efficiency, you could increase the record length to 28 bytes while the data length would, of course, remain at 14 bytes (Figure 8). This leaves an extra two bytes unused, but that's really no big deal. Now in the dump, that same integer starts at x'0020' and ends at x'0023', and all is right with the world.

To make it even more efficient I've also created an internal function called SetIntBoundary() (Figure 9). If the length isn't evenly divisible by four, then it will return an increased length that is. If it is evenly divisible, then it will return the original length. Now the length calculation in Figure 3 can be changed to include this new function (A in Figure 10).

Figure 9: SetIntBoundary() function

P SetIntBoundary  b
D SetIntBoundary  pi            10i 0
D  ElementLength                10i 0 const
D BoundarySize    c                   4
 /free
   if %rem(ElementLength: BoundarySize) = *zero;
     return ElementLength;
   else;
     return (%int(ElementLength / BoundarySize) + 1) * BoundarySize;
   endif;
 /end-free
P SetIntBoundary  e
Figure 10: Figure 3 modified to use the SetIntBoundary() function from Figure 9

   SavObj.RcdNbr          = 1;
   SavObjParmsPtr         = SplfUsrSpcPtr + %size(SavObj);
A  SavObjParms.RcdLen     = SetIntBoundary(%size(SavObjParms) +
                            %size(SavObjParmList) +
                            %size(ObjObjTypeParm));
   SavObjParms.KeyNbr     = 1;  // OBJ/OBJTYPE parameter.
   SavObjParms.DataLen    = %size(SavObjParmList) + %size(ObjObjTypeParm);
   SavObjParmPtr          = SavObjParmsPtr + %size(SavObjParms);
   SavObjParmList.ElemNbr = 1;
   SavObjElemPtr          = SavObjParmPtr + %size(SavObjParmList);
   ObjObjTypeParm.Name    = '*ALL';
   ObjObjTypeParm.Type    = '*ALL';

The SPLFDTA (Key #35) Parameter
Most of the parameters added to the user space are mundane single or list parameters that are easy to figure out. SPLFDTA (Key #35) is more complex because there are multiple ways to select spooled files. Figure 11 shows the data structures used to handle this parameter, and Figure 12 shows the code.

Figure 11: The structures needed for the spooled file data (SPLFDTA) parameter

A  D SplfDtaParm     ds                  qualified based(SavObjParmPtr)
   D  SplfData                     10i 0
   D  SplfHdrLen                   10i 0
   D  SplfOffset                   10i 0
B  D SplfDtaSelect   ds                  qualified based(SavObjElemPtr)
   D  Length                       10i 0
   D  Offset                       10i 0
   D  Include                      10i 0
   D  Format                       10i 0
   D  SelectOffset                 10i 0
   D  NewAttrOffset                10i 0
C  D SplfDtaAttr     ds                  qualified based(SavObjElemPtr)
   D  Length                       10i 0
   D  OutqName                     10a
   D  OutqLib                      10a
   D  SplfName                     10a
   D  JobName                      10a
   D  UserName                     10a
   D  JobNbr                        6a
   D  UserData                     10a
   D  JobSysName                    8a
   D  FormType                     10a
   D  StrCrtDate                   13a
   D  EndCrtDate                   13a

Figure 12: Code to set up spooled file selection for the SPLFDTA parameter

   SavObj.RcdNbr               += 1;
   SavObjParmsPtr              += SavObjParms.RcdLen;
   SavObjParms.RcdLen           = SetIntBoundary(%size(SavObjParms) +
                                  %size(SplfDtaParm) + %size(SplfDtaSelect) +
                                  %size(SplfDtaAttr));
   SavObjParms.KeyNbr           = 35;  // SPLFDTA parameter.
   SavObjParms.DataLen          = %size(SplfDtaParm) +
                                  %size(SplfDtaSelect) + %size(SplfDtaAttr);
A  SavObjParmPtr                = SavObjParmsPtr + %size(SavObjParms);
   SplfDtaParm.SplfData         = 2;  // Using selection list.
   SplfDtaParm.SplfHdrLen       = %size(SplfDtaParm);
   SavObjElemPtr                = SavObjParmPtr + %size(SplfDtaParm);
B  SplfDtaParm.SplfOffset       = SavObjElemPtr - SplfUsrSpcPtr;
   SplfDtaSelect.Length         = %size(SplfDtaSelect);
   SplfDtaSelect.Offset         = *zero;  // Last selection criteria.
   SplfDtaSelect.Include        = 1;      // Include? = Yes.
   SplfDtaSelect.Format         = 2;      // Spooled File Attributes Format.
C  SplfDtaSelect.SelectOffset   = SplfDtaParm.SplfOffset +
                                  %size(SplfDtaSelect);
   SplfDtaSelect.NewAttrOffset  = *zero;  // No attributes to be set.
   SavObjElemPtr               += %size(SplfDtaSelect);
   SplfDtaAttr.Length           = %size(SplfDtaAttr);
   SplfDtaAttr.OutqName         = '*ALL';
   SplfDtaAttr.OutqLib          = '*ALL';
D  SplfDtaAttr.SplfName         = '*ALL';
   SplfDtaAttr.JobName          = '*ALL';
   SplfDtaAttr.UserName         = '*ALL';
   SplfDtaAttr.JobNbr           = '*ALL';
   SplfDtaAttr.UserData         = '*ALL';
   SplfDtaAttr.JobSysName       = '*ALL';
   SplfDtaAttr.FormType         = '*ALL';
   SplfDtaAttr.StrCrtDate       = StrCrtDate;
   SplfDtaAttr.EndCrtDate       = '*ALL';

The first pieces of data are the same as for the other parameters (i.e., the RcdLen, KeyNbr, and DataLen fields) in the SavObjParms data structure. At A in Figure 12 the DataLen is the sum total of the sizes of the SplfDtaParm, SplfDtaSelect, and SplfDtaAttr data structures, because that is all you need to save new spooled files. Other uses of this parameter can be quite long and complex, and you can get as granular as necessary about what you save, but you don't need to get that in-depth for this program.

Once the SavObjParms data structure is initialized, the program then calculates a pointer to just past it so that it can start filling the SplfDtaParm data structure (B in Figure 12). The first field to be initialized is SplfData, and it can have one of three values: *ZERO (no spooled files are saved), 1 (for every output queue saved, the spooled files contained within will be saved), or 2 (only selected spooled files are saved, and additional selection criteria are required). The program example is using option 2.

The next field to be initialized is the SplfHdrLen field, which is the length of the SplfDtaParm data structure. It can have two possible values depending on what is specified in the aforementioned SplfData field. If *ZERO or 1 is specified, then this value must be 8, which is the length that would cover only the SplfData and SplfHdrLen fields in the data structure. (I'm not sure why the developers put the SplfHdrLen field second in the data structure. It would have made more logical sense if it had come first, but that's an argument for another day.) The other allowable length value is 12, which covers not just the first two fields but the last field in this data structure as well. At B in Figure 12 the program is specifying the full size of the SplfDtaParm structure, which is 12.

This last field is the one that's specifically required when SplfData is set to 2: the SplfOffset field, which is the offset to the spooled file Selection Criteria. In IBM parlance, an "offset" is a starting position from the beginning of a user space—not just from the beginning of the current section within the user space; that distance is known as a "displacement." (The length of the various parameter records in the user space is an example of how displacements are used.) The easiest way to calculate an offset when you're deep in the middle of it is to set a new pointer using the displacement to the new position and then calculate the offset by subtracting the base user space pointer from the new pointer value.

The spooled file Selection Criteria is a list of data structures with which you can select or omit specific sets of spooled files for or from the backup. You can see the SplfDtaSelect structure in the example at B in Figure 11 and the code that initializes it at C in Figure 12. Using this structure you can select or omit individual spooled files or entire collections of them. (You can also use it to change the expiration date for each set of spooled files after they're saved, though I don't discuss that in this article.)

The first field in this data structure is named Length, and it's the length of the SplfDtaSelect structure. The allowable values are 20 (if you exclude an offset for setting the new attributes) or 24 (to include it). Even though it's set at 24, I'm ultimately going to set the offset to *ZERO. The second field is called Offset, and—appropriately—it's the offset to the next spooled file Selection Criteria structure. But because I only need one Selection Criteria data structure, I set it to *ZERO to tell the API that this is the last or only one in the list. The third field is a numeric indicator named Include that tells the API whether to include the spooled files or omit them. *ZERO will omit them, and 1 will include them, and I'm obviously doing the latter.
The fourth field is called Format, and it's the numeric format of the structure that provides the selection criteria for selecting (or omitting) spooled files. The allowable values are 1 and 2. Specifying 1 tells the API that I'm going to provide the spooled file ID (i.e., full job name, spooled file name, number, etc.) to select a single spooled file, which would be cumbersome for this application. In the code I specify 2 (C in Figure 12), which tells the API that I'm going to provide selection criteria. The fifth field is named SelectOffset, and it's the offset to the selection criteria. I calculate it by adding the size of the SplfDtaSelect structure to the SplfOffset field. Finally, the sixth field is called NewAttrOffset, and it's the offset to the aforementioned new attributes, which I set to *ZERO because I don't want the spooled file expiration dates to be changed after they have been saved.

The last data structure that needs to be initialized for this parameter is named SplfDtaAttr, and it's similar in concept to the selection parameters on many of the spooled file commands: Work with Spooled Files (WRKSPLF), Hold Spooled File (HLDSPLF), Release Spooled File (RLSSPLF), Delete Spooled File (DLTSPLF), etc. You can see the data structure at C in Figure 11 and the code to initialize it at D in Figure 12. The selection field names should be pretty self-explanatory. Most of the fields expect a name, generic name, or other appropriate special value. (If, for example, you specified MYOUTQ* in the OutqName field and *LIBL in the OutqLib field, then it would save all spooled files found in output queues in the library list whose names started with MYOUTQ.) The only exceptions are the fields StrCrtDate and EndCrtDate, which are used to enter a date/time range and select spooled files based on their creation date/time. They are both 13-byte fields, and the date/time data must be in the format CYYMMDDHHMMSS (where C is the century digit: 0=19xx, 1=20xx, 2=21xx, etc.). Regardless of what would normally be stored in these fields, the special value *ALL can be specified in any of them to tell the API to disregard it when making the selection.

At D in Figure 12 I specify *ALL in every field except StrCrtDate, which is initialized by a date/time field that was calculated previously. In the command and program source that you can download (from systeminetwork.com/code or peterlevy.com), the value that is inserted into StrCrtDate is built from two command parameters: REFDATE and REFTIME. If REFDATE(*SAVLIB) is specified, then the program will calculate it by retrieving the last save date/time from the QUSRSYS library using the Retrieve Object Description (QUSROBJD) API. If a date or time has been provided in the parameters, then it will use those values instead. Either way, this criterion is used by the QSRSAVO API to save all spooled files created on or after this date/time. If the last save date/time in the QUSRSYS library doesn't match the last time that all spooled files were backed up, then your program should pass the real date/time in these two command parameters.

After the program is finished, the spooled files will appear on the media (or in a save file) as if they were saved from an output queue named *SPLF in library *SPLF. This, of course, is all a front, because once you take option 5 to display the spooled files, you'll see that they all came from separate output queues.
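As an aside on the CYYMMDDHHMMSS format just described: if you ever need to build that 13-byte value yourself (say, to pass a real reference date and time), a conversion along these lines works. This is a sketch of my own, not code from the downloadable source, and the field names are made up for the example.

D RefDate         s               d
D RefTime         s               t
D StrCrtDate      s             13a
 /free
   // %char(RefDate:*iso0) yields 'YYYYMMDD'.  The century digit is the
   // first two digits of the year minus 19 (0=19xx, 1=20xx, 2=21xx).
   StrCrtDate = %char(%int(%subst(%char(RefDate:*iso0):1:2)) - 19)
              + %subst(%char(RefDate:*iso0):3:6)      // YYMMDD
              + %char(RefTime:*hms0);                 // HHMMSS
 /end-free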
Another possible source of irritation is that when the SAVCHGxxx commands are used with the SAVNEWSPLF command, all of the saved spooled files will end up at the end of the media instead of peppered throughout like they would be with the SAVxxx commands. I also tried to determine whether you could combine both saving changed objects and new spooled files using the QSRSAVO API, but alas, it can't be done. It's a little too ironic that the API can save new spooled files but not changed objects, while the commands can save changed objects but not new
spooled files. It would be much easier for all concerned if IBM added support for both the REFDATE/REFTIME parameters in the QSRSAVO API and the SPLFDTA parameter on the SAVCHGxxx commands.

If you download the example code, you'll no doubt notice that it lacks the myriad of parameters that I could have included from the SAVxxx commands. The reason for this is not laziness but that I took a working program that I had written for my company and turned it into an example program for this article. That program actually doesn't have a command front end, and it does some other activities that weren't germane to the article. If you want to use the example program in production, please feel free to expand the command and program as you see fit, because it's relatively easy to do. You can add parameters and help text to the command. The extra code needed to include the added parameter values in the user space for the API can be copied from the code of other parameters and altered to fit the new values. (The ENDOPT and VOL parameters are just two off the top of my head that might be very useful.) Email me any changes you make; I'll include them in new versions on my website and give you the appropriate credit.
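For completeness, here is a sketch of what the actual call looks like once every parameter record is in the user space. It is not taken from the downloadable code: the user space name and the error-code subfield names are my own, with the structure following the standard API error code layout and the prototype coming from Figure 1.

D ApiError        ds                  qualified
D  BytesProv                    10i 0 inz(%size(ApiError))
D  BytesAvail                   10i 0
D  MsgId                         7a
D  Reserved                      1a
D  MsgData                     256a
 /free
   // First 10 characters are the user space name, next 10 its library.
   SavObjLst('SAVNEWSPLF' + 'QTEMP     ': ApiError);
   if ApiError.BytesAvail > *zero;
     // The save failed; MsgId holds the error message ID.
     dsply ApiError.MsgId;
   endif;
 /end-free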
Save Your Spooled Files
You may not realize it but, as I wrote in the introduction, your users do consider spooled files to be very important. If your shop isn't saving them, then I encourage you to start. If you're only saving them on weekends, during the full system backup, you can now save the new ones during the daily SAVCHGxxx with the QSRSAVO API. From then on, if important spooled files get accidentally deleted or purged, you'll be the hero because you've been backing them up all the time, not just on the weekends. ■

Peter Levy ([email protected]) graduated with a computer science degree from Rutgers in 1982 and has been working on the System/38, AS/400, IBM i platform since 1984. He has worked for companies in printing, consumer electronics, chemicals, apparel, transportation, and computer consulting.
Chapter 2:
IBM i Access for Windows: Silent Install, Selective Install
Control which installation panels display and which features to install
by Craig Pelkie

Starting with version 6.1 of IBM i Access for Windows, IBM uses Windows Installer technology for performing initial and update installations. When you install the product on your PC using the default installation options, a series of up to a dozen panels and dialogs appears, most of which require a response. On most of the panels, a default install option is already entered; you simply click the Next button to continue.

Installing IBM i Access for Windows 6.1 or 7.1 on a small number of PCs that you have easy physical access to (for example, PCs in your local office) is usually uneventful. With only a few PCs, you can install the product using the IBM install image DVD or from the IBM i IFS. Because an initial or upgrade installation takes only a few minutes, you likely won't need to follow the installation options described in this article. Instead, simply respond to the prompt panels and manually select features to add or remove during the install.

But what if you need to install IBM i Access on many PCs or lack ready access to the PCs? You may not want your users to perform the installation themselves, whereby they might respond to the prompting panels and possibly change defaults or select features they're not intended to have. The solution may be to investigate the silent and selective install features provided with the product. Using silent install, you can control which installation panels appear; with selective install, you choose which features to install.

Figure 1: IBM i Access for Windows Install Image directories in the IFS
How IBM i Access Is Distributed
IBM i Access is distributed both on the installation media (DVD) and in the IBM i IFS. Working with the DVD install is easy; simply insert the DVD, and the setup program starts automatically. This option is good if you do not have a network connection to the install image in the IFS. The IFS install requires a network connection from the PC to the installation directory /QIBM/ProdData/Access/Windows, shown in Figure 1. Once you've opened that directory, you can run the cwblaunch.exe program or navigate to the version-specific directory and run the setup.exe program, as in Figure 2. When you run cwblaunch, the program identifies the correct version of the product to install on your PC or server. Figure 3 shows the directories and the available versions of IBM i Access for Windows.
Figure 2 : Each Install Image directory contains setup.exe program and setup.ini file
Figure 3: IBM i Access for Windows installation directories

Directory   Description
Image32     Used for any of the supported 32-bit versions of Windows.
Image64a    Used for any of the supported 64-bit versions of Windows on a PC that uses AMD 64-bit or Intel Xeon 64-bit processors.
Image64i    Used for 64-bit versions of Windows on a PC that uses Intel Itanium 64-bit processors (typically only server systems). This directory is not provided with IBM i Access V7R1.
Passing Parameters to the Installation Program
Regardless of how you install IBM i Access for Windows (DVD or IFS, cwblaunch or setup), you can pass parameters to the installation program. Figure 4 shows a summary of the command line parameters for the programs. The two general categories of parameters are:

• parameters that control the user interface level displayed during the installation
• parameters that specify what is installed and the installation options

Simply starting cwblaunch or setup.exe without specifying any parameters
Figure 4 : Command line parameters that can be passed to cwblaunch or setup.exe
displays the entire sequence of installation panels and dialogs. By default, the program performs a complete install of all the components of IBM i Access for Windows.

Parameters that control the user interface. Figure 5 shows the parameters that you can pass to either cwblaunch or setup and the effect of the parameters. Figure 6 is the Choose Setup Language dialog that you can suppress with the /S parameter. Note that if you use the /v/qn (no user interface) parameter, users are not prompted to reboot their PCs; however, if any open Windows programs have unsaved work, users are prompted to close the open programs so that the reboot can occur.

Parameters that specify what is installed. In addition to controlling how the installation program interacts with you, you can specify which features of IBM i Access for Windows to install. In the IBM documentation, the parameters are called "public properties" (this is based on the Windows Installer terminology). Figure 7 shows the CWBINSTALLTYPE public property and the three values that you can specify for the property. The property value corresponds with the Setup Type dialog in Figure 8. To combine the user interface parameters with the public property, you can specify them together like this:

cwblaunch /S /v"/qn CWBINSTALLTYPE=PC5250User"

Note that the parameters following the /v are enclosed within double-quotation marks.
Figure 5: Parameters that control the installation user interface

Parameters   Description
(none)       Default; displays all installation panels and dialogs. You can change any of the installation options, select features to install, and cancel the install when using the no-parameters install.
/S           Suppresses the Choose Setup Language dialog (see Figure 6). The language to install is selected based upon the Windows default language selection.
/v/qr        Reduced user interface. Displays a progress bar during the install; prompts for reboot at end of install.
/v/qb        Basic user interface. Displays a progress bar during the install; prompts for reboot at end of install.
/v/qn        No user interface displayed during install. Does not prompt for reboot at end of install; the reboot occurs automatically. If there are any open programs, the user is prompted to end them.

Notes:
• You can include the /S parameter with any of the other parameters. Example: cwblaunch /S /v/qn performs a "truly silent" install.
• Do not enter a space between the /v characters and the characters that follow.
• There is no practical difference between the /v/qr and /v/qb options.
Figure 6: Choose Setup Language dialog is displayed first in the install process

Figure 7: Parameters that control the features that are installed

Parameters                     Description
(none)                         Default; installs all features of IBM i Access.
/vCWBINSTALLTYPE=Complete      Installs all features of IBM i Access (same as the default option).
/vCWBINSTALLTYPE=Custom        Indicates that the Custom Setup dialog (see Figure 9) is to be displayed, allowing selection of features to install.
/vCWBINSTALLTYPE=PC5250User    Installs the PC5250 Display and Print Emulator.

Notes:
• The CWBINSTALLTYPE values correspond to the options on the Setup Type dialog shown in Figure 8.
• The property name CWBINSTALLTYPE is case-sensitive.
Custom Installations
Selecting the installation type option "Custom" in Figure 7 (or choosing the Custom option from the Setup Type dialog in Figure 8) displays the Custom Setup dialog in Figure 9. But what happens if you also use the silent install feature, for example, by running the following command?

cwblaunch /S /v"/qn CWBINSTALLTYPE=Custom"
The answer is that the silent install proceeds, and the default features specified in an interactive custom setup are selected. Figure 10 shows the features included with each of the three standard setup types.

Figure 8: Setup Type dialog correlates to the CWBINSTALLTYPE property

Sometimes, you will want more control over the features that are installed. Using the ADDLOCAL public property and the identifiers in Figure 10, you can specify a comma-delimited list of features to be installed. For example, to install the 5250 emulator, SSL, data transfer, and ODBC, use a command such as this:

cwblaunch /S /v"/qn ADDLOCAL=emu,ssl,dt,odbc"
If you use ADDLOCAL, do not use the CWBINSTALLTYPE property. The Required Programs feature (identifier “req”) will always be installed automatically, so you need not specify it in the ADDLOCAL list.
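If you settle on a command like the one above, one convenient way to hand it to users (or to a software-distribution tool) is a one-line batch file that points at the install image. This is only a sketch of that idea: the server name and share path are examples, so substitute the location of your own install image.

@echo off
rem Silent, selective install of IBM i Access for Windows.
rem \\MYIBMI\QIBM is an example share name for the /QIBM directory in the IFS.
\\MYIBMI\QIBM\ProdData\Access\Windows\cwblaunch.exe /S /v"/qn ADDLOCAL=emu,ssl,dt,odbc"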
Using the setup.ini File
Figure 9: Custom Setup dialog used in an interactive install to select features to install

The previous examples show how to pass parameters to the cwblaunch program. To run it, enter the command directly in the Windows Run program or in a Command Prompt window, or
create a batch file that contains the program name and the parameters. Another technique is to embed the parameters in the setup.ini file associated with the install image. For example, Figure 2 shows the location of the setup.ini file used with the 32-bit install image. There are corresponding setup.ini files for the 64-bit install images, as well. Before making any modifications to a setup.ini file, you may want to make a backup copy of the file. Figure 11 shows an excerpt of the default setup.ini file for the 32-bit install image. The following are keys in the [Startup] section that you can modify:
• CmdLine: Modify this to specify the user interface level and the features to install.
• EnableLangDlg: Modify this to suppress the language selection dialog (see Figure 6).

Figure 11: The default setup.ini file

Figure 12 shows the modifications to setup.ini to perform the following during the installation:

• suppress user interface dialogs and panels (the /qn parameter added to the CmdLine value)
• install the PC5250 emulator, SSL, data transfer, and ODBC (the ADDLOCAL property added to the CmdLine value)
• suppress the language selection dialog (the EnableLangDlg value set to N)

Figure 12: The modified setup.ini file
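Because Figures 11 and 12 are screen captures of the file itself, the following is purely an illustration of which entries change. The exact value syntax (for example, whether CmdLine needs the /v"..." wrapper used on the command line) and the other keys in the section may differ, so compare against your own default setup.ini before editing; only the two keys discussed above are shown here.

[Startup]
CmdLine=/v"/qn ADDLOCAL=emu,ssl,dt,odbc"
EnableLangDlg=N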
After modifying setup.ini, save it to the installation directory. (Do not change the name of the file; the installation program looks specifically for a file named setup.ini.) You can now run the cwblaunch or setup programs without passing any parameters. The values that you specified in setup.ini will be used for the installation.

You can also burn a DVD image from the install image directory that includes your modified setup.ini file. The install image directories contain an autorun.inf file that runs the setup program. If you burn a DVD directly from one of the install image directories, you should be able to provide the DVD to your users as an auto-run DVD that will launch the installation process.

Figure 10: Features of IBM i Access for Windows
(Setup types: the Complete setup type installs every feature listed; the PC5250User setup type installs Required Programs plus the 5250 Display and Printer Emulator; the Custom setup type installs whatever is selected on the Custom Setup dialog.)

Feature                                   Identifier
Required Programs                         req

Optional Features
  AFP Workbench Viewer                    viewer
  Toolbox for Java                        tbj
  5250 Display and Printer Emulator       emu
  Secure Sockets Layer (SSL)              ssl
  Operations Console                      oc
  Directory Update                        dir
  Incoming Remote Command                 irc

System i Navigator
  (Base Support)                          inav
  Basic Operations                        inavbo
  Work Management                         inavwm
  Configuration and Service               inavcfg
  Network                                 inavnet
  Integrated Server Administration        inavisa
  Security                                inavsec
  Users and Groups                        inavug
  Databases                               inavdb
  File Systems                            inavfs
  Backup                                  inavback
  Commands                                inavcmd
  Packages and Products                   inavpp
  Monitors                                inavmon
  Logical Systems                         inavlog
  AFP Manager                             inavafp
  Application Administration              inavad

Data Access
  Data Transfer Base Support              dt
  Data Transfer Excel Add-in              dtexcel
  ODBC                                    odbc
  OLE DB Provider                         oledb
  .NET Data Provider                      dotnet
  Lotus 123 File Format Support           lotus123

Printer Drivers
  AFP Printer Driver                      afp
  SCS Printer Driver                      scs

Programmer's Toolkit
  Headers, Libraries and Documentation    hld
  Java Programmer's Tools                 jpt

Note: Custom setup type assumes that no additional features are selected on the Custom Setup dialog.
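To tie the identifiers in Figure 10 back to the command-line approach, here is one more hypothetical example of my own: a selective, reduced-interface install that adds the emulator, ODBC, and the AFP Workbench Viewer. The feature identifiers come straight from Figure 10; the combination itself is simply an illustration.

cwblaunch /S /v"/qb ADDLOCAL=emu,odbc,viewer"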
Many More Options
The installation process provides many more options and features that you may want to investigate. One helpful tool for doing so is IBM's PDF manual, IBM i Access for Windows: Installation and setup (publib.boulder.ibm.com/infocenter/iseries/v6r1m0/topic/rzaij/rzaij.pdf). Using the examples I've shown in this article and the additional information from the IBM manual, you can create tailored installation images for your IBM i Access 6.1 or 7.1 users. ■
Craig Pelkie ([email protected]) is a technical editor for System iNEWS and has worked as a programmer with IBM midrange computers for many years. He has also written and lectured extensively on IBM i technologies, including client/server programming, Client Access, Java, WebSphere, .NET applications for IBM i, and web development.
Chapter 3:
IBM Systems Director 6.2.1
The latest release provides improved functionality
by Greg Hintermeister

IBM Systems Director 6.2.1 was recently released; this iteration focuses on addressing specific customer feedback: requests for new functionality and complaints about how things worked. This article highlights the changes in IBM Systems Director 6.2.1.
Enhanced Install and Update Experience
If you are already using Systems Director 6.2, then updating to version 6.2.1 is quite simple. Just select Update IBM Systems Director from the top of the Systems Director welcome page. This action will connect you to ibm.com and show all the available Systems Director updates. Click Download and Install, and you'll be updated in no time.

If you are new to Systems Director, then you will want to use the IBM Systems Director Pre-Installation Utility. This tool analyzes the physical or virtual server onto which you plan to install the Systems Director Server and then shows the results (Figure 1) to ensure that all the requirements are met. This tool works for every OS on which you can install the server: AIX, Linux, and Windows. Although Systems Director Server does not install on IBM i, Systems Director can manage IBM i systems very well.

Figure 1: IBM Systems Director Pre-Installation Utility output

The following are all the elements analyzed to ensure the Systems Director Server installation will go smoothly:

• Runtime authentication
• OS compatibility
• Host architecture
• Processors
• Disk space available
• Memory available
• Software required
• Port availability
• Promotion validity
• Intelligent Platform Management Interface (IPMI) status (Linux only)
• Security-Enhanced Linux (SELinux) status (Linux only)
• Migration information
• Performance information
• User name check
• RSA check (Windows and Linux only)
• Short name (8.3 names) check (Windows only)
• Paging size check
• Locale check (Linux only)
• File size limit check (AIX only)

If you are installing on Windows, a tool is included with the IBM Systems Director Pre-Installation Utility to clean up older versions of the Systems Director server and agent. Although you should first use the default uninstall process, the SySDirRemoval.exe tool might help you clean up all files. This tool is located in the \bin folder.

The Systems Director DVD media includes an Install launchpad. If you order the media directly, or if you purchased any of the Systems Director editions, then the DVD provides the Install launchpad to guide you through the installation process.

For ease of setup on managed systems, the following agents are automatically embedded into the following IBM OSs:

IBM Operating System       Agent Level Shipped in OS
AIX 7.1 TL00 (710)         6.2.0.1 + CIM update
AIX 6.1 TL05 (61L)         6.2.0.1 + CIM update
VIOS 2.2.0                 6.2.0.1 + CIM update
VIOS 2.2.0.11-FP24 SP01    6.2.1.0

In Systems Director 6.2.1, Update Manager is enhanced for a better experience when updating managed systems. The usability enhancements include settings customization where you need it, auto refresh on update pages, and more detail in error messages. Power Systems users will be happy to learn that Update Manager now updates Power I/O firmware. A new wizard asks which target device you want to update and filters the list to show only those devices that can be updated using the selected patch.
Service and Support Enhancements
Although Service and Support Manager is not included in the Systems Director base installation, there is no additional charge to use it, so I highly recommend downloading and installing it. Service and Support Manager analyzes events received from your managed systems; if they are deemed serviceable, it automatically collects data and creates a service request at IBM. In Systems Director 6.2.1, Service and Support Manager also collects performance management data for Power Systems with an AIX OS. Once collected, the data is securely transmitted to IBM support.

In addition, with Systems Director 6.2.1 you can now manually open a service request through Service and Support Manager. If you determine that an event is serviceable but has not been processed, then you can collect service data and have it sent to IBM along with a service request.

A common request from customers is that although they want to monitor many kinds of systems, they also want different kinds of data collected on specific systems. Systems Director 6.2.1 makes this possible. In the properties of each system, the Service and Support tab is enhanced to show selections for problem reporting, inventory reporting, and performance management data reporting.

An important change in Systems Director 6.2.1 is that as soon as the plug-in is installed, serviceable problems are monitored, and if problems are found, data is collected. You no longer have to activate specific systems. However, in order to have the data sent to IBM support, you need to open the Service and Support Getting Started wizard.
Extending Systems Director
Increasing numbers of customers are interested in using Systems Director through their own custom scripts and applications. There are two ways to do this. The first is through the command-line interface. A newer method is through the IBM Systems Director Software Development Kit (SDK). Although you must register to use the SDK, there is no fee. To register, go to ibm.com/vrm/4api1; to learn more about the SDK, go to publib.boulder.ibm.com/infocenter/director/sdk/index.jsp; for access to the SDK forums, go to ibm.com/developerworks/forums/forum.jspa?forumID=1852. The SDK lets you use web-based APIs to pull data from Systems Director or push down to run Systems Director tasks. You can also register your own applications in the Systems Director UI through the External Application Launch. This lets you customize a context menu to launch your own web UI.

Another way to extend Systems Director is to use hierarchical management. This means that you can have a global Systems Director server installation discover and manage as many as four domain-specific Systems Director servers. This capability was introduced in Systems Director 6.2 and enhanced in version 6.2.1.

IBM Systems Director Website Links
IBM Systems Director website: ibm.com/systems/software/director
IBM Systems Director download site: ibm.com/systems/software/director/downloads
IBM Systems Director demo site: ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/W3e8d1c956c32_416f_a604_4633cd375569/page/IBM%20Systems%20Director
IBM Systems Director Facebook page: facebook.com/pages/IBM-Systems-Director/193362963483
IBM Systems Director SDK: ibm.com/vrm/4api1; publib.boulder.ibm.com/infocenter/director/sdk/index.jsp; ibm.com/developerworks/forums/forum.jspa?forumID=1852
Figure 2 : IBM Storwize V7000 inventory view
Active Energy Enhancements
Active Energy Manager adds support for the latest systems, as well as additional UPS and PDU power management devices. The hardware list can be found in the Information Center. Active Energy Manager also adds new hardware monitoring features, such as the ability to monitor power usage of attached I/O drawers. In addition, you can monitor aggregated power values across the whole server. One of the top customer requests has also been added: the ability to manage power usage per partition. This allows processors allocated to some partitions to be dynamically throttled while processors allocated to others run at full capacity.

Other attractive enhancements that I'll cover in more detail in future columns include:

• Cost calculator: Provides additional calculations to help determine how much money you are actually saving by using power savings and power capping. It also now provides estimates of future savings with continued use of power savings mode.
• New performance monitor views.
VMControl Enhancements
Almost all the enhancements in VMControl 2.3.1 are based on direct customer requests, which are the most valuable improvements for real-world customer needs. I'll go into more detail in a future column, but the following is a summary:

Deploying images in a virtual appliance. VMControl Standard Edition is dedicated to deploying new workloads onto target systems. VMControl 2.3.1 adds new and simplified deployment methods. Now with VMControl 2.3.1 you can deploy AIX without needing to configure Network Installation Manager (NIM) by using a storage-based image repository. Simply select an existing Virtual I/O Server (VIOS) as your repository, and you can capture AIX or Linux virtual servers (partitions) and then deploy them later. The only requirement is that your VIOS needs to be configured to use SAN storage. The VIOS repository also adds fast copy capabilities, but I'll discuss those in a future column. You can also use an existing partition with a SAN Volume Controller to achieve similar results. For those using VMware and Hyper-V, VMControl 2.3.1 has started integrating Tivoli Provisioning Manager for Images. z/VM users will see enhancements, as well.

System pools. VMControl Enterprise Edition has additional capabilities for adding pre-existing virtual servers into system pools. This can be done for multiple virtual servers by grouping them into
a workload. In addition, you can now choose to optimize a server system pool manually or on a repeating basis. This ensures that all the servers in the pool are being utilized equally.
Storage Enhancements
The biggest change in storage is the new plug-in called Storage Control. If that sounds familiar, it is because Storage Control was embedded in VMControl. However, customers asked for many storage enhancements that did not directly relate to VMControl, so Storage Control was enhanced and can be installed on its own. Storage Control uses Tivoli Storage Productivity Center technology to discover and manage midrange storage, including the new IBM Storwize V7000. With this integration, you now get a common management interface for storage, along with your server and network management for midrange and most high-end storage systems, as well. One look at the inventory for an IBM Storwize V7000 (Figure 2) shows that a broad set of data is collected and can be used to monitor and manage storage that your servers are using.
Performance and Security Enhancements
Although Systems Director had some performance issues in early releases, each new release is focused on improving that. In Systems Director 6.2.1, you can customize how many job instances to save when you schedule a job. For example, if you schedule to collect inventory every week, after a year you will have 52 job instances stored in Systems Director. To improve performance and reduce the database storage needs, you could reduce that number to four so that you have the previous month's worth for troubleshooting.

The Performance Tuning and Scaling Guide for IBM Systems Director 6.2 is available at www-03.ibm.com/systems/software/director/downloads/mgmtservers.html. This guide is kept up to date and is a good resource if you have questions or need tips on performance and scalability.

One of the security enhancements in Systems Director 6.2.1 is the requirement for configuring a 1:1 credential mapping for single sign-on (SSO) when launching the Hardware Management Console (HMC). Otherwise, you are prompted for a password.
Other Enhancements
All the new functionality in Systems Director 6.2.1 comes with new command-line interfaces. A command that was long overdue is the new Revoke Access command. If you need to write a script to revoke access to a system, you can use the following command: system/revokeaccesssys
Finally, network enhancements have been made for both the base Network Management function and the Standard Edition's Network Control plug-in. In the base network management, enhancements are added to support stacked switches as well as other BladeCenter configurations. In addition, third-party switches can be configured by downloading partner plug-ins. This was already supported for BladeCenter switches but is now supported for standalone switches.

For those who have Network Control, version 1.2.1 includes new features, as well. Here is a quick summary:

• Support for BladeCenter Power Blade virtual switches and virtual network adapters without the HMC
• Support for VIOS managed with the Integrated Virtualization Manager (IVM)
• Discovery of virtual network adapters
• Port-level topology display, with relationships between the virtual server, the virtual switch, and other servers
For More Information
Systems Director 6.2.1 includes numerous updates and enhancements. For more information about these improvements, or to provide feedback, see the sidebar "IBM Systems Director Website Links." ■

Greg Hintermeister ([email protected]) works at IBM as a user experience designer and is an IBM master inventor. He has extensive experience designing user interaction for IBM Systems Director, IBM Virtualization Manager, System i Navigator, mobile applications, and numerous web applications. Greg is a regular speaker at user groups and technical conferences.
Chapter 4:
How to Personalize IBM Systems Director
Customization tips
by Greg Hintermeister
I’m going to be a little bit two-faced in this article. I typically tell you how great it is that IBM Systems Director can manage many kinds of servers, storage, and network systems, and I explain how much you can do with the more than 500 tasks (and counting) that are built in to Systems Director. But in this article, I’m going to tell you that sometimes it can be annoying to have to wade through all those systems and tasks that you don’t care about just so you can manage the systems you really do care about. Let’s look at how you can personalize IBM Systems Director to make it quite fast to manage your systems by customizing what you see when you log on, grouping your systems based on interesting criteria, and removing tasks you don’t want to see.
Personalize Startup
The first thing I suggest is to define your startup pages, or those tasks you commonly use and want to see immediately after you log on. My startup pages include Welcome, Health Summary, Monitors, and Virtual Servers and Hosts, as Figure 1 shows. These tabs show up when I log on, and I can minimize the navigation area so that I have more space to work in and can get to what I need more quickly. I chose these four tabs for the following reasons:
Welcome. I really like the Welcome page’s Manage tab because it shows me the categories of tasks, or activities, I can work in. Although in many cases I like to get directly to a particular system to work with it, in other cases I prefer to, for example, click on Update Manager to be guided through managing updates across multiple systems.
Health Summary. Health Summary is where I can see all the resources I’m interested in. I have a whole section on this topic later in the article, but in one screen I can see a dashboard of important metrics, my favorite systems and groups, any system with problems, and other custom groups added as thumbnails.
Monitors. The Monitors tab gives me fast access to view real-time metrics from systems I’m interested in. The drop-down in the tab shows recently viewed systems, and the collection of views lets me see common monitors for an OS or a detailed list of monitors for specific OSs. Note that when I start out, I usually right-click a system and select System Status and Health, Monitors. This way the recent systems drop-down fills up and will bring this tab into focus for that system. Over time, I can use the recent systems drop-down and quickly switch between systems to view active monitors.
Virtual Servers and Hosts. The Virtual Servers and Hosts tab is a great place to view at-a-glance utilization for your physical servers hosting virtual servers. I can see multiple platforms (PowerVM, VMware, Hyper-V, etc.). It also shows real-time CPU utilization directly from the hypervisor (i.e., there’s no agent required). The tab also lets me see allocations for processor and memory. From here I can quickly edit the allocated resources, view a topology map, or drill into energy management for the physical host.
Figure 1 : Sample startup pages
Figure 2 : Adding the Welcome task to the startup page
IBM Systems Director Website Links
IBM Systems Director website: ibm.com/systems/software/director
IBM Systems Director download site: ibm.com/systems/software/director/downloads
IBM Systems Director demo site: ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/W3e8d1c956c32_416f_a604_4633cd375569/page/IBM%20Systems%20Director
IBM Systems Director Facebook page: facebook.com/pages/IBM-Systems-Director/193362963483
To add these tabs as your startup pages, select Find a Task at the top of the navigation area. Enter the task name—for example, “Welcome”—in the table search entry. After you find a task, click the link to open it, as Figure 2 shows. You can follow these steps for each of the tasks I list. For the Welcome task, you’ll notice that there are a few to select from. Select the one in the General category. After the tab for that task is open, in the top right corner of the screen, select Add to My Startup Pages from the Select Action
drop-down menu. You need to do this in the order in which you want the tabs to appear. You can also find the tasks in the navigation area on the left. However, the Welcome page doesn’t show up as a tab—so my instructions are the only way to make the Welcome task a tab. To determine which tab should be the default to display after logon, select My Startup Pages from the navigation area, as Figure 3 shows, then select the default. My suggestion is to make the Health Summary tab your default, which will give you instant access to your personalized resource list; in addition, the dashboard graphs will automatically start collecting data.
Figure 3 : Configuring the default tab to display on the startup page
Personalize Resources
After you personalize what you see when you log on, you’ll want to personalize which resources you see. To do this, I suggest opening the Navigate Resources task from the navigation area and clicking All Systems. Browse the list; when you find a favorite, right-click that system and select Add To, Favorites. This will instantly add your system to the Health Summary tab’s Favorites section. You can also add groups to your favorites list. As an example, select Find a Resource and enter HMC and Managed Power Systems. Select the group name and add it to your favorites list. An added feature is that if a group is in the favorites list, the Problems and Compliance columns aggregate the problems of any member in that group. Next, create a dynamic group. A dynamic group analyzes the criteria you specify. When it finds a hit in the database, it adds that system as a member of the group.
Figure 4 : Creating a dynamic group
Dynamic groups are kept current by listening for any change in the database. I like the idea of tagging my systems by keyword. I use the description as my tag field. I just add the keyword “Greg” to the description of any system and it instantly shows up in the dynamic group. To create this group, follow these steps:
1. Go to Navigate Resources and click Create Group.
2. On the Name page of the wizard, give it a name such as “Greg Systems.”
3. On the Type page, select Dynamic. Also, make sure you select Groups as your group location so that it appears on the first page of Navigate Resources.
4. On the Define page, select Add. This brings up the Add Criterion dialog box, which Figure 4 shows.
5. Select Any System for the type of system. This way you can add any kind of OS, server, blade, chassis, Hardware Management Console (HMC), etc.
6. Open the System Properties folder and select Description.
7. Select the contains operator and enter “Greg” (or your keyword of choice).
8. Click OK.
Figure 5 : Adding groups to the Health Summary task
Figure 6 : Editing permissions for tasks
After the group is created, you can edit the description of any system. The system will then instantly appear in your new group. Finally, right-click the group itself and select Add To, Health Summary. This will add this new group as a thumbnail in the Health Summary task, as Figure 5 shows. You can also personalize the list of key metrics that appear in the dashboard. I use this space to monitor my management server’s CPU utilization. Notice in Figure 5 how my AIX partition is using shared processors to add resources when necessary and take them away when no longer needed. To add metrics of interest, click the Monitors tab, select your management server OS, then right-click CPU Utilization %. Select the Add to Dashboard option.
One last thing about the Health Summary: You can personalize how many rows appear in the embedded tables. Open the navigation area and select Navigation Preferences in the Settings category.
Personalize Task Lists
Now that you’ve personalized the systems and groups you care about, let’s prune the tasks you don’t care about. To do this, you need to create a user role and assign it to the user ID you use to sign in to Systems Director. In the navigation area, select Roles in the Security category, then click the Create button. Once you’re in the wizard, give the role a name such as “Greg Tasks.” On the Permissions page, you can select all tasks and then remove the tasks you don’t want to see, as Figure 6 shows. I selected many categories in this example, but the categories not selected (still in the Available list) won’t show up in the UI after I assign this role. Another idea to consider is to create different user profiles for different tasks. For example, if you want to focus on managing updates, you could create a role that has permission to view Release Management, General, Inventory, and System Status and Health. This will make the UI much more streamlined for your update tasks. After a role is created, go to your Users list and assign the role to a user. After the role is assigned, simply sign off and then sign on again. When you sign on, you’ll see the subset list of tasks in the navigation area and in your context menu.
Customization
With just a bit of work on your part, you can personalize how Systems Director looks. You’ll get a lot more out of using Systems Director if you customize it to suit your needs. For more information about using Systems Director, see the sidebar “IBM Systems Director Website Links.” ■ Greg Hintermeister (
[email protected]) works at IBM as a user experience designer and is an IBM master inventor. He has extensive experience designing user interaction for IBM Systems Director, IBM Virtualization Manager, System i Navigator, mobile applications, and numerous web applications. Greg is a regular speaker at user groups and technical conferences.
Chapter 5:
How to Implement Open-Source Solutions: Laying the Linux Foundation
by Erwin Earley
This is the first in a series of articles covering the implementation of open-source solutions in IBM i shops. A special emphasis of this series will be to highlight features of the virtualization capabilities of IBM Power Systems as well as management features of each particular open-source solution being reviewed. The intent of this series is to provide methods for establishing open-source solutions that limit the skill gap for the people responsible for the ongoing care and feeding of the application. Put another way, the goal of the implementations is to have open-source solutions that are transparent to the end user and carry little management overhead. Topics planned for this series include:
Setting up File Serving. This article will focus on how to establish an open-source file server to replace an existing Windows-based file server. Included will be steps for integrating different authentication mechanisms, simplified migration of existing data, and ongoing management of the file server, including adding and modifying file shares.
Network Services. This article will focus on how to set up a Domain Name System (DNS) server as well as a Dynamic Host Configuration Protocol (DHCP) server.
Email Services. This article will focus on using open-source solutions for email filtering, including spam blockers and virus checkers.
Firewall. This article will show how to protect your network infrastructure from unwanted traffic.
Each of the above articles will highlight features of virtualization provided by Power Systems as well as the IBM i operating system features that allow these functions to be deployed as network appliances. Additional articles in the series will focus on providing a robust and fault-tolerant environment for each open-source solution, including:
High Availability. This article will cover open-source tools for the implementation of HA solutions, including clustering and data replication.
Backup/Restore. This article will focus on open-source tools that can be used to back up and restore application data, including support for file-level restore and file versioning.
Exploring the Capabilities
In this first article I want to take some time to go over capabilities of Power Systems and IBM i that make the implementation of network appliance-type solutions attractive as well as practical. Let’s start
by talking about allocation of processor and memory resources to the workload. These workloads are going to run on Linux in an LPAR. As such, they will be assigned a certain amount of processor and memory resources. My experience with open-source solutions on the Power platform shows that infrastructure-type solutions need little in the way of resources. Typically, shared processor resources will be assigned, with the desired resource being less than 1.0 processor units. Likewise for memory, we can allocate the appropriate amount of memory needed for the solution being implemented. In the workload-specific articles, I will provide more information on recommended settings for these resources. Since we control the resource allocation at the LPAR level, we can right-size the workload—that is, we can assign exactly the amount of resource required by the workload. Additionally, we can take advantage of uncapped processors to allow the partition to be allocated exactly the amount of processor required and for processor resources to be allocated or de-allocated as workload requirements change. Similar capabilities with memory allocation are available to a Linux LPAR on the POWER6 and POWER7 models through Active Memory Sharing.
Simplifying the Linux Installation
A Linux installation on POWER using IBM i for Virtual I/O requires a number of steps, including:
• Logical Partition Creation
• Virtual I/O Definition
• Virtual Network Support
• Linux Installation
• Installation of Service and Productivity Tools
I am not going to attempt to go over the steps needed to create the logical partition, define the virtual I/O, or set up support for virtual networks; however, I would like to discuss how to simplify the installation of Linux itself as well as the Service and Productivity Tools. The Service and Productivity Tools (www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html) provide a number of additional functions and capabilities specific to the POWER architecture, including the ability to respond to a power-off request with a clean shutdown of the operating system, as well as supporting Dynamic LPAR functions. One can certainly perform the Linux installation and then, after the installation is complete, download and install the utilities; however, an easier way is to take advantage of the IBM Installation Toolkit for Linux (www14.software.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html). In a nutshell, the PowerPack CD essentially front-ends the Linux installation with a number of questions related to the desired configuration of the Linux instance being built. From the answers, a response file is built that is used to perform a silent (no user interaction) installation. The PowerPack CD also takes care of installing the necessary Service and Productivity Tools. One additional benefit of the PowerPack CD is that you have the same look and feel for the Linux installation regardless of whether the RedHat or Novell/SuSE Linux distribution is being installed.
Operating System Replication
By having the operating system on its own disk, we can have an environment in which we can install the operating system a single time and then copy it as we wish to implement additional open-source solutions. There are a couple of keys to making this work. First, the installer generates the network configuration based on the MAC address of the network adapter. If you’re using virtual Ethernet, the
MAC address will be different for any replicated images that are associated with a different LPAR. This will result in a new device handle being created for the Ethernet adapter (/dev/eth#), and the original device handle will remain on the system even though it is no longer valid. This can be corrected by renaming the network configuration as well as removing the name association. First, let’s change the name of the network configuration file. Currently, the network configuration file’s name will include the MAC address:
cd /etc/sysconfig/network
ls ifcfg-eth*
Rename the resulting file (substituting the MAC-suffixed name that the ls command displayed):
mv ifcfg-eth-<MAC address> ifcfg-eth0
Also, edit the ifcfg-eth0 file and comment out the UNIQUE entry (put a # at the start of the line). Now let’s remove the mapping of the MAC address to the device handle:
rm /etc/udev/rules.d/30-net_persistent_names.rules
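If you replicate images often, the renaming steps above can be collected into a small script. The sketch below assumes a SLES-style layout where the generated file is named ifcfg-eth-id-<MAC> and the persistent-name rule is /etc/udev/rules.d/30-net_persistent_names.rules, as described in the text; adjust the file names for your distribution and release.
#!/bin/sh
# Reset the cloned image's virtual Ethernet configuration so the next boot
# regenerates it for the new LPAR's MAC address.
cd /etc/sysconfig/network || exit 1

# Rename the MAC-suffixed configuration file to a generic ifcfg-eth0
old=$(ls ifcfg-eth-id-* 2>/dev/null | head -n 1)
[ -n "$old" ] && mv "$old" ifcfg-eth0

# Comment out the UNIQUE entry so the file is no longer tied to the old adapter
sed -i 's/^UNIQUE/#UNIQUE/' ifcfg-eth0

# Remove the udev mapping of the old MAC address to the device handle;
# it is re-created with the correct MAC at the next boot
rm -f /etc/udev/rules.d/30-net_persistent_names.rules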
This file will be re-generated the next time the system boots, with the correct MAC address of the Ethernet adapter. In addition to the network device mapping, the reference by the boot loader to the bootable disk needs to be changed. The installer configures the boot loader to point to a specific SCSI device/address to boot from; the following steps change this to a generic name:
• Edit the /etc/lilo.conf file
• Ensure that the ‘boot’ line indicates ‘boot = /dev/sda1’
• Ensure that the ‘root’ line indicates ‘root = /dev/sda3’
• Re-generate the yaboot.conf configuration file with the ‘lilo’ command
That’s it! The disk with the Linux OS is now unique. The partition should be shut down and the storage space saved for later use.
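For reference, after those edits the relevant lines of /etc/lilo.conf look something like the fragment below. The partition numbers follow the example layout above (boot on /dev/sda1, root on /dev/sda3) and will differ if your install disk was partitioned differently.
# /etc/lilo.conf (excerpt) -- point the boot loader at generic device names
boot = /dev/sda1
root = /dev/sda3
Then regenerate the yaboot configuration from the edited file:
lilo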
What About VIOS?
Keep in mind that the above steps are for Linux implementations that are using I/O hosted by IBM i. If your storage is hosted by VIOS, you can accomplish the same thing—a replicable Linux image— by using the capture and deploy features of VMControl in IBM Systems Director (I will cover the VMControl capture and deploy in a future article).
Storage Management
One of the keys to implementing these workloads will be to make the resource allocation as flexible as possible. We already mentioned configuring the partition for memory and processor flexibility; now let’s spend some time looking at how we can make the storage for the workloads flexible as well. With Linux as the operating system and IBM i providing the storage virtualization, we have the ability to leverage Logical Volume Manager (LVM) along with Network Server Storage Spaces (NWSSTG) to implement a storage scheme for the open-source solution that can grow over time. I recommend that the installation of Linux and the open-source application being implemented be stored on a “disk” that is separate from the storage that will be used for the data. As an example, when we implement a file serving solution, we will have one Network Server Storage Space that Linux and SAMBA (the open-source file server) will be installed on and a second storage space that the file resources being shared
will be stored on. Separating the operating system from the data will make it easier for backup/restore, operating system updates, and usage of LVM. Logical Volume Manager is a software solution available in Linux that allows one or more disk resources to be logically combined, or pooled, into a single logical disk resource that can then be acted upon by the operating system. Essentially, LVM provides an abstraction layer between the file system and the physical media. As an example, we can take two virtual disks and put them into a single Logical Volume Group (LVG) and then from that Logical Volume Group we could create one or more Logical Volumes (LV) that could then be formatted as a file system. You can think of a Logical Volume Group as a “physical” disk and the Logical Volume as a partition on the disk. Keep in mind that it is difficult, sometimes impossible, to change the size of a disk partition. The benefit of LVM is that if additional space is required in a Logical Volume, we can simply add a new disk resource (physical disk or disk partition) to the Logical Volume Group and then use that new resource to increase the size of the Logical Volume. Combine that with the ability to create the virtual disk and add it dynamically to the Linux partition, and we end up with an incredibly flexible storage solution for our open-source implementations. Let’s walk through the steps for creating the initial LVM configuration. These steps assume you already have Linux installed. I provide the command-line commands as they are the same regardless of which Linux installation you are using. The first several steps will actually be performed on the IBM i that is hosting the Virtual I/O for Linux. First, a new virtual disk needs to be created:
CRTNWSSTG NWSSTG(DATA01) NWSSIZE(10240) FORMAT(*OPEN)
The above command creates a new virtual disk called DATA01 that is 10GB in size. Now the storage space needs to be linked to the Network Server being used to provide I/O to the Linux operating system:
ADDNWSSTGL NWSSTG(DATA01) NWSD(LINUX)
The above command links the virtual disk DATA01 to the Network Server LINUX. You can think of the link process as inserting a device onto a SCSI bus. The link is done dynamically, so the disk is immediately available to the Linux operating system. Now in Linux we need to re-scan the SCSI bus in order to discover the new disk:
echo "- - -" > /sys/class/scsi_host/host0/scan
While the above command may look a bit cryptic, it is actually quite simple. The three dashes are wildcards for the SCSI channel, target ID, and LUN to scan; leaving all three as dashes tells the kernel to scan everything on that bus. The /sys/class/scsi_host/host0/scan entry is simply a handle in the operating system that represents the scan command for the first SCSI bus (host0). Now we need to put a partition on the disk. Since we are going to use the entire disk for LVM, we will simply create a single partition on the disk. We will use the fdisk command to create the partition:
fdisk /dev/sdb
The above statement starts fdisk on the second disk (sdb) on the system. This assumes that there was only a single disk on the system prior to creation and linking of the new virtual disk. To make the disk usable in LVM, it first needs to be initialized; this is done with the pvcreate command, shown following the sidebar below.
Administration Tools
While the series of articles on implementing open-source solutions has an emphasis on ease of administration, the reality is that you will still need to perform some level of administration in three key areas:
• Application Administration
• Operating System Administration
• Environment Administration
Administration of both the Operating System as well as the Environment can be handled via IBM Systems Director. The intent of IBM Systems Director is to provide a single pane of glass for administration of a company’s IT environment. When it comes to Linux on Power, there are a number of functions that can be performed with IBM Systems Director. As an example, one can take advantage of the Update Manager in IBM Systems Director to build compliance policies that check software levels of the installed software against a known list of updates. The updates themselves come from the distributor-provided update process, and the Linux distribution must be registered with the distributor’s update server; however, by using Systems Director’s update function you have a unified method for checking for and applying updates across all of your Power operating systems.
Another useful function from Systems Director for the Linux environment is the ability to monitor the health and status of various aspects of the server. As an example, a monitor can be established for file system usage that would send an alert when file system usage reaches a certain point, as shown in Figure 1.
Figure 1 : File System Usage
In this scenario, the I/O for the Linux partition is being virtualized from the IBM i partition. An event monitor could be established in Systems Director that would trigger when the Linux file system reaches a defined threshold—at that point an event is raised in Systems Director. The event trigger could cause a script to be started in the Linux partition. The script in the Linux partition could then make an ssh call to the IBM i partition to create a new virtual disk and link it to the Network Server. Finally, the script could take the additional virtualized storage, add it to the Logical Volume, and increase the size of the file system. Talk about seamless integration and autonomics—with IBM Systems Director and a bit of scripting, it’s possible to make the Linux server self-healing for storage (and other) related issues.
Another cool thing that IBM Systems Director brings to the Linux on Power environment is the functions provided by the VMControl plug-in, which gets us into administration of the Environment. The Express edition of VMControl provides the ability to create and modify the Logical Partition that the Linux instance will run in. Where it starts to get interesting is in the Standard Edition, which provides the ability to capture and deploy Linux instances. This greatly enhances the ability for IT environments to implement Linux-based network appliances. Again, if we have built a Linux instance as a base operating system installation (without the installation/configuration of a solution), then we can use VMControl to capture that instance; when we are ready to deploy a solution (like File Serving), we can use VMControl to deploy the Linux instance. The deployment function would create the Logical Partition, restore a new Linux instance (based on the captured image), configure networking in Linux, and start the new server—all at the click of a button! The captured image doesn’t need to be just the operating system; it can in fact be the operating system and any software applications you wish to have installed. As an example, if you want to have a captured file serving appliance, you could establish a Linux partition with the SAMBA file server installed and configured and then capture that image as a deployable file server appliance.
VMControl has the ability to capture the Linux image either to a Linux image repository or to a VIOS server. A future article will delve more deeply into how to use VMControl to capture and deploy Linux-based network appliances. There are a number of tools and utilities that can be used for Application Administration. Each distributor provides its own set of tools. As an example, Novell/SuSE provides yast (Yet Another Setup Tool) with their SuSE Linux Enterprise Server (SLES) distributions, and RedHat provides a number of separate utilities that are all pre-pended with the characters ‘system-config-’. Additionally, many applications provide their own administration tools—as an example, the SAMBA File Server (which we will cover in the next article in the series) provides a web-based tool called SWAT (SAMBA Web Administration Tool) for working with the overall configuration of the file server as well as configuration of the file shares. A free web-based tool that brings together a lot of the operating system management as well as application management is WebMin (webmin.com). The idea behind WebMin is to remove the need to edit configuration files directly (which is exactly what we want to stay away from) and manage the system from a console or remotely. I will be highlighting the WebMin functions throughout the Implementing open-source applications series to show how it can be used to simplify the management and configuration of the specific application being discussed and provide a unified management tool for each Linux-based application you decide to implement within your environment.
pvcreate /dev/sdb
With the above command, LVM will now recognize the disk as a Physical Volume (PV). Now the Volume Group itself is created with the vgcreate command:
vgcreate datavg /dev/sdb
The above command creates a volume group called datavg using the physical volume on the disk (sdb). We are now ready to create the logical volume itself. In this case we are going to use the entire space in the volume group, so we need to find out exactly what space is available:
vgdisplay datavg
The above command will display information about volume group datavg, including the free space. Finally, let’s create the logical volume:
lvcreate -L10G -ndata datavg
The above command creates a logical volume called data in the volume group datavg. The size of the logical volume is 10GB. To make the logical volume available to Linux, it needs to be formatted with a file system and mounted:
mke2fs -j /dev/datavg/data
mkdir /mnt/data
mount /dev/datavg/data /mnt/data
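Before moving on, it is worth a quick check that LVM and the mount look the way we expect. The commands below are standard LVM2 and coreutils tools; the names match the example above.
pvs                      # the new physical volume on /dev/sdb should be listed
vgs datavg               # volume group size should reflect the 10GB disk
lvs datavg               # the 'data' logical volume should show roughly 10GB
df -h /mnt/data          # the mounted file system and its free space
mount | grep /mnt/data   # the device path (often shown as /dev/mapper/datavg-data)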
Notice the device path: the Logical Volume Group is the second element in the path, and the logical volume is the third element. So now that we have the structure for the Logical Volume Group in place, we can take advantage of it to grow the resulting file system dynamically. First, a new virtual disk will need to be created and linked to the network server in the hosting IBM i partition. Once the virtual disk has been linked, the SCSI bus needs to be re-scanned in Linux using the same command I showed earlier. In Linux, the new disk will need to be initialized using the ‘pvcreate’ command shown earlier, but this time replacing the disk identifier with the new disk name. As an example, if this is the third disk in the system, the path would be /dev/sdc. To add the disk to the volume group, the vgextend command is used:
vgextend datavg /dev/sdc
The above command adds the physical volume created on /dev/sdc to the datavg Volume Group. Now the newly created free space in the Volume Group can be added to the Logical Volume:
lvextend -L+10G /dev/datavg/data
The above command adds an additional 10GB to the data volume in the datavg volume group. This assumes that the virtual disk created was 10GB in size. Finally, to make the additional space available to Linux and the application using it, the file system needs to be resized:
umount /mnt/data
e2fsck -f /dev/datavg/data
resize2fs /dev/datavg/data
mount /dev/datavg/data /mnt/data
In order, the above commands do the following: unmount the file system (so any application that makes use of the file system should be stopped prior to the resize), check the file system for errors, resize the file system to use all available disk space, and finally remount the file system.
In addition to the commands used above, you could also set up and maintain LVM through a number of GUI and web-based administration tools.
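To tie this back to the autonomic scenario described in the “Administration Tools” sidebar, the sketch below shows one way a threshold-triggered script on the Linux partition might grow the data file system. It assumes ssh key authentication to a user profile on the hosting IBM i partition; the host name IBMIHOST, the storage space name DATA02, and the network server description LINUX are placeholders, and the new disk is assumed to arrive as /dev/sdc. Treat it as a starting point rather than a finished tool.
#!/bin/sh
# Grow /mnt/data by creating, linking, and absorbing a new 10GB virtual disk.
# Run on the Linux partition; storage is hosted by the IBM i partition IBMIHOST.
NEWDISK=/dev/sdc    # adjust to the device the new disk actually appears as

# Create and link the virtual disk on the hosting IBM i partition
# (the 'system' command runs a CL command from an ssh session).
ssh admin@IBMIHOST 'system "CRTNWSSTG NWSSTG(DATA02) NWSSIZE(10240) FORMAT(*OPEN)"'
ssh admin@IBMIHOST 'system "ADDNWSSTGL NWSSTG(DATA02) NWSD(LINUX)"'

# Discover the new disk, then add it to the volume group and logical volume
# (the whole disk is used as a PV; add an fdisk step here if you prefer a partition)
echo "- - -" > /sys/class/scsi_host/host0/scan
pvcreate "$NEWDISK"
vgextend datavg "$NEWDISK"
lvextend -L+10G /dev/datavg/data

# Resize the file system (stop applications using /mnt/data first)
umount /mnt/data
e2fsck -f /dev/datavg/data
resize2fs /dev/datavg/data
mount /dev/datavg/data /mnt/data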
Management
There are several good tools for day-to-day management of the Linux environment. Two that I recommend are IBM Systems Director and WebMin. IBM Systems Director (www-03.ibm.com/systems/software/director/downloads/index.html) can be used to administer Linux as well as the Logical Partition Linux is running on. WebMin is a free open-source utility that provides web-based management of the Linux operating system as well as numerous open-source applications. For an example of using WebMin and LVM, see the sidebar in the online version of this article at SystemiNetwork.com.
Ready for More?
In this article I walked you through how to create a Linux environment that can be replicated, as well as how to leverage capabilities of Linux and IBM i to provide a dynamic storage environment. I know some of the commands may have seemed a bit daunting; however, once you’ve done them once or twice they are fairly straightforward. This lays the groundwork for implementing open-source solutions without the necessity of becoming a Linux guru. ■ Erwin Earley (
[email protected]) is a managing consultant at IBM who has worked with the Rochester, Minnesota, development lab since 1996. Erwin currently heads up the Open Community Center of Competency in the IBM i Technology Center. He has worked in the IT industry since 1980 and has experience with several Unix variants as well as Linux and IBM i.
Chapter 6:
Optimize System i Performance Adjuster and Shared Memory Pools
Learn how to correctly initialize Performance Adjuster to get the most out of your System i investment
by Tom Reilly
Since its introduction in 1988 as the AS/400, System i has had the reputation of being an extremely productive, low-maintenance, almost self-managing platform, which simply needs to be unpacked, placed on the floor, and turned on. Good, bad, or indifferent, that reputation is true to a certain extent, but one problem that’s persisted is that the server is usually put to work without being tuned efficiently. It’s been my experience over the years that “out of the box” generally means “out of tune.” It’s also been more the rule than the exception that because of System i’s minimal maintenance requirements, most System i shops are dominated by developers and don’t have internal technical expertise to effectively tune server processing resources. There’s a misconception that simply changing system value QPFRADJ and enabling the prepackaged Performance Adjuster tool will resolve performance problems and make the server run efficiently. This article will discuss facts and misconceptions about Performance Adjuster and detail how to correctly initialize Performance Adjuster in conjunction with shared memory pools to optimize performance and get the most out of your System i investment, preventing the Performance Adjuster from overreacting and doing more harm than good. Rationalizing your server workload and initializing Performance Adjuster minimum and maximum ranges requires minimal time and effort and can improve and stabilize performance dramatically. This has the effect of stabilizing operating system faulting as well as application shared memory pool faulting, making performance more predictable.
Some Background
The physical resources that contribute to performance of any platform are CPU, memory, disk arms, and network bandwidth, although the latter is external and out of scope for this discussion. Server performance can only be as good as its weakest processing resource, so it’s important to measure and tune them all effectively. Some non-physical resource factors that can negatively contribute to
performance are database indexing and exceptions such as seize/lock contention and authority lookups. These non-physical factors require a different skill set to remediate and are better left to be considered after base tuning of physical resources has been completed. There’s also a very common misconception with all platforms that a CPU running at 99% is always a bad thing. That’s not necessarily true as long as interactive response time is acceptable and batch work is completing within the maintenance window. Once either of the above is no longer true, the server needs to be more thoroughly tuned and re-evaluated and/or the underperforming components must be upgraded. Other considerations such as work management, batch concurrency (too many jobs running simultaneously) and the associated diminishing returns, performance system values, IFS optimization techniques, database journaling optimization, and housekeeping best practices, among others, are also important contributing factors but also outside this discussion.
Figure 1 : Work with Shared Pools thresholds
(The out-of-the-box ranges generally default to ~5% min and 100% max, allowing for overreaction.)
• Shared Pool *MACHINE used by the OS has a min % 10 and max % 20
• Shared Pool *BASE used by the OS and other subsystems by default has a min % 10 and max % 40
• Shared Pool *SPOOL used by subsystem QSPL for spooling has a min % 1 and max % 2
• Shared Pool *INTERACT used by Subsystem QINTER has a min % 10 and max % 40
• Shared Pool *SHRPOOL1 used by Subsystem QBATCH has a min % 10 and max % 60
• Shared Pool *SHRPOOL2 used by Subsystem &ASYNCSBS has a min % 10 and max % 20
• Shared Pool *SHRPOOL3 used by Subsystem QHTTPSVR has a min % 10 and max % 15
Buying vs. Tuning
A lot of shops without adequate technical expertise tend to assume that poor performance is an indication a CPU upgrade is needed and don’t re-evaluate their I/O, which is the most commonly untuned resource. There can be an impulsive tendency to invest capital in unnecessary processing resources, which may or may not help, as opposed to effectively measuring and tuning all existing resources. When tuning a server, you need to keep in mind that these various resources are equally important and that relieving one resource bottleneck, like memory faulting or disk arm utilization, can create a bottleneck in another resource, like CPU, which had been previously underutilized or running efficiently.
Imagine you’re grinding wheat to produce flour at a mill driven by a water wheel. The water wheel is either underutilized or functioning within spec but needs to be spun faster to increase flour production. The spin of the wheel depends on the amount of water passing through it, but a dam upstream is causing a restricted flow of water. Until that dam is cleared, it wouldn’t make sense to upgrade the water wheel to a larger size. If you consider the water wheel as a CPU unable to be driven at capacity, the water as your workload, and the dam as an I/O bottleneck, you can follow the analogy and see that it also doesn’t make sense to upgrade to a larger CPU until the associated I/O bottlenecks are remediated. Once you break I/O bottlenecks, such as memory faulting and excessive or unbalanced disk arm utilization, the work will flow faster and drive the CPU. Until that happens, you can’t accurately measure the CPU to determine whether it also needs an upgrade. In other words, you wouldn’t want to upgrade to a POWER7 water wheel until you relieve the I/O dam upstream.
What Performance Adjuster Does
System i’s Performance Adjuster can help you manage this situation. Performance Adjuster is enabled by setting system value QPFRADJ, which is dependent on the thresholds, as shown in Figure 1,
within the Work with Shared Pools screen, as shown in Figure 2. Performance Adjuster constantly measures these shared pool thresholds and dynamically reallocates memory resources to relieve faulting or adjusts activity levels to stabilize transitions. When the faulting and/or transition thresholds are reached, Performance Adjuster reassigns memory resources based on the minimum and maximum ranges defined for each shared pool. The two issues that limit Performance Adjuster’s effectiveness out of the box are the fact that most work executes by default in the *BASE pool, and Performance Adjuster ranges default to very low minimums and high maximums, which are too open-ended and can allow an overreaction that can do more harm than good.
Figure 2 : The Work with Shared Pools screen
Performance Adjuster is dependent on shared pools to be most effective. Allowing unrestricted memory ranges is like arbitrarily opening and closing the dam in the analogy above, which makes resource utilization unpredictable. Open-ended adjustment ranges can cause Performance Adjuster to overreact to temporary events, such as an ad hoc interactive query. Performance Adjuster, for example, could react to an interactive event, reassigning memory to shared pool *INTERACT even after the ad hoc event ends, only to react the other way to put things back as they were. This situation can be exacerbated further if the upper range (Max %) for *INTERACT is too high, causing the other pools to go too low. These transitions can create unpredictable results and make performance difficult to measure. Establishing accurate minimum and maximum ranges enables the server to gracefully transition from an interactive-intensive workload during the day to a more batch-intensive workload after business hours and over the weekend.
Managing Activity Levels
Performance Adjuster is also tasked with managing activity levels, which is the only thing it efficiently performs out of the box. Back in the day, before Performance Adjuster was available and manual tuning was required, an inadequate activity level would cause Wait to Ineligible (Wait-Inel) and Active to Ineligible (Act-Inel) transitions on the Work with System Status (WRKSYSSTS) screen, as shown in Figure 3, which resulted in serious performance problems caused by jobs unable to get access to the CPU. Although it’s extremely important to maintain activity levels in proportion to the number of active jobs and/or threads in a memory pool, Performance Adjuster adds very little additional value out of the box because by default, the server ships with all work running out of the *BASE memory pool,
so there are minimal memory pools to adjust. For Performance Adjuster to be most effective, separation of unlike work into separate shared pools is an important prerequisite. Imagine now if your community built a dance hall containing a single, large floor space available to everyone including ballroom dancers, line dancers, jazz dancers, disco dancers, etc. Each dance style performs to different music at a different speed with a varying number of participants and differing needs for space and duration. Imagine the chaos that would ensue if these different dancers simultaneously competed for the same floor space. It would make more sense to carve the building into separate and appropriately sized rooms where the dancers shared smaller floor spaces with others with similar characteristics. Now imagine that dance hall is your System i’s memory, where most work executes by default in the *BASE memory pool and chaos ensues when different types of work—OS, batch, database, asynchronous—compete for the same resources using different priorities, time slices, and threads. As with separate dance floors, it would make more sense to rationalize the different types of work and route them to their own shared memory pool, where they have dedicated resources and execute alongside other jobs with similar work characteristics.
Figure 3 : The Work with System Status screen
Figure 4 : ISV subsystems
Subsystem      Work Type       Priority   Proposed Pool
QINTER         Interactive     20         *INTERACT
QBATCH         Batch           50         *SHRPOOL1
ASYNCHSBS      Asynch          20-50      *SHRPOOL2
QHTTPSVR       Interact        25         *SHRPOOL3
ISVSBS         Asynch/Batch    20-50      *SHRPOOLn
Rationalizing and Implementing Shared Pools
So to effectively tune your server, even before enabling Performance Adjuster, you must rationalize the different types of work running on your server and route them into memory pools sharing similar characteristics. The three main types of work are interactive, batch, and what I call asynchronous—batch work that doesn’t necessarily start and stop, but rather remains active and waits for work to come in, which it processes and then waits some more. Asynchronous work sometimes executes at different priorities than batch work. One good example of asynchronous work is third-party Independent Software Vendor (ISV) subsystems, as shown in Figure 4, which may or may not need to be moved out of *BASE depending
on the amount of resources they utilize. The commands below can be used to route subsystem work out of the *BASE pool into separate shared pools. Note: You must execute the Change Subsystem Description (CHGSBSD) command once for each subsystem, and you must execute the Change Routing Entry (CHGRTGE) command once for each routing entry in a given subsystem.
1. Route batch work from *BASE to Shared Pool 1
❍❍ CHGSBSD SBSD(QBATCH) POOLS((1 *BASE) (2 *SHRPOOL1))
❍❍ CHGRTGE SBSD(QBATCH) SEQNBR(nnn) POOLID(2)
2. Route asynchronous work from *BASE to Shared Pool 2
❍❍ CHGSBSD SBSD(&ASYNCHSBS) POOLS((1 *BASE) (2 *SHRPOOL2))
❍❍ CHGRTGE SBSD(&ASYNCHSBS) SEQNBR(nnn) POOLID(2)
3. Route HTTP work from *BASE to Shared Pool 3
❍❍ CHGSBSD SBSD(QHTTPSVR/QHTTPSVR) POOLS((1 *BASE) (2 *SHRPOOL3))
❍❍ CHGRTGE SBSD(QHTTPSVR/QHTTPSVR) SEQNBR(10) POOLID(2)
Initializing Performance Adjuster
The shared pool characteristics that govern and enforce boundaries around Performance Adjuster can be interactively initialized via the Work with Shared Pools (WRKSHRPOOL) command or programmatically via the Change Shared Pool (CHGSHRPOOL) command. The WRKSHRPOOL command has three views (Pool Data, Tuning Data, and Text), which you toggle between via the F11 key after executing the command. The WRKSHRPOOL command can only be executed interactively but gives you the ability to view and modify all shared pools in a single place. You can execute CHGSHRPOOL interactively or programmatically, but it is pool specific and only allows you to manipulate a single pool at a time. An example of the Change Shared Pool screen is shown in Figure 5.
Figure 5 : The Change Shared Pool screen
The following examples use the CHGSHRPOOL command to introduce adjustment range baselines and eliminate *MACHINE faulting, which can drive up CPU utilization, disk arm utilization, and across-the-board faulting. It’s important to remember that these examples should be used only as a guideline, with the understanding that the needs of a particular server may vary. Best practice is to measure and determine adequate upper boundaries during peak processing times, like the end of the month, because acceptable performance at peak utilization usually guarantees the same during non-peaks.
Note: I’d recommend eventually settling on a *MACHINE pool min/max % range that keeps faulting in that pool at 0.
1. Introduce adjustment range baselines
❍❍ CHGSHRPOOL POOL(*MACHINE) MINFAULT(00.00) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(10.00) MAXPCT(20.00)
❍❍ CHGSHRPOOL POOL(*BASE) PAGING(*CALC) MINFAULT(25.00) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(10.00) MAXPCT(40.00)
❍❍ CHGSHRPOOL POOL(*INTERACT) PAGING(*CALC) MINFAULT(10.00) JOBFAULT(*DFT) MAXFAULT(100) MINPCT(10.00) MAXPCT(40.00)
❍❍ CHGSHRPOOL POOL(*SPOOL) PAGING(*CALC) MINFAULT(*DFT) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(1.00) MAXPCT(2.00)
2. Initialize batch memory pool settings
❍❍ CHGSHRPOOL POOL(*SHRPOOL1) PAGING(*CALC) MINFAULT(*DFT) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(10.00) MAXPCT(40.00)
3. Initialize asynchronous memory pool settings
❍❍ CHGSHRPOOL POOL(*SHRPOOL2) PAGING(*CALC) MINFAULT(*DFT) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(10.00) MAXPCT(20.00)
4. Initialize HTTP memory pool settings
❍❍ CHGSHRPOOL POOL(*SHRPOOL3) PAGING(*CALC) MINFAULT(*DFT) JOBFAULT(*DFT) MAXFAULT(*DFT) MINPCT(10.00) MAXPCT(15.00)
An Explanation of Parameters
The following is a brief explanation of each parameter of the WRKSHRPOOL and CHGSHRPOOL commands. These parameters are the same for both commands but are represented as columns of the WRKSHRPOOL command and rows of the CHGSHRPOOL command:
• Pool identifier: The name of the storage pool (*MACHINE, *BASE, *INTERACT, *SPOOL, *SHRPOOLn).
• Storage size: The desired size of the storage pool expressed in kilobyte (1KB = 1024 bytes) multiples.
• Activity level: The maximum number of threads that can simultaneously run in the pool.
• Paging option: This determines whether the system does (*CALC) or does not (*FIXED) dynamically adjust the paging characteristics of the storage pool for optimum performance.
• Text description: Verbiage associated with this storage pool.
• Minimum page faults: The minimum page faults per second to use as a guideline for adjustment of this storage pool.
• Per-thread page faults: The page faults per second for each active thread to use as a guideline for adjustment of this storage pool. Each job is comprised of one or more threads.
• Maximum page faults: The maximum page faults per second to use as a guideline for adjustment of this storage pool.
• Priority: The priority given to this pool by Performance Adjuster relative to the priority of the other storage pools being adjusted.
• Minimum size %: The minimum amount of storage to allocate to this storage pool as a percentage of total main storage.
• Maximum size %: The maximum amount of storage to allocate to this storage pool as a percentage of total main storage.
One Note
Some of the issues discussed in this article in regard to dynamic memory reallocation in a single LPAR containing static resources should be weighed before adopting new hardware management functionality, like uncapped processor and memory resources across multiple LPARs. Performance tuning and measurement is a challenge in a single LPAR when resources are static, so you can imagine what issues could arise if the physical resources suddenly become dynamic across multiple LPARs. It could become difficult, if not impossible, to accurately analyze performance against moving targets, let alone undertake a capacity-planning effort. For example, if you’re viewing performance data for an interval where CPU is running at 90 percent, the question suddenly becomes 90 percent of whatever amount of CPU happened to be assigned to the LPAR at that interval. Ad hoc events on one LPAR could also suddenly set off a chain of events across multiple LPARs, making physical resource reallocation a horse race that is difficult to rationalize. I’m not suggesting you never enable uncapped resources, only that you understand the interdependencies between the participating LPARs and the potential ramifications, and that you implement uncapped resource reallocation boundaries. Physical resources work together, and increasing or decreasing one could have an effect on the others. For that reason, care should be taken if a decision is made to suddenly allow physical CPU and memory resources to be manipulated dynamically and separately.
In Conclusion
The primary and most desirable goal of tuning a server is to achieve good performance at all times. Best practice is to tune the server to handle processing peaks, assuming that this will provide good performance at off-peak periods. If that’s simply not possible—for example, because of budget constraints that limit your ability to acquire additional processing resources—a secondary goal is to at least make performance predictable in order to manage business and end-user expectations. Poor performance is bad enough, but the only thing worse is unpredictable poor performance.
Properly implemented, Performance Adjuster can remediate excessive faulting, stabilize end-user and batch performance, allow the server to gracefully transition between interactive and batch workloads, and remediate drastic transitions to achieve that much-desired server predictability. Performance Adjuster is a powerful tool, but boundaries must be established around its ability to adjust resources to prevent it from overreacting, and shared pools should also be implemented to make it most effective. Don’t be afraid to experiment with shared pools and Performance Adjuster ranges. ■ Tom Reilly (
[email protected]) has 25+ years experience in IT working on the System i platform since its inception as the AS/400 and before that on the System/38. Tom provides engineering, delivery, operational automation, and technical writing support for an international pharmaceutical company and specializes in large MRP, ERP, and messaging implementations running on System i.