Presented at SHARE in Boston, August 1994

                  
                  Configuring AIX Systems for the MVSer
                                    
                             by Jaqui Lynch
                             Boston College

With the move to client/server computing and distributed systems, MVS capacity planners are now being called on to apply their system configuration skills in the UNIX world. This paper covers issues related to configuring AIX systems from the perspective of an MVS capacity planner, and draws comparisons between what an MVS person already knows and the items in the distributed world that the MVS person never had to worry about before.

One of the most difficult decisions that capacity planners face today is how to leverage their MVS knowledge in the new, evolving world of distributed systems. With budgets decreasing but system complexity increasing, life for the capacity and configuration specialist has become extremely difficult.

It is becoming clearer, however, that many of the basic approaches are still the same and that it is merely the tools that change. The things that have really changed include rules of thumb, managing the distribution of functions and the number of options that are available. For instance, in tuning a system it is still necessary to undergo a similar process of identifying what is to be tuned, measuring it before change, setting objectives, making changes and then measuring and interpreting the results.
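
On AIX the "measure before and after" steps can be carried out with the standard monitoring commands. The following is only a minimal sketch; the intervals and counts shown are arbitrary and the exact output fields vary by release:

    # snapshot CPU, paging and run queue activity every 5 seconds, 12 times
    vmstat 5 12

    # per-disk activity and throughput over the same period
    iostat 5 12

    # overall CPU utilisation, if the sar accounting tools are installed
    sar -u 5 12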

Many of the decisions are also still the same. For instance, it is still necessary to upgrade or replace a CPU when it is too busy; data should still be placed across multiple controllers; additional disk space will be needed; I/O rates still need to be watched; and paging is still dealt with by buying memory. However, what is new is that there are now many more options for dealing with the same problem. For instance, instead of upgrading or replacing a processor, it is possible to supplement it and to distribute the load. This raises several issues including what platforms to distribute across, transparent access to data and code and how to manage distribution in a totally mixed environment.

It is also important to take into account all of the new areas that distributed systems bring with them, particularly graphics and networking issues. Network performance becomes a critical part of the overall performance of any application that is distributed. In the case of Boston College, we have the additional problem that some of the systems we configure are for administrative use, whilst others are for faculty use. The requirements for these can often be diametrically opposed. For instance, in a typical administrative system there are many users, so CPU and memory are important. Some paging is acceptable, but I/O rates and LAN throughput are critical. In the academic environment, particularly where simulations are being run, paging is unacceptable and CPU power is paramount. Other academic systems, however, are more like the administrative ones.

When configuring AIX systems there are literally hundreds of options to choose from, plus many different vendors. All of the machines at BC are totally IBM hardware running primarily IBM software, but even then there is a disconcerting array of options.

There is a multiplicity of CPUs and versions of UNIX to choose from, and each has its own array of options. Previously, different versions of the operating system (from different vendors) were not a concern - the question was whether MVS/ESA would run on the machine with full functionality, not which version of it to run.

Clock speed and chip technology also become a consideration, as do items such as return on investment and depreciation. The technology in the distributed world changes far more rapidly than in the mainframe world; machines can be outdated 3 months later, while software upgrades arrive every 3 months or so. This may have an impact on depreciation calculations.

In the IBM world one can choose between SCSI and SCSI-2 controllers, and it is important to take into account what the maximum number of internal and external disks per controller and per system is. The amount and type of disks and the ways they can be combined are also important. Both CD-ROM and tape can be attached to the same controller as the disks - this is equivalent to putting them behind the same controller under MVS (with no cache), something that no performance planner would even consider.
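
The adapters that are installed and the devices configured behind them can be listed directly from AIX. The commands below are only an illustrative sketch; the device classes shown are the usual ones, but the actual inventory obviously depends on the machine:

    # list the SCSI adapters in the system
    lsdev -Cc adapter | grep -i scsi

    # list disks, tape drives and CD-ROM drives; the location codes
    # show which adapter each device is attached to
    lsdev -Cc disk
    lsdev -Cc tape
    lsdev -Cc cdrom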

There are many choices in the disk, CD and tape worlds. For disk in the mainframe world, the choice was much more limited. Today, people typically have a mix of 3380s and 3390s, or equivalent, in their shops, and these disks are of fairly consistent sizes and have standard data rates that conform to the controller. In a typical AIX shop the disk on any one machine will be a mix of 400MB to 2.4GB disks and may include a high performance RAID system. The disks and their controllers do not typically have any cache; instead, AIX uses memory mapping as a method of caching disk data. It also becomes important to know how many slots there are in each machine for adding disk and cards.
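
The disks actually present, their sizes and the way they have been grouped into volume groups can be checked with the logical volume manager commands. Again, only a sketch - hdisk0 and rootvg are simply the usual default names:

    # list the physical volumes (disks) known to the system
    lspv

    # show the size and free space of a particular disk
    lspv hdisk0

    # list the volume groups and the disks in one of them
    lsvg
    lsvg -p rootvg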

Also many of the other controller options that were available with the 3990 controllers are not yet duplicated in the AIX world - such as dual copy, fast dual copy, DLSE, Data Striping and concurrent channel reconnect. The SCSI controllers are also all connected to the same data bus, which is roughly equivalent to all of the DASD controllers being connected to the same I/O path, albeit a fast path.

In the case of CD, throughput rates begin at about 150KB/sec, although it is possible to get four times that rate with the technology available today. Access times range from 200 to 380ms, and it is possible to get multiple-platter CD systems or single CD systems. Tape drives come in various sizes and formats - 8mm, 1/4" and reel - plus the option of channel-attaching to the mainframe's 3480s. Tape drives can be external or internal and range in capacity from 150MB to 1.2GB (1/4") and 2GB to 5GB (8mm). Some of these tape drives also have compression available, although not all do. Each of these tape drives also has a different native rate for data throughput. The tapes are also a different physical size from 3480 cartridges and will require storage space.
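
The native rate matters because it bounds how long a backup or restore can take. As a purely illustrative calculation (the 500KB/sec figure is an assumption, roughly representative of an 8mm drive of this class, not a quoted specification):

    2GB at 500KB/sec  =  2,000,000KB / 500KB per second
                      =  4,000 seconds, or roughly 67 minutes per tape,
                         before compression or any network overhead is
                         taken into account.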

Other configuration options include whether the system is to be desktop, deskside or rack mounted. This has major cost and storage implications if you are planning on buying quite a few of these systems, but may be necessary to get extra slots for adding disk, etc.

The two main new areas that the capacity planner has to get involved in are the network and graphics. In the case of the network, LAN adapter throughput, LAN speed and type have a major impact on both performance and connectivity. The timing of backups (if backing up across the network) is critical and security becomes a major issue. Using certain commands, or with a sniffer, it is possible for anyone to trap unencrypted data, userids and passwords. Network options include whether to use thick, thin or twisted-pair ethernet card connections or to use token ring. To connect to the mainframe it is possible to use TCP/IP and FTP or LU 6.2 and SNA. SNA can be implemented via a 3270 connection card or over ethernet to the mainframe. Security should also be implemented for the machine's physical key and for all copies of the boot diskettes for that machine, as these can provide a backdoor into the system.
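
Basic adapter throughput and error counts can be watched from the system itself. This is only a sketch; the counters reported vary by adapter type and release:

    # packet, error and collision counts per network interface
    netstat -i

    # device driver statistics for the installed LAN adapters
    netstat -v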

Design of applications is now equally important. With X Windows and CICS on the RS/6000, it is possible to send whole screens of data all over the network, which rapidly causes performance problems. It is important to design applications so that all that is passed is the raw data, with the presentation then done at the client end. Screen scraping at the client end, where full host screens are shipped across the network and reformatted there, leads to more network traffic and thus congestion.

Historically, graphics on the mainframe consisted of the choice between a color or monochrome terminal. In the new world graphics choices are far more complex. Firstly, the user will typically have a PC or Macintosh on the ethernet as their front end. The configuration options for these change more rapidly than the RS/6000 options. When configuring an RS/6000, a graphics card and monitor need to be chosen. Card options include 2D versus 3D, frame buffer bits, the number of internal slots needed for the adapter and whether you want color or monochrome. The monitors themselves come in a range of choices, from direct-attach HFT consoles to ASCII monochrome terminals. Size also becomes an issue. If X Windows is going to be used a great deal then the system should be configured with larger monitors. Monitors range in size from 12-14" up to more than 21". The cost increases with size, particularly for monitors larger than 17". Resolution is also a concern - most large color monitors should be non-interlaced at a resolution of at least 1024 x 768 pixels so that you don't have problems with screen flicker.

Configuring the software itself is also complex. Compilers are not all the same and the options used to compile a C program, for example, can make a huge difference. It is also important, if writing code for parallel processors, to actually optimize the code for that environment, which typically means a complete redesign.
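
As a simple illustration (the program name here is hypothetical, and the exact optimization levels available depend on which compiler and release are installed):

    # compile with no optimization - easier to debug, slower to run
    cc -o model model.c -lm

    # compile with optimization - the same source can run dramatically
    # faster, particularly in tight floating point loops
    cc -O -o model model.c -lm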

Lastly, there is the whole performance arena. The MVSer is used to breaking the system down into many workload categories and types, and planning for growth and performance within those. In AIX there are only three workload categories - batch, background and interactive - and these are very similar to one another. The major workload types are server, workstation, client and network.

Even CPU ratings can be confusing and it is difficult to draw direct parallels. On the mainframe, performance is usually quoted in MIPS or ITRs. In the case of the RS/6000 there are many measures, including TPC-A, TPC-B, TPC-C, Linpack Mflops, Specmarks (89 & 92), Specint (89 & 92), Specfp (89 & 92) and Dhrystones. Each of these is a measure of a different kind of performance. This means that the relevance and importance of a particular measure depends on the type of system you are configuring and the applications you are going to run. For instance, if the system is primarily commercial then the TPC measures or the integer measures are important, whereas for simulations with a lot of floating point the Mflops and Specfp figures should be looked at.

The main thing to keep in mind when configuring any UNIX system is the use that the machine is intended for, both short and long term. Key metrics still include CPU, memory and I/O, but it is important to consider items such as the number of slots for expansion and the maximum amount of disk per system. It is still important to configure for a balanced I/O configuration, both across controllers and within controllers.
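
Whether the I/O configuration is actually balanced can be verified once the system is in use. A minimal sketch; the interval and count are arbitrary:

    # per-disk busy percentage and transfer rates, sampled every 10 seconds;
    # one disk consistently far busier than the rest suggests that data
    # should be redistributed across disks and controllers
    iostat -d 10 6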

The items that we have found to be critical success factors in configuring these systems also include cultural issues. There is a tendency for these systems to be viewed either as a large PC or as a small mainframe. They are neither. The best way to view these systems is as machines that have all of the complexity and power of a mainframe, with the configuration options of a PC, and with a whole new terminology. Many of the rules that applied in the mainframe world transfer directly to the AIX world, whilst others transfer with a few modifications.


© 1995 Jaqueline A. Lynch

Compiled and edited by Jaqui Lynch

Last revised June 5, 1995