Jaqui Lynch - Issues involved in Migrating to DCE


Presented at SHARE in Boston, August 1994

        Issues involved in Migrating to DCE - A User Experience 
                                    
                             by Jaqui Lynch
                             Boston College

This presentation covers the issues that need to be reviewed before moving into the DCE world, and what is involved in actually configuring a DCE cell. Many of the questions being asked in corporations today are covered, with the primary focus on data management issues in the new world. The paper is based on a three-tier client/server architecture implementation that is part of a pilot project at Boston College.

Since mid-1992 Boston College has been looking at distributing its current systems across multiple platforms in order to take advantage of the smaller incremental costs and flexibility inherent in the client/server paradigm. In 1993 it was decided that the correct implementation for the college would be based on the three-tier architecture model, and that the initial implementation would be built around the current 3090-180J mainframe, RS/6000s, PS/2s and Macintoshes.

The Boston College environment at that time consisted of the 3090 mainframe running MVS/ESA v4.3, so AIX was a new venture on the administrative side. After further analysis it was decided that DCE should be the foundation of the new system, and a pilot project was set up to implement a small cell and to test the functionality and performance inherent in the new world.

The three-tier architecture that was decided upon consists of clients (typically Macintoshes or PS/2s), which are responsible for some processing and all of the presentation services; servers, which provide services such as databases or specialized processing; and a middleware layer that is primarily responsible for moving data around.

In order to implement DCE it was necessary to bring in some RS/6000s and configure them correctly so that a cell could be set up. To do this, it is important to understand the components of DCE and the elements of the system that they affect. The major components are: security services, cell directory services, distributed time services, Pthreads and the DCE base. An external global naming service is also needed. For the purposes of this paper it is assumed that the reader is conversant with the basic functions of each of these.

It was decided to set up a small DCE cell consisting of five systems - a combined security server and cell directory server, a DCE system to run the servers on, a DCE system to run the clients on (all RS/6000s), and two PS/2s to act as clients. In the long term the mainframe and Macintoshes will also be added to the cell. Configuring the three RS/6000s was the first hurdle to overcome.

It was decided, for the purpose of the pilot project, to combine the security and cell directory services on one system to avoid timing problems between the two. This system is an RS/6000 250T with 64MB of memory, 2GB of disk, AIX, Xwindows, the DCE base, XDM, security services and the CDS installed. The application server is an RS/6000 360 with 128MB of memory, 3GB of disk, AIX, Xwindows, the SDE workbench, the DCE base, XDM, and SNA Services installed so that it would be possible to use LU 6.2 to the mainframe for data exchange. This system provides the Xwindows-based programming environment for MIS. The client system is an RS/6000 250W with 32MB of memory, 2GB of disk, AIX, Xwindows, the DCE base and XDM. The two PS/2s had Windows, Visual C++ and Gradient Systems PC/DCE for Windows installed.

Since the university was already wired for Ethernet and the mainframe was already attached to it, the networking side did not cause any problems. The major issues were all in the areas of administration, data access and security, and in the actual memory and disk space requirements for the systems. These are addressed in the following paragraphs.

Administration - With a small cell it is not difficult to keep track of what resources are part of the cell; however, in a production environment it is expected that there will be at least two cells, each consisting of over 15 machines plus the client systems. This means that management of items such as system configurations, software versions and updates, and source and load modules has become a major issue. These are all being investigated today. Merely maintaining these systems at a fairly recent level of the operating system is expected to be a mammoth task. With the apparent demise of the DME standard, BC is currently unsure how this issue will be addressed, although the use of Distributed SMIT is being looked into as one solution. Other administrative issues include the registration of the more than 20,000 users who are currently defined in RACF databases, and some method of defining and deleting users so that all systems are automatically updated in a timely manner.

When naming the cell it is important to realize that the cell name cannot be changed without reconfiguring the whole cell. This name can be either a DNS-style or an X.500-style name. The security server and initial CDS server also cannot be moved without reconfiguring, and the uid for cell_admin cannot be changed without a reconfigure. A reconfiguration involves deconfiguring every machine in the cell and then reconfiguring each system again. It is important to note that the uid for cell_admin is 100, which is the same as guest, and this should be changed when the cell is set up.
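For illustration only (the names here are hypothetical), a DNS-style cell name takes the form /.../dce.bc.edu, while an X.500-style name looks like /.../C=US/O=BC/OU=dce. In both cases the /.../ prefix denotes the global naming root, and /.: is used to refer to the local cell.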

As with NIS, userid and groupid numbers need to be unique and consistent across all systems, and it is important to ensure that they match between DCE and AIX. This is not done for you. Most of the DCE-supplied utilities are extremely unfriendly and need to be front-ended in some kind of controlled manner, even though most of them are available through SMIT. BC is currently in the process of writing a client/server administrative system which interacts with the mainframe (where usernames are generated and information about users is stored), passes the username into Ingres on the RS/6000, and then creates user accounts on the necessary systems.

System maintenance is also a problem currently being addressed. If the security server or the initial CDS server is down, then new servers cannot start, principals cannot authenticate, and directory searching cannot take place. This means that the cell is effectively disabled if either of these two is disabled, making it difficult to perform maintenance on those machines. While the CDS can be replicated, not only is the replica a read-only copy, but replication is done by directory only. This impacts servers, as a server needs write access to register itself. There is no replication available from IBM for the security server today, so it is important to look into other failsafe methods such as HACMP. There is also no equivalent of the who command for DCE, so it is difficult to know who is logged into the cell at any one time.

In the area of security there are many issues. There is still no publicly available integrated AIX/DCE login, so a user needs both an AIX account and a DCE account to get access, unless a PC client is being used (an integrated login is available from IBM's tools). A major problem for universities is the lack of a Macintosh DCE client, which hopefully is being addressed this year. Most of the utilities come with permissions set so that anyone can execute them - this allows a guest user to obtain a list of all your accounts. Since a user is not disabled regardless of the number of invalid DCE login attempts, it is possible to get a listing of the accounts and then use a computer to attempt logins with various passwords. The permissions for all of the utilities, particularly rgy_edit and acl_edit, should be changed. There is also no way for a user to change their own password without using SMIT or rgy_edit, both of which are generally secured so that the general user cannot access them. This is another process that BC is writing front-end shells for.

No one should ever telnet or FTP to the security server or the CDS server, as it is possible to trap those logins, and this compromises the whole cell's security. If it becomes necessary to use telnet, then a secure mechanism such as Kerberos v5 with Ktelnet should be implemented. It is possible to set up Kerberos so that it uses the DCE security server, thus avoiding the need to maintain two databases of users. It should also be noted that it is up to the individual server application to include its own ACL manager to ensure security. It is possible to use profiles and groups to simplify this.
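As a very rough illustration of what a server-side check might look like, the fragment below (in C, using the standard DCE RPC runtime calls) inquires about the caller's credentials inside a manager routine before any work is done. It is not a full ACL manager; the routine name and the policy it enforces are purely illustrative assumptions.

   /* Minimal server-side check of a caller's credentials inside a
    * manager routine. A real ACL manager would compare the caller's
    * privilege attributes against an ACL; the routine name and the
    * policy below are illustrative only.
    */
   #include <dce/rpc.h>

   int caller_is_acceptable(rpc_binding_handle_t h)
   {
       rpc_authz_handle_t  privs;
       unsigned_char_p_t   server_princ;
       unsigned32          protect_level, authn_svc, authz_svc, st;

       rpc_binding_inq_auth_client(h, &privs, &server_princ,
                                   &protect_level, &authn_svc,
                                   &authz_svc, &st);
       if (st != error_status_ok)
           return 0;                          /* unauthenticated caller */

       rpc_string_free(&server_princ, &st);   /* not needed further */

       /* Insist on DCE secret-key authentication and at least
          packet-integrity protection before doing any work. */
       if (authn_svc != rpc_c_authn_dce_secret ||
           protect_level < rpc_c_protect_level_pkt_integ)
           return 0;

       /* When authz_svc is rpc_c_authz_dce, privs points at the
          caller's privilege attribute certificate (PAC); a real ACL
          manager would check it against the object's ACL here. */
       return (authz_svc == rpc_c_authz_dce);
   }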

Other related security issues being dealt with include replication of a homegrown Position Based Security system, where access is tied to a person's position within the university. MIS is also closely reviewing all corporate data so that decisions can be made as to what data needs to be encrypted and what level of encryption is needed.

The role of the mainframe is another important issue still being dealt with. BC is a PL/I and VSAM shop, with no mainframe database. There is currently only support for C and COBOL in the DCE world. The VSAM files are highly integrated, as are all of the administrative systems. This makes it difficult to partition the data across systems, and also makes it difficult to access the data from other systems. Nearly all of the critical files are owned by CICS at least 16 hours a day, which makes sharing and concurrent updating of files a major problem. The intent is to explore several avenues - the first is DCE for the mainframe, the second is messaging and queuing, and the third is using CICS/6000 for transfer of data between the platforms. Use of one or all of these methods as middleware would simplify the process of accessing legacy data owned by CICS on the mainframe.

It is also the intent of BC to set up a redundant corporate database on one of the UNIX systems. Moving or copying data to alternate platforms raises a new set of issues - specifically issues relating to data ownership, where data is located, partitioning of data, keeping copies in sync with updates, enforcing naming standards, location transparency, security, virus protection and ease of access to data. Backup and recovery methods also need to be investigated. It is important to look into methods of archiving data as well as backing it up, and it is also important to categorize data into what needs to be backed up locally rather than to a remote site. All of these are currently under review as the corporate data model is being developed. BC is also looking at DFSMSrmm and ADSTAR DSM to solve some of the backup problems, as well as attempting to standardize on 8mm tape drives across the UNIX systems. Currently there is a mixture of 4mm, 8mm and 1/4-inch drives, which causes problems for disaster recovery as well as for performance and storage space for cartridges.

When developing in the DCE world it is important to review program design methodologies, and to include code that deals with server shutdown, error recovery and server ACLs. The CDS should always be notified cleanly when a server is starting or terminating. It is of great benefit to group the DCE code into callable modules that standardize everyday occurrences such as DCE initialization and termination.
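The sketch below shows one way such a module could be structured, using standard DCE RPC runtime calls. The CDS entry name, the annotation string and the function names are hypothetical, and error handling has been kept to a minimum.

   /* server_init.c - illustrative startup/shutdown module for a DCE
    * server. The entry name, annotation and function names are
    * hypothetical; only the runtime calls themselves are standard.
    */
   #include <stdio.h>
   #include <stdlib.h>
   #include <dce/rpc.h>
   #include <dce/dce_error.h>

   #define ENTRY_NAME (unsigned_char_p_t)"/.:/servers/bankdemo"

   static rpc_binding_vector_p_t bind_vec;

   static void check(unsigned32 st, char *where)
   {
       dce_error_string_t msg;
       int inq_st;

       if (st != error_status_ok) {
           dce_error_inq_text(st, msg, &inq_st);
           fprintf(stderr, "%s: %s\n", where, msg);
           exit(1);
       }
   }

   /* Register the interface, then advertise it in the local endpoint
      map and in the CDS so that clients can find it. */
   void server_startup(rpc_if_handle_t ifspec)
   {
       unsigned32 st;

       rpc_server_register_if(ifspec, NULL, NULL, &st);
       check(st, "register_if");
       rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &st);
       check(st, "use_all_protseqs");
       rpc_server_inq_bindings(&bind_vec, &st);
       check(st, "inq_bindings");
       rpc_ep_register(ifspec, bind_vec, NULL,
                       (unsigned_char_p_t)"bankdemo server", &st);
       check(st, "ep_register");
       rpc_ns_binding_export(rpc_c_ns_syntax_default, ENTRY_NAME,
                             ifspec, bind_vec, NULL, &st);
       check(st, "ns_binding_export");
   }

   /* Withdraw the server cleanly so the CDS and the endpoint map
      are not left pointing at a dead server. */
   void server_shutdown(rpc_if_handle_t ifspec)
   {
       unsigned32 st;

       rpc_ns_binding_unexport(rpc_c_ns_syntax_default, ENTRY_NAME,
                               ifspec, NULL, &st);
       rpc_ep_unregister(ifspec, bind_vec, NULL, &st);
       rpc_server_unregister_if(ifspec, NULL, &st);
       rpc_binding_vector_free(&bind_vec, &st);
   }

A main program would typically call server_startup, then rpc_server_listen to service calls, and finally server_shutdown before exiting.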

Some additional suggestions received from Harvard's TEG include the following: providing APIs on both the client and server side so that the general programmer never sees the DCE code, putting the business logic into the servers and keeping the clients very simple, using stateless servers so that it is easy to recover to a new server should a server go down, pinging the requested server before binding (see the sketch after this paragraph), and splitting out the typedefs and defines into two headers (due to conflicts between C and IDL). All of these suggestions are being incorporated into the new architecture being developed at BC.
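The "ping before binding" suggestion can be illustrated with a short client-side routine (names again hypothetical) that imports candidate bindings from the CDS and discards any server that does not answer rpc_mgmt_is_server_listening:

   /* Import a binding from the CDS and ping the server before using
    * it; the entry and function names are illustrative only.
    */
   #include <dce/rpc.h>

   rpc_binding_handle_t get_live_binding(rpc_if_handle_t ifspec,
                                         unsigned_char_p_t entry_name)
   {
       rpc_ns_handle_t       import_ctx;
       rpc_binding_handle_t  binding = NULL;
       unsigned32            st;

       rpc_ns_binding_import_begin(rpc_c_ns_syntax_default, entry_name,
                                   ifspec, NULL, &import_ctx, &st);
       if (st != error_status_ok)
           return NULL;

       /* Walk the candidate bindings exported under this CDS entry. */
       for (;;) {
           rpc_ns_binding_import_next(import_ctx, &binding, &st);
           if (st != error_status_ok) {       /* no more candidates */
               binding = NULL;
               break;
           }
           /* Only keep a binding whose server actually answers. */
           if (rpc_mgmt_is_server_listening(binding, &st)
                   && st == error_status_ok)
               break;
           rpc_binding_free(&binding, &st);   /* dead server - try next */
       }

       rpc_ns_binding_import_done(&import_ctx, &st);
       return binding;
   }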

The last area to be addressed is performance and capacity planning. Standards need to be set up for sizing the servers, clients, DASD and printers to be brought into the cell. This will involve modeling and profiling applications and assessing the network impact caused by moving an application into the DCE world. It is BC's intent to use DCE to provide single network login, and the impact of this needs to be closely analyzed. It is also important to plan for growth. The major problem in these areas is access to tools. There are very few, if any, modeling tools around that will take a current mainframe application and model it in the DCE world, or that will simply model a new application in the DCE world. This makes sizing the machines very difficult. Once the mainframe is also brought into the DCE world, it will become even more complicated to model the environment.

As mentioned earlier, it is BC's intention to implement DCE on the mainframe as soon as it is available. Because this requires considerable upgrades to MVS, those upgrades are already underway. At a minimum BC needs to install ESA 5.1, DBX and shells, SMS 1.2, RACF 2.1 and some product upgrades. We are hoping to be ready to implement DCE on the mainframe by December 1994, but that is still dependent on release dates for the product. However, in the meantime MIS is evaluating current and new applications in an attempt to decide what the first major DCE application at BC will be.



© 1995 Jaqueline A. Lynch

Compiled and edited by Jaqui Lynch

Last revised June 5, 1995