
Adding a Globus node to the NIKHEF setup

Most of the deployment steps should be initiated from the central maintenance host (currently: gandalf), since remote root access can be obtained only from this host. Most of the interesting information can be found in /global/globussrc/local, but first change the following files in the relevant install directory in /global/globus (which one depends on the version you want to deploy):
  • globus-services.conf: add a line for the job manager(s), e.g.:

    polyeder.nikhef.nl jobmanager stderr_log,local_cred - ${libexecdir}/globus-jobmanager globus-jobmanager -conf ${sysconfdir}/globus-jobmanager.conf -type fork -rdn jobmanager -machine-type unknown

    or

    monochroom.nikhef.nl jobmanager-pbs stderr_log,local_cred - ${libexecdir}/globus-jobmanager globus-jobmanager -conf ${sysconfdir}/globus-jobmanager.conf -type pbs -rdn jobmanager -machine-type unknown

    But be sure all the information is on one line.
  • grid-info-hosts.conf: add a line corresponding to the host's architecture, e.g.:

    triode.nikhef.nl /global/globus/globus-1.1.3b14-20010312/services/i686-pc-linux-gnu/bin triode.nikhef.nl -

  • globus-gatekeepers.conf: add a line if you do not want the default (daemon or inetd).
  • if the host is the central GIIS host: be sure to temporarily reset the GIIS host name in the grid-info.conf file from the generic CNAME to the actual host name, so that the GIIS daemons get deployed there.
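Since an accidentally wrapped globus-services.conf entry silently breaks the job manager, a quick field-count check can catch lines that were split. This is a sketch, not part of the deployment scripts; the field-count threshold is a guess, and a scratch copy stands in for the real configuration file:

```shell
# Hypothetical sanity check: each entry in globus-services.conf must be
# a single line, so a wrapped entry shows up as a line with too few
# whitespace-separated fields. A scratch copy stands in for the real file.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
polyeder.nikhef.nl jobmanager stderr_log,local_cred - ${libexecdir}/globus-jobmanager globus-jobmanager -conf ${sysconfdir}/globus-jobmanager.conf -type fork -rdn jobmanager -machine-type unknown
monochroom.nikhef.nl jobmanager-pbs
EOF
# Flag any non-empty line with fewer than 5 fields:
awk 'NF && NF < 5 { print "line " NR " looks wrapped or truncated" }' "$CONF"
rm -f "$CONF"
```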

Go (still on gandalf) to /global/globussrc/local and run the rsync.globus script (sorry for the name) with the steps in the order given below. Run it without arguments to get some help.

  1. prepare - create directories /etc/grid-security and /opt/globus and set the proper ownership and permissions
  2. deploy - after setting the proper values in the configuration files (see above), run globus-local-deploy on the host. By default, it will NOT preserve the gatekeeper cert and key, since they're symlinks.
  3. postdeploy - move any created gatekeeper cert files to the GSI directory and create the symlinks if needed. Also, change ownership and permissions on all suggested files. Link to the certificates directory on /global/globus/share.
  4. if needed, add a host cert to the GSI directory - as root on the specified machine, run:
    	cd /etc/grid-security
    	setenv GLOBUS_INSTALL_PATH /global/globus/globus
    	# use the grid-cert-request binary matching the host architecture:
    	$GLOBUS_INSTALL_PATH/tools/i686-pc-linux-gnu/bin/grid-cert-request -host triangel.nikhef.nl -dir /etc/grid-security
    	# or, on Solaris:
    	$GLOBUS_INSTALL_PATH/tools/sparc-sun-solaris2.6/bin/grid-cert-request -host triangel.nikhef.nl -dir /etc/grid-security
    	ln -s usercert.pem hostcert.pem
    	ln -s userkey.pem hostkey.pem
    	
  5. getcerts - make a backup tar file of the /etc/grid-security GSI directory to gandalf, and make sure the permissions are very restrictive (also on the directory itself). Run this step only after you have installed both the proper gatekeeper and the host certs (after signing).
    Temporary problems
    Since the source device is now stored on ajax and gandalf suffers from root-squash, you may prefer to run this last step from ajax, replacing "rsh" with "ssh" (and supplying the root password explicitly).
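The sequence of script-driven steps above can be sketched as follows. The real rsync.globus lives in /global/globussrc/local and must be run there; its exact argument syntax is an assumption here, so a stub that merely echoes its arguments stands in for it:

```shell
# ASSUMPTION: rsync.globus takes a step name and a host as arguments.
# A stub script stands in for the real one so the loop can be shown.
WORK=$(mktemp -d)
cd "$WORK"
printf '#!/bin/sh\necho "step: $1  host: $2"\n' > rsync.globus
chmod +x rsync.globus
# Steps 1, 2, 3 and 5 above, in order (step 4 is done by hand as root):
for step in prepare deploy postdeploy getcerts; do
    ./rsync.globus "$step" polyeder.nikhef.nl
done
```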
Now, create the proper grid-mapfile, possibly using a hard link, in the directory /global/globussrc/local/installroot/etc/grid-security, and add the host, flavour, deploy directory and version to the globushosts file (this should be done on "ajax"). Then, run the script from the /global/globussrc/local directory on gandalf: sh Dist.sh hostspecific. Optionally, add the startup to the system init scripts with the command add_to_init-flavour /deploydir.
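The hard-link trick mentioned above makes the grid-mapfile appear in the installroot tree without a second copy to keep in sync. This is illustrative only: scratch paths stand in for /global/globussrc/local/installroot/etc/grid-security, and the mapfile entry is hypothetical:

```shell
# Scratch directory stands in for the real installroot tree;
# the grid-mapfile entry below is a made-up example.
WORK=$(mktemp -d)
echo '"/O=example/CN=Some User" someuser' > "$WORK/grid-mapfile"
mkdir -p "$WORK/installroot/etc/grid-security"
ln "$WORK/grid-mapfile" "$WORK/installroot/etc/grid-security/grid-mapfile"
# Both names now reference the same inode, so an edit to one is
# immediately visible through the other:
[ "$WORK/grid-mapfile" -ef "$WORK/installroot/etc/grid-security/grid-mapfile" ] \
    && echo "same file"
```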

Disabling HBM services

The beta version of the HBM puts a huge load on the DNS server if the hbm collector host does not exist. Disable the HBM (and verify that it is disabled!) by moving globus-hbm-daemon.conf in $DEPLOY/etc out of the way.
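Moving the file out of the way can be as simple as a rename, since the daemon is only started when its configuration file is present. A scratch directory stands in for the real $DEPLOY (e.g. /opt/globus) here:

```shell
# $DEPLOY is the deploy directory on the node (e.g. /opt/globus);
# a scratch directory stands in for it in this sketch.
DEPLOY=$(mktemp -d)
mkdir -p "$DEPLOY/etc"
touch "$DEPLOY/etc/globus-hbm-daemon.conf"
# Renaming the config prevents the HBM daemon from being started:
mv "$DEPLOY/etc/globus-hbm-daemon.conf" \
   "$DEPLOY/etc/globus-hbm-daemon.conf.disabled"
ls "$DEPLOY/etc"
```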

Starting services

To start the GridFTP service in the background, as root run:
/global/globus/gsi-wuftpd-0.5/gsiftpd start
On some systems (schuur, triode, triangel and polyeder), this script is symlinked in /etc/rc.d/rc3.d/ and thus should start automatically. If it did not, start it by hand.

Globus should also be started automatically on all hosts mentioned in /global/globussrc/local/globushosts, since on those hosts the SXXglobus script is symlinked in the SysV rc* directories for runlevels 3 and 5. To start it manually, refer to the appropriate rc directory, or run on these systems:

/opt/globus/sbin/SXXglobus start

On monochroom, a PBS service is also started in runlevels 3 and 5. If it fails, run:

/opt/OpenPBS_2_3_12/sbin/openpbs start

Preserving on reinstall

On a globus node, the following files are installed as part of the Globus deployment:
/opt/globus/
/etc/grid-security/
and the SysV-init style run scripts. These can be installed using the scripts add_to_init-* in /global/globussrc/local.

For PBS, the files are in

/opt/OpenPBS*/
/var/spool/PBS/
and a symlink to the "/opt/OpenPBS*/sbin/openpbs" script in the SysV init rc directories for runlevels 3 and 5.
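Before a reinstall, the paths listed above can be snapshotted into a tar file so nothing is lost. This is a sketch with scratch copies standing in for the real /etc/grid-security and /var/spool/PBS; the tar file name is arbitrary:

```shell
# Scratch copies stand in for the real preserved paths; on a node
# this would be run as root against / instead of $WORK.
WORK=$(mktemp -d)
mkdir -p "$WORK/etc/grid-security" "$WORK/var/spool/PBS"
touch "$WORK/etc/grid-security/hostcert.pem" "$WORK/var/spool/PBS/pbs_environment"
# Archive both trees relative to $WORK so paths stay relocatable:
tar cf "$WORK/preserve.tar" -C "$WORK" etc/grid-security var/spool/PBS
tar tf "$WORK/preserve.tar"
```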
Relevant Globus releases (verified): 1.1.3, 1.1.3b14
Creation date: April 18, 2001 (revised May 30, 2001 and July 6, 2001)
Author(s): David Groep

Comments to David Groep