
Client Configuration Management

Prerequisites: Network File Servers, File Replication Servers

In a nutshell, client configuration is localization. This includes everything that makes a host unique, or that makes it a participant in a particular group or domain. For example, the hostname and IP addresses must be different on every host. The contents of /etc/resolv.conf should be similar, if not identical, on hosts that occupy the same subnet. Automount maps which deliver users' home directories must be the same for every host in an authentication domain. The entries in client crontabs need to be mastered from the gold server.

Fortunately, if you have followed the roadmap above, most of this will fall into place nicely. If you fully implemented file replication and O/S update, these same mechanisms can be used to perform client configuration management. If not, do something now. You must be able to maintain /etc/* without manually logging into machines, or you will soon be spending all of your time pushing out ad hoc changes.

Earlier, we mentioned the Carnegie Mellon Software Update Protocol (SUP). SUP replicated files for us. These files included /etc/services, automount maps, many of the other maps normally served by NIS, and the typical suite of GNU tools and other open-source utilities usually found in /usr/local on UNIX systems. In each case, we generalized what we could so that every client had identical files. Where this was not practical (clients running cron jobs, clients acting as DNS secondaries, etc.), we applied a simple rule: send a configuration file and a script to massage it into place on the client's hard disk. SUP provided this "replicate then execute" mechanism for us, so we had little need to add custom code.
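The massaging scripts themselves were usually short. A minimal sketch of one, in the "replicate then execute" style, assuming a `@HOSTNAME@` placeholder convention; the function name, placeholder, and paths are illustrative, not the originals:

```shell
#!/bin/sh
# Sketch of a post-replication massaging step: SUP has just replicated
# a template file onto the client; this fills in host-specific values
# and installs the result. The @HOSTNAME@ placeholder is an assumption
# for illustration, not the convention actually used on the site.

massage_into_place() {
    template=$1
    target=$2
    # Substitute this host's name, then install atomically via rename
    # so readers never see a half-written file
    sed "s/@HOSTNAME@/$(hostname)/g" "$template" > "$target.new" &&
        mv "$target.new" "$target"
}
```

A DNS secondary, for example, might run this against a replicated named.conf template and then HUP the daemon.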

In most cases we ran SUP from either a cron job or a daemon script started from /etc/inittab. This generally triggered replication every few minutes for frequently changed files, or every hour for infrequently changed ones.
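On a cron-driven client this scheduling might look something like the following; the times, paths, and collection names are assumptions for illustration:

```
# Client crontab entries driving SUP against two collections:
# fast-changing files every quarter hour, everything else hourly
0,15,30,45 * * * *  /usr/local/bin/sup volatile
5 * * * *           /usr/local/bin/sup stable
```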

The tool we used for managing client crontabs was a script we wrote called 'crontabber' [crontabber]. It worked by looking in /etc/crontab.master (which was SUPed to all client machines) for crontab entries keyed by username and hostname. The script was executed on each client by SUP, with execution triggered by an update of crontab.master itself. The crontab.master file looked something like this:

root:all:1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
root:all:0 2 * * 0,4 /etc/cron.d/logchecker
root:all:5 4 * * 6   /usr/lib/newsyslog
root:scotty:0 4 * * * find . -fstype nfs -prune -o -print
stevegt:skywalker:10 0-7,19-23 * * * /etc/reset_tiv
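The selection logic can be sketched as a small filter; this is not the original crontabber script, just a minimal reconstruction assuming the user:host:entry layout shown above, with a host field of 'all' matching every machine:

```shell
#!/bin/sh
# Sketch of a crontabber-style filter (not the original script).
# Given the master file, a username, and a hostname, it prints the
# crontab entries that apply to that user on that host: lines whose
# user field matches and whose host field is 'all' or this host.

crontab_entries_for() {
    master=$1 user=$2 host=$3
    awk -F: -v u="$user" -v h="$host" '
        # skip comments; match on user, and on host "all" or ours
        substr($0, 1, 1) != "#" && $1 == u && ($2 == "all" || $2 == h) {
            sub(/^[^:]*:[^:]*:/, "")   # strip the user:host: prefix
            print
        }' "$master"
}

# A wrapper run by SUP would then install the output for each user,
# e.g. (on systems whose crontab supports -u):
#   crontab_entries_for /etc/crontab.master root "$(hostname)" \
#       | crontab -u root -
```

Because the host field is matched per machine, one replicated crontab.master can drive different crontabs on every client.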


© Copyright 1994-2007 Steve Traugott, Joel Huddleston, Joyce Cao Traugott
In partnership with TerraLuna, LLC and CD International