Automounter
An automounter is any program or software facility which automatically mounts filesystems in response to access operations by user programs. These are system utilities (daemons, under Unix) which, when notified of file and directory access attempts under selectively monitored subdirectory trees, dynamically and transparently make remote or local devices accessible.
The purpose of the automounter is to conserve local system resources and to reduce the coupling between systems which share filesystems with a number of servers. For example, a large to mid-sized organization might have hundreds of file servers and thousands of workstations or other nodes accessing files from any number of those servers at any time. Usually only a relatively small number of remote filesystems (exports) will be active on any given node at any given time. Deferring the mounting of such a filesystem until it is actually needed reduces the need to track such mounts, increasing reliability, flexibility and performance.
Frequently one or more fileservers will be inaccessible (down for maintenance, on a remote and temporarily disconnected network, or reachable only via a congested link). It is also often necessary to relocate data from one file server to another to resolve capacity and load-balancing issues. Automating the data mount points makes it easier to reconfigure client systems in such events. In addition, some storage devices, such as floppies, CD-ROMs and USB keys, should be mountable only when the device is actually attached to the system.
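The mount-on-demand behaviour described above can be modelled in a few lines. The following is a toy sketch in Python, not a real automounter: a real one intercepts filesystem lookups in the kernel, whereas here "mounting" is simulated with a flag.

```python
import time

class LazyMount:
    """Toy model of an automounter's core idea: defer mounting until
    first access, and unmount again after an idle timeout."""

    def __init__(self, export, idle_timeout=600.0):
        self.export = export          # e.g. "server:/export/home" (hypothetical)
        self.idle_timeout = idle_timeout
        self.mounted = False
        self.last_access = 0.0

    def access(self):
        # First access (or access after expiry) triggers the mount.
        if not self.mounted:
            self.mounted = True       # a real automounter would call mount(2) here
        self.last_access = time.monotonic()
        return self.export

    def expire(self):
        # Called periodically: unmount exports idle longer than the timeout.
        if self.mounted and time.monotonic() - self.last_access > self.idle_timeout:
            self.mounted = False      # a real automounter would call umount(2) here
```

A real implementation would also have to serialize concurrent first accesses so that the underlying mount happens exactly once.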
These factors combine to pose challenges to older "static" methods of managing filesystem mount tables (the "fstab" files on Unix systems). Automounter utilities address these challenges and allow sysadmins to consolidate and centralize the associations of mountpoints (directory names) with remote filesystems (exports). When done properly, users can transparently access files and directories as if there were a single enterprise-wide filesystem to which all of their workstations and other nodes were attached.
It is also possible to use automounters to define multiple repositories for read-only data; client systems can automatically choose which repository to mount based on availability, file server load, or proximity on the network.
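As a concrete sketch, a Linux autofs configuration centralizing such mappings might look like the following (the server name "homeserver" and the paths are hypothetical):

```
# /etc/auto.master: hand the /home directory tree to the automounter,
# unmounting exports after 300 seconds of inactivity.
/home  /etc/auto.home  --timeout=300

# /etc/auto.home: a wildcard map entry.  The key (the directory name
# looked up under /home, e.g. "alice") replaces "&", so accessing
# /home/alice mounts homeserver:/export/home/alice on demand.
*  -rw,hard,intr  homeserver:/export/home/&
```

In practice such maps are often served from a directory service (NIS or LDAP) rather than a local file, so that one definition covers every client.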
Home directories
Many establishments have a number of file servers which host the home directories of various users. All workstations and other nodes internal to the organization (typically all those behind a common firewall separating them from the Internet) are configured with automounter services so that any user logging into any node implicitly triggers access to his or her own home directory, which is consequently mounted at a common mountpoint such as /home/"user". This allows users to access their own files from anywhere in the enterprise, which is extremely useful in Unix environments where users frequently invoke commands on many remote systems via job-dispatching commands such as ssh, telnet, rsh or rlogin, and via the X11 and VNC protocols.
Shared data
Many computing tasks can be distributed across clusters or "farms" of computing nodes. Commonly each of these nodes must operate on some portions of input data and contribute their results to some common pool of outputs (which typically requires some post processing concatenation or other aggregation). These input data and the storage space for the results are normally located on file servers (often on separate file servers for different projects or data sets).
Software shares and repositories
In many computing environments the user workstations and computing nodes do not host installations of the full range of software that users might want to access. Systems may be "imaged" with a minimal or typical cross-section of the most commonly used software. Also some users in some environments might require specialized or occasional access to older versions of software (for instance developers may need to perform bug fixes and regression testing or some users may need access to archival data using out-dated tools).
Commonly, organizations will provide repositories or "depots" of such software so that it can be installed as needed. These also may include full copies of the system images from which machines have their operating systems initially installed, or available for repair of any system files that may get corrupted during a machine's lifecycle.
Some software may require considerable storage space or may be undergoing rapid (perhaps internal) development. In those cases the software may be installed on, and configured to run directly from, the fileservers.
Dynamically variant automounts
In the simplest case a fileserver houses data and perhaps scripts which can be accessed by any system in an environment. However, there are certain types of files (executable binaries and shared libraries, in particular) which can only be used by specific types of hardware or specific versions of specific operating systems.
For situations like this, automounter utilities generally support some means of "mapping" or "interpolating" variable data into the mount arguments.
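For instance, the Linux autofs implementation expands macros such as ${OSNAME}, ${OSREL} and ${ARCH} (derived from uname) inside map entries. A sketch, reusing the hypothetical server "depot":

```
# Indirect map entry: each client substitutes its own values, so a
# Linux/x86_64 host mounts depot:/export/Linux/x86_64/updates, while
# hosts of other OSes or architectures get their own exports.
updates  -ro  depot:/export/${OSNAME}/${ARCH}/updates
```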
For example, an organization with a mixture of Linux and Solaris systems might host their package repositories for each on a common file server, using export names such as depot:/export/linux and depot:/export/solaris respectively. Thereunder they might have directories for each of the OS versions that they support. Using the dynamic-variation features of their automounter they might then configure all their systems so that any administrator on any machine in the enterprise could access available software updates under /software/updates. A user on a Solaris system would find the Solaris-compiled packages under /software, while a Red Hat or CentOS user would find RPMs for their particular OS version thereunder. Moreover, a Solaris user on a SPARC workstation would have /software/updates mapped to an export appropriate for that architecture, while a Solaris user on an x86 PC would transparently find a /software/updates directory containing packages suited to that system. Some software (written in scripting languages such as Perl or Python) can be installed and run on any supported platform without porting, recompilation or re-packaging of any sort; such software might be located in a /software/common export.
In some cases organizations also use regional or location-based variable/dynamic mappings, so that users in one building or site are directed to a closer file server which hosts replicas of the resources hosted at other locations.
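This kind of selection can also be sketched in the map syntax of the amd automounter, where a list of alternative locations is filtered by selectors such as os and arch and the first match is used (server and export names here are hypothetical):

```
# One key, several candidate locations; each client uses the first
# location whose selectors all match its own os/arch values.
software  os==linux;type:=nfs;rhost:=depot;rfs:=/export/linux \
          os==sunos5;arch==sparc;type:=nfs;rhost:=depot;rfs:=/export/solaris-sparc \
          os==sunos5;arch==i386;type:=nfs;rhost:=depot;rfs:=/export/solaris-x86
```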
In all of these cases automounter utilities allow users to access files and directories without regard for where they are actually stored. With an automounter, users and systems administrators can access files where they are "supposed to be" and find that they appear to be there.
Software
The original automount software was developed by Tom Lyon at Sun Microsystems, and was introduced in SunOS 4.0 in 1988 (Brent Callaghan, NFS Illustrated, Addison-Wesley, 2000, pp. 322-323, ISBN 0201325705, http://books.google.com/books?id=y9GgPhjyOUwC, accessed 2007-12-23). This implementation was eventually licensed to other commercial UNIX distributions. Under Solaris 2.0, first released in 1992, the automounter was implemented as a pseudo-filesystem called "autofs".
In December 1989, "amd", an automounter "based in spirit" on the SunOS automount program, was released by Jan-Simon Pendry (""Amd" - An Automounter", comp.unix.wizards, 1989-12-01, http://groups.google.com/group/comp.protocols.nfs/msg/4951e03d27b7c7e2, accessed 2007-12-23). This is now also known as the Berkeley Automounter. Linux automount utilities also use the name autofs.
Disadvantages and caveats
While automounter utilities (and remote filesystems in general) can provide centrally managed, consistent and largely transparent access to an organization's storage services, they also have downsides.
* Access to automounted directories can trigger delays while the automounter resolves the mapping and mounts the export into place.
* Timeouts can cause mounted directories to be unmounted, which can later result in mount delays upon the next attempted access.
* The mapping of mountpoints to export arguments is usually done via a directory service such as LDAP or NIS, which constitutes another dependency (and potential point of failure).
* When some systems require frequent access to some resources while others only need occasional access, it can be difficult or impossible to use a consistent, enterprise-wide mixture of locally "mirrored" (replicated) and automounted directories.
* When data is migrated from one file server (export) to another there can be an indeterminate number of systems which, for various reasons, still have an active mount on the old location ("stale NFS mounts"); these can cause issues which may even necessitate the reboot of otherwise perfectly stable hosts.
* Organizations can find that they've created a "spaghetti" of mappings which can entail considerable management overhead and sometimes quite a bit of confusion among users and administrators.
* Users can become so accustomed to the transparency of automounted resources that they neglect to consider differences in access semantics between networked filesystems and locally mounted devices. In particular, programmers may attempt to use "locking" techniques which are safe and provide the desired atomicity guarantees on local filesystems, but which are documented as inherently vulnerable to race conditions when used on NFS.