Unit Control Block
In IBM mainframe operating systems of the z/OS line, a Unit Control Block (UCB) is a memory structure, or "control block", that describes any single input/output peripheral device, or "unit", to the operating system.
A similar concept in Unix-like systems is the kernel's devinfo structure, addressed by a combination of major and minor number through a device special file.
During initial program load (IPL), the Nucleus Initialization Program (NIP) reads the necessary information from the I/O Definition File (IODF) and uses it to build the UCBs. The UCBs are stored in system-owned memory, in the Extended System Queue Area (ESQA). After IPL completes, UCBs are owned by the I/O Subsystem (IOS). Among the information stored in a UCB are: the device type (disk, tape, printer, terminal, etc.), the address of the device (such as "1002"), the subchannel identifier and device number, the channel path ID (CHPID) which defines the path to the device, for some devices the volume serial number (VSN), and much other information.
The actual I/O at the lowest level is performed by a Start I/O (SIO) assembly instruction kicking off a channel program. Since the SIO instruction is privileged, it is represented in user space by an SVC (supervisor call) instruction, usually executed via the Execute Channel Program (EXCP) macro. In the distant past, applications may have performed their own I/O this way.
Today, if a device is shared between programs, separation of users requires that user programs be prevented from doing this. When any task opens, closes, reads, writes, gets, puts, etc., a data set residing on the device, it calls a set of runtime library routines generally referred to as access methods, providing the device address to them. The UCBs are used in the lower half of the access method complex. IOS is the component that actually performs SIO on behalf of user-space programs, as requested by those access methods. While the I/O is in progress, the requesting program is usually put to sleep by the operating system. When the I/O completes, the task is woken up and continues on its way, oblivious that it was ever asleep.
Handling parallel I/O operations
UCBs were introduced in the 1960s with OS/360. Memory was then expensive, so a device addressed by a UCB was typically a physical hard disk drive or tape drive, with no internal cache. Without a cache, the device was usually grossly outperformed by the mainframe's channel processor. Hence, there was no reason to execute multiple input/output operations at the same time, as these would be impossible for the device to physically handle.
To this day, while an I/O is active to a device, a flag in the UCB indicates that the device is busy. IOS handles all the serialization and does not issue any other I/O to the device; instead, it places the request in an internal IOS Queue (IOSQ) to wait its turn. When the UCB/device is no longer busy, IOS selects the next I/O from the front of the queue. This continues until there are no more I/Os waiting for that particular device.
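The busy flag and IOSQ behavior described above can be sketched as a minimal model in Python. The names (`Device`, `start_io`, `io_complete`) are illustrative only, not actual IOS interfaces:

```python
from collections import deque

class Device:
    """Illustrative model of one UCB/device: one active I/O, FIFO IOSQ."""
    def __init__(self):
        self.busy = False          # the UCB busy flag
        self.iosq = deque()        # IOS Queue of waiting requests

    def start_io(self, request):
        if self.busy:
            self.iosq.append(request)   # device busy: queue the request
            return "queued"
        self.busy = True                # mark UCB busy, drive the I/O
        return "started"

    def io_complete(self):
        """On completion, IOS selects the next waiting I/O, if any."""
        if self.iosq:
            next_req = self.iosq.popleft()  # FIFO: oldest request first
            return ("started", next_req)    # UCB stays busy for next_req
        self.busy = False
        return ("idle", None)

dev = Device()
print(dev.start_io("A"))   # started
print(dev.start_io("B"))   # queued
print(dev.io_complete())   # ('started', 'B')
print(dev.io_complete())   # ('idle', None)
```

The key property, as in the real IOSQ, is that only one request is ever "started" at a time per device; everything else waits in arrival order.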
Workload Manager and UCBs
In the past, there was no real way for the operating system to determine whether a waiting I/O was more or less important than any other waiting I/O; I/Os to a device were handled first in, first out. Workload Manager (WLM) was introduced in MVS/ESA 5.1, and OS/390 added "smart" I/O queuing. It allowed the operating system, using information provided to WLM by the systems programmer, to determine which waiting I/Os were more or less important than others. WLM would then, in effect, move a waiting I/O up or down in the queue, so that when the device was no longer busy, the most important waiting I/O would get the device next. WLM thus improved the I/O response time of the device for the more important work being processed. However, there was still the limit of a single I/O to a single UCB/device at any one time.
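The difference between the old FIFO behavior and WLM's "smart" queuing amounts to replacing the FIFO queue with a priority-ordered one. A minimal sketch, assuming a simple numeric importance per request (lower number = more important; the class and method names are invented for illustration):

```python
import heapq

class WlmQueuedDevice:
    """Illustrative sketch: waiting I/Os ordered by WLM-assigned importance
    rather than by arrival order. Still only one I/O active at a time."""
    def __init__(self):
        self.busy = False
        self.heap = []
        self._seq = 0              # tie-breaker keeps FIFO within one priority

    def start_io(self, importance, request):
        if self.busy:
            heapq.heappush(self.heap, (importance, self._seq, request))
            self._seq += 1
            return "queued"
        self.busy = True
        return "started"

    def io_complete(self):
        if self.heap:
            _, _, req = heapq.heappop(self.heap)  # most important goes next
            return ("started", req)
        self.busy = False
        return ("idle", None)

dev = WlmQueuedDevice()
dev.start_io(3, "batch")        # started; device now busy
dev.start_io(3, "low-batch")    # queued
dev.start_io(1, "online-txn")   # queued, but more important
print(dev.io_complete())        # ('started', 'online-txn') — jumps the queue
```

Note that even with the smarter ordering, the single-I/O-per-UCB limit from the previous section still holds; only the choice of the *next* I/O changes.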
Parallel Access Volumes (PAVs)
With modern peripheral devices, the fact that access to the device is serialized below the UCB level has become an important source of bottlenecks. For example, what a modern disk subsystem presents to z/OS as a "physical DASD device" is usually in fact a portion of a large disk array, equipped with its own cache memory. It is capable of executing multiple operations at a time: some are serviced promptly, purely from the controller's cache memory, while others are spread across many of the disk array's drives. Only a small proportion of the concurrent I/Os to the disk volume actually compete for a single physical magnetic head. Executing many I/O operations in parallel is thus not only possible but desirable, because such a load is effectively pipelined, greatly increasing the overall utilization of the disk subsystem.
Enter Parallel Access Volume (PAV). With appropriate support in the DASD hardware, PAV provides support for more than one I/O to a single device at a time. For backward compatibility, operations are still serialized below the UCB level, but PAV allows the definition of additional UCBs for the same logical device, each using an additional "alias" address. For example, a DASD device at "base" address 1000 could have alias addresses of 1001, 1002 and 1003. Each of these alias addresses would have its own UCB. Since there are now four UCBs for a single device, four concurrent I/Os are possible. Writes to the same extent, an area of the disk assigned to one contiguous area of a file, are still serialized, but other reads and writes can occur simultaneously. In the first version of PAV, the disk controller assigns an alias to a base UCB. In the second version of PAV processing, WLM (Workload Manager) re-assigns aliases to new base UCBs from time to time. In the third version of PAV processing, with the DS8000 controller series, each I/O uses any available alias with the UCB it needs.
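The base-plus-aliases arrangement from the example above (base 1000 with aliases 1001-1003) can be sketched as follows. This is a simplified illustration, not z/OS data structures; concurrency is simply capped by the number of UCBs, and extent-level write serialization is omitted:

```python
class PavVolume:
    """Illustrative sketch: one base UCB plus alias UCBs for the same
    logical volume. Concurrency equals the total number of UCBs."""
    def __init__(self, base, aliases):
        self.ucbs = {base: None}                  # address -> active request
        self.ucbs.update({a: None for a in aliases})
        self.waiting = []                         # overflow: the IOSQ

    def start_io(self, request):
        for addr, active in self.ucbs.items():
            if active is None:                    # a free base or alias UCB
                self.ucbs[addr] = request
                return addr                       # I/O starts on this address
        self.waiting.append(request)              # all UCBs busy: queue it
        return None

vol = PavVolume(0x1000, [0x1001, 0x1002, 0x1003])
started = [vol.start_io(f"io{i}") for i in range(5)]
print([hex(a) if a else None for a in started])
# four I/Os run concurrently on the four UCBs; the fifth waits
```

With only one UCB (no aliases) this degenerates to the single-I/O-at-a-time behavior of the earlier sections, which is exactly the point of adding aliases.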
The net effect of PAVs is to decrease the IOSQ time component of disk response time, often to zero. As of 2007, the only restrictions on PAV are the number of alias addresses (255 per base address) and the overall number of devices per logical control unit (256, counting base plus aliases).
In smaller computers using SCSI, no comparable problem existed, as SCSI storage devices supported command queuing from the start.
Static versus dynamic PAVs
There are two types of PAV alias addresses: static and dynamic. A static alias address is defined, in both the DASD hardware and z/OS, to refer to one specific base address. With dynamic aliases, the number of alias addresses assigned to a specific base address fluctuates based on need. The management of these dynamic aliases is left to WLM, running in goal mode (which is always the case with supported levels of z/OS). Most systems that implement PAV use a mixture of both types: one, perhaps two, static aliases are defined for each base UCB, and a number of dynamic aliases are defined for WLM to manage as it sees fit.
As WLM watches over the I/O activity in the system, it determines whether a high-importance workload is delayed by high contention for a specific PAV-enabled device; specifically, for a disk device, the base and alias UCBs must be insufficient to eliminate IOS Queue time. If there is high contention, WLM will try to move aliases from another base address to this device, provided it estimates that doing so would help the workload achieve its goals more readily. Another trigger is that certain performance goals, as specified by WLM service classes, are not being met. WLM will then look for alias UCBs that are processing work for less important service classes and, if appropriate, re-associate those aliases with the base addresses associated with the more important work.
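The alias-movement decision described above can be reduced to a very small sketch: take an alias address away from a less-contended "donor" base and assign it to the contended one. The function name, the mapping shape, and the choice of donor are all assumptions for illustration; real WLM uses goal-based estimates, not a simple pop:

```python
def move_alias(volumes, contended, donor):
    """Illustrative sketch of WLM dynamic alias management: move one alias
    address from a less-busy base to a contended one.
    'volumes' maps base address -> set of alias addresses (assumed shape)."""
    if not volumes[donor]:
        return None                      # donor has no alias to give up
    alias = volumes[donor].pop()         # take an alias from the donor base
    volumes[contended].add(alias)        # re-associate it with the hot base
    return alias

vols = {0x1000: {0x1001, 0x1002}, 0x1100: {0x1101}}
moved = move_alias(vols, contended=0x1000, donor=0x1100)
print(hex(moved), len(vols[0x1000]))    # 0x1101 3
```

The total number of aliases in the system is unchanged; only their association with bases shifts, which is why the adjustment takes effect only as fast as WLM re-evaluates it.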
WLM's actions in moving aliases from one disk device to another take a few seconds for the effects to be seen; for many situations this is not fast enough. HyperPAVs are much more responsive because they acquire a UCB from a pool for the duration of a single I/O operation, returning it to the pool afterwards. There is no delay waiting for WLM to react.
Further, because with HyperPAV a UCB is acquired only for the duration of a single I/O, fewer UCBs are required to service the same workload than with dynamic PAVs. For large z/OS images, UCBs can be a scarce resource, so HyperPAV can provide some relief in this regard.
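The HyperPAV behavior, binding an alias to a base only for one I/O and returning it straight to a shared pool, can be sketched as follows (class and method names are invented for illustration):

```python
from collections import deque

class HyperPavPool:
    """Illustrative sketch: alias UCBs live in a shared pool and are bound
    to a base address only for the duration of one I/O operation."""
    def __init__(self, aliases):
        self.free = deque(aliases)       # pool of free alias addresses

    def start_io(self, base, request):
        if not self.free:
            return None                  # no alias free: request must wait
        alias = self.free.popleft()      # bind an alias for this one I/O
        return (base, alias, request)

    def io_complete(self, alias):
        self.free.append(alias)          # alias goes straight back to the pool

pool = HyperPavPool([0x10F0, 0x10F1])
a = pool.start_io(0x1000, "read")    # alias 0x10F0 serves base 0x1000
b = pool.start_io(0x1200, "write")   # the same pool serves a different base
pool.io_complete(a[1])               # alias immediately reusable
c = pool.start_io(0x1000, "read2")   # reuses 0x10F0 with no WLM involvement
```

Because any alias can serve any base on a per-I/O basis, a small pool covers many volumes, which is the source of both the responsiveness and the UCB savings described above.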