Subroutine
In computer science, a subroutine (also called a function, method, procedure, or subprogram) is a portion of code within a larger program that performs a specific task and can be relatively independent of the remaining code. The syntax of many programming languages includes support for creating self-contained subroutines, and for "calling" and returning from them. They are in many ways similar to mathematical functions, but can have side-effects beyond the simple "return value" that functions produce. Some programming languages make very little syntactic distinction between functions and subroutines.
There are many advantages to breaking a program up into subroutines, including:
* reducing the duplication of code in a program (useful functionality, such as mathematical routines, need only be written once),
* enabling reuse of code across multiple programs,
* decomposing complex problems into simpler pieces (this improves maintainability and ease of extension),
* improving readability of a program, and
* hiding or regulating part of the program (see Information hiding).
The components of a subroutine may include:
* a body of code to be executed when the subroutine is called,
* parameters that are passed to the subroutine from the point where it is called, and
* a value that is returned to the point where the call occurs.
Many programming languages, such as Pascal, Fortran, and Ada, distinguish between functions or function subprograms, which return values (via a return statement), and subroutines or procedures, which do not. Some languages, such as C and Lisp, do not make this distinction and treat the terms as synonymous. The name method is commonly used in connection with object-oriented programming, specifically for subroutines that are part of objects; it is also used in conjunction with type classes. Maurice Wilkes, David Wheeler, and Stanley Gill are credited with the invention of the subroutine, which they referred to as the closed subroutine (Wilkes, M. V.; Wheeler, D. J.; Gill, S. (1951). Preparation of Programs for an Electronic Digital Computer. Addison-Wesley).
Early history
The first use of subprograms was on early computers that were programmed in machine code or assembly language and did not support a "call" instruction. On these computers, subroutines had to be called by a sequence of lower-level machine instructions, possibly implemented as a macro. These instructions typically modified the program code, overwriting the address of a branch at a standard location so that it behaved like an explicit return instruction. Even with this cumbersome approach, subroutines proved very useful. The available memory on early computers was many orders of magnitude smaller than that available on today's computers, and non-trivial subroutines saved memory by reducing redundancy. Soon, most architectures provided instructions to help with subroutine calls, leading to explicit call instructions.
Technical overview
A subprogram, as its name suggests, behaves in much the same way as a complete computer program, but on a smaller scale. Typically, the caller waits for subprograms to finish and continues execution only after a subprogram "returns". Subroutines are often given parameters to refine their behavior or to perform a certain computation with given values.
No Stack
Early FORTRAN compilers were written for machines like the HP 2100, which had no hardware stack registers and therefore did not support stacks (or recursion). The jump-to-subroutine instruction had the following format:
    label   JSB m [,I]   comments
The return address (the address of the instruction following the JSB) is placed into the location represented by m, and control transfers to the next location, m+1. On completion of the subroutine, control may be returned to the normal sequence by performing a JMP m,I (an indirect jump through m). This reserves a location at or before the start of a subroutine to save the return location. The scheme did not require a separate stack, but it did not support recursion, since there is only one return storage location per subroutine.
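A loose sketch of the idea in C++ (not actual HP 2100 code; the names are invented): each subroutine owns a single static slot that records where to resume, much as JSB stored the return address at location m. Because there is only one slot per subroutine, a recursive call would simply overwrite it.

#include <cstdio>

static int return_slot;              /* plays the role of location m */

static void subroutine(int caller_id)
{
    return_slot = caller_id;         /* what JSB does: save the return point */
    std::printf("working; will return to caller %d\n", return_slot);
    /* a JMP m,I would now jump back through return_slot */
}

int main()
{
    subroutine(1);                   /* the slot holds 1 */
    subroutine(2);                   /* the slot is simply overwritten */
    return 0;
}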
A similar technique was used by Lotus 1-2-3 to support a tree walk to compute recalculation dependencies: a location was reserved in each cell to store the "return" address. Since circular references are not allowed in the natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.
Stack
Most implementations use a call stack to implement subroutine calls and returns. When an assembly language program executes a call, program flow jumps to another location, but the address of the next instruction (that is, the instruction that follows the call instruction in memory) is kept somewhere to use when returning. The IBM System/360 saved this address in a processor register, relying on convention to save and restore registers and return addresses in memory associated with individual subroutines, then using a branch to the address held in the register to accomplish a subroutine return.
Compilers for most languages use a push-down stack and support recursive subroutine calls: each call is given a fresh location to store its return address. In a stack-based architecture, the return address is 'pushed' onto the stack as the point of return. The subroutine 'returns' by 'popping' the previously pushed return address from the top of the stack and jumping to it, so that program flow continues immediately after the call instruction. Most RISC and VLIW architectures save the return address in a link register (as the IBM 360 did), but simulate a stack with load and store instructions rather than with push and pop instructions. The disadvantage of such a scheme is that the stack can overflow if recursion goes too deep, or if the variables in each stack frame are too large. If there is not sufficient stack space, and there is no recursion, a tree walk can be simulated with an iterative algorithm that stores return locations at each tree node, as was done in Lotus 1-2-3 and its work-alike clone The Twin, which ran on PCs with very limited stack space.
The rest of this section deals with the modern implementation, in which subroutine data is stored on one or more stacks.
Because a stack is used, a subroutine can call itself (see recursion) or other subroutines (nested calls), and the same subroutine can of course be called from several distinct places. Assembly languages generally do not provide programmers with such conveniences as local variables or subroutine parameters; these are implemented by passing values in registers or pushing them onto the stack (or another stack, if there is more than one). When there is just one stack, the return addresses must be placed in the same space as the parameters and local variables. Hence, a typical stack may look like this (for a case where function1 calls function2):
* previous stack data,
* function1 local variables,
* parameters for function2,
* function1 return address (of the instruction which called function2),
* function2 local variables.
This layout assumes a forwards-growing stack; on many architectures the stack grows backwards in memory. Having stacks that can grow in either direction is useful because it is quite practical to have two stacks growing towards each other in a common scratch space, using one mainly for control information like return addresses and loop counters and the other for data. (This is what Forth does.)
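As a rough illustration (the behaviour here is implementation-dependent, so the output only hints at the layout on a particular platform), printing the addresses of locals in nested calls shows the direction in which the call stack grows:

#include <cstdio>

void function2(int parameter)
{
    int local2 = parameter;
    std::printf("function2 local at %p\n", (void *)&local2);
}

void function1(void)
{
    int local1 = 0;
    std::printf("function1 local at %p\n", (void *)&local1);
    function2(local1);   /* function2's frame is created above or below this one */
}

int main()
{
    function1();
    return 0;
}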
The parts of the program responsible for entry into and exit from the subroutine (and hence for setting up and removing each stack frame) are called the function prologue and epilogue. If the procedure or function itself uses stack-handling instructions outside of the prologue and epilogue, e.g. to store intermediate calculation values, the programmer needs to keep track of the number of 'push' and 'pop' instructions so as not to corrupt the original return address.
Side-effects
In most imperative programming languages, subprograms may have so-called side-effects; that is, they may cause changes that persist after the subprogram has returned. It can be technically very difficult to predict whether a subprogram has a side-effect or not. In imperative programming, compilers usually assume that every subprogram has a side-effect, to avoid complex analysis of execution paths. Because of its side-effects, a subprogram may return different results each time it is called, even if it is called with the same arguments. A simple example is a subprogram that implements a pseudorandom number generator; that is, a subprogram that returns a different random number each time it is called.
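As an illustrative sketch (the generator below uses a simple linear congruential formula and is not any particular library's implementation), the hidden static state is the side-effect that makes successive calls return different values:

#include <cstdio>

static unsigned int state = 12345;        /* hidden state shared across calls */

unsigned int simple_prng(void)
{
    state = state * 1103515245u + 12345u; /* updating the state is the side-effect */
    return (state >> 16) & 0x7fffu;
}

int main()
{
    std::printf("%u\n", simple_prng());   /* same arguments (none)... */
    std::printf("%u\n", simple_prng());   /* ...different result */
    return 0;
}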
In "pure" functional programming languages such as Haskell, subprograms can have no side effects, and will always return the same result if repeatedly called with the same arguments. Such languages typically only support functions, since subroutines that do not return a value have no use unless they can cause a side effect; in functional programming, writing to a file is a side effect.
C and C++ examples
In the C and C++ programming languages, subprograms are referred to as "functions" (or "methods" when associated with a class). These languages use the special keyword void to indicate that a function takes no parameters (especially in C) and/or does not return any value. C/C++ functions can have side-effects, including modifying any variables whose addresses are passed as parameters (i.e. "passed by reference"). Examples:
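A minimal sketch of a function that takes no parameters and returns no value (the body is only a placeholder):

void function1(void)
{
    /* perform some task; nothing is returned to the caller */
}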
This function does not return a value and has to be called as a stand-alone function, e.g.,
function1();
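A minimal sketch of a function that returns a result, here the fixed value 5:

int function2(void)
{
    return 5;   /* the caller receives this value */
}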
This function returns a result (the number 5), and the call can be part of an expression, e.g.,
x + function2()
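A sketch of the conversion function described next; a lookup table is one plausible way to write it (the argument is assumed to be in the range 0 to 6):

char function3(int number)
{
    const char letters[] = { 'S', 'M', 'T', 'W', 'T', 'F', 'S' };
    return letters[number];   /* 0 -> 'S', 1 -> 'M', ..., 6 -> 'S' */
}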
This function converts a number between 0 and 6 into the initial letter of the corresponding day of the week, namely 0 → 'S', 1 → 'M', ..., 6 → 'S'. The result of calling it might be assigned to a variable, e.g.,
num_day = function3(number);
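A sketch of a function that modifies the variable whose address it receives:

void function4(int *pointer_to_variable)
{
    (*pointer_to_variable)++;   /* side-effect: increments the caller's variable */
}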
This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with "function4(&variable_to_increment);".
".Local variables, recursion and re-entrancy
A subprogram may find it useful to make use of a certain amount of "scratch" space; that is, memory used during the execution of that subprogram to hold intermediate results. Variables stored in this scratch space are referred to as local variables, and the scratch space itself is referred to as an activation record. An activation record typically has a
return address that tells it where to pass control back to when the subprogram finishes. A subprogram may have any number and nature of call sites. If recursion is supported, a subprogram may even call itself, causing its execution to suspend while another "nested" execution of the same subprogram occurs.
Recursion is a useful technique for simplifying some complex algorithms and breaking down complex problems. Recursive languages generally provide a new copy of local variables on each call. If the programmer wants the values of local variables to stay the same between calls, they can be declared "static" in some languages, or global values or common areas can be used. Early languages like Fortran did not initially support recursion because variables, as well as the location for the return address, were statically allocated. Most computers before the late 1960s, such as the PDP-8, did not have hardware stack registers.
Modern languages after ALGOL, such as PL/I and C, almost invariably use a stack, usually supported by most modern computer instruction sets, to provide a fresh activation record for every execution of a subprogram. That way, the nested execution is free to modify its local variables without concern for the effect on other suspended executions in progress.
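A small illustration (the function name is invented): every recursive call below receives a fresh activation record, so the local variable in one call keeps its value even while deeper calls run with their own copies.

#include <cstdio>

static int depth_demo(int n)
{
    int my_level = n;          /* stored in this call's own frame */
    if (n > 0)
        depth_demo(n - 1);     /* nested execution, separate frame */
    std::printf("returning from level %d\n", my_level);
    return my_level;
}

int main()
{
    depth_demo(2);             /* prints levels 0, 1, 2 as the calls unwind */
    return 0;
}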
As nested calls accumulate, a call stack structure is formed, consisting of one activation record for each suspended subprogram. In fact, this stack structure is virtually ubiquitous, and so activation records are commonly referred to as "stack frames". Some languages, such as Pascal and Ada, also support nested subroutines, which are subroutines callable only within the scope of an outer (parent) subroutine. Inner subroutines have access to the local variables of the outer subroutine that called them. This is accomplished by storing extra context information within the activation record, also called a "display".
If a subprogram can function properly even when called while another execution is already in progress, that subprogram is said to be "re-entrant". A recursive subprogram must be re-entrant. Re-entrant subprograms are also useful in multi-threaded situations, since multiple threads can call the same subprogram without fear of interfering with each other.
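A brief sketch of the distinction (the function names are hypothetical): the first function keeps its state in a static variable shared by all calls, so it is not re-entrant; the second keeps all of its state in parameters and locals, so it is re-entrant and safe to call recursively or from several threads.

int next_id(void)
{
    static int counter = 0;   /* shared by every call: not re-entrant */
    return ++counter;
}

int triple(int x)
{
    int result = 3 * x;       /* lives only in this call's frame: re-entrant */
    return result;
}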
In a multi-threaded environment, there is generally more than one stack. An environment which fully supports coroutines or lazy evaluation may use data structures other than stacks to store its activation records.
Overloading
It is sometimes desirable to have a number of functions with the same name, but operating on different types of data, or with different parameter profiles. For example, a square root function might be defined to operate on reals, complex values or matrices. The algorithm to be used in each case is different, and the return result may be different. By writing three separate functions with the same name, the programmer has the convenience of not having to remember different names for each type of data. Further, if a subtype can be defined for the reals to separate positive and negative reals, two functions can be written for the reals: one to return a real when the parameter is positive, and another to return a complex value when the parameter is negative.
When a series of functions with the same name can accept different parameter profiles or parameters of different types, each of the functions is said to be overloaded.
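As a sketch of overloading in C++ (the name my_sqrt is hypothetical), two functions share one name but accept different parameter types, and the compiler selects between them based on the argument:

#include <cmath>
#include <complex>

double my_sqrt(double x)
{
    return std::sqrt(x);                      /* real square root */
}

std::complex<double> my_sqrt(std::complex<double> z)
{
    return std::sqrt(z);                      /* complex square root */
}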
As another example, a subroutine might construct an object that will accept directions and trace its path to these points on screen. There is a plethora of parameters that could be passed in to the constructor (colour of the trace, starting x and y co-ordinates, trace speed). If the programmer wanted the constructor to be able to accept only the colour parameter, a second constructor could be provided that accepts only the colour and in turn calls the constructor with all the parameters, passing in a set of default values for all the others (x and y would generally be centered on screen or placed at the origin, and the speed would be set to another value of the coder's choosing).
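A sketch of this pattern (the class and member names are hypothetical, and the default values are arbitrary): the constructor taking only a colour delegates to the full constructor, supplying defaults for the remaining parameters.

#include <string>

class Tracer {
public:
    Tracer(const std::string &colour, int x, int y, int speed)
        : colour_(colour), x_(x), y_(y), speed_(speed) {}

    /* overloaded constructor: colour only; position and speed get defaults */
    explicit Tracer(const std::string &colour)
        : Tracer(colour, 0, 0, 1) {}

private:
    std::string colour_;
    int x_, y_, speed_;
};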
Conventions
A number of conventions for the coding of subprograms have been developed. It is commonly preferred that the name of a subprogram be a verb when it performs a certain task, an adjective when it makes some inquiry, and a noun when it is used to substitute for variables and the like.
Experienced programmers recommend that a subprogram perform only one task. If a subprogram performs more than one task, it should be split up into more subprograms. They argue that subprograms are key components in maintaining code and their roles in the program must be distinct.
Some advocate that each subprogram should have minimal dependency on other pieces of code. For example, they see the use of global variables as unwise, because it adds tight coupling between subprograms and global variables. If such coupling is not necessary, they advise refactoring subprograms to take parameters instead. This practice is controversial because it tends to increase the number of parameters passed to subprograms.
Efficiency and inlining
There is a runtime overhead associated with passing parameters, calling the subprogram, and returning. The actual overhead for each invocation depends on the local context at the point of call and the requirements specified in the architecture's application binary interface. One technique used to minimise this overhead is inline expansion of the subprogram at the call site. However, inlining often increases code size and can introduce cache misses into a previously optimised block of code. Dynamic dispatch can introduce further overhead, although the performance difference between indirect and direct calls on commodity CPUs has narrowed since the 1980s, because of research and work done by CPU designers (driven by the increasing popularity of object-oriented programming, which uses dynamic dispatch extensively). Software techniques have also been developed to make dynamic dispatch more efficient.
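A small sketch of inline expansion (the function names are hypothetical): marking a short function inline invites the compiler to expand each call in place, avoiding the call-and-return overhead described above.

inline int square(int x) { return x * x; }

int sum_of_squares(int a, int b)
{
    /* with inlining, these calls can compile down to plain multiplications */
    return square(a) + square(b);
}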
Related terms and clarification
Different programming languages and methodologies possess notions and mechanisms related to subprograms:
* Subroutine is practically synonymous with "subprogram". The former term may derive from the terminology of assembly languages and Fortran.
* Function and Procedure are also synonymous with "subprogram", with the distinction (in some programming languages) that "functions" produce return values and appear in expressions, whereas "procedures" produce no return values and appear in statements. Hence, a subprogram that calculates the square root of a number would be a "function" (e.g. y = sqrt(x)), whereas a subprogram that prints out a number might be a "procedure" (e.g. print(x)). This distinction is not found in all programming languages; notably, the C family of programming languages uses the two terms interchangeably. See also: Command-Query Separation.
* Predicate is, in general, a boolean-valued function (a function that returns a boolean). In logic programming languages, often all subroutines are called "predicates", since they primarily determine success or failure.
* Method or Member function is a special kind of subprogram used in object-oriented programming that describes some behaviour of an object.
* Closure is a subprogram together with the values of some of its variables captured from the environment in which it was created.
* Coroutine is a subprogram that can return (yield) control to its caller before completing and later be resumed where it left off.
* Event handler, or simply "handler", is a subprogram that is called in response to an "event", such as a computer user moving the mouse or typing on the keyboard. The AppleScript scripting language simply uses the term "handler" as a synonym for subprogram. Event handlers are often used to respond to an interrupt, in which case they may be termed an interrupt handler.
People who write compilers, and people who write in assembly language, deal with the low-level details involved in implementing subroutines:
* calling convention
* call stack (nearly all implementations of subroutines involve a call stack)
* link register
* function prologue and function epilogue
* Threaded code makes code even more compact. It uses a small interpreter to execute subroutines that consist of lists of subroutine addresses; only the lowest-level subroutines are machine language.
Most subprograms implement the idea of an algorithm: they are given a finite amount of information when they start, they are given no more information while they run, and they give out no information until they end. However, each process, job, task, or thread is a program or subprogram that typically sends and receives information many times before it ends.
See also
* Function (mathematics)
* Method (computer science)
* Module (programming)
* Transclusion
* Operator overloading