Optimization (computer science)

In computing, optimization is the process of modifying a system to make some aspect of it work more efficiently or use fewer resources. For instance, a computer program may be optimized so that it executes more rapidly, operates with less memory or other resources, or draws less power. The system may be a single computer program, a collection of computers or even an entire network such as the Internet. See also algorithmic efficiency for further discussion of factors relating to improving the efficiency of an algorithm.

General

Although the word "optimization" shares the same root as "optimal", it is rare for the process of optimization to produce a truly optimal system. The optimized system will typically only be optimal in one application or for one audience. One might reduce the amount of time that a program takes to perform some task at the price of making it consume more memory. In an application where memory space is at a premium, one might deliberately choose a slower algorithm in order to use less memory. Often there is no "one size fits all" design which works well in all cases, so engineers make trade-offs to optimize the attributes of greatest interest. Additionally, the effort required to make a piece of software completely optimal (incapable of any further improvement) is almost always more than is reasonable for the benefits that would accrue, so the process of optimization may be halted before a completely optimal solution has been reached. Fortunately, it is often the case that the greatest improvements come early in the process.

Categories

Code optimization can be broadly categorized into platform-dependent and platform-independent techniques. Platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory-efficient routines, reduction in conditions, etc.) that are effective on most platforms, including digital signal processor (DSP) platforms. Generally, these serve to reduce the total instruction path length required to complete the program and/or to reduce total memory usage during the process. Platform-dependent techniques involve instruction-level parallelism, data-level parallelism and cache optimization techniques, i.e. parameters that differ among various platforms.
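
As an illustration, one common platform-independent technique is hoisting a loop-invariant function call out of a loop, which reduces both the number of function calls and the total instruction path length. A minimal sketch in C (the function names below are hypothetical):

#include <stdio.h>
#include <string.h>

/* Before: the loop-invariant call strlen(s) is re-evaluated on every
   iteration, making the loop quadratic in the length of the string. */
static int count_spaces_naive(const char *s) {
    int count = 0;
    for (size_t i = 0; i < strlen(s); i++)
        if (s[i] == ' ')
            count++;
    return count;
}

/* After: the call is hoisted out of the loop and evaluated once,
   so the loop is now linear. */
static int count_spaces_hoisted(const char *s) {
    int count = 0;
    size_t len = strlen(s);   /* evaluated once */
    for (size_t i = 0; i < len; i++)
        if (s[i] == ' ')
            count++;
    return count;
}

int main(void) {
    const char *text = "a b c";
    printf("%d %d\n", count_spaces_naive(text), count_spaces_hoisted(text));
    return 0;
}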

'Levels' of optimization

Optimization can occur at a number of 'levels':
* Design level: At the highest level, the design may be optimized to make best use of the available resources. The implementation of this design will benefit from a good choice of efficient algorithms, and the implementation of these algorithms will benefit from writing good-quality code. The architectural design of a system overwhelmingly affects its performance, and the choice of algorithm affects efficiency more than any other item of the design.
* Compile level: Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.
* Assembly level: At the lowest level, writing code using an assembly language designed for a particular hardware platform will normally produce the most efficient code, since the programmer can take advantage of the full repertoire of machine instructions. The operating systems of most machines have traditionally been written in assembly code for this reason.

With more modern optimizing compilers and the greater complexity of recent CPUs, it is more difficult to write code that is optimized better than the compiler itself generates, and few projects need resort to this 'ultimate' optimization step.

However, a large amount of code written today is still compiled with the intent to run on the greatest percentage of machines possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Since optimization often relies on making use of special cases and performing complex trade-offs, a fully optimized program can sometimes, if insufficiently commented, be more difficult for less experienced programmers to comprehend and hence may contain more faults than unoptimized versions.

* Run time: Just-in-time compilers and assembler programmers may be able to perform run-time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.

Different algorithms

Computational tasks can be performed in several different ways with varying efficiency. For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N:

int i, sum = 0;
for (i = 1; i <= N; i++)
    sum += i;
printf("sum: %d\n", sum);

This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula like:

int sum = (N * (N + 1)) / 2;
printf("sum: %d\n", sum);

The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient while retaining the same functionality. See Algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by solving only the actual problem and removing extraneous functionality.

Optimization is not always an obvious or intuitive process. In the example above, the ‘optimized’ version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division.

Trade-offs

Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off, where one factor is optimized at the expense of others. For example, increasing the size of a cache improves run-time performance but also increases memory consumption. Other common trade-offs include code clarity and conciseness.
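
For example, a classic space-for-time trade-off is precomputing a lookup table. A minimal sketch in C, assuming a 256-entry table is an acceptable memory cost (the function names are illustrative):

#include <stdio.h>

/* 256 bytes of memory are spent so that set bits can be counted
   a byte at a time instead of a bit at a time. */
static unsigned char popcount_table[256];

static void init_table(void) {
    for (int i = 0; i < 256; i++)
        popcount_table[i] = (unsigned char)((i & 1) + popcount_table[i / 2]);
}

/* Four table lookups replace up to 32 shift-and-test steps. */
static int popcount32(unsigned int x) {
    return popcount_table[x & 0xFF] + popcount_table[(x >> 8) & 0xFF]
         + popcount_table[(x >> 16) & 0xFF] + popcount_table[(x >> 24) & 0xFF];
}

int main(void) {
    init_table();
    printf("%d\n", popcount32(0xF0F0F0F0u));  /* prints 16 */
    return 0;
}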

There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature, such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success, but the change perhaps comes with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as "pessimizations".

Bottlenecks

Optimization may include finding a bottleneck, a critical part of the code that is the primary consumer of the needed resource, sometimes known as a "hot spot". As a rule of thumb, improving 20% of the code yields 80% of the results.

In computer science, the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).

More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data—the setup and initialization time of the more complex algorithm can outweigh the benefit.

In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor. Performance can be greatly improved by reading the entire file then writing the filtered result, though this uses much more memory. Caching the result is similarly effective, though also requiring larger memory use.
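
A hedged sketch of the two approaches in C (the filter condition is hypothetical and error handling is omitted for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Line-at-a-time: constant memory, one read/write round trip per line. */
void filter_streaming(FILE *in, FILE *out) {
    char line[4096];
    while (fgets(line, sizeof line, in))
        if (strstr(line, "keep"))          /* hypothetical filter condition */
            fputs(line, out);
}

/* Whole-file: reads everything into memory first, then filters,
   trading much more memory for fewer I/O operations. */
void filter_buffered(FILE *in, FILE *out) {
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);
    char *buf = malloc((size_t)size + 1);
    fread(buf, 1, (size_t)size, in);
    buf[size] = '\0';
    for (char *line = strtok(buf, "\n"); line; line = strtok(NULL, "\n"))
        if (strstr(line, "keep")) {
            fputs(line, out);
            fputc('\n', out);
        }
    free(buf);
}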

When to optimize

Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.

Donald Knuth said:
*"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." (Knuth, Donald: "Structured Programming with go to Statements", ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974, p. 268.)

"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.

An alternative approach is to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.

In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.

Macros

Optimization during code development using macros takes on different forms in different languages. In some procedural languages, such as C and C++, macros are implemented using textual substitution, and so their benefit is mostly limited to avoiding function-call overhead.
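
For instance, a small computation can be expanded in place by a macro rather than paying for a call. A minimal sketch in C, with the usual caveat that textual macros may evaluate their arguments more than once:

#include <stdio.h>

/* Textual substitution: no call overhead, but 'a' and 'b' are each
   evaluated twice, so arguments with side effects are unsafe. */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void) {
    printf("%d\n", MAX(3, 7));   /* expands in place; prints 7 */
    return 0;
}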

In many functional programming languages, however, macros are implemented using compile-time evaluation and substitution of non-textual, compiled code. Because of this difference, it is possible to perform complex compile-time computations, moving some work out of the resulting program. Lisp originated this style of macro, and such macros are often called "Lisp-like macros".

As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.

Automated and manual optimization

"See also "

Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.

Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations.

A profiler (or performance analyzer) is used to find the sections of the program that are taking the most resources, i.e. the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance.
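
Where a full profiler is unavailable, even a crude timing harness can test such intuition. A minimal sketch using the standard C clock() facility (the measured routine is hypothetical):

#include <stdio.h>
#include <time.h>

/* Hypothetical candidate routine suspected of being the hot spot. */
static long work(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    clock_t start = clock();
    long result = work(100000000L);
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("result=%ld, time=%.3fs\n", result, seconds);
    return 0;
}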

When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program: more often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
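
For instance, if the items are known to be small integers in a fixed range, a counting sort can replace a generic comparison sort. A hedged sketch assuming values in 0..255:

#include <stdio.h>
#include <string.h>

#define RANGE 256   /* exploitable property: values known to be 0..255 */

/* O(n + RANGE) instead of O(n log n): count occurrences, then emit
   each value in order as many times as it occurred. */
void counting_sort(unsigned char *a, size_t n) {
    size_t counts[RANGE];
    memset(counts, 0, sizeof counts);
    for (size_t i = 0; i < n; i++)
        counts[a[i]]++;
    size_t k = 0;
    for (int v = 0; v < RANGE; v++)
        while (counts[v]--)
            a[k++] = (unsigned char)v;
}

int main(void) {
    unsigned char data[] = {5, 3, 200, 3, 7};
    counting_sort(data, sizeof data);
    for (size_t i = 0; i < sizeof data; i++)
        printf("%d ", data[i]);          /* prints: 3 3 5 7 200 */
    printf("\n");
    return 0;
}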

After one is reasonably sure that the best algorithm has been selected, code optimization can start: loops can be unrolled (for lower loop overhead, although this can sometimes reduce speed if it overloads the CPU cache), the smallest possible data types can be used, integer arithmetic can be used instead of floating-point, and so on.
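
A minimal sketch of manual loop unrolling in C, assuming for brevity that the trip count is a multiple of the unroll factor:

#include <stdio.h>

#define N 16   /* assumed to be a multiple of 4 for this sketch */

int main(void) {
    int a[N], sum = 0;
    for (int i = 0; i < N; i++)
        a[i] = i;

    /* Unrolled by 4: one loop test and branch per four additions. */
    for (int i = 0; i < N; i += 4) {
        sum += a[i];
        sum += a[i + 1];
        sum += a[i + 2];
        sum += a[i + 3];
    }
    printf("%d\n", sum);   /* prints 120 */
    return 0;
}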

Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
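
As a hedged illustration, a hot routine might be moved into C and compiled as a shared library that a higher-level language loads (the file name, function and build command below are assumptions about one possible setup, not a prescribed interface):

/* hot.c - a hypothetical inner loop moved into C.
 * It could be built as a shared library, e.g.:
 *   cc -O2 -shared -fPIC hot.c -o libhot.so
 * and then loaded from a language such as Python via ctypes. */
double dot(const double *a, const double *b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}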

Rewriting pays off because of a general rule known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed if the correct part(s) can be located.

Manual optimization often has the side-effect of undermining readability. Thus code optimizations should be carefully documented and their effect on future development evaluated.

The program that does the automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors.

Today, automated optimizations are almost exclusively limited to compiler optimization.

Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language.

Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time.

Time taken for optimization

Sometimes, the time taken to undertake optimization may itself be an issue.

Optimizing existing code usually does not add new features, and worse, it might add new bugs to previously working code (as any change might). Because manually optimized code can sometimes have less 'readability' than unoptimized code, optimization might also impact its maintainability. Optimization comes at a price, and it is important to be sure that the investment is worthwhile.

An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or to speed up its own operation. A compilation performed with optimization 'turned on' usually takes longer, although this is usually only a problem when programs are quite large, and is often more than compensated for by the run-time savings accumulated over many executions of the code.

In particular, for just-in-time compilers the performance of the run time compile component, executing together with its target code, is the key to improving overall execution speed.

Quotes

* "“The order in which the operations shall be performed in every particular case is a very interesting and curious question, on which our space does not permit us fully to enter. In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selection amongst them for the purposes of a Calculating Engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”" - Ada Byron's notes on the analytical engine 1842.
* "“More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.”" - W.A. Wulf
* "“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”" [Knuth, Donald: [http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf Structured Programming with Goto Statements] . "Computing Surveys" 6:4 (1974), 261–301. ] - Donald Knuth
* "“Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is.”" - Rob Pike
* "“The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.”" - Michael A. Jackson

See also


*Algorithmic efficiency
*Abstract interpretation
*Caching
*Control flow graph
*Lazy evaluation
*Loop optimization
*Low level virtual machine
*Memoization
*Memory locality
*Performance analysis (profiling)
*Queueing theory
*Simulation
*Speculative execution
*SSA form
*Worst-case execution time

References

* Jon Bentley: "Writing Efficient Programs", ISBN 0-13-970251-2.
* Donald Knuth: "The Art of Computer Programming"

External links

* [http://www.azillionmonkeys.com/qed/optimize.html Programming Optimization]
* [http://www.eventhelix.com/RealtimeMantra/Basics/OptimizingCAndCPPCode.htm C,C++ optimization]
* [http://www.abarnett.demon.co.uk/tutorial.html C optimization tutorial]
* [http://www.cs.arizona.edu/solar/ Software Optimization at Link-time And Run-time]
* Article " [http://doi.ieeecomputersociety.org/10.1109/2.348001 A Plea for Lean Software] " by Niklaus Wirth
* [http://c2.com/cgi/wiki?CategoryOptimization Description from the Portland Pattern Repository]
* [http://www.daemon.be/maarten/ipperf.html Performance tuning of Computer Networks]
* [http://www.thinkingparallel.com/2006/08/07/my-views-on-high-level-optimization/ An article describing high-level optimization]

