Very long instruction word
Very Long Instruction Word or VLIW refers to a CPU architecture designed to take advantage of instruction-level parallelism (ILP). A processor that executes every instruction one after the other (i.e. a non-pipelined scalar architecture) may use processor resources inefficiently, potentially leading to poor performance. Performance can be improved by executing different sub-steps of sequential instructions simultaneously (this is "pipelining"), or even by executing multiple instructions entirely simultaneously, as in superscalar architectures. Further improvement can be achieved by executing instructions in an order different from the order in which they appear in the program; this is called out-of-order execution.
These three techniques all come at a cost: increased hardware complexity. Before executing any operations in parallel, the processor must verify that the instructions do not have interdependencies. There are many types of interdependencies, but a simple example is a program in which the first instruction's result is used as an input for the second instruction. The two clearly cannot execute at the same time, and the second instruction cannot be executed before the first. Modern out-of-order processors devote significant resources to these techniques, since the scheduling of instructions must be determined dynamically, as the program executes, based on such dependencies.
The VLIW approach, on the other hand, executes operations in parallel based on a fixed schedule determined when programs are compiled. Since determining the order of execution of operations (including which operations can execute simultaneously) is handled by the compiler, the processor does not need the scheduling hardware that the three techniques described above require. As a result, VLIW CPUs offer significant computational power with less hardware complexity (but greater compiler complexity) than is associated with most superscalar CPUs.
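As a minimal illustration of the dependency problem (a hypothetical fragment, not taken from any particular program), consider the following C function; the comments note which operations a scheduler, whether in hardware or in the compiler, could issue together:
    int example(int x, int y, int z, int p, int q, int r) {
        int a = x * y;      /* op 1 */
        int b = a + z;      /* op 2: reads op 1's result, so it must wait (read-after-write dependency) */
        int c = p - q;      /* op 3: independent of ops 1 and 2 */
        int d = r << 2;     /* op 4: also independent */
        /* A superscalar CPU discovers at run time that ops 1, 3 and 4 are independent
           and can issue together; a VLIW compiler reaches the same conclusion at compile
           time and packs them into one long instruction, leaving op 2 for a later one. */
        return a + b + c + d;
    }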
As is the case with any novel architectural approach, the concept is only as useful as code generation makes it. That is, the fact that a number of special-purpose instructions are available to facilitate certain complicated operations (say, fast Fourier transform (FFT) computation, or certain calculations that recur in tomographic contexts) is useless if compilers are unable to spot the relevant source code constructs and generate target code that duly utilizes the CPU's advanced offerings. A fortiori, the programmer must be able to express algorithms in a manner that makes the compiler's task easier.
History has demonstrated that instruction sets must strike a careful balance between complexity and ease of exploitation. For example, the ability of the DEC VAX to evaluate polynomials in a single instruction is useful insofar as the construct recurs frequently in certain applications; it is easy enough for the programmer to exploit and for the compiler to detect; and it is general enough not to demand unduly complicated microcode or, worse yet, special-purpose, hard-wired logic.
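For a sense of what such an instruction replaces, the polynomial evaluation performed by the VAX's POLY instruction can be sketched in C as a Horner's-rule loop (an illustration of the computation only, not of the instruction's exact operand semantics):
    /* Evaluate c[0] + c[1]*x + ... + c[degree]*x^degree by Horner's rule --
       roughly the loop that a single polynomial-evaluation instruction replaces. */
    double poly_eval(double x, const double c[], int degree) {
        double r = c[degree];
        for (int i = degree - 1; i >= 0; i--)
            r = r * x + c[i];
        return r;
    }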
Design
In superscalar designs, the number of execution units is invisible to the instruction set. Each instruction encodes only one operation. For most superscalar designs, the instruction width is 32 bits or less.
In contrast, one VLIW instruction encodes multiple operations; specifically, one instruction encodes at least one operation for each execution unit of the device. For example, if a VLIW device has five execution units, then a VLIW instruction for that device would have five operation fields, each field specifying what operation should be performed by the corresponding execution unit. To accommodate these operation fields, VLIW instructions are usually at least 64 bits wide, and on some architectures much wider.
For example, the following is an instruction for the SHARC. In one cycle, it does a floating-point multiply, a floating-point add, and two autoincrement loads. All of this fits into a single 48-bit instruction.
f12=f0*f4, f8=f8+f12, f0=dm(i0,m3), f4=pm(i8,m9);
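To make the slot idea concrete, the following C sketch lays out a hypothetical 64-bit, five-slot VLIW word. The slot names and widths are invented for illustration and correspond neither to the SHARC encoding above nor to any real machine (and bit-field layout is compiler-dependent, so this visualizes the slots rather than defining a portable encoding):
    #include <stdint.h>

    /* Hypothetical 64-bit VLIW instruction word with one field per execution
       unit.  All names and widths are invented for illustration. */
    typedef struct {
        uint64_t alu0  : 12;   /* integer ALU, slot 0     */
        uint64_t alu1  : 12;   /* integer ALU, slot 1     */
        uint64_t fpu   : 14;   /* floating-point unit     */
        uint64_t load  : 13;   /* load/store unit, port 0 */
        uint64_t store : 13;   /* load/store unit, port 1 */
    } vliw_word;               /* 12 + 12 + 14 + 13 + 13 = 64 bits */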
Since the earliest days of computer architecture, some CPUs have added several additional arithmetic logic units (ALUs) to run in parallel. Superscalar CPUs use "hardware" to decide which operations can run in parallel; VLIW CPUs use "software" (the compiler) to decide which operations can run in parallel. Because the complexity of instruction scheduling is pushed off onto the compiler, the hardware's complexity can be substantially reduced.
A similar problem occurs when the result of a parallelisable instruction is used as input for a branch. Most modern CPUs "guess" which branch will be taken even before the calculation is complete, so that they can load the instructions for the branch or (in some architectures) even start to compute them speculatively. If the CPU guesses wrong, all of these instructions and their context need to be "flushed" and the correct ones loaded, which is time-consuming.
This has led to increasingly complex instruction-dispatch logic that attempts to guess correctly, and the simplicity of the original RISC designs has been eroded. VLIW lacks this logic, and therefore lacks its power consumption, possible design defects and other negative features.
In a VLIW, the compiler uses heuristics or profile information to guess the direction of a branch. This allows it to move and preschedule operations speculatively before the branch is taken, favoring the most likely path it expects through the branch. If the branch goes the unexpected way, the compiler has already generated compensation code to discard the speculative results and preserve program semantics.
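The transformation can be sketched at the C level as follows (a hand-written illustration of the idea, not the output of any real VLIW compiler):
    /* Original source: the multiply is only needed on the likely path. */
    int original(const int *a, int i, int cond) {
        if (cond)
            return a[i] * 2;   /* likely path */
        return 0;              /* unlikely path */
    }

    /* What a speculating compiler might effectively produce, written back in C:
       the multiply is hoisted above the branch so it can be packed into an
       earlier wide instruction.  If the branch goes the unexpected way, the
       speculative result is simply discarded, which is all the compensation
       needed here to preserve the original semantics.  (A real compiler must
       also ensure the hoisted load cannot fault on the other path.) */
    int transformed(const int *a, int i, int cond) {
        int speculative = a[i] * 2;   /* computed before the branch resolves */
        return cond ? speculative : 0;
    }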
History
The term "VLIW", and the concept of VLIW architecture itself, were invented by
Josh Fisher in his research group atYale University in the early 1980s. His original development oftrace scheduling as a compilation technique for VLIW was developed when he was a graduate student atNew York University . Prior to VLIW, the notion of prescheduling functional units and instruction-level parallelism in software was well established in the practice of developing horizontal microcode. Fisher's innovations were around developing a compiler that could target horizontal microcode from programs written in an ordinary programming language. He realized that in order to get good performance, and to target a wide-issue machine, it would be necessary to find parallelism beyond that which one generally finds within abasic block . He developedregion scheduling techniques to identify parallelism beyond basic blocks. Trace scheduling is such a technique, and involves scheduling the most likely path of basic blocks first, inserting compensation code to deal with speculative motions, scheduling the second most likely trace, and so on, until the schedule is complete.Fisher's second innovation was the notion that the target CPU architecture should be designed to be a reasonable target for a compiler — the compiler and the architecture for VLIW must be co-designed. This was partly inspired by the difficulty Fisher observed at Yale of compiling for architectures like
Floating Point Systems' FPS164, which had a complex instruction set architecture (CISC) that separated instruction initiation from the instructions that saved the result, requiring very complicated scheduling algorithms. Fisher developed a set of principles characterizing a proper VLIW design, such as self-draining pipelines, wide multi-port register files, and memory architectures. These principles made it easier for compilers to generate fast code.
The first VLIW compiler was described in a Ph.D. thesis by John Ellis, supervised by Fisher. The compiler was christened Bulldog, after Yale's mascot. [ACM 1985 Doctoral Dissertation Award, ACM, retrieved 2007-10-15, http://awards.acm.org/citation.cfm?id=9267768&srt=year&year=1985&aw=146&ao=DOCDISRT : "For his dissertation 'Bulldog: A Compiler for VLIW Architecture'."] John Ruttenberg also developed certain important algorithms for scheduling.
Fisher left Yale in 1984 to found a startup company,
Multiflow, along with co-founders John O'Donnell and John Ruttenberg. Multiflow produced the TRACE series of VLIW minisupercomputers, shipping their first machines around 1988. Multiflow's VLIW could issue 28 operations in parallel per instruction. The TRACE system was implemented in an MSI/LSI/VLSI mix packaged in cabinets, a technology that fell out of favor when it became more cost-effective to integrate all of the components of a processor (excluding memory) on a single chip. Multiflow was too early to catch the following wave, when chip architectures began to allow multiple-issue CPUs. The major semiconductor companies recognized the value of Multiflow technology in this context, and the compiler and architecture were subsequently licensed to most of these companies.
Implementations
Cydrome was a company producing VLIW numeric processors using ECL technology in the same timeframe (the late 1980s). It, like Multiflow, went out of business after a few years.
One of the licensees of the Multiflow technology is Hewlett-Packard, which Josh Fisher joined after Multiflow's demise. Bob Rau, founder of Cydrome, also joined HP after Cydrome failed. The two would lead computer architecture research at Hewlett-Packard during the 1990s.
In the 1990s, Hewlett-Packard researched this approach as a side effect of ongoing work on its
PA-RISC processor family. They found that the CPU could be greatly simplified by removing the complex dispatch logic from the CPU and placing it in the compiler. Today's compilers are much more complex than those from the 1980s, so the added complexity in the compiler was considered to be a small cost.
VLIW CPUs are usually constructed of multiple RISC-like functional units that operate independently. Contemporary VLIWs typically have four to eight main functional units. Compilers generate initial instruction sequences for the VLIW CPU in roughly the same manner as for traditional CPUs, producing a sequence of RISC-like instructions. The compiler analyzes this code for dependence relationships and resource requirements, and then schedules the instructions according to those constraints. In this process, independent instructions can be scheduled in parallel. Because VLIWs typically represent instructions scheduled in parallel with a longer instruction word that incorporates the individual instructions, this results in a much longer
opcode (thus the term "very long") to specify what executes on a given cycle.
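As a toy illustration of this compile-time packing (a deliberately naive, in-order sketch, nothing like a production scheduler), the following self-contained C program groups a straight-line sequence of RISC-like operations into bundles for an imaginary three-slot machine, starting a new bundle whenever an operation reads a result produced in the current one:
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 3            /* invented: three operation slots per wide word */
    #define NREGS 16           /* invented: a small register file               */

    typedef struct { int dst, src1, src2; } Op;   /* dst = f(src1, src2) */

    int main(void) {
        /* A straight-line "RISC-like" sequence; register numbers are invented. */
        Op ops[] = {
            {1, 8, 9},   /* op0: r1 = r8 . r9                         */
            {2, 1, 9},   /* op1: r2 = r1 . r9  (needs op0's result)   */
            {3, 8, 8},   /* op2: r3 = r8 . r8  (independent)          */
            {4, 2, 3},   /* op3: r4 = r2 . r3  (needs op1 and op2)    */
        };
        int n = sizeof ops / sizeof ops[0];

        int written[NREGS] = {0};   /* registers written by ops already in the current bundle */
        int bundle = 0, used = 0;

        printf("bundle %d:", bundle);
        for (int i = 0; i < n; i++) {
            /* Start a new bundle if this op reads a value produced in the
               current bundle, or if all slots are occupied. */
            if (written[ops[i].src1] || written[ops[i].src2] || used == SLOTS) {
                memset(written, 0, sizeof written);
                used = 0;
                printf("\nbundle %d:", ++bundle);
            }
            printf(" op%d", i);
            written[ops[i].dst] = 1;
            used++;
        }
        printf("\n");
        return 0;
    }
Run as written, this prints op0 alone in the first bundle, op1 and op2 together in the second, and op3 in the third. A real VLIW compiler would additionally reorder operations, model latencies and functional-unit types, and fill empty slots with NOPs.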
Examples of contemporary VLIW CPUs include the TriMedia media processors by NXP (formerly Philips Semiconductors), the SHARC DSP by Analog Devices, the C6000 DSP family by Texas Instruments, and the STMicroelectronics ST200 family based on the Lx architecture (also designed by Josh Fisher). These contemporary VLIW CPUs are primarily successful as embedded media processors for consumer electronic devices.
VLIW features have also been added to configurable processor cores for SoC designs. For example, Tensilica's Xtensa LX2 processor incorporates a technology dubbed FLIX (Flexible Length Instruction eXtensions) that allows multi-operation instructions. The Xtensa C/C++ compiler can freely intermix 32- or 64-bit FLIX instructions with the Xtensa processor's single-operation RISC instructions, which are 16 or 24 bits wide. By packing multiple operations into a wide 32- or 64-bit instruction word and allowing these multi-operation instructions to be intermixed with shorter RISC instructions, FLIX technology allows SoC designers to realize VLIW's performance advantages while eliminating the code bloat of early VLIW architectures.
Outside embedded processing markets, Intel's Itanium IA-64 EPIC appears as the only example of a widely used VLIW architecture. However, the EPIC architecture is sometimes distinguished from a pure VLIW architecture, since EPIC advocates full instruction predication, rotating register files, and a very long instruction word that can encode non-parallel instruction groups.
Backward compatibility
When silicon technology allowed for wider implementations (with more execution units) to be built, the compiled programs for the earlier generation would not run on the wider implementations, as the encoding of the binary instructions depended on the number of execution units of the machine.
Transmeta addresses this issue by including a binary-to-binary software compiler layer (termed "Code Morphing") in their Crusoe implementation of the x86 architecture. This mechanism is advertised to recompile, optimize, and translate x86 opcodes at runtime into the CPU's internal machine code. Thus, the Transmeta chip is "internally" a VLIW processor, effectively decoupled from the x86 CISC instruction set that it executes.
Intel's Itanium architecture (among others) solved the backward-compatibility problem with a more general mechanism. Within each of the multiple-opcode instructions, a bit field is allocated to denote dependency on the previous VLIW instruction within the program's instruction stream. These bits are set at compile time, relieving the hardware of calculating this dependency information. Having the dependency information encoded in the instruction stream allows wider implementations to issue multiple non-dependent VLIW instructions in parallel per cycle, while narrower implementations issue a smaller number of VLIW instructions per cycle.
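The mechanism can be visualized with a small C sketch; the layout below is invented for illustration and is not the actual IA-64 bundle format:
    #include <stdint.h>

    /* Hypothetical wide instruction carrying an explicit dependency flag.
       Names and widths are invented; real encodings (e.g. IA-64 bundles)
       differ considerably. */
    typedef struct {
        uint64_t operations  : 63;  /* packed operation slots                     */
        uint64_t independent : 1;   /* set by the compiler: 1 means this word does
                                       not depend on the previous one, so a wide
                                       implementation may issue both in one cycle;
                                       a narrow implementation simply ignores it. */
    } dependency_tagged_word;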
Another perceived deficiency of VLIW architectures is the code bloat that occurs when not all of the execution units have useful work to do and thus have to execute NOPs. This happens when there are dependencies in the code and the functional pipelines must be allowed to drain before subsequent operations can proceed.
As the number of transistors on a chip has grown, the perceived disadvantages of VLIW have diminished in importance. The VLIW architecture is growing in popularity, particularly in the embedded market, where it is possible to customize a processor for an application in an embedded
system-on-a-chip. Embedded VLIW products are available from several vendors, including the [http://www.fujitsu.com/global/services/microelectronics/product/micom/frv/ FR-V] from Fujitsu, the [http://www.pixelworks.com/ BSP15/16] from Pixelworks, the ST231 from STMicroelectronics, the [http://www.nxp.com/products/nexperia/home/products/media_processors/index.html Trimedia] from NXP, the CEVA-X DSP from CEVA, the Jazz DSP from Improv Systems, and [http://www.siliconhive.com Silicon Hive]. The Texas Instruments TMS320 DSP line has evolved, in its C6xxx family, to look more like a VLIW, in contrast to the earlier C5xxx family.
See also
* Explicitly parallel instruction computing (EPIC)
* Russian "Elbrus" processors
External links
* [http://portal.acm.org/citation.cfm?id=801649&coll=portal&dl=ACM Paper That Introduced VLIWs]
* [http://www.hpl.hp.com/news/2005/jul-sep/VLIW_retrospective.pdf ISCA "Best Papers" Retrospective On Paper That Introduced VLIWs]
* [http://www.vliw.org/ VLIW and Embedded Processing]