**Algorithm examples**

This article supplements Algorithm and Algorithm characterizations.

= An example: Algorithm specification of addition m+n =

**Choice of machine model:** This problem has not been resolved by the mathematics community. There is no “best” or “preferred” model. The Turing machine, while considered the "sine qua non" – "the" indispensable, the ultimate fall-back – is notoriously opaque when confronted face-to-face. And different problems seem to require different models to study them. Many researchers have observed these problems, for example:

:“The principal purpose of this paper is to offer a theory which is closely related to Turing's but is more economical in the basic operations” (Wang (1954) p. 63)

:“Certain features of Turing machines have induced later workers to propose alternative devices as embodiments of what is to be meant by effective computability.... a Turing machine has a certain opacity, its workings are known rather than seen. Further a Turing machine is inflexible ... a Turing machine is slow in (hypothetical) operation and, usually complicated. This makes it rather hard to design it, and even harder to investigate such matters as time or storage optimization or a comparison between efficiency of two algorithms.” (Melzak (1961) p. 281)

:Shepherdson-Sturgis (1963) proposed their register-machine model because “these proofs [using Turing machines] are complicated and tedious to follow for two reasons: (1) A Turing machine has only one head... (2) It has only one tape....” They were in search of “a form of idealized computer which is sufficiently flexible for one to be able to convert an intuitive computational procedure into a program for such a machine” (p. 218).

:“I would prefer something along the lines of the random access computers of Angluin and Valiant [as opposed to the pointer machine of Schönhage]” (Gurevich 1988 p. 6)

:“Showing that a function is Turing computable directly ... is rather laborious ... we introduce an ostensibly more flexible kind of idealized machine, an abacus machine...” (Boolos-Burgess-Jeffrey 2002 p. 45).

About all that one can insist upon is that the algorithm-writer specify in exacting detail (i) the machine model to be used and (ii) its instruction set.

**Atomization of the instruction set:** The Turing machine model is primitive, but not as primitive as it can be. As noted in the above quotes this is a source of concern when studying complexity and equivalence of algorithms. Although the observations quoted below concern the Random access machine model – a Turing-machine equivalent – the problem remains for any Turing-equivalent model:

:“...there hardly exists such a thing as an ‘innocent’ extension of the standard RAM model in the uniform time measure; either one only has additive arithmetic, or one might as well include all multiplicative and/or bitwise Boolean instructions on small operands....” (van Emde Boas (1992) p. 26)

:“Since, however, the computational power of a RAM model seems to depend rather sensitively on the scope of its instruction set, we nevertheless will have to go into detail...

:“One important principle will be to admit only such instructions which can be said to be of an "atomistic" nature. We will describe two versions of the so-called "successor" RAM, with the successor function as the only arithmetic operation....the RAM0 version deserves special attention for its extreme simplicity; its instruction set consists of only a few one letter codes, without any (explicit) addressing.” (Schönhage (1980) p.494)

Example #1: The most general (and original) Turing machine – single-tape with left-end, multi-symbols, 5-tuple instruction format – can be atomized into the Turing machine of Boolos-Burgess-Jeffrey (2002) – single-tape with no ends, two symbols { B, | }, 4-tuple instruction format. This model in turn can be further atomized into a Post-Turing machine – single-tape with no ends, two symbols { B, | }, 0- and 1-parameter instruction set.

Example #2: The RASP can be reduced to a RAM by moving its instructions off the tape and (perhaps with translation) into its finite-state machine “table” of instructions; the RAM, stripped of its indirect instruction, can be reduced to a 2- and 3-operand “abacus” register machine; the abacus in turn can be reduced to the 1- and 2-operand Minsky (1967)/Shepherdson-Sturgis (1963) counter machine, which can be further atomized into the 0- and 1-operand instructions of Schönhage (and even a 0-operand Schönhage-like instruction set is possible!).

**Cost of atomization:** Atomization comes at a (usually severe) cost: while the resulting instructions may be “simpler”, atomization (usually) creates more instructions and the need for more computational steps. As shown in the following example the increase in computation steps may be significant (i.e. orders of magnitude – the following example is “tame”), and atomization may (but not always, as in the case of the Post-Turing model) reduce the usability and readability of “the machine code”. For more see Turing tarpit.

Example: The single register machine instruction "INC 3" – increment the contents of register #3, i.e. increase its count by 1 – can be atomized into the 0-parameter instruction set of Schönhage, but the equivalent number of steps to accomplish the task increases to 7; this number is directly related to the register number “n”, i.e. 4+n.

(NOTE: The reader may find the equivalent Post-Turing machine program easier to read and simulate. In this example the head moves; just reverse L and R for the case when the tape moves.)

**Example: Counter-machine versus Turing-machine computation**

This example is even more detailed than the above, because here the counter machine is specified as a "simulation" in an Excel spreadsheet. Where the "user" puts the parameters in the spreadsheet does matter.

We also observe that, while superficially similar, the core of the two algorithms is truly different: in this example the counter machine has to increment its parameter "n" a total of "m" times (i.e. to add "m" = three to "n", increment "n" three times). In the Turing-machine case the machine just had to fill in the first blank it came to, then find its way "home". It could have halted after the fill-in if we had specified this as acceptable. But the counter-machine algorithm does not have this option.

Thus we conclude that "the algorithm is machine dependent".

:(i) **Number format**: Positive integers (and 0) expressed in conventional decimal notation, one (unbounded) decimal integer per register. Zero is considered to be numeral 0 – the empty count, null, "blank B". The numerals themselves are simply abbreviations for unary counts within the registers:

:: "0" = B; "1" = |, "2" = | |, "3" = | | |
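The decimal-numeral-to-tally correspondence above can be sketched in a few lines of Python; the function name is an assumption for illustration, not part of the specification:

```python
def to_unary(n: int) -> str:
    """Render a register's count as unary tally marks; 0 is the blank "B"."""
    return "B" if n == 0 else " ".join("|" * n)

print(to_unary(0))  # B
print(to_unary(3))  # | | |
```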

:(iia) **(Virtual, simulated) machine type**: The machine to be used in the computation as a counter machine is equipped with only 3 registers (alternately: "only three registers #0, #1, #2 will be used"). The instruction set will be the following three Minsky (1967) instructions plus HALT:

::**INC** [ r ] ; **INC**rement contents of register r, proceed to next instruction in sequence: [ r ] +1 → r, & [ state-machine instruction counter ] +1 → state-machine instruction counter

::**DEC** [ r ] ; **DEC**rement contents of register r, proceed to next instruction in sequence: [ r ] -1 → r, & [ state-machine instruction counter ] +1 → state-machine instruction counter

::**JZ** [ r, z ] ; **J**ump if contents of register r is **Z**ero: then go to instruction z, else proceed to next instruction in sequence.

::**H** ; **H**ALT
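Taken together, the four instructions amount to a tiny interpreter. A minimal Python sketch follows; the tuple encoding and function name are my assumptions, not part of the Minsky or spreadsheet specification:

```python
def run(program, registers):
    """Execute a list of Minsky-style counter-machine instructions.

    program  : list of tuples ("INC", r), ("DEC", r), ("JZ", r, z), ("H",);
               instructions are numbered from 1, as in conventional listings.
    registers: dict mapping register number -> count.
    """
    ic = 1  # the state-machine instruction counter
    while True:
        inst = program[ic - 1]
        if inst[0] == "INC":    # [ r ] +1 -> r
            registers[inst[1]] += 1
            ic += 1
        elif inst[0] == "DEC":  # [ r ] -1 -> r
            registers[inst[1]] -= 1
            ic += 1
        elif inst[0] == "JZ":   # jump to z if [ r ] = 0, else fall through
            ic = inst[2] if registers[inst[1]] == 0 else ic + 1
        elif inst[0] == "H":    # HALT: return the final register contents
            return registers
```

For example, `run([("INC", 3), ("H",)], {3: 0})` returns `{3: 1}`.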

:(iib) **Target (actual) machine:** An Excel spreadsheet run on an Intel Pentium processor decodes/interprets the instructions and "runs" the (simulated) counter machine. The instructions – one per cell – are represented by symbol strings, for example, "I N C" or "i n c"; the instruction-operands are represented by decimal numeral symbols "3", "14", etc.

:(iii) **Head location**: No tape head is required. The instructions' operands – one per cell – are explicit names of the registers upon which the action will be taken and/or explicit jump-to addresses.

:(iv) **Location of the program**: The (virtual) counter-machine program will be in its finite-state TABLE of instructions: in the Excel simulation this virtual program is located in the cells that represent the state-machine TABLE of instructions.

:(v) **The program**: A formal program listing is shown below.

**High-level description:**

:Repeat until register #2 is 0:

:: While counting down the contents of register #2 to zero, count up the contents of register #1. The sum will accumulate in register #1.
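The high-level description translates almost word-for-word into Python; this is a sketch of the description only (variable and function names are mine), not of the formal counter-machine listing:

```python
def add_high_level(m: int, n: int) -> int:
    """Add m and n the counter-machine way: repeated decrement/increment."""
    r1, r2 = m, n      # register #1 holds m, register #2 holds n
    while r2 != 0:     # repeat until register #2 is 0
        r2 -= 1        # count down the contents of register #2
        r1 += 1        # count up the contents of register #1
    return r1          # the sum has accumulated in register #1

print(add_high_level(3, 2))  # 5
```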

**Implementation specification:**

:(a) **Input format**: The two arguments (m, n) of the function "m+n" (e.g. "sum(m, n)") will be represented as integers expressed in conventional decimal notation, one (unbounded) decimal numeral per register. Unused registers will be considered blank/empty/of zero-count. Register #0 is reserved to hold the numeral "0" (blank, empty, zero-count). The symbolism [ 5 ] means "contents of register 5":

:: Example: integer 5 in register #3: [ 3 ] = symbol "5" = | | | | |

:(b) **Initial location of arguments**: The first argument "m" will be in register #1, the second argument "n" in register #2.

:: Example: 2+3, [ 0 ] = 0, [ 1 ] = m, [ 2 ] = n

:(c) **Halt upon successful computation:** If the function m+n successfully assigns a value Σ to the arguments (m, n) that are represented initially in the registers, then the machine will eventually halt with the numeral Σ, represented in conventional decimal symbols, in a single register and otherwise-empty registers:

:: Example: m+n = Σ, [ 0 ] = 0, [ 1 ] = Σ, [ 2 ] = 0

:(d) **Halt upon successful computation, register specified to hold output:** In case (c) the machine will halt with the computation (sum Σ) in register #1:

:: Example: 3+2=5, [ 0 ] = 0, [ 1 ] = 5, [ 2 ] = 0

:(e) **Unsuccessful computation:** If the function that is to be computed assigns no value to the arguments that are represented initially in the registers, then the machine either will never halt, or will halt in some nonstandard configuration:

:: Example: [ 0 ] = 3, [ 1 ] = 0, [ 2 ] = 42, etc.

**Implementation description:**

:*Steps 1, 5: Test for n+0 (and by implication 0+0): if register #2 is zero, then go to step 5 "done", else continue.
:*Steps 2, 3: Decrement register #2 and increment register #1; register #1 is accumulating counts that will, upon halt, represent the sum.
:*Step 4: Unconditionally jump back to step 1 as follows: test constant-register #0 for 0 (i.e. [ 0 ] = 0) and upon the (always-)successful test jump back to step 1.

**Formal description:**

The program is written in "assembly code" mnemonics to be interpreted by an Excel program. Either upper- or lower-case may be used, but the spellings are mandatory. The format of "how" and "where" to put the code will be seen below and in the example.

Conventional assembly code listing: "m+n":

:: 1 JZ (2,5) ; [ 2 ] = n: If [ 2 ] = 0 then jump to instruction 5, else continue
:: 2 DEC (2) ; Decrement count in register #2: [ 2 ] -1 → 2
:: 3 INC (1) ; [ 1 ] = m or sum: Increment count in register #1: [ 1 ] +1 → 1
:: 4 JZ (0,1) ; If [ 0 ] = 0 then jump to instruction 1, else continue
:: 5 HALT
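As a check on the listing, it can be transcribed straight into Python and executed. This is a sketch of the listing's step-by-step semantics only, not of the Excel machinery; the function name is mine:

```python
def run_m_plus_n(m: int, n: int) -> int:
    """Run the "m+n" counter-machine listing: m in register #1, n in #2."""
    reg = {0: 0, 1: m, 2: n}
    ic = 1                       # state-machine instruction counter
    while True:
        if ic == 1:              # 1 JZ (2,5): if [2] = 0 go to instruction 5
            ic = 5 if reg[2] == 0 else 2
        elif ic == 2:            # 2 DEC (2): [2] -1 -> 2
            reg[2] -= 1
            ic = 3
        elif ic == 3:            # 3 INC (1): [1] +1 -> 1
            reg[1] += 1
            ic = 4
        elif ic == 4:            # 4 JZ (0,1): [0] is always 0, so jump to 1
            ic = 1 if reg[0] == 0 else 5
        elif ic == 5:            # 5 HALT: the sum is in register #1
            return reg[1]

print(run_m_plus_n(3, 2))  # 5
```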

The instruction "operation-codes" (assembly mnemonics) are to appear from left to right below their instruction numbers. The instruction "operands" (numerals) are to appear from left to right in separate cells below their mnemonics, as shown:

* Polynomial: base-n, e.g. binary, decimal:
:: binary: n_{0}·2^{0} + n_{1}·2^{1} + n_{2}·2^{2} + ... + n_{m}·2^{m}
:: decimal: n_{0}·10^{0} + n_{1}·10^{1} + n_{2}·10^{2} + ... + n_{m}·10^{m}
* Gödel numbering expressed in, probably, binary or decimal

The solution will depend on the nature of the "memory" of the FSM. If it is built from a typical binary Random access memory then the instruction "address" (number) will have to be encoded as binary. The instruction code can be chosen by whatever method is easiest to decode and "fits in the width of the memory". This is not a trivial problem, and it appears in the work of authors who are encoding for purposes of comparing algorithms [example?]. What they tend to do is use e.g. two consecutive 8-bit memory "cells" for every instruction (or e.g. 2 × 16 bit). Thus they might encode an instruction set as follows:

:FSM encoding for up to 128 registers and 256 instructions, with an 8-bit random-access memory and two consecutive 8-bit bytes per instruction:

:: *0hhhhhhh: the number/location of register h expressed in binary (0 through 127). Example: register #5 = 00000101
:: *xxxxxxxx: the jump-to address expressed in binary (0 through 255)

* INC h = 00000001, 0hhhhhhh

* DEC h = 00000010, 0hhhhhhh

* JZ h,x = 1hhhhhhh, xxxxxxxx

* HALT = 111-----

The h's and x's are both expressed in binary. Thus the first byte always specifies the instruction type, and this type specifies where the 7-bit register number hhhhhhh and the jump-to address will be. But such coding complicates the finite state machine – it must be smart enough to know (i) the interleave of instructions and operands and (ii) that if a byte of the form 1hhhhhhh occurs then the next byte will be a jump-to address. And if a mistake is made, the machine can begin to "execute" operands instead of instructions.
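A Python sketch of this two-byte scheme, using exactly the bit patterns listed above (the helper names are mine). Note that, as listed, a JZ on registers 96–127 collides with the HALT pattern 111-----; the decoder below checks HALT first, which is one instance of how such coding complicates the decoding machine:

```python
def encode(op, h=0, x=0):
    """Pack one instruction into two 8-bit bytes per the scheme above."""
    if op == "INC":
        return (0b00000001, h)        # 00000001, 0hhhhhhh
    if op == "DEC":
        return (0b00000010, h)        # 00000010, 0hhhhhhh
    if op == "JZ":
        return (0b10000000 | h, x)    # 1hhhhhhh, xxxxxxxx
    if op == "HALT":
        return (0b11100000, 0)        # 111-----, second byte unused
    raise ValueError(op)

def decode(b1, b2):
    """Recover an instruction; HALT is tested before JZ (see collision note)."""
    if b1 & 0b11100000 == 0b11100000:
        return ("HALT",)
    if b1 & 0b10000000:
        return ("JZ", b1 & 0b01111111, b2)
    if b1 == 0b00000001:
        return ("INC", b2)
    if b1 == 0b00000010:
        return ("DEC", b2)
    raise ValueError((b1, b2))

print(decode(*encode("JZ", 5, 14)))  # ('JZ', 5, 14)
```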

**References**

*Boolos, George and Jeffrey, Richard (1974, 1980, 1989, 1999). "Computability and Logic" (First Edition 1974), Cambridge University Press, London, ISBN 0-521-20402-X. cf. chapter 3, "Turing machines", where they discuss "certain enumerable sets not effectively (mechanically) enumerable".

*Blass, Andreas and Gurevich, Yuri (2003). "Algorithms: A Quest for Absolute Definitions", Bulletin of the European Association for Theoretical Computer Science 81. Includes an excellent bibliography of 56 references. Available at http://research.microsoft.com/~gurevich/Opera/164.pdf

*Church, Alonzo (1936). "An Unsolvable Problem of Elementary Number Theory", The American Journal of Mathematics, vol. 58, pp. 345–363, doi:10.2307/2371045. Reprinted in "The Undecidable", p. 89ff. The first expression of "Church's Thesis". See in particular page 100 ("The Undecidable") where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.

*Davis, Martin (1965). "The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions", Raven Press, New York. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included.

*Davis, Martin (2000). "Engines of Logic: Mathematicians and the Origin of the Computer", W. W. Norton, New York. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing, with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.


*Gurevich, Yuri (2000). "Sequential Abstract State Machines Capture Sequential Algorithms", ACM Transactions on Computational Logic, vol. 1, no. 1 (July 2000), pp. 77–111. Includes a bibliography of 33 sources. Available at http://research.microsoft.com/~gurevich/Opera/141.pdf

*Kleene, Stephen C. (1952). "Introduction to Metamathematics" (Tenth Edition 1991), North-Holland Publishing Company. Excellent – accessible, readable – reference source for mathematical "foundations".

*Markov, A. A. (1954). "Theory of Algorithms" [translated by Jacques J. Schorr-Kon and PST staff], Moscow, Academy of Sciences of the USSR. [Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington.] 444 p. Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library; U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]

*Minsky, Marvin (1967). "Computation: Finite and Infinite Machines" (First Edition), Prentice-Hall, Englewood Cliffs, NJ. Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1, "Computability, Effective Procedures and Algorithms. Infinite machines."

*Post, Emil (1936). "Finite Combinatory Processes, Formulation I", The Journal of Symbolic Logic, vol. 1, pp. 103–105, doi:10.2307/2269031. Reprinted in "The Undecidable", p. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church-Turing thesis.

*Rosser, J. B. (1939). "An Informal Exposition of Proofs of Gödel's Theorem and Church's Theorem", Journal of Symbolic Logic, vol. 4. Reprinted in "The Undecidable", p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps ... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (pp. 225–226, "The Undecidable").

*Sipser, Michael (2006). "Introduction to the Theory of Computation" (Second Edition), Thomson Course Technology, Boston, MA. cf. Chapter 3, "The Church-Turing Thesis", p. 137, wherein he presents §3.3, "The Definition of Algorithm".

*Stone, Harold S. (1972). "Introduction to Computer Organization and Data Structures", McGraw-Hill, New York. cf. in particular the first chapter, "Algorithms, Turing Machines, and Programs". His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
