BLOG@CACM

Now What? Order and Test

Posted
Robin K. Hill, University of Wyoming

I used to have a file (that is, a physical manila folder) entitled "Minimal Program Structures," but I couldn't find out what they were in the sense I sought, so its contents were merged into other files and the question remains. What exactly makes up an algorithm, in the sense of what programmers put together? The answer I seek is less formal than the partial recursive functions; it is, in fact, mundane, based on addressable computer memory along with some executable at work that performs all necessary computation and control. (Here, we ignore assignment and focus on flow.) This view regards a stack machine, which has control "built in," as cheating, and a half-adder as too simplistic and mechanical. For a formal account, we can look to Rapaport's analysis from the ground up, incorporating the Böhm-Jacopini result, which tells us that any computer program can be written using only the three rules of grammar: sequence (generalized composition), selection (conditional definition), and repetition (while-recursion) [Rapaport, Sec. 7.6.3]. The Turing Machine is a great model, and quite sparse, with a single form of instruction, a 5-tuple or some variation, incorporating a state, an input symbol, an output symbol, a move, and a new state. But what is it doing? Inspired to curiosity about which program control operations are arbitrary and which are essential, we ask what we HAVE to have in order to compute. We're used to loops and conditions, which are not evident in the TM instructions. Do we have to have loops and conditions? What are the independent modular operations of an algorithm? The reader is warned that in the exploration of this question, we zigzag among different perspectives on computing architecture.
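
To make the Böhm-Jacopini claim concrete, here is a small sketch of my own (not Rapaport's): Euclid's subtraction-based algorithm for the greatest common divisor, written in Python with nothing but the three structures.

# A sketch (mine): Euclid's gcd for positive integers, using only
# sequence, selection, and repetition -- no other control at all.
def gcd(a, b):
    while a != b:        # repetition
        if a > b:        # selection
            a = a - b
        else:
            b = b - a
    return a             # sequence: one statement after the next

print(gcd(12, 8))        # 4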

We first examine the OISC model (pronounced "whisk"), the One Instruction Set Computer of Gilreath and Laplante [Gilreath]. Consider this single instruction, Subtract and Branch if Negative (SBN):

SBN operandam, operandum, next-address    
  (operandam := operandam - operandum;
   if operandam < 0 goto next-address;)

Other instructions (for other architectures) can be implemented with this single instruction. Here's how to ADD (where R1 and R2 are registers and PC is the program counter, but any memory cells will do):

ADD X + Y
SBN R1,R1,PC ; clear R1
SBN R1,X,PC ; put -X in R1
SBN R1,Y,PC ; put -X-Y into R1
SBN R2,R2,PC ; clear R2
SBN R2,R1,PC ; R2 contains X + Y

It's intriguing, and it works, but why?
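
To see the trick at work, here is a minimal sketch of my own, in Python (the function name run_sbn and the starting cell contents are my inventions), of a machine that executes nothing but SBN; the memory-cell names and the fall-through use of the next-address field mirror the listing above.

# A sketch (mine, in Python) of a machine whose only instruction is SBN.
# Memory is a dict of named cells; each instruction is (a, b, target):
#   mem[a] := mem[a] - mem[b]; if the result is negative, go to target,
#   otherwise fall through to the next instruction.
def run_sbn(program, mem):
    pc = 0
    while pc < len(program):
        a, b, target = program[pc]
        mem[a] = mem[a] - mem[b]
        pc = target if mem[a] < 0 else pc + 1
    return mem

# ADD X + Y, as in the listing above.  Every branch target is simply the
# index of the next instruction, so the branch never changes the order here.
mem = {"X": 7, "Y": 5, "R1": 3, "R2": 9}      # arbitrary starting contents
program = [
    ("R1", "R1", 1),   # clear R1
    ("R1", "X",  2),   # R1 = -X
    ("R1", "Y",  3),   # R1 = -X - Y
    ("R2", "R2", 4),   # clear R2
    ("R2", "R1", 5),   # R2 = 0 - (-X - Y) = X + Y
]
print(run_sbn(program, mem)["R2"])            # 12

The subtraction does the arithmetic; the branch, aimed at the next instruction in every line, never changes the order here, yet it is the only control this machine possesses.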

We observe that as soon as we turn to real-world application, complexity erupts, in the form of long dense programs. Looming complexity is also characteristic of the Turing Machine for anyone crazy enough to attempt actual computation with it, yet we still admire its elegance of description. Here, we simply bow to compiler writers, upon whom many issues are foisted.

The analogue in logic is the functionally complete operator NAND ("not and"). Wikipedia (along with logic textbooks) has a nice account of the implementation in NAND of other logical functions [WikiNAND], serving as a proof of completeness. The NOR operator is also functionally complete. They're binary, of course, so the function NOT P is implemented as P NAND P. It's not intuitive, but it works. But why? Why is there a single operation that can provide all of the standard moves in propositional logic, and why this one? Unless an affirmative-only minimal set exists, there is no alternative that is symmetric in that sense.
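
For the skeptical reader, the standard reductions are short enough to check by machine; this is a sketch in Python of the constructions given in [WikiNAND] (the function names are mine).

# NAND as the sole primitive; everything else defined only in terms of it.
def nand(p, q):
    return not (p and q)

def NOT(p):
    return nand(p, p)

def AND(p, q):
    return nand(nand(p, q), nand(p, q))   # NOT(p NAND q)

def OR(p, q):
    return nand(nand(p, p), nand(q, q))   # De Morgan: (NOT p) NAND (NOT q)

def IMPLIES(p, q):
    return nand(p, nand(q, q))            # p -> q is NOT(p AND NOT q)

# Exhaustive check against Python's own connectives.
for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert AND(p, q) == (p and q)
        assert OR(p, q) == (p or q)
        assert IMPLIES(p, q) == ((not p) or q)
print("all four truth tables agree")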

Does logical negation (denial) hold some magic that makes it crucial? Just as zero allowed clean positional representation of numbers, and NOT allows NAND to serve as the single logical operation, we may need some "nihilistic" option in order to manipulate data to its full potential. Negation is generally marked in natural language, so assertion and negation drawn from the real world are asymmetric, as noted in classic philosophy [Horn], and as students grasp when they struggle to draw a Venn diagram of a set complement. We can't settle for that, because we must have explicit falsehood for conditional results. In programming, we rely on TRUE and FALSE, however implemented, as equipotent values. But coding choices often reflect division into normal and outlier contingencies, or simply reification of convenient and inconvenient options, implying that programmers press conditions into artificial contradictions, and suggesting that it's not the negation that matters, but rather the dichotomy. So maybe NOT doesn't matter except as a means for selection of action.

Let's move on to instructions, and look at what is going on in SBN. We have an arithmetic operation that alters the value of the operandam. We have a test for a particular value of that operandam. We have a branch, a change in the order of operations, an exception to the standard ordering. And that's all. But why? Why is there a single instruction that can accomplish all we want, and why this one in particular? We want some explanation that appeals to coding intuition, that seems natural, that provides a small tasteful set of peer operations that get the job done. In SBN, we see a notion of going forward, and a notion of some contingency that thwarts the "going-forward", and a notion of testing a value to decide that contingency. The value zero, which divides the contingencies, is a placeholder, of course; the test for numeric negativity is simply a comparison to some fixed value.
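
That last point can be made concrete in one line (a sketch of mine; the helper name less_than is invented): any test against a fixed threshold can be rephrased as SBN's negativity test by subtracting first.

# The zero threshold is a placeholder: to ask whether x < k, subtract k
# and ask whether the result is negative -- which is all SBN ever asks.
def less_than(x, k):
    return (x - k) < 0

print(less_than(3, 10), less_than(10, 3))   # True False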

Now consider the significance of sequence, a reminder that the simplest conventions, taken for granted in real-life work, must be specified to a computer. Treating sequence as a convention demands an extra operation, branching, to break out of it. That means that we don't need repetition; we just branch to a location out of sequence. We have such an operation: the humble (and deprecated) GOTO. Given a sequence of instructions, a GOTO to a greater sequence number is a jump; a GOTO to a previous sequence number is a repetition [Gilreath].
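
As a sketch (mine, in Python, with an explicit instruction counter standing in for the machine's ordering): a countdown programmed with nothing but numbered steps and GOTO.

# A flat, numbered program: sequence plus GOTO, nothing else.
# A GOTO to a higher number is a jump; to a lower number, a repetition.
def countdown(n):
    out = []
    pc = 0
    while pc < 4:                       # the machine just follows pc
        if pc == 0:
            out.append(n)               # 0: emit n
            pc = pc + 1
        elif pc == 1:
            n = n - 1                   # 1: decrement
            pc = pc + 1
        elif pc == 2:
            pc = 0 if n > 0 else 3      # 2: conditional GOTO 0 (backward)
        else:
            out.append("liftoff")       # 3: done
            pc = 4
    return out

print(countdown(3))                     # [3, 2, 1, 'liftoff']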

But even sequence is a GOTO, where the destination is simply the next step. So sequence is an artifact, and it can be conflated with branching into a notion of ordering. That makes the primitives ordering (find-next-step) and testing. Does this amount to a program counter and a comparison, and drop us into the CPU, skidding past the point of minimal program structures? Well, we can sidestep to the same place where Böhm and Jacopini started: the flowchart. We need (1) a box with one arrow out and (2) a box with two arrows out that (2.5) holds a test determining which arrow to follow based on a dichotomy (2.5.5). That hard-working tool of design had the answer all along. (Hey, didn't the principles of programming languages already tell us this?) But the graphic form of the flowchart, with its particular affordances and freedoms, invites further questions about what bridges the conceptual gap between that rendition and program operations such as SBN.
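
Put as one more sketch (mine; the function and node names are invented): a flowchart reduced to those two kinds of box, a step that names its single successor and a test that names two, is already enough machinery to run. Here is the same countdown drawn that way.

# Two kinds of box only: ("step", action, next) with one arrow out, and
# ("test", predicate, if_true, if_false) with two arrows out.
def run_chart(chart, start, state):
    node = start
    while node is not None:            # ordering: find the next step
        box = chart[node]
        if box[0] == "step":
            _, action, node = box
            action(state)
        else:                          # testing: choose one of two arrows
            _, pred, yes, no = box
            node = yes if pred(state) else no
    return state

# The countdown again, as boxes and arrows.
chart = {
    "emit":  ("step", lambda s: s["out"].append(s["n"]), "dec"),
    "dec":   ("step", lambda s: s.update(n=s["n"] - 1), "more?"),
    "more?": ("test", lambda s: s["n"] > 0, "emit", None),
}
print(run_chart(chart, "emit", {"n": 3, "out": []})["out"])   # [3, 2, 1]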

We have probed just one aspect of minimal control structures: the overt operations. Gilreath and Laplante provide criteria for a minimal instruction set, but those criteria describe states, not operations. (I argue [Hill 2016] that the algorithm is not a declarative, but an imperative, abstraction: not made up of states, but of commands.) To follow an intriguing tangent, see The Page-Fault Weird Machine [Bangert], for computation with no CPU instructions at all.

References

[Bangert] Julian Bangert, Sergey Bratus, Rebecca Shapiro and Sean W. Smith. 2013. The Page-Fault Weird Machine: Lessons in Instruction-less Computation. 7th USENIX Workshop on Offensive Technologies (WOOT 13). USENIX Association.

[Gilreath] William F. Gilreath and Phillip A. Laplante. 2003. Computer Architecture: A Minimalist Perspective. Springer Science & Business Media Vol. 730.

[Hill 2016] Robin K. Hill. 2016. What an Algorithm Is. Philosophy & Technology 29:1.

[Horn] Laurence R. Horn and Heinrich Wansing. 2020. Negation. The Stanford Encyclopedia of Philosophy, Edward N. Zalta, ed.

[Rapaport] William J. Rapaport. 2015. Philosophy of Computer Science. Online.

[WikiNAND] Wikipedia contributors. 2021. NAND logic. In Wikipedia, The Free Encyclopedia. Retrieved 23:18, May 26, 2021.

 

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.
