Viewpoint

Modeling in Engineering and Science

Understanding behavior by building models.

For more than 40 years—since 1978—I have been working on computers that interact directly with the physical world. People now call such combinations "cyber-physical systems," and with automated factories and self-driving cars, they are foremost in our minds. Back then, I was writing assembly code for the Intel 8080, the first in a long line of what are now called x86 architectures. The main job for those 8080s was to open and close valves that controlled air-pressure driven robots in the clinical pathology lab at Yale New Haven Hospital. These robots would move test tubes with blood samples through a semiautomated assembly line of test equipment. The timing of these actions was critical, and the way I would control the timing was to count assembly language instructions and insert no-ops as needed. Even then, this was not completely trivial because the time taken for different instructions varied from four to 11 clock cycles. But the timing of a program execution was well defined, repeatable, and precise.
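
To make that concrete, here is a small sketch of what such cycle-counted timing code might have looked like. It is an illustration only, not the original hospital code: the I/O port numbers and the loop count are invented, but the cycle counts come from the Intel 8080 data sheet, and at the 2 MHz clock rate each cycle took 0.5 microseconds.

        OUT  10H        ; open the valve (10 cycles); port 10H is hypothetical
        MVI  B, 200     ; load the loop counter (7 cycles)
DELAY:  DCR  B          ; decrement the counter (5 cycles)
        JNZ  DELAY      ; repeat until zero (10 cycles); 200 x (5 + 10) = 3,000 cycles = 1.5 ms
        NOP             ; padding no-ops (4 cycles each) trim the total
        NOP             ;   delay to exactly the interval the valve sequence needs
        OUT  11H        ; close the valve (10 cycles); port 11H is hypothetical

Because the duration of every instruction was fixed and documented, the total delay could be computed exactly at design time, down to the half microsecond.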

The models I was working with then were quite simple compared to today’s equivalents. My programs could be viewed as models of a sequence of timed steps punctuated with I/O actions that would open or close a valve. My modeling language was the 8080 assembly language, which itself was a model for the electrical behavior of NMOS circuits in the 8080 chips. What was ultimately happening in the physical system was electrons sloshing around in silicon and causing mechanical relays to close or open. I did not have to think about these electromechanical processes, however. I just thought about my more abstract model.

Today, getting real-time behavior from a microprocessor is more complicated. Today’s clock frequencies are more than three orders of magnitude higher (more than 2 GHz vs. 2 MHz), but the timing precision of I/O interactions has not improved and may have actually declined, and repeatability has gone out the window. Even if we were to write programs in x86 assembly code today, it would be difficult, maybe impossible, to use the same style of design. Instead, we use timer interrupts, either directly or through a real-time operating system. To understand the timing behavior, we have to model many details of the hardware and software, including the memory architecture, pipeline design, I/O subsystem, concurrency management, and operating system design.

During these 40-plus years, a subtle but important transformation occurred in the way we approach the design of a real-time system. In 1978, my models specified the timing behavior, and it was incumbent on the physical system to correctly emulate my model. In 2018, the physical system gives me some timing behavior, and it is up to me to build models of that timing behavior. My job as an engineer has switched from designing a behavior to understanding a behavior over which I have little control.

To help understand a behavior over which I have little control, I build models. It is common in the field of real-time systems, for example, to estimate the "worst-case execution time" (WCET) of a section of code using a detailed model of the particular hardware that the program will run on. We can then model the behavior of a program using that WCET, obtaining a higher-level, more abstract model.

There are two problems with this approach. First, determining the WCET on a modern microprocessor can be extremely difficult. It is no longer sufficient to understand the instruction set, the x86 assembly language. You have to model every detail of the silicon implementation of that instruction set. Second, the WCET is not the actual execution time. Most programs will execute in less time than the WCET, but modeling that variability is often impossible. As a consequence, program behavior is not repeatable. Variability in execution times can reverse the order in which actions are taken in the physical world, possibly with disastrous consequences. For an aircraft door, for example, it matters whether you disarm the automatic escape slide and then open the door or the other way around. In this case, as with many real-time systems, ordering is more important than speed.

The essential issue is that I have used models for real-time behavior in two very different ways. In 1978, my model was a specification, and it was incumbent on the physical system to behave like the model. In 2018, my model is a description of the behavior of a physical system, and it is incumbent on my model to match that system. These two uses of models are mirror images of one another.

To a first approximation, the first style of modeling is more common in engineering and the second is more common in science. A scientist is given a physical system and must come up with a model that matches that system. The value of the model lies in how well its behavior matches that of the physical system. For an engineer, however, the value of a physical system lies in how well it matches the behavior of the model. If the 8080 microprocessor overheats and fails to correctly execute the instructions I have specified, then the problem lies with the physical system, not with the model. On the other hand, if my program executes more quickly than expected on a modern microprocessor and the order of events gets reversed, the problem lies with my model, not with the physical system.

Some of humanity’s most successful engineering triumphs are based on the engineering style of modeling. Consider VLSI chip design. Most chips are designed by specifying a synchronous digital logic model consisting of gates and latches. A physical piece of silicon that fails to match this logic model is just beach sand. One level up in abstraction, a synchronous digital logic model that fails to match the Verilog or VHDL program specifying it is similarly junk. And a Verilog or VHDL model that fails to correctly realize the x86 instruction set is also junk, if an x86 is the intended design. We can keep going up in levels of abstraction, but the essential point is that at each level, the lower level must match the upper one.


In science, models are used the other way around. If Boyle’s Law were not to accurately describe the pressure of a gas as it gets compressed, we would not hold the gas responsible. We would hold the model responsible. In science, the upper level of abstraction must match the lower one, the reverse of engineering.

The consequences are profound. A scientist asks, "Can I build a model for this thing?" whereas an engineer asks, "Can I build a thing for this model?" In addition, a scientist tries to shrink the number of relevant models, those needed to explain a physical phenomenon. In contrast, an engineer strives to grow the number of relevant models, those for which we can construct a faithful physical realization.

These two styles of modeling are complementary, and most scientists and engineers use both styles. But in my experience, they usually do not know which style they are using. They do not know whether they are doing science or engineering.

Nobel prizes are given for science, not for engineering. But in 2017, Rainer Weiss, Barry Barish, and Kip Thorne won the Nobel Prize in physics "for decisive contributions to the LIGO detector and the observation of gravitational waves." The LIGO detector is an astonishing piece of engineering, an instrument that can measure tiny changes in distance between objects four kilometers apart, even changes much smaller than the diameter of a proton. They engineered a thing for a model, and that thing has enabled science. Their decisive engineering triumph, the LIGO detector, enabled experimental confirmation of a scientific model of a physical phenomenon in nature, gravitational waves. Gravitational waves are a 100-year-old model due to Einstein, but LIGO has also enabled new science because it has detected more black hole collisions than astronomers expected. This will require revising our models of the universe. Here, science precedes engineering and engineering precedes science.

Returning to real-time systems, the problem today is that we are doing too much science and not enough engineering. As a community, we who work on real-time systems resign ourselves to the microprocessors given to us by Intel and Arm. Those are definitely engineering triumphs, but the models that they realize have little to do with timing. Instead of accepting those microprocessors as if they were artifacts found in nature, we could design microprocessors that give us precise and controllable timing, processors that we call PRET machines.1 Then we could specify real-time behaviors, and the hardware would need to match our specification. We have shown that such microprocessors can be designed and that, at a modest cost in hardware overhead, there is no need to sacrifice performance.2

Science and engineering are both all about models. But their uses of models are different and complementary. Any model is built for a purpose, and if we do not understand the purpose, the model is not likely to be very useful. To read more about the relationship between engineering and scientific models, see my recent book.3

References

    1. Edwards, S.A. and Lee, E.A. The case for the precision timed (PRET) machine. In Proceedings of the Design Automation Conference (DAC), San Diego, CA, 2007.

    2. Lee, E.A., Reineke, J., and Zimmer, M. Abstract PRET machines. In Proceedings of IEEE Real-Time Systems Symposium (RTSS), Paris, France, 2017.

    3. Lee, E.A. Plato and the Nerd: The Creative Partnership of Humans and Technology. MIT Press, 2017.
