For more than 40 years—since 1978—I have been working on computers that interact directly with the physical world. People now call such combinations "cyber-physical systems," and with automated factories and self-driving cars, they are foremost in our minds. Back then, I was writing assembly code for the Intel 8080, the first in a long line of what are now called x86 architectures. The main job for those 8080s was to open and close valves that controlled air-pressure-driven robots in the clinical pathology lab at Yale New Haven Hospital. These robots would move test tubes with blood samples through a semiautomated assembly line of test equipment. The timing of these actions was critical, and the way I controlled the timing was to count assembly language instructions and insert no-ops as needed. Even then, this was not completely trivial, because the time taken for different instructions varied from 4 to 11 clock cycles. But the timing of a program execution was well defined, repeatable, and precise.
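The padding technique described above can be sketched in a few lines. This is Python rather than the original 8080 assembly, and the cycle counts in the example are illustrative, not taken from any particular program; the real 8080 NOP does take 4 clock cycles, and other instructions cost between 4 and 11.

```python
# Sketch of timing by instruction counting: given the cycle costs of a
# straight-line instruction sequence, compute how many NOPs to append so
# the sequence consumes a target number of clock cycles. The 8080 NOP
# takes 4 cycles; the sequence costs below are made up for illustration.

NOP_CYCLES = 4

def nops_needed(instruction_cycles, target_cycles):
    """Return the number of NOPs to append so that a straight-line
    sequence consumes at least `target_cycles` clock cycles."""
    used = sum(instruction_cycles)
    if used > target_cycles:
        raise ValueError("sequence already exceeds the timing budget")
    # Round up so the delay is never shorter than required.
    return -(-(target_cycles - used) // NOP_CYCLES)

# Example: a sequence costing 7 + 10 + 5 = 22 cycles, padded to a
# 50-cycle budget, needs (50 - 22) / 4 = 7 NOPs.
print(nops_needed([7, 10, 5], 50))  # → 7
```

On a 2 MHz 8080 this kind of arithmetic turns cycle counts directly into microseconds, which is why the timing was so repeatable: straight-line code with no caches or interrupts takes exactly the sum of its instruction times.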
The models I was working with then were quite simple compared to today's equivalents. My programs could be viewed as models of a sequence of timed steps punctuated with I/O actions that would open or close a valve. My modeling language was the 8080 assembly language, which itself was a model for the electrical behavior of NMOS circuits in the 8080 chips. What was ultimately happening in the physical system was electrons sloshing around in silicon and causing mechanical relays to close or open. I did not have to think about these electromechanical processes, however. I just thought about my more abstract model.