Steering Saturn: It took more than a calculator to get Armstrong to the moon

Fall 2019 | By Alex Crick, Class of 2020. Photo by Neil A. Armstrong Collection, Courtesy of Purdue University Libraries, Karnes Archives and Special Collections.


The task of sending astronauts to the Moon was not an easy one. Hundreds of companies and hundreds of thousands of people worked tirelessly toward one goal: the Moon, by way of the Saturn V rocket. The team of software engineers at the Massachusetts Institute of Technology (MIT) wrote thousands of lines of code that guided the Command and Service Module (CSM) and Lunar Module (LM), propelling their cargo and crew through open space toward the Moon to orbit and land, and then safely return to Earth. The code gave life to the crucial functions of the Apollo Guidance Computer (AGC) that steered Saturn’s upper stages. We tend to diminish its historical role, given its rather weak calculating power, at least relative to contemporary computing. Yet the code and the AGC were, for their day, cutting edge. The challenge for the astronauts was to learn enough of the fledgling field of Computer Science to understand this code and the basics of the AGC. This was especially true for Neil Armstrong, who both depended on its automatic guidance and control and was forced to override the very system that was guiding him to the Moon. The advancements made by the AGC also created practical means of augmenting human performance and, thanks to its size, power, and ease of use, opened new possibilities like fly-by-wire technology and the personal computer.

Armstrong was a Purdue aeronautical engineer (BSAE 1955), Korean War veteran, and an accomplished pilot. He was accepted into the space program and began taking Computer Science classes on October 30, 1962, under Professor Bob Smith at Texas A&M University. His notes from that class survive to this day, and I was able to take a close look at them to see how he learned computer concepts more than fifty years ago. Reviewing the documents in the Purdue Archives was eye-opening, as I saw many of the same notes I had taken just a year earlier, now in Armstrong’s handwriting on yellowing loose-leaf pages. Small cutouts of printed diagrams and charts were pasted throughout, still held on by ever-darkening strips of once-transparent tape. Many of these snippets had come loose from the page, collecting at the bottom of the file. Trying my best to put them back where they seemed to have been originally, I was able to piece together a picture of how Armstrong once learned computer science.[i]  [Image 1]

First, he started to learn basic theory, which remains the same today as it was in Armstrong’s day. When I looked over his notes, I saw many of the same tables that I had drawn in my own notes during my introductory classes the year before: Boolean algebra, number systems, and logic gates. Boolean algebra is a type of math that deals with only two values: true and false. For example, a statement in Boolean algebra can be something like “true and true equals true,” or “true and false equals false.” The term “and” in Boolean algebra means that both the first and the second input have to be true for the output to be true. Simple logic statements like this make up every program that has ever run on a computer. Number systems are ways to represent strings of “Booleans,” or true and false values.
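These Boolean statements can be written directly in any modern language. A minimal Python illustration (Python’s built-in True/False values and operators stand in for the notation of Armstrong’s notes):

```python
# Boolean algebra with Python's built-in True/False values.
# "and" is true only when both inputs are true; "or" when at least one is.

print(True and True)    # True  -- "true and true equals true"
print(True and False)   # False -- "true and false equals false"
print(True or False)    # True
print(not True)         # False -- "not" flips a value
```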

The most common way to represent Booleans is by using binary. Binary is a number system that represents all numbers using only two values: 0 and 1. The similarity of Booleans and binary made the number system a perfect fit for computers, where the true and false values are denoted by 1 and 0 respectively. All programs on a computer are fed through the chips as long strings of 1’s and 0’s. These strings are set in a particular order, with certain portions dedicated to particular purposes, such as an address in memory, or an instruction for the Central Processing Unit (CPU) to carry out. This also makes hardware relatively easy to manufacture, since with Boolean algebra, large functions and complex mathematical operations can be carried out by many combined logic “gates.” These gates implement Boolean functions like “and,” and all modern computer hardware is built out of billions of them.
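As a small sketch of the idea, the Python snippet below slices a binary string into an opcode field and an address field. The 4-bit/12-bit layout is invented for illustration and is not the AGC’s actual instruction format:

```python
# Interpret a 16-bit binary string as a machine word with two fixed fields:
# a 4-bit opcode (which instruction) and a 12-bit address (where in memory).
# This layout is purely illustrative, not the AGC's real format.

word = "0110000000101010"       # a long string of 1's and 0's, as described above

opcode_bits  = word[:4]         # first 4 bits
address_bits = word[4:]         # remaining 12 bits

opcode  = int(opcode_bits, 2)   # int(..., 2) reads the text as base-2
address = int(address_bits, 2)

print(opcode, address)          # 6 42
```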

Like most Computer Science students today, including me, Armstrong learned Boolean algebra by using truth tables, or charts of the different input combinations and their resulting outputs for a particular Boolean function. These assist in the construction of logic circuits by simplifying large sets of inputs and outputs into one function, which is then converted into a physical circuit using logic gates. So, while technology has changed dramatically from when Armstrong and the other astronauts started out, the basic theory of Computer Science has remained the same.  [Images 2 and 3]
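The kind of chart described above can be generated mechanically. A small Python sketch, using “and” as the example function:

```python
from itertools import product

def truth_table(fn, name):
    """Print every input combination for a two-input Boolean function."""
    print(f"a      b      {name}")
    for a, b in product([False, True], repeat=2):
        print(f"{a!s:<6} {b!s:<6} {fn(a, b)!s}")

# "and" outputs true only on the row where both inputs are true.
truth_table(lambda a, b: a and b, "a AND b")
```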

As Armstrong was learning Computer Science, MIT won the NASA contract to begin designing and developing what eventually became the AGC. The design started out as a proof of concept created by systems engineers and computer designers at Charles Draper’s Instrumentation Lab.[ii] The project attempted to show the usability of the Display and Keyboard Module (DSKY), the interface between the computer and the astronaut who used it. This was an early example of what we consider today a “desktop-like” interface. The display showed vital information, like the screen of a modern computer though far less elegantly, and the keyboard allowed the astronauts to interact with the computer and run different programs during the flight.

The first program the engineers made for the DSKY was a visual game: a box on the screen with a hole in it. The program spawned a ball with different physics models applied to it, for example, varying the amount of gravity affecting the ball. The ball bounced around the box until it fell through the hole and off the screen; the program then spawned a new ball and repeated the process. The success of this proof of concept showed that an interface for a guidance computer onboard the Command and Lunar Modules was achievable.[iii]
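The behavior described is easy to sketch in a few lines of modern Python. Every number below (box width, gravity, hole position, starting state) is invented for the sketch and has nothing to do with the original demo’s values:

```python
def run_ball(x, y, vx, vy, gravity=1.0, width=20.0, hole=(8.0, 12.0), max_steps=1000):
    """Step a ball around a box until it drops through the hole in the floor.

    Returns the step count at which the ball exits, or None if it never does.
    """
    for step in range(max_steps):
        vy -= gravity                    # gravity pulls the ball down
        x, y = x + vx, y + vy
        if x <= 0 or x >= width:         # bounce off the side walls
            vx = -vx
        if y <= 0:
            if hole[0] <= x <= hole[1]:  # over the hole: ball falls off screen
                return step
            y, vy = -y, -vy              # otherwise bounce off the floor
    return None

print(run_ball(x=2, y=15, vx=1.5, vy=0))
```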

The development of the AGC was centered at the Instrumentation Lab because no Computer Science departments existed in the United States in 1961 (Purdue University created the first one in 1962). At the time, modern high-level languages did not exist either; the only practical options were early languages like FORTRAN and MAC.[iv] The engineers ultimately decided to use assembly code because of its extreme efficiency in both memory and processing. The crucial restriction given to the team designing the hardware was that the computer had to fit inside a space of one cubic foot, so that it could be carried by both the CSM and LM. Each module had one: the CSM’s to guide the craft to and around the Moon, and the LM’s to manage the descent to the surface and the return to the CSM.[v] The technology required to fit the processing power needed for the guidance programs into such a small space was in its infancy in the early 1960’s. As Herb Thaler, a Raytheon employee brought on to help design the hardware for the AGC, once said, “And then shortly after arriving, the concept of the integrated circuit arose … the decision was made to go ahead with the integrated circuit approach. Of course none of us knew what an integrated circuit was at that time.”[vi]

Up until the 1960’s, computers were so large they required entire rooms to themselves. They were composed of multiple large machines connected through small cable networks, and sometimes took months to install. The integrated circuit replaced most or all of these large parts and, together with miniaturized transistors, allowed all the components of a computer to be placed on a single circuit board. So MIT’s designers had a challenge: to bring the size of a computer powerful enough to guide the Apollo astronauts to the Moon down from a single room to a single cubic foot. Program storage was also going to be a problem, because at the time most programs were stored on paper punch cards or large magnetic drums. These solutions did not work for the AGC, which would have to run programs automatically during the flight and also needed the ability to replace code in programs mid-flight. The most compact memory with these abilities at the time was core memory. It used small magnetic cores held at the crossings of wires; a core could be switched “on” or “off,” and read back, by sending different electrical pulses along the wires. Sections of core memory were small, virtually flat square cards, which suited the size constraints of the AGC well.
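Functionally, a core plane behaves like a two-dimensional grid of one-bit cells, each selected by one row wire and one column wire. This Python sketch models only that addressing idea; real cores stored a bit as magnetic polarity, and reading was destructive (the bit had to be rewritten afterward), which the sketch skips:

```python
class CorePlane:
    """A grid of one-bit 'cores', each addressed by a (row, column) wire pair."""

    def __init__(self, rows, cols):
        self.bits = [[False] * cols for _ in range(rows)]

    def write(self, row, col, value):
        # Pulsing one row wire and one column wire selects exactly one core.
        self.bits[row][col] = bool(value)

    def read(self, row, col):
        return self.bits[row][col]

plane = CorePlane(rows=8, cols=8)
plane.write(3, 5, True)        # flip one core "on"
print(plane.read(3, 5))        # True
print(plane.read(0, 0))        # False
```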

On completing development of the AGC, the MIT team had created a device that was groundbreaking for its time. Never before had so much computing power been put into such a small package. This was a leap in the 1960’s. But how does it compare to computers today? There is a longstanding popular cliché in the media and literature: that the AGC was only as powerful as a modern graphing calculator. If we look at the stats of the AGC next to a widely-used graphing calculator, for example the Texas Instruments TI-84, the cliché seems to hold true. The AGC had a 2 MHz processor, occupied 1 cubic foot, weighed 70 pounds, and cost $1.1 million to make. The TI-84 has a 15 MHz processor, weighs 7 ounces, measures 7.3" by 3.5" by 1.0", and costs $108.99 (the figures are in 2019 dollars). Compared to the AGC, the TI-84 is 7.5 times faster, roughly 68 times smaller by volume, 160 times lighter, and about $1 million cheaper, even with all its advantages! It’s an impressive contrast: a rather weak AGC costing so much.[vii]
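The ratios can be checked directly from the quoted figures, taking the TI-84’s dimensions in inches and the AGC’s volume as one cubic foot:

```python
# Recompute the AGC vs. TI-84 comparison from the figures quoted in the text.
agc_mhz, ti_mhz   = 2, 15
agc_lb, ti_oz     = 70, 7
agc_cubic_inches  = 12 ** 3            # 1 cubic foot = 1728 cubic inches
ti_cubic_inches   = 7.3 * 3.5 * 1.0    # about 25.6 cubic inches

print(ti_mhz / agc_mhz)                            # 7.5 (clock-speed ratio)
print(agc_lb * 16 / ti_oz)                         # 160.0 (weight ratio)
print(round(agc_cubic_inches / ti_cubic_inches))   # 68 (volume ratio)
```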

While true on the surface, the cliché is misleading. The AGC was actually able to use its limited software to great advantage, which allowed the “under-powered” computer to run many complex programs at once, using methods similar to those of modern computers. To accomplish its goals, the AGC relied on three main programs: the “executive,” the “waitlist,” and the “restart and self-check” programs. The executive program allowed the computer to time-share memory between different programs, meaning multiple programs could each store data at the same time. It also allowed running programs to be suspended so that programs with higher priority could run instead. The executive could hold up to seven programs in suspension, which let the computer take care of many different processes at the same time, increasing its functionality immensely.  [Image 4]
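The scheduling idea (run the highest-priority ready job, suspend lower-priority work, and cap the number of waiting jobs at seven) can be sketched with a priority queue. This is a loose modern analogy, not the AGC’s actual executive; the job names and priorities below are invented:

```python
import heapq

class Executive:
    """Toy priority scheduler: the highest-priority job always runs first."""
    MAX_SUSPENDED = 7

    def __init__(self):
        self.ready = []                          # min-heap of (-priority, name)

    def schedule(self, name, priority):
        if len(self.ready) >= self.MAX_SUSPENDED + 1:
            # Too many jobs waiting: the real AGC signaled this condition
            # as an executive overflow (the famous 1202 alarm).
            raise OverflowError("executive overflow")
        heapq.heappush(self.ready, (-priority, name))

    def run_next(self):
        if not self.ready:
            return None
        _, name = heapq.heappop(self.ready)      # most urgent job wins
        return name

agc = Executive()
agc.schedule("telemetry", priority=1)
agc.schedule("landing_guidance", priority=5)
agc.schedule("display_update", priority=2)
print(agc.run_next())    # landing_guidance
```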

The waitlist program kept track of how much time each program was taking and told the executive program when to let other programs run. The restart and self-check program was the most important of all. In the case of a calculation error, instead of continuing on as if nothing were wrong and feeding incorrect information to the astronauts and the spacecraft, the computer checked its calculations by comparing its results to a range of possible results that had been calculated on the ground beforehand. This “reasonableness test” made sure that no errors were present. If the computer started producing incorrect calculations, it would save all of its vital program data, then restart and continue with its calculations, clearing the problem so it could run normally again. This software allowed the AGC to calculate and run many more programs than the average calculator today would be able to.[viii]  So, the old AGC-calculator cliché is overdone.
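A minimal sketch of the restart-and-self-check idea, with invented values and ranges throughout (the real AGC’s checks and restart tables were far more involved):

```python
def reasonable(value, low, high):
    """A 'reasonableness test': is the result inside the precomputed range?"""
    return low <= value <= high

def guarded_compute(compute, low, high, state):
    """Run a step; on an out-of-range result, save state, restart, retry once."""
    result = compute(state)
    if reasonable(result, low, high):
        return result, state
    # Out of range: keep the vital data, record a restart, and recompute.
    state = dict(state, restarts=state.get("restarts", 0) + 1)
    return compute(state), state

# A hypothetical altitude step that misbehaves until one restart clears it.
def altitude(state):
    return 9_999_999 if state.get("restarts", 0) == 0 else 1_500

result, state = guarded_compute(altitude, low=0, high=50_000, state={})
print(result, state["restarts"])    # 1500 1
```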

The AGC’s code was tested by having the Instrumentation Lab’s computers feed information to the AGC, acting as the sensors that would eventually be connected to it. The AGC treated sensor input data the same no matter what the source, so the simulations elicited the same response from the computer as if it were in outer space. This was extremely important, as the reliability of the computer had to be verified before it could be trusted with the lives of astronauts. As the Apollo program moved closer to its mission launches, more strenuous simulations were required of the AGC. These simulations demanded more computing power than was available at the Lab, so many upgrades had to be made to keep up with the schedule. By 1968, the Instrumentation Lab was running two IBM 360/75 computers, which increased simulation capacity by a factor of 100 over the original IBM 650 the Lab had when the project started. These upgrades were costly in both time and money, with delivery and installation of the new computers taking as long as 18 months. They also meant that the software and assembler for the simulations had to be changed manually to work with the new hardware.[ix]

The simulators for the CSM and LM had their own physical cockpits in the Instrumentation Lab. These simulators allowed the Apollo astronauts to interact with the controls of the spacecraft and prepare for every possible outcome of the space flight and Moon landing. During the Apollo 11 simulation runs, Armstrong, along with crewmates Buzz Aldrin and Michael Collins, had to learn what to do when an error code appeared. The codes were presented as four-digit numbers flashing on the DSKY, with a warning bell to announce their presence. The simulations run on the ground put the astronauts through hundreds of different error-code situations, the purpose being to teach the astronauts and, more importantly, ground control which errors to take seriously and which to disregard.

One instance of the simulations proving effective was the 1201 and 1202 error codes produced by the LM’s AGC during Apollo 11’s descent to the Moon. In one of the many simulation runs for the landing, when these error codes were generated, ground control had to decide “go” or “no-go”: essentially, to ignore and continue, or to abort and come home immediately. In the simulation, ground control operators looked up the 1202 code in their notes and saw that it referenced an “Executive Overflow.” This error occurred when the AGC was unable to finish a program’s computations within the time allotted to it. Operators eventually called a no-go on the error, and the simulation ended. But flight director Gene Kranz and his team discovered that the code was not a major problem as long as it did not occur too frequently, since the computer was able to catch up and continue working as it was supposed to. Armed with this knowledge during the actual lunar descent, ground control knew to keep going and told Armstrong to ignore the code, which appeared because Aldrin had left the rendezvous radar on during descent in case the landing had to be aborted. A small error in the AGC’s hardware caused the radar data to take more CPU time than it should have, making the code appear. If that particular simulation had not been run, Apollo 11’s landing might not have taken place.

The AGC experienced another problem during the descent of Apollo 11, when Armstrong noticed, out of the LM’s window, that the landmarks he was supposed to be seeing were appearing at the wrong time. The AGC had miscalculated. The site where the LM was heading turned out to be a boulder field. He decided to take manual control of the LM, slowing its descent to almost hovering, and flew across the surface looking for a better spot to land. Once he found a small landing zone that was clear of debris, he landed the LM and the rest is history.

Something very important happened during all of this. The need for an experienced pilot with the instinct to take over the descent of the lunar lander created one of the first instances of computer-augmented human performance. When Armstrong took over the landing controls from the AGC, he was simply acting as another source of inputs to the computer. No matter what he did, he was not bypassing the computer and its programs. It was not possible for him to conduct all the many simultaneous and complex AGC calculations, such as those governing the balance of the craft and its orientation to the lunar surface, and land the LM at the same time. So, while he overrode the AGC to guide the LM to a new landing site, he was still at the mercy of the computer doing everything else correctly. Without the AGC, landing on the Moon would have been an impossible task. Both Armstrong and the AGC made it work.

In sum, the Apollo Guidance Computer was a technological leap. The need for a computer powerful enough to guide the CSM and LM to the Moon and back to Earth, while small enough to fit inside a box the size of a cubic foot, spurred the advancement of computing by at least a decade. The hardware and software capabilities of the AGC were not available to the public until the first personal computers were released in 1975, a full fourteen years after the development of the AGC began in 1961. The development team at Charles Draper’s MIT Instrumentation Lab created new programs and used cutting-edge technological advances to solve the complex problems of fitting the computational ability of a large computer into a small box.

Thus, the complexity of the guidance systems and the space program itself required the astronauts to learn computer theory during their training. The AGC’s code, brought to life by its hardware and executed by the astronauts during the Apollo missions, was an early instance of machine-augmented human performance. Its example paved the way for the first fly-by-wire technologies in aircraft, helping pilots fly their planes much as the AGC helped Armstrong reach the lunar surface. The AGC was integral to the success of the Apollo program and its extraordinary goal of reaching the Moon and returning to Earth in less than a decade (1961-1969). Interestingly enough, the Apollo program, from scratch to Moon landing, took less time than the journey from the AGC’s initial development to the first mass-market personal computers offering comparable specs and features. That is a comparison worth considering. In the Moon race, as perhaps in the digital era, the accelerated “time” factor is as important to weigh as the mighty “calculating” power.

[i]  Neil Armstrong’s notes from his Computer Theory course with Dr. Bob Smith (October 30, 1962), Box 4, Folder 7, Neil A. Armstrong Papers, Purdue University Archives and Special Collections.

[ii]  On the history, see the presentations on “The Apollo Guidance Computer,” sponsored by the Dibner Institute for the History of Science and Technology (MIT).

[iii]  Eldon C. Hall, Journey to the Moon: The History of the Apollo Guidance Computer (Reston, VA: AIAA, 1996), 155-156.

[iv]  Coding languages that can perform complex functions with relatively simple code.

[v]  “The Apollo Guidance Computer,” (MIT).

[vi]  “The Apollo Guidance Computer,” (MIT). Raytheon was the company contracted by NASA to manufacture the hardware of the AGC.

[vii]  On the cliché, see Frank O'Brien, The Apollo Guidance Computer: Architecture and Operation (Springer: 2010), 4. 2 MHz means 2 megahertz, or two million clock cycles a second.

[viii]  Hall, Journey to the Moon, 161-166.

[ix]  Hall, Journey to the Moon, 157-159.