From page 6, describing the representation of binary numbers: The word bigit is defined as binary digit.
I'm kind of sad that "bigit" didn't stick.
Edit:
From page 291, I'm also enjoying:
The "turn-on" and "turn-off" procedures for the computer naturally seem to be more in the domain of the engineers rather than that of the programmers; however, the turn-off procedure has been simplified to the extent that the programmers can do it.
The machine has two different breakpoint types -- "red" and "green". That's an interesting feature. The instruction set architectures I'm used to don't have such a concept, and it's interesting to think about.
The way negative numbers are represented (page 3 of the manual) is really cool: to compute "a - b" you add a to the complement of b, where the complement of b is 2 - |b|.
It feels really similar to modular arithmetic: -3 ≡ 7 (mod 10).
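The same trick works for any modulus, and it's easy to sketch in a few lines of code. This is just my own illustration of the idea, not anything from the manual; the helper name and the choice of modulus 10 are mine:

```python
def complement_subtract(a, b, modulus=10):
    """Compute a - b by adding the complement of b, reduced modulo `modulus`.

    This mirrors the manual's trick: instead of subtracting b directly,
    add the complement of b (modulus - b) and let the result wrap around.
    """
    complement = (modulus - b) % modulus  # e.g. the complement of 3 mod 10 is 7
    return (a + complement) % modulus

print(complement_subtract(9, 3))  # 6, same as 9 - 3
print(complement_subtract(2, 5))  # 7, i.e. -3 represented as 7 (mod 10)
```

The second call shows the wrap-around: 2 - 5 = -3, which comes out as 7, exactly the -3 ≡ 7 (mod 10) correspondence. With modulus 2 and fractional values you get the manual's 2 - |b| complement.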