
I'm going to come across as defensive here, but I'm actually in a Computer Engineering program (not Computer Science). This book purports to cover as much material as 8 undergrad courses, so I feel like it must skimp on depth to (for example) condense all of 'compilers' into two weeks. Compilers are a very large topic; a single undergrad course isn't even sufficient to really understand a real-world project like GCC. Likewise, 'computer architecture' makes up three classes in my curriculum: one all about building RISC processors, another about CISC, then a third about modern architectures. Only in the last one do you approach an understanding of a contemporary CPU architecture.

My question for people who have done the course is: does it cover even simple design theory like K-maps? Does it make you account for propagation delay? Does it explain caching schemes and TLBs? I feel like it probably has to gloss over a lot of the 'hard stuff' to remain so dense.

Likewise, it sounds like it's all done in custom languages. Half of my first year was spent struggling with industry standard, terrible software like Altera which is super powerful but terribly designed. The other half was spent actually breadboarding circuits and having them fail because of problems you never see in simulations (or which they solve for you).

I'm not saying it's not an interesting project, but it really is a nice, abstract diversion for people who work on software all day. People calling for it to be included in comp. eng. programs probably don't realize the depth of what actually gets covered in comp eng.

Edit: to sound a bit less whiny, if anyone is doing this course and they want to dig deeper into a particular area, I'd be happy to point them to the books/course materials we used.



The hardware content in this book is not sufficiently detailed for computer engineering. It's really for CS students who want to understand roughly how computers are made. (This is evident from counting pages in the book: the first 5 chapters are the hardware chapters, and they span only 100 pages; the remaining 200 pages cover the software stack from assembly up.)

For example, one of the assignments is to design a 16-bit adder in their toy HDL, but they never cover carry lookahead adders. The only thing that matters is that your circuit passes the tests, so ripple carry is considered okay.
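For readers who haven't met the distinction: in a ripple-carry adder each stage waits for the previous stage's carry, so the critical path grows linearly with word width, while carry lookahead computes the carries in parallel. A behavioral Python sketch of the ripple version (my own function names, not the book's HDL):

```python
# Toy gate-level ripple-carry adder, roughly the kind of circuit the
# book's adder assignment accepts (hypothetical names, not the book's).

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, bits=16):
    """Add two integers bit by bit; the carry 'ripples' through every
    stage, which is exactly why this design is slow for wide words."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result & ((1 << bits) - 1)
```

A lookahead adder would instead precompute each stage's "generate" (a AND b) and "propagate" (a XOR b) signals so all carries are available after a constant number of gate delays.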

Similar efficiency/performance issues are glossed over throughout. Propagation delay is never covered, and the sequential circuits use idealized clocks (instant transition between low and high). They also don't describe how to build up flip flops from latches: the D-Flip Flop is given as a primitive and you build up other elements from there.
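For the curious, the "latch" layer the book skips is just two cross-coupled NAND gates with feedback. A toy settling simulation (the standard construction, with my own names, nothing from the book's HDL):

```python
# Active-low SR latch built from two NAND gates. Because the outputs
# feed back into the inputs, we iterate the update step until the
# state stops changing.

def nand(a, b):
    return 1 - (a & b)

def sr_latch(s_bar, r_bar, q, q_bar):
    """One update step of the cross-coupled pair."""
    return nand(s_bar, q_bar), nand(r_bar, q)

q, q_bar = 0, 1          # start in the 'reset' state
for _ in range(4):       # pulse S low: the latch sets
    q, q_bar = sr_latch(0, 1, q, q_bar)
assert (q, q_bar) == (1, 0)

for _ in range(4):       # release S: the state is held
    q, q_bar = sr_latch(1, 1, q, q_bar)
assert (q, q_bar) == (1, 0)
```

A gated D latch is this plus two more NANDs steering the data input, and a classic D flip-flop is two such latches in series on opposite clock phases.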

K-maps are not covered either. Caches are ignored as well.

Still, the book is amazing for its intended purpose. If you don't already know this stuff, this is an easy way to get a somewhat detailed (though abstract) view of how computers work without getting mired in all the concerns that accompany the engineering of actual computers.


The things you list are among the basic fundamentals of circuit design. I can't imagine that this curriculum is of much use without them.

Also, I seriously doubt that it covers the entire breadth of information required to create, from scratch, the entire video subsystem required for displaying graphics. Or anything like that.


I don't understand your critique. A book manages to condense the design of a computer -- from gate-level to operating system -- in a 300 page tutorial and you do not like the fact that it glosses over some details? I doubt any single person has the knowledge to build a modern computer from scratch, and few people have/need the knowledge to go from gate level to OS-level in the real world.

This book aims to change that in a simplified environment. I don't fault it for skipping over Karnaugh maps, just like I don't fault it for skipping the physics of cosmic background radiation, or the techniques used to compensate for failures in multi-level flash memory cells. These details are not on the most direct route from (simulated) NAND gates to Tetris.


My biggest problem with the book is that it gives you an adequate, hand-wavey idea of how a computer works, but it doesn't impart any hard skills. Karnaugh maps are a reusable skill in digital design. I've done hundreds of them for simple problems, it's like breathing now. I suspect using multiplexers to simplify a design is also left out. VHDL is a real language people actually use, and you can put it on your CV or do something besides this class with it. Breadboarding a real circuit and troubleshooting it is useful and opens up a whole world of projects.
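For anyone who hasn't seen one: a Karnaugh map is just a way of spotting adjacent minterms in a truth table that can be merged into a simpler expression. A tiny hand-worked illustration (not a general solver):

```python
# f(a, b, c) is defined by its minterms {1, 3, 5, 7} — every row of
# the truth table where c = 1. On a K-map all four cells group into
# one block, so the whole function collapses to f = c.

MINTERMS = {1, 3, 5, 7}

def f_canonical(a, b, c):
    """Sum-of-minterms form, read straight off the truth table."""
    return int((a << 2 | b << 1 | c) in MINTERMS)

def f_simplified(a, b, c):
    """What the K-map grouping yields."""
    return c

# Exhaustive check that the simplification is exact.
assert all(
    f_canonical(a, b, c) == f_simplified(a, b, c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
```

The payoff in hardware terms: the canonical form needs four AND gates and an OR gate; the simplified form is a wire.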

This class equips you with the tools and knowledge to do one thing: finish the class. It guides you along, handing you simplified abstractions that allow you to progress without getting frustrated. At the end, however, you'll only really know how to take the class. If you wanted to drill deeper, you've already done the introductory week where they do a high-level overview of the course material.


You're not going to learn enough of any HDL in a 100-page tutorial to be particularly proficient in it anyway. Trying to explain the quirks of VHDL in a short tutorial would be a nightmare. The result would inevitably be skipping over stuff rather than explaining it. Coming from a software-development background, once you get into the higher-level aspects of writing VHDL (and I'd assume Verilog, although I have no experience with it), there are a lot of fundamental mistakes you can make in your understanding of how the language is interpreted into hardware. Using their own HDL, which doesn't have much more than basic primitives, stops people from falling into those holes.

In my opinion, the hard part of hardware description is understanding the concepts, not the language used. If you've got a basic understanding of hardware description, the barriers to moving to VHDL or Verilog will be much lower. Coming from the other direction, I can see large similarities between the concepts in their HDL and VHDL; the syntax may be slightly different, but the concepts are the same.


This is like complaining learning algebra doesn't equip you to be a mathematician. That's not the purpose.

I studied computer engineering and computer science, and I haven't used Karnaugh maps or VHDL since college because I write software now. I'm still glad I studied computer engineering though, because it gave me a deep understanding of how computers work.


Just as meaningful as a full course. This is just a "depth first search" through the content, getting from top to bottom in one pass; you're complaining it's not a "breadth first search" covering everything on one level. Done this way, you get the gist of how it all does, in fact, go from NAND gate to games. Yes, a lot is glossed over or missed, but once the student sees the vertical structure he can see how each layer connects, and how expanded knowledge of each serves not just one layer but those above and below (and how to leverage each in sync).

I appreciate how my education ran from sand to Skyrim (so to speak), and I find it hard to see how anyone can really function in computing without such a vertical understanding.


This is actually closer to breadth-first (since the alternative is an 'in-depth' course) in my mind, but I get your meaning. The thing is, do you actually take anything away? If you don't talk about caching in the CPU, scheduling in the OS, or propagation delay in the gates, how does that help your understanding of how to write software?

I'd be curious to know a) how deep your education actually went (since you've implied it was broad, from logic gates up to OSes and high-level programming), and b) what you actually do day-to-day that makes you think this high-level overview is indispensable.


> The thing is, do you actually take anything away? If you don't talk about caching in the CPU, scheduling in the OS, or propagation delay in the gates, how does that help your understanding of how to write software?

What materials and courses structured like this excel at doing is very rapid demystification. They quickly allow the student to remove the "and this layer is black magic" notion of things and give them structure on which they can realize the limits of their own knowledge, or learn to know what they don't know. With this sort of foundation they are better equipped to teach themselves.

Materials and courses like this are not vocational, and don't pretend to be. They are very much the opposite.


We all start out not understanding how it's possible to make a CPU, write a compiler, communicate over wires, draw text into a framebuffer, and so on. These things seem like magic. This is a problem: as long as they seem like magic, we're deprived of engagement with them. You get a family of systematic errors:

* Magic is supposed to work. So you see people calling for functionality to be moved from whatever they're doing (their user-level code, say) into the magic: build something into the language, compile it to machine code instead of interpreting, do it in hardware, etc. Because of course if it's done by magic, it doesn't cost anything and it works perfectly!

* Magic is out of your control. So if it breaks, there's nothing you can do. If your operating system is crashing your program, or downloading updates you don't want, you're out of luck.

* Magic is easy. So the people who make the magic happen don't get the credit.

* Magic is memorized, not understood. So you need to memorize the incantations needed to squeeze performance from your database/OS/CPU/whatever instead of doing them yourself.

You don't need to understand how to use Karnaugh maps to understand that putting more multipliers on your chip is going to cost you real estate. You don't need to understand the different possible scheduling policies to understand that making your program multithreaded will slow it down, not speed it up, unless you have more than one core. Even a shallow understanding is sufficient to be very useful, and to enable you to question things.


It helps immensely in knowing WHY such things are useful, how they can improve (or screw up!) another layer, and where the correct solution should be implemented.

There's an old joke that the difference between computer science and computer engineering is that in the former one assumes infinite speed and infinite storage. Understanding that there are limitations, and why they exist and to what degree, is important.

As already noted, it demystifies the surrounding "magic". There's a confidence and freedom which comes from knowing that nothing in the system is beyond you.

My education indeed went from "sand to Skyrim", from basic physics & chemistry to electrochemistry to discrete electronics to quantum mechanics to semiconductor doping to hand-layout of integrated circuits to automated layout of ICs (writing the automators, that is) to hardware description languages (the acronym escapes me) to logic to gate theory to basic CPU design to machine language to assembler to compiler design to C/APL/Pascal/Prolog/Lisp/C++ to OS design to discrete math to graph theory to raster graphics to 3D graphics, and a bunch of other stuff throughout. It's indispensable because I can look at any problem and grok what's happening all the way down to silicon: able to work with someone writing Windows printer drivers one day and proving a linked crossover bug in the USB driver IC the next while discussing circuit design in between, or explain why an elegant recursive solution causes a "drive full" error under certain conditions, or why error handling in a certain protocol is pointless (already handled six layers down the network stack) - to name just a few real cases.

Knowing propagation delay in the gates can explain/reveal the limits of scheduling in the OS. Understanding drive rotation speeds provided the breakthrough of on-the-fly compression as an OS-level storage acceleration technique.

Take anything away? Just a sensible understanding of how everything works, and the ability to drill into detail where and when needed. All learned in about 6 years, and I even came out understanding why Aristophanes' plays survived for several millennia (to wit: dirty jokes endure).

What I do day to day (now)? Writing an iPad app for mobile enterprise data. Working under a genius crafting the many layers of abstraction making it fast & flexible, he can describe (and has described) a new way to represent very high level data, hand me a rough description of a virtual machine to process it efficiently, and I'll instantly see how it runs on server hardware. I can't imagine not having this view. As a part-time teacher, I'm trying to get students from zero to binary to writing object-oriented games in 12 weeks flat; to do less is to deprive them of the joy and rewards of knowing how things work - at every level.

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." — Robert Heinlein, Time Enough for Love


I think you are significantly overestimating how much knowledge is necessary to do these things. This course does not claim to prepare you to work anywhere near the current level of technology; it claims to teach you enough to construct a Tetris game starting from NAND gates. I can assure you that Atanasoff (inventor of the electronic digital computer) would have killed to have a resource like this.

To go with your specific example of compilers: why would people need to learn enough to be able to contribute to GCC to get anything out of this course? GCC is the leading open-source compiler; knowing enough to contribute is overkill.

To give an example from personal experience: over the course of a weekend, I wrote my own operating system from scratch, without having had any formal training in systems design. I have no doubt that my system would collapse under the weight of doing anything remotely close to what we would expect a modern system to be able to do. The only reason I was willing to risk running it on bare metal was that I had an old, semi-broken laptop that I didn't care about. But I still learned a lot from the process.

In terms of including it in a comp. eng. program, it seems like it would fit in well as a 101 course. It provides a big picture of how everything fits together, and a first look at all of the topics will help enough when you go to learn them in depth that it seems like it could be worth the time investment.


You sound a bit defensive. The title of the post is "Building a Modern Computer from First Principles". It's a book; the author doesn't claim equivalence to a computer engineering degree. The areas of study you've listed are gaps a motivated student with appropriate background can fill through self-study or other means. Depending on your chosen career after graduation, you're likely to have some gaps of your own. I don't mean to compare your curriculum to the book, but to make the point that there is always more to learn.

As some of the responses have stated, the book appears (I haven't read it yet) to motivate topics the reader may want to study more deeply, in the sense that professors motivate material in a course introduction: why are we learning this? How might we apply it? What should we learn next?


There's nothing wrong with taking a course that covers a very wide area, even if you're intending to specialize more deeply in all the topics later. It's actually a very effective learning strategy, because it motivates all the subsequent deeper dives.


I will definitely second that. It's simply not possible to cover every aspect of a topic within any course. Survey courses are a great solution to a very real problem.


I guess personally I would feel like I wasted a semester when I had no problem being motivated to deep dive into the other topics already. I suppose if someone was undirected and needed to pick a specialization this might help.


Ah, but if you're actually motivated there's nothing stopping you from going as deep as you want into any of the topics. There's never an excuse for "wasting" a semester.

Coursework is the minimum, not the maximum.


What I meant was that I would want to deep-dive into each topic, but we'd be busy moving on, and I'd cover all that material again next year anyway. I don't think survey courses fit my way of looking at topics; I'm very single-minded. That doesn't mean they aren't valuable or they can't work for other people. Just that I wouldn't want them to be mandatory.


I think the advantage this has is you have one continuous path from NAND gate to Tetris. Whereas in my experience attempting CS at Cal Poly, we did all these steps but they were disconnected. The output of one course was not used as input to the next.

I don't know if there is any pedagogical benefit from being able to say "I built this whole thing from scratch". But it sure is cool.


It's really hard to go from NAND to Tetris, because there's a logical jump when you get to VLSI. This is the whole notion of quantity being a qualitative property: when you get enough gates together, you really start abstracting them and thinking about higher-order components. There's no smooth zoom out, there's just a sudden discontinuity when you stop using individual gates and start using MUXes, flip-flops, etc as your primitives instead. I suspect there's another layer, but I'm not there yet ;)

I think I may have benefited from going to a smaller school, in this respect: I've had the same professor for every core 'Computer Architecture' class I've taken across 3 years (apparently there's one other guy, but they teach the same content). The courses are numbered, and they pick up exactly where the last one left off. I think the problem with trying to fit this stuff into a CS program is that it's so broad and deep, and it's not your primary focus. For me all of those classes were tightly scheduled requirements, so I took them at the right time, back to back.

It's definitely cool, and I encourage anyone who works primarily in software to check it out and get a better understanding of hardware design in an abstract way.


I don't know what you mean about MUXes. It's just simple abstraction to say "yeah, take this pattern of gates and give it a symbol so we don't have to draw it over and over again in detail". You're still just wiring up combinational logic...
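To make the parent's point concrete: a 2-to-1 MUX really is just four NAND gates given one symbol. A quick sketch of the textbook construction (nothing specific to this course):

```python
# 2-to-1 multiplexer built entirely from NAND gates.

def nand(a, b):
    return 1 - (a & b)

def mux(a, b, sel):
    """Pass a through when sel == 0, b through when sel == 1."""
    not_sel = nand(sel, sel)                    # NAND as an inverter
    return nand(nand(a, not_sel), nand(b, sel)) # select-and-combine

# Exhaustive check against the truth table.
assert all(mux(a, b, 0) == a and mux(a, b, 1) == b
           for a in (0, 1) for b in (0, 1))
```

Giving that cluster a name is exactly the "take this pattern of gates and give it a symbol" step: you can now draw one box instead of four gates, without changing anything about the combinational logic underneath.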

Flip-flops and sequential logic made my brain flip-flop itself for a while though when I first ran into it. :) That really does require a different sort of thinking, since you're introducing time as a factor.


I know a lot of people who didn't really move beyond thinking about boolean logic. It's easy to reason about the larger-scale components if you're reading a diagram, but it gets harder to design with them at scale.


As he mentioned, at least one school calls it computer science 101. It's not a replacement for an entire computer engineering degree.

I also studied computer engineering (and computer science), so I got all of this information over the course of 4 years, but I think it would have been valuable to take this course up front in order to immediately understand how all the pieces fit together.

It would also be valuable for computer science students who don't get most of the computer engineering material.


OK, to clarify, this is a cool book/course. I don't mean to disparage the author, they've done an excellent job condensing a large body of material. However:

The title is very ambitious. This is not really building a computer from first principles; there are some steps skipped. This is a high-level overview of modern computers, and it's worth noting there's a lot of depth left to be explored.

Everyone agrees custom languages are not great. They don't really give you a lot of transferable skills; it would be cool if you really implemented C or Lisp, and did it in Verilog or VHDL.

This style of course may suit a particular type of student, who enjoys a broad overview or wants to specialize in only one area. Personally my preferred way to learn is in depth, serially, so this doesn't really apply to me. My degree also covered most of these topics anyway, so picking a specialization wasn't really a problem. I realize this doesn't apply to everyone.

A lot of comments say 'a motivated student will just learn that on their own'. This material is a good jumping-off point, but (once again, in my experience) the theory is the hardest stuff to learn on your own. I would rather do the 'dull' stuff in class, then teach myself how to make games out of it (as opposed to being taught how to make games, and then having to learn best practices, design techniques, and theory on my own).

Some commenters were also saying that this is unique, or it should be taught everywhere. It is unique in that it's a single, very dense class, but the material is definitely available elsewhere, in a format that I find easier to learn from. I wanted to make it clear that, if this is interesting, I think a computer engineering degree will let you learn the same stuff, but in much greater detail. Taking this class first might motivate some people, but I would find it redundant.

In conclusion, this is great, but it's not for everyone. If you like all the content but you're disappointed by how brief it seems, try computer engineering.

edit: I forgot, a lot of comments implied that understanding this material helped them do higher level programming. It's certainly cool to have a soup-to-nuts knowledge, but I still don't really understand how it could help without the topics that actually impact performance like caching, pipelining, I/O, etc.


For whatever reason, you feel the need to defend the value of your degree. You forget that people have various reasons (some personal) for seeking knowledge. It's not always about gaining marketable skills or about learning all there is to know about a subject.

Many of the points you make in your critique (lack of depth, etc) are obvious to anyone who decides to read the book. As an example, the book Learn Modern 3D Graphics Programming [1] has been posted and praised on HN in the past, but it should be obvious to anyone that there's a lot more to Computer Graphics than that book alone.

I think your comments would be more valuable if you had something more positive to add, perhaps in addition to criticism. If this book glosses over some topics, perhaps you could suggest some learning resources for those topics.

1. http://www.arcsynthesis.org/gltut/


I have to agree. The GP is getting bent out of shape. His description of the limitations of the course would be useful if it weren't worded so defensively.

This is like a professional fabricator complaining that the 10-hour welding course at night school doesn't cover welding aluminium. I don't think anyone was under the impression that this was a replacement for an engineering degree. People will do this course because it's cool to build stuff you thought was beyond you.

Also, enroll in a welding course, it's cool to be able to build big stuff out of metal too.



