Learning C takes a bit more than learning its syntax. Feeding someone this and telling them they now know C is a recipe for disaster. The "Best to find yourself a copy of K&R" line is the only line that should be there.
I was brought up on K&R. I wouldn't recommend it to anyone anymore. Much like a Java certification, most of it will have to be unlearned in real life. It's littered with holes and land mines, and it misses a massive chunk of the surrounding tooling and all the idioms that make for a good C program that won't kill you or your immediate successor who has to maintain what you've written.
I'd go with Zed's Learn C The Hard Way, then 21st Century C and go from there.
Zed made a really good intro to C. I was surprised. My only issue with it was that he introduced readers to bstring instead of something UTF-aware, but that's still better than vanilla chars. I couldn't even name a UTF library for C that's permissively licensed and lightweight off the top of my head. C coding is all about memory management and data structure layout.
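For anyone who hasn't used it, here's a minimal sketch of what bstrlib buys you over raw char buffers, using its documented bfromcstr/bcatcstr/bdestroy calls (error checking omitted for brevity):

    #include <stdio.h>
    #include "bstrlib.h"   /* Paul Hsieh's Better String Library */

    int main(void)
    {
        /* A bstring carries its own length and grows on demand,
           so there is no manual strlen/realloc bookkeeping. */
        bstring greeting = bfromcstr("Hello");
        bcatcstr(greeting, ", world");   /* append a plain C string */
        printf("%s (%d chars)\n", bdata(greeting), blength(greeting));
        bdestroy(greeting);              /* one call frees everything */
        return 0;
    }

The appeal for beginners is exactly that: string concatenation stops being an exercise in buffer-size arithmetic.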
Agreed, but Unicode is a rather complicated diversion. In a lot of circumstances it's not really required either. It's definitely worth reading up on, though.
Also, bstr stops a lot of simple tasks from looking impossible, and that keeps people motivated.
Further to that, if you consider Java, Python, C#, etc., you're using massive libraries at a very high level of abstraction and it's considered normal. There isn't anything built into C like that.
I agree completely with you on all counts. He didn't make a wrong choice with bstrlib by any means. I just felt there has been a strong movement in recent years towards UTF in general, so why not expose beginners to it as well, as part of that overall strategy. It's not a trivial issue to tackle, though, for sure.
Indeed, my first reaction is "no you won't." Even a low-level language like C has its quirks that will bite you -- undefined behavior, for example, or even how many bits wide an int is. Not to mention that knowing C involves a lot of knowing the standard library, which doesn't happen in minutes.
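To make that concrete, here's a small sketch assuming a typical hosted compiler; the sizeof line is implementation-defined, and the overflow "check" is undefined behavior that an optimizer is entitled to delete:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The width of int is implementation-defined: commonly 32 bits
           today, but 16 bits is perfectly legal per the standard. */
        printf("sizeof(int) = %zu, INT_MAX = %d\n", sizeof(int), INT_MAX);

        /* Signed overflow is undefined behavior: the compiler may assume
           it never happens and optimize accordingly, so this test can be
           silently removed. */
        int x = INT_MAX;
        if (x + 1 < x)
            printf("wrapped around\n");
        else
            printf("no wrap detected (the optimizer may have dropped the check)\n");
        return 0;
    }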
No, they are not, if you take a byte as defined in ISO/IEC 80000-13 (8 bits, the most commonly accepted definition). In C, the other types are defined in terms of the size of char, and sizeof(char) is always 1 by definition, but those char-sized bytes are not necessarily 8 bits. Some machines, DSPs for example, don't have 8-bit byte addressing. CHAR_BIT will tell you how many bits are in a char.
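If you want to see what your implementation actually gives you, a quick check (on a typical desktop this prints 8, but the standard only guarantees at least 8):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; CHAR_BIT says how many bits
           that one "byte" holds. The standard requires at least 8, and
           some DSPs use 16 or 32. */
        printf("sizeof(char) = %zu\n", sizeof(char));
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("bits in long = %zu\n", sizeof(long) * CHAR_BIT);
        return 0;
    }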
What is defined as a byte in the C standard is irrelevant to this article. A person reading this awful article(?) hoping to learn C wouldn't be aware of the C-standard definition precisely because he doesn't know C yet! He would only be familiar with the common definition of 8-bit bytes. ISO/IEC 80000-13 defines 8-bit bytes precisely because that is the common definition.
If you are going to use the common definition instead of the C-standard definition, why not use the common implementation characteristics of C as well? A char is one byte both in the context of the standard and in the context of common usage. It's only when you mix the two contexts haphazardly that the two aren't the same size.
I have seen a similar doc for Python, and it seemed very interesting; it would've been a great resource for any beginner. But I wonder if such a format would work for C. Actually getting up to speed in C would take a lot more effort than just that, I would say.
Why should tutorials cater to a spec which was ratified over twenty years ago and has been superseded by two subsequent standards? Sure, VS only supports C89... but anyone writing serious applications in C avoids VS if at all possible anyway.
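To make the cost concrete, here's a sketch of everyday C99 that a strict C89 compiler rejects outright:

    #include <stdio.h>
    #include <stdint.h>      /* fixed-width integer types: C99 */
    #include <stdbool.h>     /* bool, true, false: C99 */

    struct point { int x, y; };

    int main(void)
    {
        // Line comments like this one are C99 (borrowed from C++).
        printf("C99 feature tour\n");

        bool verbose = true;                   /* declaration after a statement: C99 */
        struct point p = { .x = 1, .y = 2 };   /* designated initializer: C99 */

        for (int i = 0; i < 3; i++) {          /* loop-scoped counter: C99 */
            int32_t scaled = (int32_t)i * 10;  /* fixed-width type from <stdint.h> */
            if (verbose)
                printf("point (%d, %d), scaled = %d\n", p.x, p.y, scaled);
        }
        return 0;
    }

None of this is exotic; it's just how C has been written for the last decade or more, which is exactly why teaching to C89 feels backwards.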