The long history of exploitable buffer overflows in C programs is something to keep in mind when people hyperventilate about how recent vulnerabilities in a popular framework reflect on the security of the language ecosystem as a whole.
Assuming you and your parent are both coyly referring to the recent YAML-parsing-related Ruby and Rails vulnerabilities, I think the analogy is better than you realize. Raw strcat is to the built-in YAML parser as strncat is to the various whitelist-based YAML fixes[1]. Both strcat and the YAML parser work as intended, but neither should ever be exposed to data controlled by an external source. Buffer overflows aren't intentional, but uses of strcat are, even when they are wrong.
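For the non-C folks, here's roughly what that analogy looks like in code (a toy sketch; the buffer size and function names are made up purely for illustration):

    #include <string.h>

    /* Works exactly as designed, but must never see attacker-controlled
       input: a long enough `name` writes past the end of buf. */
    void greet_unsafe(const char *name) {
        char buf[16] = "Hello, ";
        strcat(buf, name);
    }

    /* The "whitelist fix" equivalent: bounded, but only as safe as the
       limit you compute by hand, and it silently truncates long input. */
    void greet_bounded(const char *name) {
        char buf[16] = "Hello, ";
        strncat(buf, name, sizeof(buf) - strlen(buf) - 1);
    }

Both behave exactly as documented; the vulnerability is in handing them untrusted input without thinking about it.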
All the networking code for Python, Ruby, Perl, Haskell, Java, Scala, Clojure, Erlang, and just about every other language in existence eventually makes calls to the sockets API, which is written entirely in C.
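Concretely, whatever high-level socket class you use in Ruby or Python ends up in calls along these lines (a bare-bones sketch with error handling omitted; the address and port are placeholders):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    /* The C sockets API that every higher-level networking library
       eventually calls into. */
    int fetch_something(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(80);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        connect(fd, (struct sockaddr *)&addr, sizeof(addr));

        char buf[1024];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);   /* raw bytes, fixed buffer */
        close(fd);
        return (int)n;
    }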
Where were the "Ruby is unfit for networking code" comments for all the recent Rails remote code execution exploits?
Lazy developers will write unsafe code in any language.
The first and most important rule is to assume that any tool which reaches out to the internet or to other machines is suspect, no matter what language it's written in, and to treat the entire machine as tainted.
The industrial-data approach to this is to have a sacrificial machine that's almost completely isolated from the network, from which all FTP, curl, wget, et al. transfers are run. Once it's set up 'securely', image the machine so it can be wiped and reset on a regular basis. Use that machine to fetch all data from foreign sites and have it dump incoming files into an incoming queue folder.
Network security should be such that exactly one other machine on the network can read from that folder, and any other network traffic involving the fetch machine rings alarms.
Another approach, when using C to develop your own in-house tools, is to understand that 'C strings' are not and never have been 'part of the language'. Go read the spec: in the latter half, after the language specification proper, it mentions that a particular byte pattern is referred to as a string in a C context. It's just a label of convenience.
Don't use the C stdlib functions that manipulate C string patterns; think of them as just an early example of the kinds of things you can build with C. When dealing with tainted data (any data from a network, a user, or a foreign source), use hard blocks and hard sizes. Don't fall into the tacit trap of assuming input is "well behaved"... it never is.
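A sketch of what "hard blocks and hard sizes" can look like in practice (the limit and function name here are mine, purely for illustration):

    #include <stddef.h>
    #include <string.h>

    #define MAX_FIELD 256   /* hard upper bound chosen by us, not by the sender */

    /* Copy tainted bytes into a fixed-size block, rejecting anything oversized
       instead of trusting a NUL terminator that may never arrive. */
    int copy_tainted(char dst[MAX_FIELD], const void *src, size_t src_len) {
        if (src == NULL || src_len >= MAX_FIELD)
            return -1;                /* refuse, don't truncate-and-hope */
        memcpy(dst, src, src_len);
        dst[src_len] = '\0';          /* terminator added by us, on our terms */
        return 0;
    }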
It has a similar set of capabilities to C (including pointers, inline ASM, arbitrary memory access, etc.) and the ability to call C code, so it integrates nicely with C libs.
It has slightly better type handling - e.g. you can declare "feet" and "metres" types based on float, but the compiler won't let you assign one to the other (unless you force a cast).
The major safety advantage is that, by default, access to strings, arrays, and managed memory is bounds-checked. You can turn this off with pragmas in the code for performance-sensitive bits.
In fact, standard practice in C now is to use bounds-checked functions (the strn* functions, etc). Except instead of the compiler keeping track of data sizes for you, you have to do it manually!
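To illustrate the manual bookkeeping (a made-up example, not from any real project):

    #include <string.h>

    void append_path(char *out, size_t out_size,
                     const char *dir, const char *file) {
        if (out_size == 0)
            return;
        out[0] = '\0';
        /* Every limit below is tracked by hand; get any one of them wrong
           (off-by-one, forgetting the NUL, forgetting what's already in out)
           and you're back to a plain buffer overflow. */
        strncat(out, dir,  out_size - strlen(out) - 1);
        strncat(out, "/",  out_size - strlen(out) - 1);
        strncat(out, file, out_size - strlen(out) - 1);
    }

That subtraction is exactly the arithmetic the TP runtime does for you.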
Normal string handling is also faster in TP since it stores the length of the string, so it doesn't have to scan through to find the zero char during string operations.
Of course, all this is possible in C, but it's not part of the standard language and API. This has two disadvantages:
1) Lazy or inexperienced programmers will write the most dangerous code. Yikes!
2) Each careful programmer will solve the safety problems in their own way, so code from different sources will not necessarily be compatible, forcing library APIs to fall back on the unsafe standard structures.
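You can see point 2 in the wild: nearly every large C project ends up growing its own counted-string type along these lines (a minimal sketch; real ones such as glib's GString or redis's sds are more elaborate), and none of them interoperate without dropping back to char*:

    #include <stdlib.h>
    #include <string.h>

    /* A home-grown counted string: length stored up front, TP-style, so
       there's no scanning for a NUL and appends can be bounds-checked. */
    typedef struct {
        size_t len;
        size_t cap;
        char  *data;
    } mystr;

    int mystr_append(mystr *s, const char *bytes, size_t n) {
        if (s->len + n + 1 > s->cap) {
            size_t new_cap = (s->len + n + 1) * 2;
            char *p = realloc(s->data, new_cap);
            if (p == NULL)
                return -1;
            s->data = p;
            s->cap  = new_cap;
        }
        memcpy(s->data + s->len, bytes, n);
        s->len += n;
        s->data[s->len] = '\0';   /* keep a NUL so it can still cross C APIs */
        return 0;
    }

That last line is the tell: the moment the string has to cross a library boundary, it degrades back to a plain NUL-terminated char*.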
C++. I hate it, it has its own host of problems, and memory corruption is certainly still an issue there, but the most common issues that plague C apps won't plague properly written C++ apps/libraries. C's string handling (or lack thereof) has cost the world an immeasurable amount of time and money.
The biggest advantage that C++ has over just about everything else, is that you can use it to write libraries usable from everything else, and you can do it very easily. You can expose a C-style API trivially, and bind it to everything you want; that advantage can't be overstated.
While I'd love for everything to be written in pure, safe, managed code, that's not viable right now. C++ is the best alternative we have when safe languages aren't usable for the task.
Not that I agree with the upper comment, but functional languages that limit and rigorously encapsulate memory operations can eliminate a large number of these problems (the access itself is localized to monads, and the rest is strictly typed and must be explicitly reasoned about). They can still be DDoS'd (too many open handles, out of memory, etc.), to be sure. As long as there are fixed buffers, there will be buffer overflows, but even using a functional style in C/C++ can often provide some of the same benefits (even just using const wherever possible...).
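The const aside is worth making concrete; it's a small thing, but it gets the compiler to reject accidental writes to data you only meant to inspect (a toy example):

    #include <stddef.h>

    /* Tainted input arrives as a read-only view: the pointed-to bytes are
       const, so any code path that tries to "fix up" the buffer in place
       fails to compile instead of silently scribbling on shared data. */
    size_t count_spaces(const char *buf, size_t len) {
        size_t count = 0;
        for (size_t i = 0; i < len; i++) {
            if (buf[i] == ' ')
                count++;
            /* buf[i] = '_';  <- would be a compile-time error */
        }
        return count;
    }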
I agree, but using functional languages won't let you replace C in the vast majority of cases. The performance hit is minimal, but the ability for code to be used from anywhere (as C libraries can be) is extremely important.
Also, grouping C and C++ together in the context of security is a Bad Thing (TM). They have completely different security issues that plague them, and while C++ may be (mostly) a superset of C, it's truly a completely different language. Things like buffer overflows in string handling are next to nonexistent in proper C++ code, while they exist all over the place in C.
C and C++, as you note, permit a similar lowest-common-denominator programming style and so a similar class of low-level bugs. While the C++ libraries (std::string, etc) and other differences can help, they can be subverted through ignorance or will, and often are. The strict typing in C and C++ helps eliminate bugs over the same task written in assembly (or Forth, etc). In the same way, goto might best be used sparingly, since the sharper the tool, the more likely the damage done by accident. C and C++ sit at approximately the same "danger" level (potential for low-level access) whereas I think functional languages can be automatically "safer" (admittedly ill-defined), assuming you are willing to subjugate yourself to them and can trust the implementation (which is not always a fair assumption), since compilation boils down to a relational proof. (That's not to say it's my preferred style.)
Furthermore, Haskell and OCaml, for example, can both be compiled to linkable objects (C-interfaceable), so I don't see the loss of interoperability you suggest. A Haskell .o looks like any other.
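For anyone who hasn't tried it, the C side of linking against a Haskell object is mundane; the only extra step is starting the GHC runtime. (The exported function here, parse_count, is hypothetical and would be defined on the Haskell side with a foreign export ccall declaration.)

    #include "HsFFI.h"   /* ships with GHC */
    #include <stdio.h>

    /* Hypothetical function exported from Haskell via foreign export ccall. */
    extern HsInt32 parse_count(HsInt32 x);

    int main(int argc, char *argv[]) {
        hs_init(&argc, &argv);               /* start the Haskell runtime */
        printf("%d\n", (int)parse_count(41));
        hs_exit();                           /* shut it down again */
        return 0;
    }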
> The performance hit is minimal, but the ability for code to be used from anywhere (as C libraries can be) is extremely important.
I'd argue about the performance hit. (And I programme in Haskell for a living.) But FFI, on the other hand, isn't too unpleasant in functional languages.
More generically than the other answers: Anything with managed strings gets you out of buffer-overflow land.
If you're really concerned about security, something that does not support "eval" is also a good idea. Replacing a buffer overflow, which still requires some skill to exploit, with the opportunity to create a "please tell me what code you would like to execute, in source code form" exploit isn't exactly a good trade. You'd think it would be easy to prevent users from executing code, but evidence suggests you'd be wrong.
A garbage collected language created by people ideologically opposed to dynamic linking? How is that ever going to replace dynamic libraries written in C that can be used from dozens of other languages?