Actually, a Forth written in a language other than English would be a cool idea. The 'words' model maps directly onto natural languages and would be straightforward to apply to others.
# define testcase(X) if( X ){ sqlite3Coverage(__LINE__); }
If testcase(X) always evaluates to true, the code coverage analyzer complains that the `if( X ){ sqlite3Coverage(__LINE__); }` statement never takes the false branch, and branch coverage drops below 100%; the same happens, conversely, if it never evaluates to true in the first place.
Coverage stays at 100% only if the code using the testcase macro is exercised repeatedly, in such a way that the condition evaluates to true on some runs and to false on others.
sqlite3Coverage() does some dummy work so that the call will not be optimized away by the compiler.
I don't think that's harsh for the title of the video as posted on HN. Like many others interested in programming, I've tried teaching it to my loved ones and come back with the understanding that either they don't need to or they don't care. A lot of programmers I know tend to assume that other people need to know this stuff or that it is important to their lives somehow, which is a narrow view of the world to say the least. My girlfriend might listen to me talk about programming, but assuming that she needs to be brought into the fold _somehow_ feels selfish to me. Again, this is all subjective and anecdotal. I'm sure the talk would be useful to someone who knows someone who is interested and is just looking for the best way to teach them.
Rightfully so. Part of the problem is the way that we speak about the technology - calling it 'artificial intelligence' when the underlying technology does not resemble true intelligence at all. This raises expectations and lets people use AI as a dumping ground for infeasible ideas.
I agree, it is not the same kind of sandbox as, for example, browser sandboxes, which restrict capabilities for multiple tabs to a given set of resources. However, it's closer to KVM or Docker than to VirtualBox. The host and the 'guest' appear to be much more tightly integrated than in a fully virtualized environment.
Unfortunately, that's not entirely accurate. The problem with language-level benchmarking is that the benchmarks are only as good as their implementations, and while I agree that different languages encourage different idioms, it seems only fair to use the same data structure/technique when the language offers that choice (as Rust does with intrusive data structures).
The Rust program was not a faithful reimplementation, as the author concedes. I expected a closer comparison given the somewhat broad title of the article. Claiming the performance benefit came out of the thinking the language encourages is fine, but debatable.
I've been on something of a language tour recently - trying out C++, TypeScript, and OCaml, each for a couple of weeks, just to get a taste of living in each ecosystem. I haven't been actively following Rust beyond reading Hacker News articles (of which there are many) about people liking the language.
One of my pet peeves with several "newer" languages that I've looked at is that users don't usually talk about the ugly parts. You can try out C++ for a week visiting forums and r/cpp and you find out fairly quickly what the pain points are and what typical workarounds look like. At that point, it is up to you to decide to what extent you can live with those downsides and where to tread lightly. Same for C, Python, OCaml etc.
To be clear, I don't mean that you should go looking for the bad parts in a language but I do believe you should be aware of them before you invest a lot of your time in it. Unfortunately, a lot of the language love blog posts do not talk about the pain points of the language and what kind of problems it isn't well suited for.
I like Rust and I love several features that Bryan talks about (algebraic data types being one of them) but I would love to read a more balanced evaluation of the language - focusing on aspects which are rough around the edges and expected future improvements.
Language-wise, Rust is really top-notch; I find it hard to fault any design decision. There are some design choices that annoy me (e.g., two closure types can never be equal), but there is always a fact-based rationale for why things are the way they are.
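For example, here's a minimal sketch of the closure point: even two textually identical closures get distinct anonymous types, so they only mix once you erase the types behind a trait object:

fn main() {
    let f = |x: i32| x + 1;
    let g = |x: i32| x + 1;
    // Each closure has its own unique anonymous type, so this fails:
    // let pair = [f, g]; // error[E0308]: no two closures have the same type
    // Boxing them as trait objects erases the distinct types:
    let pair: Vec<Box<dyn Fn(i32) -> i32>> = vec![Box::new(f), Box::new(g)];
    assert_eq!(pair[0](1), pair[1](1));
}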
My two major pain points in Rust are (1) the compile times, (2) the high variance of quality in the ecosystem.
The Rust compiler is very slow; it takes minutes to build software from scratch. Things are getting better with incremental compilation, but it's definitely not as fast as D or Go to compile, and that can be very grating during development.
Anyone can easily contribute to the crates ecosystem and post their libraries and programs to crates.io. Unfortunately, there is no real way to know what's production-quality and what's a toy. You can try to rely on the number of downloads, the names of the contributors, etc., but there is no system that tells you what other Rustaceans think of a crate. For instance, I tried using an HTTP client with few dependencies (because the most downloaded option, reqwest, has a lot of dependencies), but I found that (a) the library issued a read(2) syscall for each byte it downloaded, and (b) it did not always return the full payload over HTTPS. There was no way I could tell any of that from just looking at the crates.io page.
Compile times are really a big pain point: Rust is even slower than C++, at least when the C++ build takes advantage of incremental compilation, incremental linking, binary libraries, and experimental modules, and doesn't go crazy with templates.
I am also looking forward to the incremental compilation improvements.
I still think that until cargo actually supports binary dependencies, the experience will not be as fast as it could be.
In our continuous integration system at work, I've enabled sccache with the S3 backend. This reduced the compile times from 20-25 minutes (tests and release build) to 8-9 minutes. Still longer than I like, but it's possible to ease the pain somewhat.
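(For anyone wanting to reproduce this: the setup is essentially pointing Cargo at sccache via the `RUSTC_WRAPPER=sccache` environment variable and giving sccache the S3 details through its `SCCACHE_BUCKET` and `SCCACHE_REGION` environment variables; the exact bucket settings will of course vary by setup.)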
The dependency thing worries me to a fair degree. Amazon has (or at least had when I was there) a build tool fairly similar to cargo. Libraries and software were imported into the underlying repositories with a version associated. You put that in your list of dependencies and voila, when you built your code, everything got neatly combined and compiled as needed.
One routine source of pain was when one of your upstream dependencies changed its own dependencies. That happened quite regularly. All was fine, unless you actually had two packages that depended on different versions of the same library.
You could work around it by pinning the version of the dependency, but of course that's risky. You don't know if you're exposing yourself to bugs, because you're making software run with a dependency version it hasn't been tested against.
Pretty much every build file I ever saw at Amazon had at least a half dozen dependencies pinned. Every now and then you'd find you were getting a new dependency conflict, and that things had become an impossible knot to untangle. All you could do was unpin _everything_ and then figure out all the version pins you needed again from scratch.
I swear I would lose at least one to two days a quarter doing that. The development-focused teams would spend way more than that fixing conflicts.
I started out with Rust just last weekend. I put a couple of dependencies in Cargo.toml and was stunned when it pulled in over 400 dependencies, a number of which I'd expect to have seen in the stdlib, not left to the vagaries of random people's implementations.
For native Rust libraries this is a solved problem. Cargo finds one common compatible version of each library that satisfies the requirements of all dependencies, and only when that isn't possible does it allow more than one copy of the library (and everything is strongly namespaced, so there's no conflict).
And it has a lockfile, so your dependencies won't update if you don't want them to.
The only problem is C dependencies like OpenSSL that do break their API, and don't support multiple copies per executable, so there's nothing that Rust/Cargo can do about it (you as a user can avoid them and choose native Rust libs where possible).
In Cargo you can have different libraries use different versions of the same dependency, and as long as those different versions don't interact you're fine. What this means is that if library A depends on Bv1, and C depends on Bv2, then as long as A doesn't expose something from Bv1 that you try to use with an API in C that expects Bv2, you're good.
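Here's a self-contained sketch of why that last case fails, with two modules standing in for the two compiled copies of a hypothetical crate `b`:

// The structs are textually identical, but they are distinct types,
// just like `Token` from b v1 and `Token` from b v2 would be.
mod b_v1 { pub struct Token(pub u32); }
mod b_v2 { pub struct Token(pub u32); }

fn takes_v2(_t: b_v2::Token) {}

fn main() {
    let t = b_v1::Token(7);
    // takes_v2(t); // error[E0308]: expected `b_v2::Token`, found `b_v1::Token`
    takes_v2(b_v2::Token(7)); // same type: fine
    let _ = t.0;
}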
You’ll see more of that over the next few months. The roadmap process for Rust kicks off with an end-of-the-year call for blog posts about what you want to see in Rust the next year. That invariably brings up areas where Rust could be improved; that’s kind of the point! We’re beta-testing the last improvements for the year now, so looking at last year’s posts isn’t going to give you an accurate picture; a lot has happened in the last year!
That said, I can give you at least one thing where Rust needs to, and will be, improving over the next months: the async story is finally settling into place, but hasn’t settled yet. Async/await is coming and will improve things so so much. Right now things are a bit boiler-platey, and you can’t always write what would be equivalently idiomatic to the sync code. See here: http://aturon.github.io/2018/04/24/async-borrowing/
Well, that's mostly because there are almost no ugly parts in Rust. Sure, you can find minor annoyances like these:
- Type inference pretty much being killed by method calls. For instance, code like this won't work:
fn x() -> Vec<i32> {
    let mut x = (0..3).collect();
    x.sort(); // calling any method of Vec
    x // cannot infer that `x` is `Vec<i32>` because a method was called
}
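The workaround is simply to annotate the binding: `let mut x: Vec<i32> = (0..3).collect();`.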
- Turbofish syntax is ugly. For instance, in Rust you say `std::mem::size_of::<T>()`. It would be nice if you could replace `::<` with `<`.
- Negative modulo follows C semantics. This means `-2 % 12` is `-2`, not `10`. There is a sane implementation in the `mod_euc` method, but it's not the default.
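To make that concrete:

assert_eq!(-2 % 12, -2); // default `%`: the sign follows the dividend
// (-2i32).mod_euc(12) == 10 // the Euclidean version mentioned above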
- Lexical lifetimes reject code that should be valid, necessitating weird workarounds. NLL will fix that one.
- Trait object types used to be written using just the trait name, like `Display`. There is a new, clearer syntax, `dyn Display`, but for backwards-compatibility reasons the old syntax is still accepted.
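Both spellings denote the same type:

use std::fmt::Display;

fn demo() {
    let _old: Box<Display> = Box::new(5);     // bare trait name, still accepted
    let _new: Box<dyn Display> = Box::new(5); // the clearer `dyn` form
}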
- Macros (including procedural derives) cannot access private items in a crate, requiring workarounds like publicly exporting otherwise-private items and marking them with the `#[doc(hidden)]` attribute to hide them from documentation.
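The workaround pattern looks roughly like this (the module name here is hypothetical, but it's essentially what serde does):

// Publicly re-export internals for macro-generated code to use,
// while hiding them from rustdoc.
#[doc(hidden)]
pub mod __private {
    pub fn helper_for_generated_code() {}
}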
- Trait version incompatibility. That one is weird, but essentially it's possible for a program to end up with two versions of the same library. A library can, say, have a function that requires an argument implementing the `Serialize` trait from serde 0.9. If you implement the `Serialize` trait from serde 1.0 instead, you will get an error message complaining that `Serialize` is not implemented, despite you having implemented it, just in the wrong version of the library.
- Missing higher kinded polymorphism.
- Missing const generics.
- The compiler is slow. Like, really slow.
- The language is hard to learn because of many complicated features like ownership and borrow checking. That said, I think those features are a good thing, and I miss them in other programming languages, but they are problematic when you are first learning.
But really, there is much more I would complain about in other programming languages, so it's not that bad.
> This book digs into all the awful details that are necessary to understand in order to write correct Unsafe Rust programs. Due to the nature of this problem, it may lead to unleashing untold horrors that shatter your psyche into a billion infinitesimal fragments of despair.
It's surprising how many things in that book don't have anything to do with unsafe per se. It's just that, when you can't rely on the inference system to "do the right thing", you now have to actually think about subtype variance and the order that destructors run in, instead of letting the compiler worry about all that bookkeeping.
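To illustrate just the destructor-order point (a minimal sketch, nothing unsafe in it):

struct Loud(&'static str);

impl Drop for Loud {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _a = Loud("a");
    let _b = Loud("b");
    // Locals drop in reverse declaration order:
    // prints "dropping b", then "dropping a".
}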
In addition to Steve's comment, I'll add one more thing.
Try writing some Rust; it will surprise you in a few ways. These are not warts (though there are some), but rather features of the language. For some, these features (move-by-default, single-mutable-reference, strongly-typed errors) will be a struggle to deal with initially.
It's a language that restricts you in many ways, and this is startling for many. It was for me for sure.
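A minimal sketch of two of those restrictions in action:

fn main() {
    let s = String::from("hi");
    let t = s;             // move-by-default: `s` is no longer usable
    // println!("{}", s);  // error[E0382]: use of moved value: `s`

    let mut v = vec![1, 2, 3];
    {
        let r = &mut v;      // only one live mutable reference at a time
        // let r2 = &mut v;  // error[E0499]: cannot borrow `v` mutably twice
        r.push(4);
    }
    println!("{} {:?}", t, v); // prints: hi [1, 2, 3, 4]
}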
Last time I tried was a year ago, so things might have changed, but back then I think the lack of GUI libraries, and of libraries in general, was the ugliest part of the Rust ecosystem. Language-wise it's very friendly, but once you want to interact with the outside world you often end up having to wrap existing C or C++ libraries with FFI, and that's not very fun. Especially not C++ libraries: C++ doesn't have a standard ABI and can't be called straight from Rust, so you first have to wrap the C++ library with `extern "C"` and then call that C interface from Rust FFI. Also, making sure the C++ lib and Rust are compiled in a binary-compatible way on all platforms is quite a challenge.
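The Rust side of such a wrapper ends up looking something like this (`add_i32` is a hypothetical function exported by the C shim; it won't link without the actual library, of course):

extern "C" {
    // Declared in the hand-written C wrapper around the C++ library.
    fn add_i32(a: i32, b: i32) -> i32;
}

fn main() {
    // The compiler cannot verify the foreign signature, so the call is unsafe.
    let sum = unsafe { add_i32(2, 3) };
    println!("{}", sum);
}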
Good IDE support, with autocomplete and integrated debugging, was also missing back then, but I believe that should have improved?
I’d say C++ is mostly ugly (it’s old), whereas everything in Rust makes sense. I would advise you to check out Erlang, Swift, and Go (in addition to Rust), which are, in my opinion, that kind of language.
Can you point me to a resource with details about this? I'm relearning C++ and a comparative evaluation of a language feature like that would be very useful.