Hacker News | KaeseEs's comments

There are many, many people who use C and rarely or never use C++. Most embedded work is done in C, the Linux and BSD kernels are in C, etc. There are a variety of reasons for this, with auditability and debuggability being two big ones. Code space is also a reason, although there are ways to make C++ mostly behave the way you want it to on all but the teensy-tiniest platforms (although the resulting code is sometimes a questionable upgrade over just writing the thing in C). A great number of language interpreters are also written in C rather than C++ (CPython being one of them) - easier bindings are one reason for this.

As another datum, where I went to school the intro to CS data structures and algorithms courses were taught in C++ and the operating systems course was taught in C. In my day to day work I use C and a tiny bit of assembly (more reading than writing).


Are there any articles out there detailing this? I've never heard this part of either story.



I don't know any data scientists, or even what one is really, but here's the EE explanation: the Fourier transform maps a signal in the time domain to the frequency domain, and vice versa. So if you have a sine wave in the time domain [say a 3 kHz tone: y = sin(2π·3000·t)], the Fourier transform would turn that into a pair of Dirac deltas (spikes that are infinitely thin but have an area of one) at f = ±3 kHz. Or, if you had a pulse, the Fourier transform would map that to a sinc [sinc(x) = sin(x)/x].

The Fourier transform is related to the Laplace transform, which maps signals from the time domain to the "s-domain", where locations are described with complex coordinates that capture both the frequencies of the signal's periodic components and any transients (starting conditions, in layman's terms).


Great analysis, although I'm curious how the idea of doing a bunch of 64-bit ops to accomplish byte arithmetic came about in the first place - was the function in question not written by a firmware guy?


I think this one goes back to PDP days and wasn't necessarily written to be the fastest possible implementation. The PDP-10 could do a 36×36-bit multiply into 72 bits. Not sure how the modulo performed, but there was a DIV instruction.


Down the rabbit hole says this came from HAKMEM (MIT AI Memo 239) in 1972!

http://www.inwap.com/pdp10/hbaker/hakmem/hacks.html#item167


Or the programmer was a firmware hacker, and she knew that RBIT is ARMv6T2, which IIRC wasn't available until the iPhone 3GS. (Not 100% certain, and I don't have my manuals handy)


... in which case she would have used a lookup table, unless she was a really old-school firmware hacker and still believed that you couldn’t justify 256B for the table. (FWIW, you’re right about ARMv6T2).


I am. 256 bytes pain me :) (I've worked on systems with that much RAM total)

Kidding aside, I'd probably not go for the lookup table unless the whole thing was necessary in an inner loop - the cache miss cost is high. And since it's in a function, it better not be in an inner loop :)


Surely there is an argument that if it's called infrequently enough that cache misses on the LUT are a problem, then it's also called so infrequently that its performance is irrelevant.


256B is quite large for sure, but what about going for 2 nibble lookups in a 16 byte table? Or a 2 bit swap and a 6 bit lookup in a 64B table? (current x86-64 CPUs typically have 64B L1 cache lines?)


8 words to an L1 line on the original iPhone ARM, so yes, you'd fit it into a smaller table. You'll still face the memory latency issue if you're using this outside a tight loop. (Cache is only 4-way set associative. But at least I & D cache are separate)

But really, if you spend that much time thinking about the performance, you really shouldn't have that abstracted into a function. Calling that costs cycles and blows out one entry in your return stack - which makes a more costly mispredict later on likely.

Why yes, yes I do miss fiddling around with low level details. :)


... don't C++ people always tell us that inlining is the single easiest optimization for a compiler to perform?


Yeah. Sure. Especially across libraries.

If you're talking to a dev with a background in firmware, I'd say you'd be hard pressed to find anybody who'll put a single asm instruction into a separate function if it's speed-critical.

Sure, the compiler might (and probably will) inline, but it means any change in your tool chain or a whim of the inline heuristic can cause serious performance regressions that are completely avoidable.

When it comes to performance, the embedded mindset will always be "belt and suspenders" ;)


"Being professional" is not about all agreeing to the same ideology, but consistent standards of behavior are the core of professionalism. If we cannot agree that screaming "SHUT THE FUCK UP!" and calling people brain-damaged and "worthless piece[s] of shit" (as Linus has done elsewhere; the specific example in the OP isn't as bad) is inconsonant with professionalism, the concept has lost all meaning. There are ways to express disagreement or that someone's actions are wrong without namecalling, screaming, profanity (which is not always problematic in many environments, but can get off the rails) or dressing them down publicly.

Linus isn't a drill sergeant and his poor imitation thereof when he power trips doesn't get his point across any better than just saying "X was a bad decision, don't do it again for reasons Y and Z".


Yeah, I personally don't really agree that name-calling and dressing people down publicly is really the most effective method of altering a team's behaviour for the greater good of a project. I guess I'm just thinking about the vast array of conflicting opinions and ideas around the world and how they might mesh together to form some kind of cohesive whole, in general.

So, like, I don't agree that the name-calling is effective. But, maybe, Linus does. Can we still work together even though we might be screaming at each other? Can we still make progress? If we can do that (without debilitating each other) I think that could be seen as a good thing.

Linus' point is that there is no universal definition of "professional" and there never will be. There's just a bunch of different humans walking around on this rock floating in space, each with their own minds and ideas about what's right and what makes the most sense (sometimes similar, often times different). With billions of us out there, it's probably impossible for us to all be on the same page at all times. But maybe we can be on the same page, for some things, some of the time and tolerate the other things where we're not on the same page in the interest of progress (however you want to define "progress"). I think that's more realistic, and I think that's all we can ever hope for. There are no "rails" when it comes to human behaviour, imho.


> Linus isn't a drill sergeant

That's right, he's actually a lieutenant in reserve of the FDF.

He got his start leading men by training recruits for the FDF, and his leadership style - "management by Perkele" - closely matches the one found all over Finland among other reserve leaders (Finland maintains conscription, and officer training is considered very good experience for leadership roles even outside the military). Not an imitation.


On the one hand, OCaml is a very interesting language and I welcome a good book about it in the vein of Real World Haskell. On the other hand, it is the height of treason for O'Reilly to make a book with a camel on the cover that isn't about Perl :v


They're different species of Camel. The Bactrian has two humps and the Dromedary has one (among other differences).


When someone/something dies, it's OK to reuse their jersey eventually.



Nice hand-picked rebuttal. Can I play too?

http://www.indeed.com/jobtrends?q=perl%2C+Java%2C+C%2B%2B%2C...

HMMMM indeed.


You're saying C++ is dying?!


I don't think that's an entirely fair take on the situation. There was a great series of comments on Russ Cox's blog in late 2009[1] by longtime Boost contributor and general brainiac Rivorous about the benefits of generic programming a la Stepanov and pervasive value semantics, which were noted but not engaged in any meaningful way.

[1]http://research.swtch.com/generic


So because Russ Cox didn't respond to some comments on his blog, the Go team is trying to shield users from generics because they think they're too complicated? I don't buy that. If you follow the go-nuts mailing list, the pros and cons of various generics implementations have been argued back and forth, and they have yet to find one that meets their criteria. I don't think they disagree that generics would be helpful; they just don't think the trade-offs of current implementations are worth the convenience they add.


>So because Russ Cox didn't respond to some comments on his blog, the Go team is trying to shield users from generics because they think they're too complicated?

Yes. The whole "we'll do it when we find the perfect way" is bollocks. Generics are a solved problem and engineering is about compromise.

It's just that the compromise in favour of generics is not the one they wanted to take. But the official excuse is more of a way to shut people up about it, than truthful commitment to finding the best way to add them in Go.


I'll give you that what you say about the Go team's motivations is entirely possible (though I don't believe it to be the case), but not responding to blog post comments shouldn't be indicative of anything.


Yes, I agree with your disclaimer re: blog post comments in general.

Though it might be, if not proof, at least indicative: he specifically asked for comments on that blog post. And according to the parent, those comments were well reasoned and came from an expert in the field -- they would at least deserve a reply.


Personally, I think that generics, in whatever form, are never going to happen in Go. The designers are smart enough that if they were really interested in them, they would've studied different approaches and come up with a reasonable design. I just think that having maps, slices and channels be polymorphic is enough for the designers and they have no use for something more complex.


> I mean when you get a rate that high, it seems like it must be just a relatively heavy element, and a metal, like Gold, and relatively unreactive (just carrying ions and such).

This is the opposite of the truth. Lithium is not dense at all (~0.53 g/cm^3), especially for a metal, and as a Group 1 metal it is extraordinarily reactive - if you cut lithium it will oxidize as you watch, and if you expose it to water it will explode into flaming chunks.

> I imagine they don't get to 100% because they just don't bother to heat it high enough to melt things in there with a ridiculously high melting point and risk creative copper fumes and whatnot (from some metals evaporating). i.e. it's still easy, they just don't want to.

No. Lithium has a very low melting point (roughly 180C/356F).

Furthermore, your contention that a process with an efficiency of 93% should be easy to bring to 100% is so off base I don't even know where to begin. Squeezing out the last few percentage points is the hardest part!


It would be easier (for me at least, but I suspect for others as well) to take Bitcoin seriously if the backbone of the Bitcoin economy weren't the Magic: The Gathering Online Exchange, which is apparently crippled by loads of less than 40 transactions/sec.


It seems a bit fallacious to equate a Dollars-to-Bitcoins exchange with the Bitcoin currency itself. I know it's easy to forget amid this speculative frenzy, but conversion into USD is not really Bitcoin's ultimate purpose. None of these attacks have had any effect on transactions within the Bitcoin network/economy itself.


Quoting the Mt. Gox homepage, which in turn claims to be quoting Wikipedia: "As of July 2011, Mt. Gox handles over 80% of all Bitcoin trade".

Mt. Gox is more than just a USD-to-bitcoin exchange.


No, actually, it isn't. You're quoting a piece of marketing material for chrissake. They handle 80% of all Bitcoin TRADE. As in Bitcoin-to-USD-and-vice-versa trades. Not Bitcoin-to-Bitcoin transactions.

You clearly don't have even a basic understanding of how Bitcoin itself works. It's a distributed, peer-to-peer system. That statement is analogous to saying "The Pirate Bay handles over 80% of all Bittorrent traffic." In reality, TPB handles exactly zero actual torrent traffic, and Gox is in no way involved in processing actual bitcoin blockchain transactions.

Like TPB, Gox is a convenient target for people trying to disrupt easy entry into the more robust distributed system it provides access to. But if you already have bitcoins and simply wish to spend them, Gox could get sucked into a black hole and it wouldn't affect you any more than a raid on TPB affects your in-progress torrent downloads.


Is there a limit on the depth of comments here? I can't seem to reply to cjh_'s adjacent post. Anyway, he's correct that it's possible to keep all your coins on a Gox-hosted wallet. I kind of forgot that people actually do that. It's a really terrible way to store your coins, even if you use a more stable/trustworthy service than theirs.

There are much better ways to utilize the convenience of a web wallet without handing all your coins over to some sketchy website, such as blockchain.info's wallet system.


Yes and No. You are right in that Mt.Gox doesn't handle 80% of all btc-btc trade.

However many use Mt.Gox as their wallet so the TPB analogy doesn't quite hold that far, as Mt.Gox being taken down would mean these people wouldn't have access to their BTC funds stored in that wallet.

I should have mentioned this in my original post, but I was sloppy.


Transactions only cause lag on their trading engine. They are not the reason the whole website and online presence disappears!

When MtGox denies service, like today, this is because of a DDoS. They have said multiple times that they see 10+ Gbps (SYN flood and UDP packets) hitting their servers on a weekly basis!

https://bitcointalk.org/index.php?topic=166578.msg1737375#ms...


Well, that's coming. Give it a few months. There seem to be a number of projects in this space, but I guess it won't be legit for some people until an LMAX Disruptor is written to do BTC FOREX trading.


Organometallic reagents like dimethyl mercury have their own entry in the same series of blog posts as the OP, cf. http://pipeline.corante.com/archives/2009/10/23/things_i_won...

