This is something we deal with regularly in game development: tracking spikes and fixing them.
I’m not sure how much sense it makes to measure them in the context of video card reviews as the article alludes to.
In my experience frame time spikes are more often a result of software rather than hardware. It would make more sense in the context of game reviews I think.
An ECMAScript to C compiler. Sort of the inverse of Emscripten. The goal was to compile ECMAScript to LLVM IR but it's not quite there yet. It currently implements version 5.1 and nearly has full test262 coverage:
https://github.com/kindahl/descripten
This one could actually be very useful for selectively optimising JavaScript modules by transpiling to C, optimising it, then transpiling it back with Emscripten.
For that kind of work it doesn't need to be perfect. The output doesn't even need to compile first time. It's just a way to save time by starting with a scaffold and refactoring, rather than writing from scratch.
Not sure if I'll get around to forking it anytime soon, but I've bookmarked it for future reference, as I could see myself using it in the not-too-distant future.
I highly recommend this plugin! I find it really snappy on my projects.
I haven't had the opportunity to try it on any gigantic projects, but I have found configuration files for YCM in the Chromium source tree which suggests that it scales nicely (assuming it's usable on the Chromium tree).
I haven't tried it on anything but rather small projects in C (10-15 source files/modules), but I prefer vim and ycm to Xcode for coding. As a matter of fact, all the other code completion thingies I use seem a bit crappy in comparison, with the exception of bbautocomplete.
Time to install the new beta of vim, and check out ycm! :)
I was actually under the impression that Fortran was used simply because the experts (in this case in fluid dynamics) were familiar with it. It was the language they learned and used back at university. At least that's the impression I got from working in the field.
I've never heard anyone suggest we should use Fortran for performance reasons; instead, there was an ongoing movement to evolve the code, moving it to the OpenFOAM solver, which is written in C++.
This is just totally false. You can lay out your C arrays in exactly the same way that Fortran does, if you choose to. Or you can do something else, if that will give better performance. Tools are just tools; they don't determine what you can do with them. (I write matrix operations for a living, generally in C or Assembly; much of what I write is provably as fast as possible on the target hardware, so it could not possibly be faster if written in Fortran).
Seriously, there is a lot of judgement in that sentence.
"Having a life" or having a family is a matter of choice. I can choose not to have a family and work on my pet projects instead. I choose how I spend my time. If I sacrifice relationships in favor of pet projects, does that make me a lesser person? Could it possibly make me a better software developer?
Is it really unreasonable to imagine that there are many talented software developers in the "lifeless" set of people with pet projects? Perhaps the author is of the opinion that the set of people with pet projects has a higher ratio of talent than the full set. That is not to say there are no passionate, talented people without pet projects.
You can read a judgement into it if you want to, but it is actually the opposite. I think having (or not having) pet projects, or other life choices you make outside your professional hours, should not have such a direct impact on whether or not you are hired.
Of course you are free to make your choices any way you see fit and if you choose work over personal relationships that does not make you a lesser person.
So you can have your pet projects and another person could have their family (or both, or none!), and I think the only thing that should matter during the job interview is what they intend to do during their shifts and whether or not they are capable of doing that to the best of their abilities. Your free time is yours, not your employer's, and is none of your employer's business.
My point is that having pet projects might actually increase software development skills, in the same way that practicing any activity generally makes you better at it. People who're good at socializing, basketball, football or whatever typically invest time in it. The more you do it the better you get at it. Why shouldn't the same apply to software development?
A person who invests their free time in relationships or sports will probably get better at those things than a person who only invests in pet projects, but why shouldn't the person investing in pet projects get better at his/her activity?
I think it's unfair to say that the employer should ignore all skills achieved in your spare time just because other people choose to spend their time differently.
Good point. There's some psychology to be thought about here.
You need an element in the picture that is of a known scale. The couch is pretty good, but maybe include something on the wall so you can compare the two... a door frame is the best I can think of, since they are pretty standardised.
I don't know, I kind of like the sofa. Standing behind it you can take in the whole room, not just compare the screen with the speakers. Besides, this was the coolest photo I could buy. :)
Interesting read, but I don't get the rant about the controllers being so ugly for containing the model logic. It's essentially a matter of where you place the logic; the logic itself, whether in the controller or the model, will be very similar.
Agreed, Spaces in OS X has always appeared half-baked to me. Coming from a GNU/Linux environment, I found it surprising that it wasn't possible to send a window to a specific workspace using the keyboard only. I don't want to invoke the mouse for basic workspace usage.
For me, long compilation times put me out of "the zone". If compilation takes too long I'll start reading e-mail or browsing the web. By the time compilation is done my mind is elsewhere and I have to get into the problem-solving "zone" again.
This is very true; that's why I hate the Scala compiler and when working with Java I take special care to have a lean and mean compilation strategy.
For example, with Java I use manually written Rakefiles (I prefer them over Ant since I have more control), I make sure nothing compiles unless files have actually changed, and if the project is getting big I start separating functionality into multiple projects, producing multiple JARs as a result.
Then, since I'm using Emacs, I can start a build whenever I hit "Save" on a file. And in case of compilation errors, Emacs even highlights them for me.
You have to work on it a little and you lose time on the actual build process, but you can achieve a lean and mean setup (unless the compiler really sucks).
Of course, this is the advantage of an IDE - it takes care of annoying details for you; but then you have to put up with all the bloat that brings. And for humongous projects, your IDE will choke anyway, even if you have the latest state-of-the-art hardware; try loading the Firefox codebase in Eclipse CDT or in Visual Studio sometimes.
In the 'old days', compile time was a chance to print out your code on fanfold paper and do a top down thoughtful code review, refreshing the map of the whole project in your mind.
It's quite rare these days for developers to literally see their whole program's code at once, and I think we're the worse for it.
In the 'old days', you must have been writing small programs. Printing a 1K-line program on paper might give you a nice perspective on your code. Printing a 10K-line program on paper is a waste of paper. Printing a 100K-line program on paper is ridiculous. Beyond that it just gets worse.
> In the 'old days', you must have been writing small programs.
On the contrary, 10,000 lines is still only 150 pages at 66 lines per page, and fanfold paper flips through easily.
I've read through several projects that took at least three reams of fanfold paper. I'm not saying that was fun. Fortunately one didn't generally have to read through the whole project, only the module one was working on.
I don't think it's useful to attempt to hold 150 pages of code in my head. Even with a long compile time, it's not possible to do more than just barely skim that much code.
I have printed code and reviewed it before. Sometimes it's useful for small programs or classes. I don't think it's useful to waste 150 (or more) pages to print an entire large program, though.
> I don't think it's useful to attempt to hold 150 pages of code in my head
Not sure it was about holding the actual code in one's head so much as the structure, flow, or shall we say, "plot".
The Chronicles of Thomas Covenant runs 4948 pages, Song of Ice and Fire is 4195 pages so far, and even LotR is 2144 pages.
This is one of the reasons I think there's a high correlation between great developers and developers who love history -- leveraging the ability to envision and hold a complex sequence of interlocking details in mind.
I feel like you're really reaching for a comparison here. You're not reading LotR in the 10 minutes it takes your code to compile. You're not even skimming it. The fact that your code is shorter than LotR isn't really meaningful. You also aren't reading 150 pages of code while your code compiles, and if all you want is to review the high-level flow, skimming the code is going to miss a lot.
I just can't see the value in printing 150 pages of code to barely skim it. Especially since those 150 pages will be increasingly out of date as time goes on. It's just such a waste of paper.
I'm not sure where you get your "great developers" and "developers who love history" link, either. Liking history has nothing to do with coding. Nor does it have anything to do with reading fantasy novels. And none of the history buffs I know are even coders. This is such a random tangent.
> "I'm not sure where you get your "great developers" and "developers who love history" link, either."
I provided credentials above. I've been managing hundreds of developers over the past decade and working as an independent developer for a decade before that. (And as a hobbyist developer the decade before that.)
> "Liking history has nothing to do with coding."
My experience hiring and managing hundreds of devs indicates the exact opposite. You may have found differently, but I will continue to focus on hiring people who find learning a rich tapestry of interconnected context fascinating, and preferring to hire those with history (or linguistics or other complex humanities) degrees with formal CS electives over those with pure CS degrees.
Sorry, I should have been more clear: Your personal anecdotes are not sufficient to establish any meaningful connection between these two. Moreover, this is entirely tangential and has nothing to do with what we were discussing.
> "Your personal anecdotes are not sufficient to establish any meaningful connection between these two."
“Now, I don't want to get off on a rant here ...”
By your definition, most science is "personal anecdotes" if building knowledge through systematic observation and study of hundreds of test cases is merely "personal anecdotes". (See "empirical research"[1].)
By contrast, you wrote "Liking history has nothing to do with coding" but you supplied nothing whatsoever to substantiate that statement, just as you supplied no basis for the statement that working closely with hundreds of developers is not sufficient to establish a connection. Ok, I've managed hundreds, and we've collaborated with thousands. How many developers would be enough to establish any meaningful connection?
You challenged "I'm not sure where you get your link" and I provided the basis for that link: observation of several hundred developers I have hired and employed. That's a reasonable number considering many studies use pools of just a couple dozen test subjects.
Throughout this thread, you have countered remarks I've based on experience with your own unsubstantiated assertions, some insulting in nature. For example, you wrote "In the 'old days', you must have been writing small programs" when the opposite was true.
Just because you "can't see the value" in a printed code review back in the 80's, or haven't noticed a link between coders with an appreciation for history and an exceptional ability to architect software systems, that doesn't obviate the need to provide at least as much foundation for your arguments as I've provided, particularly if you're trying to call me out for lack of basis.
You said this is "entirely tangential and has nothing to do with what we're discussing", yet this concept of holding the complex in mind is precisely what I opened with, and the theme I've stayed with.
In my experience – which I've scoped so readers can decide for themselves if it's relevant – reading well, taking time to contemplate and be thoughtful, remembering and understanding complexly woven tapestries of information (whether multi-volume literature or world history or computer code), and being able to read long code (or threads) and form a structure of it in one's mind, are all skills signaling good developers.
In my experience, a love of reading, stories, and history in particular, signals a desire for learning, a sense of proportion and place, and a respect for 'the shoulders of giants' likely to help a good developer become great.
“... of course, that's just my opinion. I could be wrong.”
I have serious doubts about the quality of your "systematic observation and study of hundreds of test cases". I don't believe at all that you've been systematically tracking which of your programmers are history buffs and how it correlates to performance. I think you're a history buff and so you assume that it must correlate meaningfully, just as programmers who are into music, or art, or whatever else do.
No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias. I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.
I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper. You can find that insulting if you want (it wasn't intended to be), but it's a fact. 10K lines is not a large program in modern terms (though I say it's too large to waste the paper on). It's definitely not large when your team involves multiple developers.
As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.
And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.
> I have serious doubts about the quality of your "systematic observation and study of hundreds of test cases".
You still haven't provided your basis for positions, and you're still littering your responses with assumptions or accusations.
> I don't believe at all that you've been systematically tracking which of your programmers are history buffs and how it correlates to performance.
You are mistaken. When, out of hundreds of hires, a handful match the criteria, it's easy to look back at data collected at hiring time, and find out if there are correlations (not causations).
> I think you're a history buff and so you assume that it must correlate meaningfully, just as programmers who are into music, or art, or whatever else do.
Wrong. History isn't a primary interest. I enjoy a variety of things more. I believe most are unrelated to ability to architect software.
> No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias.
See above. Your assumption was mistaken.
> I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.
Preferring to hire a history major with a minor in CS over a pure CS major typically results in better-rounded individuals, more capable of software architecture, dealing with clients, and collaborating with peers. Unfortunately, out of hundreds of hires, again, only a handful have fit that bill. But none of those who did had to be let go.
This was data collected at interview time, and in fact, on the first such individuals, I was as skeptical as you. Looking back at the collected hiring data about outperformers revealed this correlation. Since then, I've confirmed this curious correlation with several peers.
> I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper.
Again, you assume that I was printing the large programs. I was not. I was involved with software developed and used by scientific research organizations, hospitals, and universities. They would hand me a box of fan-fold paper (once even on a hand truck) and say, "Here's the code." I found learning the overall picture faster by reading through the stack than by scrolling the code on a CRT, particularly thanks to the ability to use a highlighter and Post-its.
Today, when confronted with a similar task, I prefer an iPad and a good reader with markup tools.
> As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.
Given today's technology, mostly agreed. However, research suggests tangible artifacts cement concepts more firmly in our organic brains than digital exposure alone.
> And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.
You're right, I didn't bring up history first; I brought up the concept of understanding "a map of the whole". To me, that equates closely with a long multifaceted narrative. You brought up the inability to hold a few pages of code in one's head, and I countered with literature and history. Both code and history are something like the Bayeux Tapestry, where the local is most useful when understood in context of the whole. Come to think of it, the word for our oldest long historical texts, and for what you do reviewing code on the screen, is the same: scroll.
My ravioli's done baking. Enjoyed the discussion. Cheers.
To be fair, you haven't provided any legitimate basis for your claims about the link between history buffs and software architects. I'm just saying that your claims are unfounded, and so I'm not making any claims that demand support. Your claim that interest in history is correlated closely with ability to architect software is not obvious prima facie, so it demands support to be believable.
I also don't think your handful is a very large sample size, regardless of what correlations you think you see. Honestly, I can't imagine how you're even attempting to gather this info. Are you just randomly asking people if they've read 1776 during the interview?
But no, hiring "hundreds of developers" is still not the same as actually measuring. I do not believe that you have files that track how history-oriented your developers are (make them take a survey?) vs how productive they are, so that you can find a proper correlation. A proper study of this might yield a strong correlation (though I doubt it), but it would require a lot more than casual observation during your hiring.
I did assume that you were the one printing the programs, because that's how I read your reply: "In the 'old days', compile time was a chance to print out your code on fanfold paper ...". If that was a misunderstanding, then I guess it changes the situation. Sure, if someone's already handed you a stack of printed code, why not look through it while compiling?