What Programming a Game in 48 Hours Taught Me About Programming Games (jeffwofford.com)
146 points by pertinhower on Aug 28, 2013 | 71 comments


>if I could transfer the pace of production from Ludum Dare into my normal work, I would complete a game like House of Shadows in less than 2 months

While the article lists many plausible reasons why the pace of work turned out to be faster on the shorter project, there is also one that is not mentioned but is perhaps the most important: complexity doesn't scale linearly. It's always faster to write the first 1000 lines of code from scratch than to add 1000 lines to a project that is already quite large, mostly because each newly added component has a growing chance of interacting with an ever-increasing number of older components (the number of potential pairwise interactions grows roughly quadratically with the number of components).


Another big difference is that at Ludum Dare, the programmer is also the product manager - is that feature going to take too long to implement? Well, good news: you don't have to write a report to the product manager explaining why it's going to take too long to implement and suggesting three alternatives. No, you just identify the problem, pick an alternative that achieves the overarching aim in a different manner, and implement it.

This effect is particularly insidious, because often the long slow complicated thing that the product manager wants is just the sort of interesting challenge that developers enjoy. They don't point out to the product manager that there is a simpler solution, because that would take away the shiny toy. LD does not have this problem because the developer knows that she can't allow herself to go off on a 3 week exploration of compiler theory, because otherwise her product won't be ready in time.


In my experience those slow, complicated things are often avoided by developers even when they would allow them to delve further into some obscure, interesting programming. A programmer's performance is also measured by their ability to achieve goals within the defined deadlines, and accepting a task just because it's interesting, even if it means a big delay, is not very common. At least in my working environment, which is very deadline-driven (as I assume most are).

But I am not dismissing your argument about the delay imposed by the extra layer between the project manager and the developer, especially if they have no programming experience.


Sorry in advance for this; I think I put my pedant pants on this morning by mistake.

I agree that complexity doesn't scale linearly. That's a very insightful thing to note. However, the fact that LOC correlates with "probability of a change causing unintended consequences" in many codebases is coincidental rather than causal. The underlying cause of the unintended consequences is almost always poor structure and/or poor planning.

Two words: cohesion and coupling.

Highly cohesive systems which are loosely coupled are far easier to extend and modify without fear of weird and unintended "spooky action at a distance" interactions.
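
To make that concrete, here is a minimal sketch (all names invented) of what loose coupling buys you: the game logic depends only on a narrow Renderer interface, so swapping or extending renderers can't ripple through the rest of the code.

  #include <cstdio>
  #include <memory>
  #include <utility>

  struct Renderer {                        // small, cohesive interface
    virtual void draw(const char* what) = 0;
    virtual ~Renderer() {}
  };

  struct ConsoleRenderer : Renderer {      // one interchangeable implementation
    void draw(const char* what) { std::printf("drawing %s\n", what); }
  };

  class Game {                             // never names a concrete renderer
  public:
    explicit Game(std::unique_ptr<Renderer> r) : renderer_(std::move(r)) {}
    void frame() { renderer_->draw("player"); }
  private:
    std::unique_ptr<Renderer> renderer_;
  };

  int main() {
    Game game(std::unique_ptr<Renderer>(new ConsoleRenderer));
    game.frame();
  }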


Beyond Aylw's very good point that loose coupling demands its own share of effort, I'd also like to add that it only delays the inevitable. Even if you manage to organize your modules in a perfect, non-leaking tree and to only reference bug-free, well-documented external libraries, you still have two fundamental problems:

1. Each module necessarily has access to functionality provided by all its "ancestors". In the best case, this number still grows logarithmically with project size, and so does the difficulty of adding or modifying code in a "leaf" module.

2. New libraries and language features are generally seen as a neverending bonanza. But before you can effectively modify a module you need to understand how it currently works, which necessarily involves understanding the external tools used in it (quirks and all). As the variety of these external tools increases with code size (at quite possibly a super-linear rate), the complexity of working with any given module deepens accordingly.

The Holy Grail of software development would of course be perfect layering: each layer completely masks its predecessor, and is no more complex. Someone who achieved it could indefinitely roll off as much useful code on day 500+ as they did on day 1. But I'm not aware of any practice or methodology that dares to even remotely promise such utopian benefits.


Again, I was being pedantic. I should really work on that.

I didn't explain clearly enough that I agree with your assessment that as a codebase grows the effort involved in maintaining/adding to/modifying it grows as well. However I disagreed with what you said was the main reason for this. I think that your "main reason" is definitely a contributing factor, but it's too complex to be boiled down like that.

I also agree that maintaining a properly structured project definitely takes considerable effort. Further I'd argue that when this is done well the effort is largely front-loaded. This means that as the program matures, the effort required to implement new features of a given complexity will largely taper off. I believe this is the best-case logarithmic growth you're talking about.

Obviously worse cases would be linear, quadratic, and so-on. I view the differences between these curves as essentially the amortized value gained by doing this kind of front-loaded architectural work. If you get it perfect, you gain the difference between log(n) and perhaps some kⁿ, depending on project size/scope.

Speaking of scope, this ideal is only achievable where project scope is relatively well defined and relatively static throughout the life of the project. Sure, you can make a spam filter play chess, but, well, you get the idea...

And yes, leaky abstractions, complicated tools, all of that definitely contributes in horribly awful, ugly, hard-to-predict ways. If you care to read my thoughts on this topic in particular, I think I summed them up nicely in my reply to this comment [1] on the short-lived topic "Why I hate Frameworks."

Boiling all of this down, I think all I'm really getting at is that complexity drives effort, but there are ways of minimizing the complexity of large codebases. Sure external forces often make this minimization effort obnoxiously difficult, but it's still possible. While we might disagree on what "often" really means in the prior sentence, I think otherwise we're wholly on the same page.

Anyway, I'll try to go change my pants...

1: https://news.ycombinator.com/item?id=6284038


While this is true, keeping a large project from getting ... err, decohesed and coupled, takes a lot of time as well. You will have fewer lines of code, but you'll definitely spend a lot more time than that first 1000 lines on a large project.


Hah, very interesting.

On the other hand, companies generally put their very best developers on new products and features - i.e. writing those first 1000 lines - while the mediocre devs get to extend and maintain those products/features by copy-paste or what have you.


That's because those first 1000 lines of code contain the project architecture and style which will heavily influence the next 10000 lines.


I have never heard of a place where "mediocre devs" get siloed off as a normal way of operating a business. If you have under-performing engineers you should change your hiring and think about mentoring or exits for those you have already.


It's more common in enterprise-y settings, since you don't have to pay them as much. Hire a few senior people to do the heavy lifting and stick all the maintenance programming stuff on the others.


Do they know they are considered mediocre? I think I might change my career if I ended up like that.


That's because it's more fun to work on that new 1000 lines than having to dredge through old code to add the 1000 lines.


The linked http://strlen.com/java-style-classes-in-c was both horrifying and tantalizing. I'd expected a preprocessor that generates appropriate .hpp/.cpp, but instead I learned about yet another corner of C++ I hadn't exactly induced from experience.


Have a look at juce_amalgamated.cpp in the JUCE library for a real-world example of this. The author of this Java-style classes article has taken one extreme view, and I have taken another: no code, inline or otherwise, belongs in a header. Just declarations all the way.
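
In other words, something like this tiny sketch (invented names), with every function body banished to the .cpp:

  // widget.h -- declarations only
  class Widget {
  public:
    void draw();          // no body here, ever
  private:
    int frames;
  };

  // widget.cpp
  #include "widget.h"
  void Widget::draw() { ++frames; }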

It's remarkable that modifying C++ headers is such a chore that it comes up as a reason for getting something done in a tiny fraction of the time for a roughly comparable task.


It's enough of a chore that entire products exist to help. I rather like "Visual Assist X".


>It's remarkable that modifying C++ headers is such a chore that it comes up as a reason for getting something done in a tiny fraction of the time for a roughly comparable task.

Couldn't agree more. Though I don't find it all that painful - lots of editors/IDEs have a 'go to header file' shortcut that makes this take a couple seconds at worst.

But then I have repeatedly wondered why IDEs don't seem to have a way to synchronize your cpp changes to your header automatically. Rename function in .cpp -> automagically rename function in .h. That's all you really want a lot of the time.


It's not the editing of the header that's the problem, it's doing the edit correctly so that the program still builds. I call it include hell.

In addition changing a header file can at times trigger a lengthy build.

Visual Studio and ctags-based 'IDE'-style features allow you to go to definitions or declarations fairly easily.

Unless I'm mistaken, I think Eclipse CDT for C++ can change/refactor the file and header as you wish. [1] [2]

[1] http://help.eclipse.org/helios/topic/org.eclipse.cdt.doc.use...

[2] http://r2.ifs.hsr.ch/cdtrefactoring

I'm unaware of anything that can change function prototypes correctly, but maybe Eclipse CDT can do it.


>In addition changing a header file can at times trigger a lengthy build.

Well yeah, but that's true for doing it by hand too. If it's that slow, you won't have automatic builds on anyway, and you can usually cancel it if you need the CPU for some other reason (in which case, again, why do you have automatic builds on).

And even if it were only a 90% thing, that's a 90% savings. I mean, literally, if you change the name alone, change the name in the header. You get immediate notification that other locations might need changing if/when the build fails. Similarly for argument order / name (type will probably cause build failures).

But all those build failures would happen anyway if you changed it by hand, so it's not any different, just faster. Isn't faster the goal here?


I'm not making a distinction between manually editing and automated editing of a header file. Both are a PITA.

Changing the header name sounds like a reasonable suggestion, but I don't think it's a goer with large projects.

The problem with headers is that as soon as you are doing anything beyond the trivial, you can get into a big fight with the compiler, leading to the following sorts of problems:

http://stackoverflow.com/questions/12573816/what-is-an-undef...
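
For anyone who hasn't hit it, a minimal sketch of that failure mode (invented names): the declaration and the definition silently drift apart, and nothing complains until link time.

  // frob.h
  void frob(int x);        // declared with one signature...

  // frob.cpp
  void frob(long x) { }    // ...defined with another (a distinct overload, as far
                           // as the compiler cares), so every call to frob(int)
                           // fails with "undefined reference to 'frob(int)'"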

When these problems are C++ template related, you have to remember exactly what you changed and where, because even the Intel compiler isn't really going to tell you what's wrong.


No intrinsic limit; it's just that C++ is so incredibly hard to tool that many useful tools (say, for refactoring) will never get written for it.


In fact, I think the Visual Studio team once said that it's so excruciatingly hard to build C++ refactorings that won't destroy code in unhappy circumstances that they won't bother shipping such things.


I suppose clang is the great white hope for C++: https://www.youtube.com/watch?v=yuIOGfcOH0k

link taken from here: http://stackoverflow.com/a/13840863


New C++ code should be outlawed. Really, the language seems designed to make everything difficult, full of problems that just don't exist in modern programming languages: crazy syntax, use of type information during parsing, convoluted namespaces... there is a reason C++ code is twice as hard to write, let alone a decent compiler for it. Heck, for the longest time there was only one working portable parser available for licensing, and the licensing costs were north of $150K.

C++ has no future by design.


I'm not in a position to advise people on what they should or shouldn't do, I am ignorant of a great many things. Until I wrap my head around the following, I won't be writing C++ unless I have to:

http://en.cppreference.com/w/cpp/language/except_spec

The fact that this is deprecated in C++11 is my personal pet peeve.

I can't reason about the impact without putting it into practice, but given that exception-safe C++ can be hard to write and this is deprecated, does that mean it's going away in C++14, and what are the implications of that?

The throw() specifier (counterintuitively meaning 'should not throw anything') is everywhere! If it turns out that all the advocacy for using exceptions has resulted in code that uses deprecated conventions and requires maintenance and re-understanding, then I will be very, very disappointed.


C++ exceptions are not checked at compile-time like Java exceptions (excluding Java's unchecked RuntimeExceptions). The C++ throw() specifier is a run-time check that aborts your program if an exception not listed in the throw() specifier escapes the function.

C++'s design combines the worst of checked and unchecked exceptions and adds some additional run-time overhead for good measure. Writing exception-safe C++ code is essentially impractical for large C++ programs that include C libraries unless you write your own RAII classes for everything. But then your C++ code is littered with unreadable std::shared_ptr<whatever> everywhere. If C++11 had adopted some Rust-like shorthand syntax for std::shared_ptr and std::unique_ptr, it might actually be palatable.
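
For what it's worth, the kind of RAII wrapper the parent means looks like this; a minimal sketch around plain cstdio so it stays self-contained (names invented):

  #include <cstdio>
  #include <stdexcept>

  class File {
  public:
    File(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
      if (!f_) throw std::runtime_error("fopen failed");
    }
    ~File() { if (f_) std::fclose(f_); }  // runs on every exit path, including
                                          // stack unwinding after a throw
    std::FILE* get() const { return f_; }
  private:
    File(const File&);                    // non-copyable
    File& operator=(const File&);
    std::FILE* f_;
  };

Multiply that by every handle type in every C library you touch and the complaint becomes clear.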


"C++ exceptions are not checked at compile-time"

Are you sure about that? [1][2] This is what throw() was deprecated in favor of:

[1] http://en.cppreference.com/w/cpp/language/noexcept

[2] http://en.cppreference.com/w/cpp/language/noexcept_spec
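
As a quick self-contained sketch of the replacement (my example, not taken from those pages): noexcept declares the guarantee and, unlike throw(), can also be queried at compile time:

  #include <utility>

  struct Thing { int v; };

  void swap_things(Thing& a, Thing& b) noexcept {  // promises not to throw;
    std::swap(a.v, b.v);                           // std::terminate() if it lies
  }

  int main() {
    Thing x = {1}, y = {2};
    static_assert(noexcept(swap_things(x, y)), "must not throw");
    swap_things(x, y);
  }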

Unless I am mistaken, on 32-bit platforms exceptions incur a runtime overhead, but for 64-bit, zero-cost exceptions were developed, which only incur overhead if they are triggered [3].

[3] http://gcc.gnu.org/onlinedocs/gnat_ugn_unw/Exception-Handlin...

I think any calls to the C library can be handled as follows in an exception-safe manner, unless I misunderstand you:

EDIT: (I misunderstood you; you meant wrapping resources. What follows is nonsense, as a C library isn't going to throw any exceptions.)

  void library_function_wrapper()
  {
    try
    {
      library_function();
    }
    catch (...)
    {
    }
  }


I'm not familiar with C++11's noexcept specifier, but the throw() example from [1] shows how throwing a W object is caught at run-time instead of compile-time. I think compile-time exception checking is infeasible because C++ functions without a throw() specifier might throw any exception, so the caller can't actually check at compile-time what exceptions might be thrown.

  void f() throw(X, Y) 
  {
      int n = 0;
      if (n) throw X(); // OK
      if (n) throw Z(); // also OK
      throw W(); // will call std::unexpected() at run-time, not a compile-time error
  }
[1] http://en.cppreference.com/w/cpp/language/except_spec


Similarly, Eskil Steenberg talked about building tools for doing simple things quickly rather than for doing complicated things at all.

http://www.youtube.com/watch?v=f90R2taD1WQ

His main point is that building a game is no longer about what you can do, but rather what you can get done.


Does anyone have a reliable system for creating artificial deadlines?

There are no external constraints on my work. When one artificially arises, I get my best work done, quickly. It doesn't burn me out because a rest comes afterwards.

I have no idea how to replicate this urgency consistently.


This is the point behind Agile with its two- or three-week deadlines. You take a day, set your goal, break down the tasks for your sprint, and have a quick scrum every day to make certain you're on track.

The nice thing about it is that it stops you from getting distracted on a constant basis and you have something done at the end of your sprint.


Stuff like Beeminder or stickK lets you pledge money to achieve a goal, which creates a real deadline for you. Beeminder works better with regular tasks, but you can probably use it for one-offs too.


The Pomodoro Technique is a winner for quite a few of us.

Also, reading up on perfectionism might be interesting (hint: it often doesn't end with a perfect result, but it is always time-consuming). YMMV.


The drawback with Pomodoro in programming is flow - if it takes even 5 minutes to "ramp up" your brain back into the problem space, then you're throwing away 10 minutes every half hour (and that estimate is highly optimistic) - and you're doing that deliberately.

Much better to eliminate distractions, in my experience (selective site blocker, headphones with white noise or coffee noise or wordless trance, and an IDE that never stalls.)


>IDE that never stalls

Why I hate Eclipse


Yeah, I use pomodoro. I'm thinking more of productivity over a 1-3 day period. Maybe I just need better default habits, when I use pomodoro consistently I'm very productive.

I don't think I'm a perfectionist, and agree it is to be avoided.


I don't think there is a way to consistently improve your total output without side effects. I worry that this chase for productivity always comes with negative fallout, and people neglect it when pursuing 'more output'. Better output, perhaps? More output? Probably unsustainable.


You could try a Personal Kanban with a very low work-in-progress limit (maybe 3 or less?). This would help you create a rule to force yourself to finish one task before being able to start another, lest you go over your WIP limit.


Could you go into more detail on this? It sounds interesting. I think one problem I have is that I have too many projects and I'm not good at bringing just a small number into focus.


If you liked ActionScript but want to get back some performance, I would recommend checking out NME. It looks a lot like ActionScript but outputs C++ (among other targets) and can be compiled as usual. Not the best hand-tuned C++, but probably good enough for a lot of projects.


It's OpenFL now, NME being just a native (C++ & Neko) part of it.


As someone who works in another legendarily slow, computer-heavy artform (3D animation), this was a really fascinating read.

His point about metawork really hit home hard. I'm at the end of a 4-year project now, and I'd estimate 90% of what I'm doing is what he calls metawork.

The major takeaway I got from it was that I should try doing some 48-hour projects myself - or perhaps find some 48-hour film competitions to enter - and see what the results are.

And, of course, write them up as a blog entry.


>During Ludum Dare, I remained tightly interfaced with Flash. I was continually in the midst of the edit -> compile -> test -> edit loop. This was one of the largest reasons for the high pace of production.

I discovered this when I started building game ideas in JavaScript instead of Xcode. I'd have a skeleton pounded out in an hour or two, and I'd add features in minutes instead of hours. I took a list of things that I thought would take me the weekend, and I completed them in an evening.


"Do not read Hacker News".


Sound advice. I blocked it from my laptop. I still get much value from a daily browse, but it's not a time filler. It grows.


Can anyone tell me what "Wouter's dual-pass C++ class solution for game objects" is?


There's a link in the article:

http://strlen.com/java-style-classes-in-c

You write all the code for your classes inside header files and include them inside another class in main.cpp. That way you get around the "declare before use" rule using an uncommon feature of C++.
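
A minimal sketch of the trick (invented names): because inline member function bodies are only compiled once the enclosing class is complete, code in an earlier nested class can freely use a later one, with no forward declarations:

  #include <cstdio>

  class Game {
  public:
    struct Player {
      int health;
      Player() : health(100) {}
      void fight() {           // this body sees all of Game, so Monster is
        Monster m;             // usable even though it's defined below
        health -= m.damage();
      }
    };

    struct Monster {
      int damage() { return 7; }
    };
  };

  int main() {
    Game::Player p;
    p.fight();
    std::printf("health: %d\n", p.health);  // prints "health: 93"
  }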


>I could drag a bitmap into Flash to import it, then place it, position it, add filters, animate it, and attach the animations to code all in one tight motion

I want a tool where I can do this but in JavaScript on a canvas.


I haven't really had a chance to play around with it, but have you tried Adobe Edge Animate? http://html.adobe.com/edge/animate/


Flash can actually export to HTML5/JS for you, so you might try Flash itself.


> Intensity of focus. Almost all my waking hours were dedicated to programming during the 48 hours of the contest. I even took less sleep. This intensity of focus allowed me to maintain contact with the concepts and issues in the game so that I was able to remain productive without costly ramp-up and ramp-down times.

Makes me wonder if it would make more sense to cram a 40-hour workweek into 3 days instead of 5. I think it'd be more enjoyable as well.


I'd like the option to do that at times, but I'm not so sure I'd stick with that pace for good. I get a lot more pleasure out of evenings at home with my family than I do out of that intensity of focus.


Consider checking out Unity, which has a very rapid testing flow and should be even more productive than Flash!


Epic quote: >The C++ programmer is a deer sniffing the air for the scent of boots and gunpowder: everything’s an opportunity for gain; everything’s an opportunity for calamity.


>For the past several years I’ve spent most of my development time with C++11 in Xcode.

C++11 has been around since `past several years'!?


> C++11 has been around since `past several years'!?

Yes, but it used to be called C++0x and the official spec was still in the works. However, you could actually use most of the features in GCC and some other compilers years before the spec was out.


Does game programming require C++? Ocaml can be just as performant, and Clojure might also be a good candidate (high-level functional programming, potential to get strong performance).

I admit that, for Java, the gain in language expressiveness isn't worth taking the performance hit, because the former is so minimal. On the other hand, moving to a truly high-level language with half-decent performance (e.g. Ocaml, Clojure) might ameliorate the schedule slippages for which that industry is known.


Just to dispel a common myth: in general, Ocaml cannot be as performant as C++ today. As far as I'm aware, no Ocaml compiler performs auto-vectorization of loops, which means no SSE for you, leading to a 3-4x slowdown on last-generation CPUs and even more on future CPUs. Vectorization is a big deal if you want to maximize performance.
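
For concreteness, the kind of loop at stake (a sketch; __restrict is a common compiler extension, not standard C++):

  #include <cstddef>

  // g++ -O3 (which enables -ftree-vectorize) will typically turn this loop
  // into SSE/AVX packed multiplies; an OCaml loop runs one float at a time.
  void scale(float* __restrict dst, const float* __restrict src,
             float k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
      dst[i] = src[i] * k;
  }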

I do agree, however, that this doesn't make C/C++ necessary for game programming. That's probably why modern game engines let you write a great deal of the game logic in a scripting language.

Of course, most C++ code I see - even in language performance benchmarks - doesn't vectorize either. g++ isn't particularly good at it and most people don't know how to do it.


A 3-4x game perf difference from CPU code autovectorization sounds way high. From what I've seen you'd be pretty lucky to average 3x speedups on isolated vector-friendly loops with hand-rolled ASM, translating to much smaller whole-app speedups.


It's hard to write a game in a language without side effects, even if you don't care about performance.

Games are made in an iterative process: you change something, test, etc. You don't know if something is fun and looks good without testing it, and it's really hard to design for fun up front.

Without side effects, most of the programming time is spent maintaining long argument lists and passing them around to be able to add even very small features.

You want to play a sound when the player levels up? Pass the soundManager through this 7-function-deep call stack. Now you also want to shoot stars around the player on the occasion? Pass this ParticleEffect handle too. Oh, but the effects should vary depending on the region the player is in? Pass that information to that function too. And the time of day, while you're at it, so the stars can be blue at night and yellow by day.

In the end you give up and pass a WorldState variable to every function. I don't think pure functional programming is good for prototyping games.

I thought about this problem a little, and I think it could be solved by structuring the WorldState variable into a tree, then writing a script, run after the game is done, that looks inside each function for the parts of the WorldState it really needs and changes the signature accordingly, so the code is easy to read and functional once the experiments are over.
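
A rough sketch of where that ends up, rendered in C++ for concreteness (all names invented): every function's signature collapses into "takes the WorldState":

  #include <cstdio>

  struct AudioOut  { void play(const char* id)  { std::printf("sound: %s\n", id); } };
  struct Particles { void burst(const char* id) { std::printf("fx: %s\n", id); } };

  struct WorldState {      // the grab-bag every function ends up taking
    AudioOut  audio;
    Particles particles;
    int       hourOfDay;
  };

  void onLevelUp(WorldState& w) {
    w.audio.play("levelup");
    w.particles.burst(w.hourOfDay >= 18 ? "blue_stars" : "yellow_stars");
  }

  int main() {
    WorldState w;
    w.hourOfDay = 21;
    onLevelUp(w);
  }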


Just for the record, Clojure has a good system for managing mutable state when you need it, and of course it has Java interop, so there's nothing stopping you from mutating Java objects.

Edit: Here's a guy's experience of making a game in Clojure a year ago:

http://stackoverflow.com/questions/9755882/are-functional-pr...

http://clojurefun.wordpress.com/2012/09/03/ironclad-steam-le...


Thanks for the links, very interesting. I see the guy behind this had the same problem:

> Long parameter lists – I struggled a bit with this, and still don’t have a great solution.


While Java might not be as expressive as Clojure, you definitely take an even harder performance penalty using Clojure than using Java. And you can gain some pretty strong benefits from using pure Java these days, what with Java 8 having lambdas, streams (parallel and synchronous), and all the wonderful libraries out there that help with concurrency (RxJava, Quasar, etc.), among other things, such as a very nice wrapper over the OpenGL library (LWJGL is utterly fantastic). You do lose a nice matrix library (glm), though.

Of course, Clojure has wonderful concurrency as well, so you can ameliorate the cost of using Clojure by spinning up more threads. However, games are a tricky thing to parallelize, from what I know, especially as the only place you can actually make OpenGL calls is the thread the program was started from.

I wouldn't recommend Ocaml for games, though. Use Haskell or Go instead. Ocaml might be speedy, but it has a GIL, a la Python or Ruby. Also, as far as I've found, there are basically no binding libraries out there for opening windows or making OpenGL calls. And I might be misremembering, but the libraries that did allow you to use OpenGL used 1.x or 2.x versions, meaning you can't use a modern pipeline.


I'm part of the small but growing crowd that thinks JavaScript is a good way forward, at least for small indie developers and Ludum Dare entries.

There is nothing worse in LD than seeing the voting screen filled with Windows binaries you don't want to download.

Here's my entry this time, if anyone's interested ;)

http://www.ludumdare.com/compo/ludum-dare-27/?action=preview...


The AAA game industry is extremely risk-averse - if you want to use something, it either has to have been used successfully before or you need to make a demo that proves its viability.

For graphics or physics it's easy to make a demo - you render a spinning Buddha or simulate a hundred balls, show how much CPU and GPU it takes, and everyone can extrapolate this to the full game. I don't know what could be sufficient proof of Ocaml's viability other than a full game written in Ocaml.

I am not familiar with indies or mobile, but I imagine they are even less likely to take risks, as their chance of success is already pretty low and it would be rather reckless to add more unknowns from untested technology.


I looked at Clojure for doing this somewhat recently (about 8-10 months ago), and the main issue I ran into was the lack of bindings to some of the more necessary APIs for doing it these days (SDL, OpenGL, etc.). Some of them existed but were very badly documented/maintained, which put me off a bit, but I do think there are some definite things to be gained from using Clojure at least. (Of course you can use the JVM bindings directly, but that was starting to look very ugly as I played with doing so.)


While Clojure-esque bindings don't exist (that I know of), you can always just use direct calls to OpenGL functions via the LWJGL library. Even write a nice wrapper around them that could be open-sourced. Clojure has complete interop with Java libraries, remember :)

You can also check out the nice wrapper around the Processing library called quil: https://github.com/quil/quil


Last time I checked (a few years ago), Penumbra was a nice Clojure-esque OpenGL wrapper. But it seems it's abandoned. A shame; it was really nice, and even had a Clojure DSL for writing shaders.

https://github.com/ztellman/penumbra


Or try libGDX: http://libgdx.badlogicgames.com

It is a framework developed around LWJGL for the desktop and also has an Android backend. libGDX lets you use OpenGL without worrying about how OpenGL works.


It depends on which type of games you're talking about.

AAA game engines are still all about performance and tight memory budgets due to the nature of the platforms they're developing on - consoles and PCs. Hence the aversion to garbage collected languages, although higher level languages usually make it in the engines as scripting languages for game logic: UnrealScript for UDK, C# and UnityScript for Unity, Lua for CryEngine... The industry is traditionally oriented towards C++ and imperative / OO languages, but there's a lot of potential for functional languages in the "embedded scripting" area. In his 2013 QuakeCon keynote [1], John Carmack talks a bit about his experience with functional languages, and how, for example, Scheme could be an ideal candidate as an embedded scripting language.

As far as smaller teams are concerned, and especially indie development, there is a lot of potential for OCaml, Clojure and other functional languages. Many mobile and indie dev teams already use a heck of a lot of C#, Python, Java and other specialized tools like Haxe and ActionScript. But all these languages have well-liked and mature frameworks geared towards game development (XNA / MonoGame, PyGame, OpenFL, libgdx, ...), which is maybe what Clojure and OCaml are missing right now.

[1] http://www.youtube.com/watch?v=1PhArSujR_A


Have you seen the slides from Tim Sweeney's "Next Mainstream Programming Language: A Game Developer’s Perspective" talk? Sweeney worked on Epic Games' Unreal engine (C++) and discusses how functional programming could benefit game programmers. He doesn't mention OCaml by name, but he mentions Haskell a couple times.

http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-...

Lambda the Ultimate has more discussion of the talk, including comments from Tim Sweeney:

http://lambda-the-ultimate.org/node/1277



