Makefile hacks: print the value of any variable (2010) (melski.net)
90 points by Tomte on April 15, 2017 | 25 comments


Eric (who wrote TFA) and I worked together at Electric Cloud on GNU make stuff. He took over from me writing a column called "Ask Mr. Make".

The columns I wrote are all available here: http://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-arti...

Or, if you like, there's a book version: https://www.nostarch.com/gnumake


I started with your first article, which begins with "There aren't any debuggers for Make". Actually Rocky's remake existed in June 2004 already. Bad, very bad...

    commit 60f9a6ab18f12802c79a977d1b6b65b106355c10
    Author: R. Bernstein <rocky@gnu.org>
    Date:   Sun Apr 4 01:38:36 2004 +0000

    Initial revision


Hmm. "Bad, very bad..." seems like an exaggeration. I wrote that article in June 2004. Although the initial commit for remake (which I covered in 2007) was in April 2004 the initial release was... at the same time as my article.

My article was published on June 27, 2004: https://web-beta.archive.org/web/20080202093023/http://www.c...

remake preparing for first release on June 12, 2004: https://github.com/rocky/remake/commit/f9ad86874dae33ea89485...


OT but make (GNU make specifically) is an amazing piece of software.

It seems every build tool since then has intentionally disregarded the file-based dependency graph that is the elegance of make. Crazy.

(There are modern Blaze-clones which continue the spirit, but with a steeper learning-curve.)


Make has some serious shortcomings though. Some that I can think of: it relies on timestamps instead of content hashes; it doesn't rebuild things when e.g. compiler flags change; its approach to header dependencies in C/C++ is clumsy; it needs to parse the Makefile each time (which for large projects takes non-negligible time); and it doesn't have an integration with inotify (which means it needs to stat() every file to check its timestamp).


> it relies on timestamps instead of content hashes

If it relied upon content hashes, then make would have to hash the files each time it ran (taking time---checking the timestamp is faster). For a large project, this might be excessive.

> it doesn't rebuild things when e.g. compiler flags change

Just make the Makefile a dependency of each target. Problem solved.
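
For example, a minimal sketch (the pattern rule and flags are just placeholders):

    # every object is rebuilt if the Makefile itself changes
    CFLAGS = -O2 -Wall

    %.o: %.c Makefile
            $(CC) $(CFLAGS) -c $< -o $@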

> it needs to parse the Makefile each time

Um ... of course. The only way around this is for make to compile the Makefile into some easier-to-read format or some database-type thing (perhaps a .makefile?), and it would need to track changes to the Makefile to update that database (well, make does dependency tracking, so that shouldn't be too hard).

Just a question---do you have a project where reading the Makefile takes a non-negligible time?

> it doesn't have an integration with inotify

inotify is a) Linux only (make, and specifically GNU Make, runs on nearly everything) and b) it's an API, not a service that can be queried. Doing this implies that make will have to always be running checking all files for a project. Which project? All projects that have a makefile on my system? After a certain number of files being tracked, I'm certain that checking "inotify" (a daemon perhaps?) is the same as checking "stat()" for file metadata.


These are solvable problems, though it's easy to see why make did what it did.

> hash the files each time it ran

Bazel & co relies on file hashes, but only checks then when the timestamp changes.

Downside: now you have to keep hashes as metadata somewhere.

> it needs to parse the Makefile each time

Many build tools run a long-running process either in the foreground (SBT, new Maven) or in the background (Gradle).

Downside: Now you have to run a long-running process...

> it doesn't have an integration with inotify

It could.

Downside: Not universal, uses memory, more complex.


> Just a question---do you have a project where reading the Makefile takes a non-negligible time?

I have a project where reading the Makefile (and makefiles `include`d by it) takes about 12 seconds. The little profiling that I've done leads me to believe that most of it is spent in the garbage collector. Regardless of whether that is true, for Makefiles that take a long time to "parse", the time isn't spent in parsing per se; it's spent managing the data structures that parsing creates (or spent evaluating expensive things like $(shell) that, because of lazy evaluation, end up getting evaluated many times; but that's a smell of a poorly written Makefile).
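
To illustrate the lazy-evaluation point (the variable name here is made up): a recursively expanded variable re-runs its $(shell) on every reference, while := runs it exactly once at parse time:

    # '=' is lazy: git runs again on every single reference to $(GIT_REV)
    GIT_REV = $(shell git rev-parse HEAD)

    # ':=' is eager: git runs once, when the Makefile is read
    GIT_REV := $(shell git rev-parse HEAD)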

> Um ... of course. The only way around this is for make to compile the Makefile into some easier-to-read format or some database-type thing (perhaps a .makefile?)

That is what ninja was designed for.


> If it relied upon content hashes, then make would have to hash the files each time it ran (taking time---checking the timestamp is faster). For a large project, this might be excessive.

You can combine timestamps with content hashes. Or if you have inotify integration, you only need to process a file when it changes.

> Just make the Makefile a dependency of each target. Problem solved.

You can override variables when invoking make without changing the Makefile. Problem not solved. Of course there are some hacks to add dependencies on variable values. The majority of Makefiles in the wild don't do this, requiring you to do 'make clean; make' if you change flags. Defaults matter.
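
One such hack looks roughly like this (a sketch of the usual stamp-file trick, not something from the article): the stamp file is rewritten only when the flags differ, so objects rebuild exactly when $(CFLAGS) changes:

    CFLAGS ?= -O2 -Wall

    # cflags.stamp's recipe runs on every invocation (because of the empty
    # "force" rule), but the file is only rewritten when the flags differ
    cflags.stamp: force
            @echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@
    force:

    %.o: %.c cflags.stamp
            $(CC) $(CFLAGS) -c $< -o $@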

> Just a question---do you have a project where reading the Makefile takes a non-negligible time?

At the moment, no.

But just to give you an idea of how it could become a problem: a simple C++ file including <iostream> leads to a 25KB .d file with 200 dependencies (generated by clang++ -MD). For a project with 1000 files, I made a 25MB file with one target and 20000 dependencies on the same file. Make took one second to process that. That is about the simplest Makefile to read. Throw in some includes, multiple Makefiles, variable expansions, and external shell invocations and that time will go up.
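
For context, the usual auto-dependency setup (a rough sketch; the directory layout is made up) is what generates and then pulls all those .d files back in on every run:

    SRCS := $(wildcard src/*.cpp)
    OBJS := $(SRCS:.cpp=.o)

    %.o: %.cpp
            clang++ -MD -MP -c $< -o $@

    # one generated fragment per source, each listing every header it includes
    -include $(OBJS:.o=.d)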

> inotify is a) Linux only (make, and specifically GNU Make, runs on nearly everything)

I should have said inotify or a similar system (BSD has kqueue, I imagine Windows has something similar).

> b) it's an API, not a service that can be queried. Doing this implies that make will have to always be running checking all files for a project. Which project?

The one I'm working on. I'll start a daemon when I'm working on a project. As long as we're dreaming, it would be nice to have a filesystem that integrates Merkle trees and exposes an API you could query to see if anything has changed in a subdirectory.

Apple kludged something together with their File System Events API [1] that provides similar functionality. It provides an API to query what has changed in a directory subtree since a previous invocation. No need for a persistent process. They use it for their Time Machine backup software.

> After a certain number of files being tracked, I'm certain that checking "inotify" (a daemon perhaps?) is the same as checking "stat()" for file metadata.

No. With inotify the work is proportional to the number of changed files. With stat() you need to check every file, so the work is proportional to the total number of files in your project.

As I said elsewhere in this thread, most of the design decisions of Make make sense given its history. That doesn't mean they are still optimal today.

[1] https://developer.apple.com/library/content/documentation/Da...


> it relies on timestamps instead of content hashes

Which means it can be faster. If you need to write a file but have the build system recognize when the new one is the same as the old, this can be accomplished with a simple `sponge`-like script that only writes the file if the new version differs from the old. (I like to call the script write-ifchanged)
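
Something along these lines (my own rough sketch of such a script):

    #!/bin/sh
    # write-ifchanged: read new content on stdin, write it to $1 only if it
    # differs, so the mtime (and make's out-of-date check) is untouched otherwise
    tmp=$(mktemp) || exit 1
    cat > "$tmp"
    if cmp -s "$tmp" "$1"; then
        rm -f "$tmp"
    else
        mv "$tmp" "$1"
    fi
Used from a recipe as e.g. `some-generator | ./write-ifchanged output.h` (the generator is whatever produces the file).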

> it doesn't rebuild things when e.g. compiler flags change,

If you don't tell it to, no it doesn't. You can easily combine the above write-ifchanged with the technique described in the article to declare dependencies on variable values.

> its approach to header dependencies in C/C++ is clumsy

Sure, but not clumsier than other build systems.

> it needs to parse the Makefile each time (which for large projects takes non-negligible time)

Truth.

> it doesn't have an integration with inotify (which means it needs to stat() every file to check its timestamp).

Which means that it works with NFS, doesn't have race conditions involving renaming directories, ...


>Which means it can be faster. If you need to write a file but have the build system recognize when the new one is the same as the old, this can be accomplished with a simple `sponge`-like script that only writes the file if the new version differs from the old. (I like to call the script write-ifchanged)

> If you don't tell it to, no it doesn't. You can easily combine the above write-ifchanged with the technique described in the article to declare dependencies on variable values.

You can make Make do a lot of things with enough hacking, but the defaults matter. The majority of projects that use make don't bother. Meaning you need a 'make clean; make' if you change compiler flags.

> Which means that it works with NFS, doesn't have race conditions involving renaming directories, ...

It has plenty of race conditions without that: if you change a header file in the middle of a make run, it will happily continue without warning.

A lot of Make's design decisions make sense given its history. Given its ubiquity it's also the best choice for a lot of projects. That doesn't mean it's perfect.


That's true if you limit yourself to, say, GNU make -- but there are GNU-make-compatible alternatives like Electric Make, part of ElectricAccelerator (http://electric-cloud.com/products/electricaccelerator), that add features like ledger, to trigger rebuilds when compiler flags change; filesystem monitoring for truly accurate dependency detection; and parse avoidance, to avoid reparsing the makefile on every run; as well as a long list of other enhancements.

Disclaimer: I'm the author of TFA and Chief Architect for Electric Make.


https://ninja-build.org/ addresses many of these issues while staying mostly compatible with the spirit of make (simple file-based dependency rules).


I hesitate to divert the topic, but I'm working at a startup that has a solution to three of these shortcomings. We're not publicly available yet, but we are interested in anyone who'd want to try us out.


Yep, I even prefer it in place of gulp/grunt for JS-based projects. Steeper initial learning curve? Maybe - but it's a good tradeoff for well-tested, simple build processes.


> Steeper initial learning curve?

I don't think so, I think there just aren't any good tutorials.


> It seems every build tool since then has intentionally disregarded the file-based dependency graph

How do you mean that?


Does Meson do file-based dependencies?


Does this count as a make hack? I have (or had, can't seem to find it right now) a script to generate a makefile from a list of URLs of files to be downloaded. In this case, it was for when I needed to download hundreds or thousands of files. The makefile just called curl to do the downloading. But the beauty of using make was that I could easily control the download concurrency with -j, and of course that it would skip already downloaded files. I think I may have even made it properly resume partial downloads by checking for the temporary target file and using its size to send the right range request.
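
The generated Makefile looked roughly like this (reconstructed from memory, so the URLs and flags are placeholders); one rule per file, and `make -j8` gives you eight parallel downloads for free:

    all: file1.tar.gz file2.tar.gz

    file1.tar.gz:
            curl -fL -o $@ https://example.com/file1.tar.gz

    file2.tar.gz:
            curl -fL -o $@ https://example.com/file2.tar.gz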

Anyway, it's been a few years, but I thought it was pretty clever at the time.


Yeah, I think it counts, though make was designed for general purpose jobs and not just compilation, so it gets a lot of credit for being a solid and general tool. Being able to resume jobs is awesome, and free concurrency is too. I've done the same thing and used make for batch downloads and batch image resizing and batch frame rendering, etc. etc. I did it enough times that I generalized my makefile so I could easily schedule resumeable parallel command line batches. I called it "mbatch" and was getting ready to publish it... but then I discovered gnu parallel. :P


I'm not a guru, but that sounds like the sort of thing wget is built to solve. I'm also not a make guru - that sounds like a nifty use.


You have this posted anywhere? I'd be interested in studying it.


Most folks don't use it as such, but make is a logic programming language, a la Prolog. It can use a long chain of translations without you telling it the sequence. For example, I could #include "foo.pdu.der.h" in a C file, and make figured out that it needed to find foo.pdu.xer, translate it from XER to DER as a "pdu" PDU, then hexlify it as a char array into foo.pdu.der.h.
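
In make terms that chain is just two pattern rules that make strings together on its own (a sketch; `xer2der` is a stand-in for whatever conversion tool was actually used):

    # XER -> DER, treating the input as a "pdu" PDU
    %.pdu.der: %.pdu.xer
            xer2der --pdu pdu $< > $@

    # DER -> C header with the bytes as a char array
    %.pdu.der.h: %.pdu.der
            xxd -i $< > $@
Asking for foo.pdu.der.h is enough; make works backwards through the rules to find foo.pdu.xer.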

It is a very flexible tool if you can get past the syntax (and always use --warn-undefined-variables).


I had a similar need, except I didn't know which variable I was looking for. I think I found this on Stack Overflow:

    .PHONY: variables
    variables :
        $(foreach v, $(.VARIABLES), $(info $(v) = $($(v))))
        @echo
which prints all variables and their respective values.


Here's a similar "hack", but it is not limited to GNU make:

  printvar:
          @echo $(VAR)
You can execute it like this:

  make printvar VAR='$(PWD)'



