Jplus wrote:*Essay alert*
Oh, I can beat that.
My warning is that a lot of what I say here probably verges on religious-war territory, but what the heck.
Things are not necessarily quoted in order or without editing.
Shivahn wrote:I'm getting by fine with SciTE, valgrind, gdb, makefiles and g++.
Jplus wrote:As build tools, SCons, Boost.Build, CMake and the autotools are all more convenient than the traditional Make, though in very different ways.
I can't reiterate these points enough. In my opinion, using plain Makefiles, especially for C and C++ programs, is something that you just shouldn't do. Either you wind up with a build system that is inaccurate (in the sense that it rebuilds too much or not enough), or it requires too much manual effort to get header dependencies right. Make plus a makedepend-style solution helps a bit, but there are still other problems, and there are tools that, in my opinion, leave Make in the dust -- to the point where I'll say "if you're using unassisted Make for C/C++ programs by free choice, you don't actually care about your tools."
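For concreteness, here's a minimal sketch of the makedepend-style fix in a plain Makefile, using GCC's -MMD/-MP flags so header dependencies are generated automatically as a side effect of compiling (the layout and target name are invented):

```make
# Hypothetical layout: C sources in src/, building one binary 'app'.
SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:.c=.o)
CFLAGS += -MMD -MP     # write a .d dependency file next to each .o

app: $(OBJS)
	$(CC) -o $@ $(OBJS)

# Pull in the generated header dependencies (silently, if none exist yet).
-include $(OBJS:.o=.d)
```

This keeps rebuilds accurate when headers change, but it's exactly the kind of boilerplate the tools below handle for you.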
Way better is either an entirely different build system (like SCons and Boost.Build, or the unmentioned Waf) that will deal with all of that for you, or a makefile generator (like CMake or the autotools -- though I'm not sure how well they handle header dependencies on a changing code base). By a "makefile generator" I mean that you run a program (cmake/ccmake/etc., or configure) which produces your makefiles, and then you run make. CMake has a major advantage here in that it will generate more than just makefiles for *nix-like systems -- it can also produce things like Visual Studio projects for your Windows users, "for free" (I believe). I'm pretty sure a developer would have to go well out of their way and put in a lot of effort to get an MSVC-based build using the autotools, but with CMake I think you'd have to go out of your way to *not* get one (assuming all your C/C++ code is portable).
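To illustrate the generator workflow, here's a minimal, hypothetical CMakeLists.txt (project name and paths are invented):

```cmake
# Hypothetical CMakeLists.txt for a small C++ project.
cmake_minimum_required(VERSION 3.10)
project(myapp CXX)

add_executable(myapp src/main.cpp src/util.cpp)
target_include_directories(myapp PRIVATE include)
```

From there, something like `mkdir build && cd build && cmake .. && make` gets you a Unix build, while selecting a Visual Studio generator (`cmake -G "Visual Studio ..."`) gets your Windows users a solution file from the same script; configuration variables can be set on the command line, e.g. `cmake -DCMAKE_BUILD_TYPE=Release ..`.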
The autotools (autoconf+automake) have the big benefit that people know exactly what to do when they untar a project and see the autotools files. I know that I go through the 'configure && make && make install' sequence, I know that I can pass '--prefix' to 'configure', I know that if 'configure' can't find libraries it will usually tell me the flag to pass, etc. However, my impression of those tools from the developer's side is... let's just say "less flattering". From what I can tell, they're pretty arcane. But I don't have first-hand experience using the autotools, so I can't speak fairly or intelligently about that.
I have similar things to say about CMake. When building stuff, I actually like CMake even more than the autotools. It's got a wonderful mechanism for sort-of-incremental configuration -- you can configure, and when it hits a dependency it can't find, it'll stop so you can set it, and then it will pick up where it left off. It will also give you a nice "GUI" if you want (either ncurses-based or an actual GUI) that lists all the configuration variables it pays attention to, where you can set them right there. (It's much better about this than autoconf, which seems hit-and-miss; there's more folk knowledge involved in getting things done there.) It's also a lot better at cross-platform stuff, as mentioned above. Finally, it very nicely gives you a progress indicator while building, which is something that almost nothing else does. However, it's also a bit funky to use; it's got its own syntax for configuration files and such. Again, I have little experience with it from the developer's side.
I can talk about SCons, since that's what I use for nearly all of my builds. SCons is a build system proper -- you run it and out comes the built file. It is actually pretty nice to use as the developer. Your build scripts are all written in Python, so if you know Python you already know how to do some things. (Of course, you still have to learn the API and how to use it.) It definitely does proper tracking of #include files (well, almost proper). It also treats both the executable it's running and (more usefully) the *command line* it uses to build a file as dependencies -- so if you change the build script in a way that affects some files and not others, it will rebuild just those files. IMO this is a killer feature. It also has a nice change-detection scheme: instead of looking at mtimes, it can take an MD5 hash of the file's contents. This means that if you change a file and then change it back, it won't rebuild (and if you change a source file in a way that doesn't affect the resulting object file, it won't relink); it also deals better with clock skew (very nice when working in a networked environment). What I use is a compromise between the extra time of hashing and its benefits: it looks at the mtime first, and only if that differs from the last build does it take the hash.
SCons isn't perfect though. First, its cross-platform support is decent but not great. You can specify some common things like "look in this directory for include files" in a compiler-independent way (is it -I or /I?), but not others like "compile with debugging information" or "turn on warnings", so you need an if statement in your build script to pick. (I would guess this is better than the autotools, but it's not as good as CMake or Boost.Build.) And from the perspective of someone who is just trying to build, it's unfortunately not as nice: there's no good built-in support for things like "look in this location for this library" or "build to this prefix". While it's not hard for a developer to add this, it means the out-of-the-box interface isn't as good for the user.
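To make that concrete, here's a sketch of what an SConstruct might look like, showing the mtime-then-hash decider and the kind of compiler-dependent if statement I mean (paths, flags, and the program name are invented; you'd run `scons` in that directory):

```python
# Hypothetical SConstruct (an SCons build script, which is plain Python).
env = Environment()

# Change detection: check the mtime first, and only hash the contents
# when the mtime differs from the last build.
env.Decider('MD5-timestamp')

# Compiler-independent: SCons renders this as -Iinclude or /Iinclude.
env.Append(CPPPATH=['include'])

# Compiler-dependent bits still need an explicit branch:
if env['CC'] == 'cl':                    # MSVC
    env.Append(CCFLAGS=['/Zi', '/W3'])
else:                                    # gcc/clang
    env.Append(CCFLAGS=['-g', '-Wall'])

env.Program('myapp', Glob('src/*.c'))
```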
Boost.Build I can't say much about either. It's got its own specialized language and is a build system in its own right, but it's more strongly cross-platform than SCons. One note: I think Boost may be trying to move to a CMake-based system; I'm not sure what that means for the future of Boost.Build. It's definitely not widely used; I build a lot of software and I don't think I've ever come across anything else that uses it.
Waf seems like a promising upstart, but who knows if it'll reach its potential. It's based off an old version of SCons, though the APIs now bear basically no resemblance to each other. Like SCons, you write your build scripts in Python. I think most of SCons's shortcomings are still there, but I don't have much experience with it, so I can't say for sure. It does have some nice things though, like a more consistent interface for the person building (though not at the autotools level), and it's the only system I know of besides CMake that gives you a progress indicator without a lot of work.
CMake and SCons seem to me like the primary choices (with Waf a strong contender); which one you pick seems to me like the decision between making life better for the developer or the person building the package.
Jplus wrote:Version control: software that keeps track of your changes, lets you maintain multiple versions in parallel, lets you easily track down specific changes and restore/discard them, provides a (central) backup of your project, lets you check who did what and when, etcetera. Examples that are currently popular include Subversion, Git and Mercurial. IDEs usually integrate support for some VCSs.
Apart from tools, many other things can make your job easier -- for example, websites (like GitHub).
Yes, definitely. Not using version control -- even for single-person projects -- is craziness to me. And the three options Jplus listed are definitely the three that I would recommend. If you're totally unfamiliar with the topic, I can toot my own horn a little and link to a version control overview I wrote for a course I taught. It tries to give a presentation that makes sense for Subversion, Git, and Mercurial. (There are really good overviews of the individual tools out there, but I didn't run across any that gave a good overall picture and explained how to use all three in a tool-neutral way.)
While the difference isn't entirely in favor of Git/Mercurial, there's enough to recommend either of those over Subversion that I would do so unless you have some specific reason not to. This is true tenfold if you're working on a project that you think might be a useful open-source project for the world; using a distributed tool like Git or Mercurial dramatically lowers the bar for contributors. And GitHub helps a lot there too.
The choice of Git vs. Mercurial is largely personal preference. Personally I like Git, and I have good, concrete reasons for this (I looove the index and git add --interactive), but the differences are pretty minor overall.
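For anyone who hasn't met the index (staging area): it's a buffer between your working tree and the next commit, so you can commit some of your edits while leaving others pending. A small self-contained demo, assuming git is installed (the file name and contents are made up):

```shell
# Set up a throwaway repository.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'line one' > notes.txt
git add notes.txt
git commit -qm 'first commit'

# Make two edits, but stage only the first one:
echo 'line two' >> notes.txt
git add notes.txt                 # this snapshot goes into the index
echo 'line three' >> notes.txt    # this edit stays unstaged

echo '--- staged (would be committed) ---'
git diff --cached --stat
echo '--- unstaged (would not) ---'
git diff --stat
```

`git add -p` (or the full `git add --interactive`) does the same thing at hunk granularity, which is what makes the index so pleasant for carving clean commits out of messy working trees.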
Jplus wrote:(Visual) profiling: optimizing compilers (like g++) usually have a built-in profiling tool, but some IDEs offer visual graphs that make the results easier to interpret. By analogy, some web browsers (such as Safari and Chrome) offer visual profiling information on your page loads.
Note that Valgrind, if you run it with valgrind --tool=callgrind, will give you a profile. (And it's much more useful than the gprof-based profiling that g++ has built in.) Run your program like that, then open the resulting callgrind.out.<pid> file with something like KCachegrind.
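The whole workflow is short; a usage sketch, assuming valgrind and KCachegrind are installed (./myapp stands in for your program):

```shell
# Run the program under Callgrind (expect a large slowdown):
valgrind --tool=callgrind ./myapp some args

# Valgrind writes callgrind.out.<pid>; browse it visually:
kcachegrind callgrind.out.*

# Or get a flat text summary without the GUI:
callgrind_annotate callgrind.out.*
```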
Related to profiling is coverage testing -- GCC + gcov + lcov does a pretty good job at this.
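A usage sketch of that coverage workflow, assuming gcc and (optionally) lcov are installed (file names are invented):

```shell
# Build with coverage instrumentation:
g++ --coverage -O0 -o myapp src/main.cpp src/util.cpp

# Run the program (and/or its test suite); this writes .gcda counter files:
./myapp

# Per-file text reports:
gcov src/main.cpp src/util.cpp

# Or a browsable HTML report via lcov:
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-html
```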
OK, so what else can I think of.
For unit testing, I like the gtest (or googletest) library. Boost also has a good one, but I think Google's is better; among other advantages, I consider its "death tests" a killer feature. (These allow you to write tests like "make sure this function asserts under this condition.")
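A sketch of what a death test looks like; this assumes gtest is installed, and sqrt_checked is an invented example function:

```cpp
// Sketch of a googletest "death test" (requires gtest to build).
#include <cassert>
#include <gtest/gtest.h>

double sqrt_checked(double x) {
    assert(x >= 0 && "negative input");
    return x;  // stand-in for the real computation
}

TEST(SqrtCheckedDeathTest, AssertsOnNegativeInput) {
    // Passes iff the statement dies (here: the assert fires) with
    // output matching the regex in the second argument.
    EXPECT_DEATH(sqrt_checked(-1.0), "negative input");
}
```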
Unfortunately I've never actually used this, but the Data Display Debugger (DDD) looks like it could be really cool. (Check out its screenshots.)
Make sure you know your shell. One sort-of-obscure feature of Bash-like shells is ctrl-R's reverse incremental search. Also make sure you know about ** patterns for recursive globbing (present in Bash 4, and in Zsh for quite a long time). Set up aliases for things that you find useful. (I should start a thread about stuff like that.) One thing I like is to have a function path_append that I can use like path_append PATH /something/cool (instead of export PATH=$PATH:/something/cool -- more annoying with longer variables like LD_LIBRARY_PATH, especially considering you lose a finger holding shift). It is occasionally very useful to write a one-off function at the shell for carrying out a pretty specialized but repetitive task -- it's nice to know how to do this.
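Here's one way such a path_append function might look -- this is just a sketch for Bash-like shells (the real one could differ), taking the variable's *name* as the first argument:

```shell
# path_append VARNAME DIR: append DIR to the colon-separated list in
# the variable named VARNAME, creating/exporting it if needed.
path_append () {
    eval "current=\$$1"
    if [ -z "$current" ]; then
        eval "export $1=\"$2\""
    else
        eval "export $1=\"\$current:$2\""
    fi
}

path_append PATH /something/cool
path_append LD_LIBRARY_PATH /opt/foo/lib   # no shift-heavy retyping
echo "$PATH"
```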
Know about TAGS files and how to use them in your editor. They're still shit compared with a good "go to definition" from an IDE, but they're a lot better than just grepping for stuff.
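Generating the tags file is a one-liner; a usage sketch, assuming an Exuberant/Universal ctags is installed:

```shell
# vi-style 'tags' file for the whole tree:
ctags -R .

# Emacs-style 'TAGS' file:
ctags -e -R .

# In Vim: Ctrl-] jumps to the definition under the cursor, Ctrl-t goes back.
# In Emacs: M-. jumps to a definition, M-, returns.
```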
As a more obscure tool, check out the "delta debugging" tools that are out there. If you've ever tried to diagnose a problem by doing something like "comment out half of the code, then try again, then comment out a smaller region, etc.", these automate that task. They're really good at minimizing test cases. The overhead to use them is fairly high though; you have to write some Python code or a shell script that both runs the failing task and determines whether a given run fails, succeeds, or does something else. Only very occasionally useful, but when it is useful, it can be really, really useful.
For drawing graphs (of the Excel chart variety), check out matplotlib. It's a Python library that uses matlab-like syntax (function names, parameters, etc.) to draw very nice charts. For drawing graphs (of the nodes/edges variety), check out Graphviz dot and neato.
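A tiny dot input, just to show the flavor (the module names are invented; rendering requires Graphviz):

```shell
# Write a small dependency graph in dot syntax:
cat > modules.dot <<'EOF'
digraph deps {
    rankdir=LR;
    main -> parser;
    main -> codegen;
    parser -> lexer;
}
EOF

# Render it (requires graphviz):  dot -Tpng modules.dot -o modules.png
# 'neato' takes the same input but uses a spring-model layout instead.
cat modules.dot
```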
If you're programming in Java, check out the Omniscient Debugger. I don't do any Java, but if it's as useful as it sounds, it'd be amazing. Basically it records a trace of your program's execution. Then you can open the trace afterward in a "debugger" that lets you step forward and backward, set reverse breakpoints and watchpoints, see all the values a variable took on during execution, stuff like that. It also means that any nondeterminism is fixed -- if you can make a concurrency bug happen under the debugger, then you can actually effectively debug it. GDB 7 can sort of do something similar, which I forgot about but now have to try out. (I'm not sure if they're using the same mechanism or not; there are a couple of different ways you can do reverse debugging.)
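A sketch of what a session with GDB 7's process record looks like (these are real GDB commands, but the breakpoint and variable names are invented):

```shell
# Inside gdb, on a deterministic stretch of execution:
(gdb) break main
(gdb) run
(gdb) record              # start recording execution
(gdb) continue            # ...run forward until the bug manifests...
(gdb) reverse-step        # step backwards one line
(gdb) watch my_counter
(gdb) reverse-continue    # run backwards to where my_counter last changed
```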
Also make sure you know about GDB 7's Python support, which allows you to have (and write) nice pretty printers for STL containers and such.
Tools like nm can help diagnose linker errors, or just give you information about an executable.