
Troy Hunt is a Software Architect and Microsoft MVP for Developer Security. He blogs regularly about security principles in software development and is the author of the OWASP Top 10 for .NET developers series and a free eBook of the same name. Troy is also the creator of the recently released Automated Security Analyser for ASP.NET Websites. Troy is a DZone MVB, is not an employee of DZone and has posted 68 posts at DZone. You can read more from him at his website.

The 10 Commandments of Good Source Control Management


Ah source control, if there’s a more essential tool which indiscriminately spans programming languages without favour, I’m yet to see it. It’s an essential component of how so many of us work; the lifeblood of many development teams, if you like. So why do we often get it so wrong? Why are some of the really core fundamentals of version control systems often so poorly understood?

I boil it down to 10 practices – or “commandments” if you like – which often break down or are not properly understood to begin with. These are all relevant to version control products of all types and programming languages of all flavours. I’ll pick some examples from Subversion and .NET but they’re broadly applicable to other technologies.

1. Stop right now if you’re using VSS – just stop it!

It’s dead. Let it go. No really, it’s been on life support for years, taking its dying gasps as younger and fitter VCS tools have rocketed past it. And now it’s really seriously about to die as Microsoft finally pulls the plug next year (after several stays of execution).

In all fairness, VSS was a great tool. In 1995. It just simply got eclipsed by tools like Subversion then the distributed guys like Git and Mercurial. Microsoft has clearly signalled its intent to supersede it for many years now – the whole TFS thing wasn’t exactly an accident!

The point is that VSS is very broadly, extensively, almost unanimously despised due to a series of major shortcomings by today’s standards. Colloquially known as Microsoft’s source destruction system, it somehow manages to keep clinging on to life despite extensively documented glitches and the absence of functionality that is simply essential by today’s standards.

Visual Source Safe 2005 box

VSS = a great big steaming pile of shit - JUST SAY NO!

2. If it’s not in source control, it doesn’t exist

Repeat this mantra daily – “The only measure of progress is working code in source control”. Until your work makes an appearance in the one true source of code truth – the source control repository for the project – it simply doesn’t exist.

Sure, you’ve got it secreted away somewhere on your local machine but that’s not really doing anyone else any good now, is it? They can’t take your version, they can’t merge theirs, you can’t deploy it (unless you’re deploying it wrong) and you’re one SSD failure away from losing it all permanently.

Once you take the mindset of it not existing until it’s committed, a whole bunch of other good practices start to fall into place. You break tasks into smaller units so you can commit atomically. You integrate more frequently. You insure yourself against those pesky local hardware failures.

But more importantly (at least for your team lead), you show that you’re actually producing something. Declining burn down charts or ticked-off task lists are great, but what do they actually reconcile with? Unless they correlate with working code in source control, they mean zip.

3. Commit early, commit often and don’t spare the horses

Further to the previous point, the only way to avoid “ghost code” – that which only you can see on your local machine – is to get it into VCS early and often and don’t spare the horses. Addressing the issues from the previous point is one thing the early and often approach achieves, but here are a few others which can make a significant difference to the way you work:

  1. Every committed revision gives you a rollback position. If you screw up fundamentally (don’t lie, we all do!), are you rolling back one hour of changes or one week?
  2. The risk of a merge nightmare increases dramatically with time. Merging is never fun. Ever. When you’ve not committed code for days and you suddenly realise you’ve got 50 conflicts with other people's changes, you’re not going to be a happy camper.
  3. It forces you to isolate features into discrete units of work. Let’s say you’ve got a 3 man-day feature to build. Oftentimes people won’t commit until the end of that period because they’re trying to build the whole box and dice into one logical unit. Of course a task as large as this is inevitably comprised of smaller, discrete functions and committing frequently forces you to identify each of these, build them one by one and commit them to VCS.

When you work this way, your commit history inevitably starts to resemble a semi-regular pattern of multiple commits each work day. Of course it’s not always going to be a consistent pattern, there are times we stop and refactor or go through testing phases or any other manner of perfectly legitimate activities which interrupt the normal development cycle.

However, when I see an individual – and particularly an entire project – where I know we should be in a normal development cycle and there are entire days or even multiple days where nothing is happening, I get very worried. I’m worried because as per the previous point, no measurable work has been done but I’m also worried because it usually means something is wrong. Often development is happening in a very “boil the ocean” sort of way (i.e. trying to do everything at once) or absolutely nothing of value is happening at all because people are stuck on a problem. Either way, something is wrong and source control is waving a big red flag to let you know.
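The rhythm described above can be sketched end to end. Here’s a minimal illustration using Git (the file names and commit messages are invented for the demo; the same workflow applies to Subversion or any other VCS):

```shell
# Two small, atomic commits instead of one monolithic end-of-week drop.
set -e
cd "$(mktemp -d)"                         # throwaway repo for the demo
git init -q
git config user.email dev@example.com     # demo-only identity
git config user.name "Demo Dev"

echo "validate e-mail format" > SignupValidator.cs
git add SignupValidator.cs
git commit -qm "Add e-mail validation to the signup form"

echo "check password strength" >> SignupValidator.cs
git commit -qam "Enforce minimum password strength on signup"

# Each commit is now a rollback point and a small unit to merge against.
git log --oneline
```

Two commits in a day instead of one at the end of the week means two rollback positions and two chances for the team to integrate against your work.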

4. Always inspect your changes before committing

Committing code into source control is easy – too easy! (Makes you wonder why the previous point seems to be so hard.) Anyway, what you end up with is changes and files being committed with reckless abandon. “There’s a change somewhere beneath my project root – quick – get it committed!”

What happens is one (or both) of two things: Firstly, people inadvertently end up with a whole bunch of junk files in the repository. Someone sees a window like the one below, clicks “Select all” and bingo – the repository gets polluted with things like debug folders and other junk that shouldn’t be in there.

A commit window showing a lot of files that shouldn't be in source control

Or secondly, people commit files without checking what they’ve actually changed. This is really easy to do once you get into things like configuration or project definition files where there is a lot going on at once. It makes it really easy to inadvertently put things into the repository that simply weren’t intended to be committed and then of course they’re quite possibly taken down by other developers. Can you really remember everything you changed in that config file?

A commit window showing a web.config file with unknown changes

The solution is simple: you must inspect each change immediately before committing. This is easier than it sounds, honest. The whole “inadvertently committed file” thing can be largely mitigated by using the “ignore” feature many systems implement. You never want to commit the Thumbs.db file so just ignore it and be done with it. You also may not want to commit every file that has changed in each revision – so don’t!

As for changes within files, you’ve usually got a pretty nifty diff function in there somewhere. Why am I committing that Web.config file again?

A diff window showing inadvertent changes in the web.config file

Ah, I remember now, I wanted to decrease the maximum invalid password attempts from 5 down to 3. Oh, and I played around with a dummy login page which I definitely don’t want to put into the repository. This practice of pre-commit inspection also makes it much easier when you come to the next section…
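For the Git-inclined, the same pre-commit inspection looks something like this (the file names and values are made up to mirror the Web.config example above):

```shell
# Inspect exactly what is about to be committed before committing it.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com     # demo-only identity
git config user.name "Demo Dev"

printf 'maxInvalidPasswordAttempts="5"\n' > Web.config
git add Web.config
git commit -qm "Initial Web.config"

# One intended change plus an accidental leftover from local testing:
printf 'maxInvalidPasswordAttempts="3"\n' > Web.config
echo "dummy login page" > LoginTest.aspx

git status --porcelain                    # lists both files - look first!
git add Web.config                        # stage only the intended change
git diff --cached                         # final review of the exact diff
git commit -qm "Decrease maximum invalid password attempts from 5 to 3"
```

The dummy login page never gets staged, and the diff review catches anything unexpected inside the file that was.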

5. Remember the axe-murderer when writing commit messages

There’s an old adage (source unknown), along the lines of “Write every commit message like the next person who reads it is an axe-wielding maniac who knows where you live”. If I was that maniac and I’m delving through reams of your code trying to track down a bug and all I can understand from your commit message is “updated some codes”, look out, I’m coming after you!

The whole idea of commit messages is to explain why you committed the code. Every time you make any change to code, you’re doing it for a reason. Maybe something was broken. Maybe the customer didn’t like the colour scheme. Maybe you’re just tweaking the build configuration. Whatever it is, there’s a reason for it and you need to leave this behind you.

Why? Well there are a few different reasons and they differ depending on the context. For example, using a “blame” feature or other similar functionality which exposes who changed what and hopefully, why. I can’t remember what I was doing in the Web.config of this project 18 months ago or why I was mucking around with app settings, but because I left a decent commit message, it all becomes very simple:

A blame log with a descriptive log message

It’s a similar thing for looking at changes over time. Whether I want to see the entire history of a file, like below, or I just want to see what the team accomplished yesterday, having a descriptive paper trail of comments means it doesn’t take much more than a casual glance to get an idea of what’s going on.

A series of well formed commit messages

And finally, commit messages are absolutely invaluable when it comes to tracking down errors. For example, getting to the bottom of why the build is breaking in your continuous integration environment. Granted, my example is overtly obvious, but the point is that bringing this information to the surface can turn tricky problems into absolute no-brainers.

A descriptive commit message on a failing TeamCity build

With this in mind, here are some anti-patterns of good commit messages:

  1. Some shit.
  2. It works!
  3. fix some fucking errors
  4. fix
  5. Fixed a little bug...
  6. Updated
  7. typo
  8. Revision 1024!!

Ok, I picked these all out of the Stack Overflow question about What is the WORST commit message you have ever authored but the thing is that none of them are that dissimilar to many of the messages I’ve seen in the past. They tell you absolutely nothing about what has actually happened in the code; they’re junk messages.

One last thing about commit messages; subsequent commit messages from the same author should never be identical. The reason is simple: you’re only committing to source control because something has changed since the previous version. Your code is now in a different state to that previous version and if your commit message is accurate and complete, it logically cannot be identical. Besides, if it was identical (perhaps there’s a legitimate edge-case there somewhere), the log is now a bit of a mess to read as there’s no way to discern the difference between the two commits.
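Teams that want to enforce this can wire a check into the VCS itself. Here’s a sketch of the kind of rule a Git commit-msg hook might apply; the minimum length and the junk patterns are arbitrary choices for illustration:

```shell
# Reject commit messages that say nothing about what changed or why.
check_msg() {
  msg=$1
  [ ${#msg} -ge 10 ] || return 1              # too short to explain anything
  case $msg in
    fix|typo|Updated|"It works!") return 1 ;; # known junk patterns
  esac
  return 0
}

check_msg "fix" || echo "rejected: fix"
check_msg "Decrease maximum invalid password attempts from 5 to 3" && echo accepted
```

A real hook would read the message from the file Git passes as its first argument and exit non-zero to abort the commit.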

6. You must commit your own changes – you can’t delegate it

As weird as this sounds, it happens and I’ve seen it more than once, most recently just last week. What’s happening here is that the source control repository is being placed on a pedestal. For various reasons, the team is viewing it as this sanitised, pristine environment of perfect code. In order to maintain this holy state, code is only committed by a lead developer who carefully aggregates, reviews and (assumedly) tweaks and improves the code before it’s committed.

It’s pretty easy to observe this pattern from a distance. Very infrequent commits (perhaps weekly), only a single author out of a team with multiple developers and inevitably, conflict chaos if anyone else has gone near the project during that lengthy no-commit period. Very, very nasty stuff.

There are two major things wrong here: Firstly, source control is not meant to be this virginal, unmolested stash of pristine code; at least not throughout development cycles. It’s meant to be a place where the team integrates frequently, rolls back when things go wrong and generally comes together around a single common base. It doesn’t have to be perfect throughout this process, it only has to (try to) achieve that state at release points in the application lifecycle.

The other problem – and this is the one that really blows me away – is that from the developer’s perspective, this model means you have no source control! It means no integration with code from peers, no rollback, no blame log, no nothing! You’re just sitting there in your little silo writing code and waiting to hand it off to the boss at some arbitrary point in the future.

Don’t do this. Ever.

7. Versioning your database isn’t optional

This is one of those ones that everyone knows they should be doing but very often they just file it away in the “too hard” basket. The problem you’ve got is that many (most?) applications simply won’t run without their database. If you’re not versioning the database, what you end up with is an incomplete picture of the application which in practice is rendered entirely useless.

Most VCS systems work by simply versioning files on the file system. That’s just fine for your typical app files like HTML pages, images, CSS, project configuration files and anything else that sits on the file system in nice discrete little units. Problem is that’s not quite the way relational databases work. Instead, you end up with these big old data and log files which encompass a whole bunch of different objects and data. This is pretty messy stuff when it comes to version control.

SQL Source Control box

What changes the proposition of database versioning these days is the accessibility of tools like the very excellent SQL Source Control from Red Gate. I wrote about this in detail last year in the post about Rocking your SQL Source Control world with Red Gate so I won’t delve into the detail again; suffice to say that database versioning is now easy!

Honestly, if you’re not versioning your databases by now you’re carrying a lot of risk in your development for no good reason. You have no single source of truth, no rollback position and no easy collaboration with the team when you make changes. Life is just better with the database in source control :)
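If a dedicated tool isn’t an option, one common low-tech alternative is to version the schema as ordered migration scripts. The layout and file names below are purely illustrative:

```shell
# Each schema change becomes a numbered script committed to VCS;
# replaying them in order rebuilds any environment's database.
set -e
cd "$(mktemp -d)"
mkdir -p db/migrations
cat > db/migrations/001_create_users.sql <<'SQL'
CREATE TABLE Users (Id INT PRIMARY KEY, Email NVARCHAR(256) NOT NULL);
SQL
cat > db/migrations/002_add_last_login.sql <<'SQL'
ALTER TABLE Users ADD LastLogin DATETIME NULL;
SQL
ls db/migrations
```

The scripts live alongside the application code, so the database change ships in the same commit as the code that depends on it.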

8. Compilation output does not belong in source control

Here’s an easy way of thinking about it: nothing that is automatically generated as a result of building your project should be in source control. For the .NET folks, this means pretty much everything in the “bin” and “obj” folders which will usually be .dll and .pdb files.

Why? Because if you do this, your co-workers will hate you. It means that every time they pull down a change from VCS they’re overwriting their own compiled output with yours. This is both a merge nightmare (you simply can’t do it), plus it may break things until they next recompile. And then once they do recompile and recommit, the whole damn problem just gets repeated in the opposite direction and this time you’re on the receiving end. Kind of serves you right, but this is not where we want to be.

Of course the other problem is that it’s just wasteful. It’s wasted on the source control machine disk, it’s wasted in bandwidth and additional latency every time you need to send it across the network and it’s sure as hell a waste of your time every time you’ve got to deal with the inevitable conflicts that this practice produces.

So we’re back to the “ignore” patterns mentioned earlier on. Once paths such as “bin” and “obj” are set to ignore, everything gets really, really simple. Do it once, commit the rule and everybody is happy.

In fact I’ve even gone so far as to write pre-commit hooks that execute on the VCS server just so this sort of content never makes it into source control to begin with. Sure, it can be a little obtrusive getting your hand slapped by VCS but, well, it only happens when you deserve it! Besides, I’d far rather put the inconvenience back on the perpetrator rather than pass it on to the entire team by causing everyone to have conflicts when they next update.
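As a sketch of what such a server-side check might do, here’s the core filter expressed as a plain shell function. A real Subversion pre-commit hook would feed it the output of `svnlook changed`; the paths below are invented:

```shell
# Fail the commit if any path looks like compiled output.
reject_build_output() {
  while read -r path; do
    case $path in
      */bin/*|*/obj/*|*.dll|*.pdb)
        echo "Compiled output in commit: $path" >&2
        return 1 ;;
    esac
  done
  return 0
}

printf 'src/Site.css\nsrc/bin/App.dll\n' | reject_build_output || echo "commit blocked"
```

The perpetrator gets an immediate, specific error rather than the whole team inheriting conflicts on their next update.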

9. Nobody else cares about your personal user settings

To be honest, I think that quite often people aren’t even aware they’re committing their own personal settings into source control. Here’s what the problem is: many tools will produce artefacts which manage your own personal, local configurations. They’re only intended to be for you and they’ll usually be different to everyone else's. If you put them into VCS, suddenly you’re all overwriting each other’s personal settings. This is not good.

Here’s an example of a typical .NET app:

Typical .NET app showing user setting files

The giveaway should be the extensions and type descriptions but in case it’s not immediately clear, the .ReSharper.user file and the .suo (Solution User Options) file are both, well, yours. They’re nobody else's.

Here’s why: Let’s take a look inside the ReSharper file:

    <string />
    <integer />
      <setting name="SolutionAnalysisEnabled">True</setting>
      <File id="F985644D-6F99-43AB-93F5-C1569A66B0A7/f:Web.config" caret="1121" fromTop="26" />
      <File id="F985644D-6F99-43AB-93F5-C1569A66B0A7/f:Site.Master.cs" caret="0" fromTop="0" />

In this example, the fact that I enabled solution analysis is recorded in the user file. That’s fine by me, I like it, other people don’t. Normally because they’ve got an aging, bargain basement PC, but I digress. The point is that this is my setting and I shouldn’t be forcing it upon everyone else. It’s just the same with the recent files node; just because I recently opened these files doesn’t mean it should go into someone else’s ReSharper history.

Amusing sidenote: the general incompetence of VSS means ignoring .ReSharper.user files is a bit of a problem.

It’s a similar story with the .suo file. Whilst there’s not much point looking inside it (no pretty XML here, it’s all binary), the file records things like the state of the solution explorer, publishing settings and other things that you don’t want to go forcing on other people.

So we’re back to simply ignoring these patterns again. At least if you’re not running VSS, that is.
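Expressed as patterns, the rule is short. These are shell globs for illustration; the same patterns go straight into a .gitignore file or an svn:ignore property:

```shell
# Is this file somebody's personal settings rather than shared code?
is_user_setting() {
  case $1 in
    *.ReSharper.user|*.suo|*.user) return 0 ;;
    *) return 1 ;;
  esac
}

is_user_setting "Web.Application.4.0.ReSharper.user" && echo "ignore it"
is_user_setting "Web.Application.sln" || echo "commit it"
```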

10. Dependencies need a home too

Works on my machine badge

This might be the last of the Ten Commandments but it’s a really, really important one. When an app has external dependencies which are required for it to successfully build and run, get them into source control! The problem people tend to have is that they get everything behaving real nice in their own little environment with their own settings and their own local dependencies, then they commit everything into source control, walk away and think things are cool. And they are, at least until someone else who doesn’t have the same local dependencies available pulls it down and everything fails catastrophically.

I was reminded of this myself today when I pulled an old project out of source control and tried to build it:

Failing build due to missing NUnit dependencies

I’d worked on the assumption that NUnit would always be there on the machine but this time that wasn’t the case. Fortunately the very brilliant NuGet bailed me out quickly, but it’s not always that easy and it does always take some fiddling when you start discovering that dependencies are missing. In some cases, they’re not going to be publicly available and it can be downright painful trying to track them down.

I had this happen just recently where I pulled down a project from source control, went to run it and discovered there was a missing assembly located in a path that began with “c:\Program Files…”. I spent literally hours tracking down the last guy who worked on this (who of course was on the other side of the world), getting the assembly, putting it in a “Libraries” folder in the project and actually getting it into VCS so the next poor sod who comes across the project doesn’t go through the same pain.

Of course the other reason this is very important is that if you’re working in any sort of continuous integration environment, your build server isn’t going to have these libraries installed. Or at least you shouldn’t be dependent on it. Doug Rathbone made a good point about this recently when he wrote about Third party tools live in your source control. It’s not always possible (and we had some good banter to that effect), but it’s usually a pretty easy proposition.

So do everyone a favour and make sure that everything required for your app to actually build and run is in VCS from day 1.
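A minimal sketch of the “Libraries” folder approach mentioned above; the folder and file names are illustrative, and binary dependencies would go in exactly the same place:

```shell
# Keep third-party dependencies next to the code so a fresh checkout
# builds anywhere, including the CI server.
set -e
cd "$(mktemp -d)"
mkdir -p src Libraries
echo "placeholder for the real assembly" > Libraries/nunit.framework.dll
# The project file references ..\Libraries\nunit.framework.dll by
# relative path, so nothing depends on any one machine's setup.
ls Libraries
```

Whether you check the binaries in directly or declare them for a package manager like NuGet to restore, the point is the same: the repository alone must be enough to build the app.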


None of these things are hard. Honestly, they’re really very basic: commit early and often, know what you’re committing and that it should actually be in VCS, explain your commits and make sure you do it yourself, don’t forget the databases and don’t forget the dependencies. But please do forget VSS :)

Published at DZone with permission of Troy Hunt, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Balázs Bessenyei replied on Sun, 2012/03/04 - 4:58am

I think you have missed some. 

Link VCS commits with the Bug/Issue tracking system. All commits should be linked to a concrete issue in the bug tracking system, with only a very few exceptions.
At the very least, the commit comment should contain the issue number. This helps to track changes committed in multiple steps.

If multiple teams are working with the same code base, the comment should contain the team id as well, even if the issue id contains team-specific information.

Andrew Thompson replied on Sun, 2012/03/04 - 12:54pm

Nice article. The only comment I would make is that item 10 is controversial in the Java land. Personally, I'd rather track down spring-security-3.1.0.RELEASE.jar as defined in a pom file any day over trying to figure out what hibernate.jar (or worse hibernatePatch2341.jar) is as a binary in svn. -Andy

Balázs Bessenyei replied on Sun, 2012/03/04 - 2:32pm in response to: Andrew Thompson

We have solved this by checking in the 3rd party dependencies as well, then used that as the dependency repository. In our case Ivy, but I'm reasonably certain that Maven can be configured to work with this setup as well.

Jan Philipp replied on Sun, 2012/03/04 - 7:25pm in response to: Balázs Bessenyei

But why break the standard and default Maven way? If you need to archive jars (especially in corporate environments) then do it with your own Maven repository. Actually, this will often already exist, used as a proxy repo and for additional non-Maven-compatible stuff.

For me: 3rd party libs are binaries, too (note: in terms of library dependencies).

Finding binaries in the scm repo often means: something went wrong here. It's not a static link, but a static file. And it is redundant from a global point of view when used more than once.

Scot Mcphee replied on Mon, 2012/03/05 - 4:15am

The last point is wrong. "Dependencies" are compiler artifacts; point 10 contradicts point 8.

Dependency management is not a source control domain problem. Get a dependency management solution (Maven, Ivy, Gradle, there are many).


Mihai Dinca - P... replied on Mon, 2012/03/05 - 6:04am

The image of the article is so wrong :)

There is no VIIII in roman numerals, it's IX !

Rick Ross replied on Mon, 2012/03/05 - 7:23am in response to: Mihai Dinca - Panaitescu

You're absolutely correct, Mihai! I will raise that point with the team this morning. Someone apparently cannot count to X!


Mitch Pronschinske replied on Mon, 2012/03/05 - 12:37pm

That image has been replaced.

Balázs Bessenyei replied on Mon, 2012/03/05 - 5:19pm in response to: Jan Philipp

We have a requirement that every 3rd party dependency must be under version control. Downloading any dependency from an outside source, including public Maven repos, is not allowed.

So we have the repo committed in version control in the appropriate format, including the metadata. More precisely, that is the master repo. During the build and development this repo is accessible via a proxy to mask the inherent slowness of getting all dependencies directly via the version control.

Ray Hulha replied on Tue, 2012/03/06 - 3:06am

The timing for this list couldn't've been better.

I especially needed item nr. 8 for my boss !

 Good Job !

William Fields replied on Tue, 2012/03/06 - 10:10am

I've been using source code control in a team environment for over 15 years (started with VSS! and am now using TFS) and I wholeheartedly agree with most points, except with #3 "commit early and often".

My point of view is that I might have to work a week or more on certain functionality, and most of the source files I'm working on will be "broken" until I'm finished. Why would I check in these broken files and possibly cause other developers on my team to retrieve broken files? This might just be a procedural issue, but I know I would not want files from others that break the application for me!

Our normal procedure is to copy our locally modified source files up to individual dev folders on the server so they get backed up each night. I know this only gives me a "day by day" restore capability, and I do see the value of being able to easily revert to "a point in time on this particular day", but I don't see how to get around infecting other developers with my non-functioning files, unless I make a project branch just for my use.

Also, my development environment happens to use binary files for the source, so multiple checkout and merging changes is not possible. So branching a project is not very fun when I have to merge the changes back into the main.

Everyone seems to have the same recommendation (check in early/often), but what am I missing? Even with multiple checkouts and merging, it would seem to me that checking in code that doesn't work would cause everyone problems.

Bill Cernansky replied on Tue, 2012/03/06 - 1:02pm in response to: William Fields

One of the most common problems in SCM is constant backpressure from developers and managers who don't want to take the steps necessary to secure the source. "Merging is not very fun" is not a reasonable argument against working in an isolated project branch (sometimes called a task or developer branch), which is a secure and perfectly reasonable solution.

Sure, checking in early/often would break the source base if everybody's working directly in it, but that's not a good SCM model to begin with. Why does one dude's commit always have to put the entire build at risk? That's just bad practice. TFS has shelving, so you could shelve after each day's work, perhaps, which is a kludge but better than nothing.

Let's say that the work for a complicated task is going to take a month. The developer is not checking in uncompleted changes to avoid fouling the source base. It's good to care about the sanctity of the working code and/or the build. But if, after 90% complete, the developer is suddenly incapacitated and can't complete this for a long time (typical reason: bad car accident or even just plain quits), where is the work? How can the other devs pick up the slack? What happens if the dev's hard drive crashes, or worse, the SAN controlling backups of the home directories pukes? What if the backups all prove to be corrupt or never actually worked in the first place?

I've seen all these possible problems arise in 25 years as a developer and SCM professional. They happen.  Either the developer in question or someone else ends up having to do the entire thing over. Sometimes the dev effort for the changes is completely scrapped, because everything is lost and budget doesn't allow. If you find merging to be a pain, imagine the inconvenience of a complete do-over.

That's what you are missing. For pete's sake, at least shelve every night.

Lund Wolfe replied on Sat, 2012/03/10 - 6:43pm

I agree except for #10.  The dependencies should be declaratively defined in a file in source control, but the dependencies themselves should be stored in a company or public repository and pulled for the build, or at least stored in some lib folder on the CI server, so builds are done independently of individual developers and the build will break immediately if anything is hardcoded to the developer's environment.

I wouldn't commit until the code works and hopefully is cleaned up, but if the application is still in development, you may want to commit nonworking code frequently.  It should at least build successfully to verify that everyone is still coding to the expected interfaces.

@Ray - You might want to save each version of your production distribution artifact in your repository or a zip of the entire build that generates the artifact.  That should make you and your boss feel more secure.

 As mentioned, the bug # from the bug tracker should be included in the commit comment, as well as the information about what has changed and why.  All the developer progress and details should be in the bug and available to all individuals who create, track, and search historical bug information on the application.

Eddy Young replied on Mon, 2012/03/12 - 3:31pm

Reminds me of my own top five version control related DOs and DONTs.

Mark Unknown replied on Mon, 2012/03/19 - 6:34pm in response to: Andrew Thompson

Andrew, I would go further and say that 10 violates 8. Binaries should never go in a code repository.

Mark Unknown replied on Mon, 2012/03/19 - 6:43pm in response to: Balázs Bessenyei


You can have a local "remote" repository, one behind your firewall. In fact, this is what the Maven repo people want you to do – even if you don't have proxy issues or other requirements.

Having a requirement to commit binaries to a code repository is silly and wasteful. You can still meet the requirement by having the version of the dependency in an Artifact Repo and having the build file with the correct version of the artifact in it in the code repo (because it is code).

Mark Unknown replied on Mon, 2012/03/19 - 6:56pm in response to: William Fields

William, if you used VSS and are using TFS, you probably are using VS.NET. That means you need to commit even more often than someone using Eclipse. There have been so many times that I needed to roll back my local code and have been saved by Eclipse's local "versioning".

You might consider looking at Git or Mercurial. These typically allow you to have your own "local" versioning that you can then push to the main repo. Using folders to store code at any point is never good, unless it's an "output" – i.e. you publish binaries and source as part of a build.

Another thing to consider is breaking your code down into more modules/projects. Typically people put way too much in a single "project" and thus make it difficult to have good development processes. You should be using something like OSGi to help ensure that your modules stay small and independent. Of course, VS.NET projects will make both of these things more difficult.



Raphael Miranda replied on Fri, 2012/03/23 - 1:57pm

10 is plain wrong. 

Dependencies do not require tracking of modifications; the only thing that matters is their version (i.e. the name hibernate-4.0.1). Commit the version requirements definition in the form of Maven's pom.xml, Ivy's ivy.xml or similar.

Compiled binaries do not belong in source tracking repository.  

gopinath mr replied on Mon, 2012/03/26 - 12:12am

One extra step I would add to this to ensure we don't break the build is to create another workspace on your machine. After you check in from your "coding workspace", take the latest update in a "read-only build check workspace" and run the whole build. This ensures that if you missed adding new files, or missed checking in some updated files with a signature change etc., you get a compilation error. This is extremely useful if you are doing a lot of changes across different modules/folders for an enhancement.

Yaron Levy replied on Sun, 2012/06/10 - 10:29am

The last one's a bit iffy. Plenty of frameworks have systems where external dependencies don't need to be in source control and can be accessed via an external service. Take Maven in Java, for example.

charly clairmont replied on Wed, 2013/01/09 - 3:51am

Hi All,

A good, open source solution for versioning your database is neXtep.

You can manage the evolution of your database in an agile way: development and deployment.


Philippe Lhoste replied on Wed, 2013/01/09 - 4:42am

4. is easy to respect via peer review. We practise that at work, and I find it very useful. Not only does a fellow programmer take a look at your work and make useful remarks, but you are forced to inspect each and every change, line by line, so a debug println(), "temporarily" commented-out code, an unchanged file or a personal settings file cannot be committed by mistake (in theory at least).
If you have no colleague sitting next to you and transmitting files is impractical, your VCS can help: the latest Perforce version offers shelving, which allows a remote programmer to inspect your changes; Git allows you to commit to a branch that won't "pollute" the main one; etc.
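Even without a reviewer, the same line-by-line inspection can be sketched with git (file names and contents are made up): reading the staged diff before committing is exactly where a stray debug line gets caught.

```shell
# Throwaway repository with a committed baseline file.
work=$(mktemp -d)/repo
git init -q "$work" && cd "$work"
git config user.email "dev@example.com" && git config user.name "Dev"
printf 'real code\n' > app.txt
git add app.txt && git commit -q -m "baseline"

# Stage a change that accidentally includes a leftover debug line.
printf 'real code\ndebug println\n' > app.txt
git add app.txt

# Inspect exactly what would be committed, line by line.
staged=$(git diff --cached)
spotted=$(echo "$staged" | grep -c 'debug println')   # the review catches it
```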

5. is true and, alas, not always respected. At my work, we are often guilty of just pasting the bug id and the corresponding description as the commit message... Useful, but often not enough.
Of course, we might put comments in the code telling what we did and why, but it is hard to comment on removed code... and comments are too focused on a particular area of the code. So the commit message can deliver essential information: an overview of the whole design and changes.
I saw somewhere that a project needed to remove some profanity (and personal attacks!) from the commit log before being open sourced... I was quite surprised that these messages went into the code base in the first place, not least because they are neither constructive nor useful. A commit message isn't a place to vent, but information to convey to the other programmers who will have to maintain your code!

6. is a bit controversial. It also depends on the VCS and project flow.
There is a rule often enforced: a commit must not break the build (if the language/project needs to be built, of course). It seems to be a golden rule, at least for the trunk/main branch.
Now, there are nuances. If the VCS supports many branches easily, I suppose you can have your own branches in a non-working state, as long as the merged result is stable.
And, realistically, most open source projects work this way: one gatekeeper (or more) receives patches (pull requests in Git terminology) and, if they feel these are OK, integrates them. Of course, there can be several "blessed" committers with write rights on the repository, but it is still a good idea to keep the peer review described in point 4.

8. is rather obvious, in general. In an open source project, one can put some binary dependencies in the project, but perhaps it is better to leave them outside the repository and offer them as a separate download, at least if they don't change frequently. Otherwise, perhaps a separate repository can be used.

9. can be debated. It can be useful to put some settings, like an Eclipse launch configuration or code rule settings, in a project, at least as a blueprint. Of course, it can then be argued whether they are personal settings any more...

10. actually ties in with what I commented for 8... I see several commenters argue with this point. Something like Maven (for Java) can handle these dependencies by fetching them from a central repository, complete with the precise version.

Jes Chergui replied on Tue, 2013/01/15 - 3:14am

Nice set of rules. I personally learned these the hard way :( 
