Any experience with working on larger projects?

A place to discuss the implementation and style of computer programs.


circumlocuted
Posts: 104
Joined: Thu Jun 21, 2007 8:17 am UTC

Any experience with working on larger projects?

Postby circumlocuted » Thu Jun 21, 2007 10:12 am UTC

I love coding.
Now that I am starting to get into larger projects I am running into some major pitfalls.
Either I can't keep track of all of the different members of my classes, and I end up writing redundant functions, or I lose track of what I am doing.
I try to make visual diagrams and map out my program logic, but it seems like I am spending most of my time keeping myself from getting derailed.
Is it normal to have this much overhead when working on a large project?

Does anyone have any advice for me?

Also, I think it's probably time that I start using some sort of source management system.
None of this was ever taught to me in any of my computer science classes, and I never bothered to learn about it, so where should I start?
Right now I am mostly using NetBeans because I like many of its features. Should I use NetBeans for my source management?

Any advice would be greatly appreciated.

evilbeanfiend
Posts: 2650
Joined: Tue Mar 13, 2007 7:05 am UTC
Location: the old world

Re: Any experience with working on larger projects?

Postby evilbeanfiend » Thu Jun 21, 2007 10:26 am UTC

yes you will always have more overhead with a large project. good source control is pretty much essential (id recommend subversion, no idea about netbeans), a good set of unit tests will help heaps too.

the key to success is usually to keep things as modular as possible, with as few dependencies as possible, so that you can treat parts in isolation.
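That "treat parts in isolation" advice can be sketched in a few lines of Java: the class below depends only on an interface, so the tokenizer can be swapped or faked in a test without touching anything else (all names here are invented for illustration):

```java
import java.util.List;

interface Tokenizer {
    List<String> tokenize(String text);
}

class WhitespaceTokenizer implements Tokenizer {
    public List<String> tokenize(String text) {
        if (text == null || text.trim().isEmpty()) return List.of();
        return List.of(text.trim().split("\\s+"));
    }
}

class WordCounter {
    private final Tokenizer tokenizer; // the only dependency, and it's an interface

    WordCounter(Tokenizer tokenizer) { this.tokenizer = tokenizer; }

    int count(String text) { return tokenizer.tokenize(text).size(); }
}

public class ModularityDemo {
    public static void main(String[] args) {
        // the concrete tokenizer is plugged in at the edge, not hard-wired inside
        WordCounter counter = new WordCounter(new WhitespaceTokenizer());
        System.out.println(counter.count("keep things modular")); // 3
    }
}
```

Because WordCounter never names a concrete class, a unit test can hand it a trivial fake Tokenizer and exercise it completely on its own.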

this book is pretty much considered to be the holy scripture of large projects
http://preview.tinyurl.com/32ksmv
its aimed at c++ but im sure you can translate it to whatever you want (java i'm assuming)
in ur beanz makin u eveel

Strilanc
Posts: 646
Joined: Fri Dec 08, 2006 7:18 am UTC

Postby Strilanc » Fri Jun 22, 2007 5:56 am UTC

If you can't keep track of what you've written, it's very important that it be error-resistant and at least lightly commented.

By error resistant I mean every function you write should validate its input before using it, and give some sort of notification for REALLY incorrect input. Speed be damned, you want things to WORK.
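A minimal Java sketch of that validate-everything style, with invented names: the method checks its inputs up front and fails loudly on garbage instead of silently computing nonsense.

```java
public class SafeMath {
    /** Returns the average of a non-empty array of grades, each in [0, 100]. */
    public static double averageGrade(double[] grades) {
        // validate before using: reject missing or empty input outright
        if (grades == null || grades.length == 0) {
            throw new IllegalArgumentException("grades must be non-empty");
        }
        double sum = 0;
        for (double g : grades) {
            // REALLY incorrect input: complain loudly rather than guess
            if (g < 0 || g > 100) {
                throw new IllegalArgumentException("grade out of range: " + g);
            }
            sum += g;
        }
        return sum / grades.length;
    }

    public static void main(String[] args) {
        System.out.println(averageGrade(new double[]{80, 90, 100})); // 90.0
    }
}
```

The checks cost a few comparisons per call, which is exactly the "speed be damned" trade being described: a crash at the bad call site beats a wrong answer three modules later.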
Don't pay attention to this signature, it's contradictory.

adlaiff6
Posts: 274
Joined: Fri Nov 10, 2006 6:08 am UTC
Location: Wouldn't you rather know how fast I'm going?

Postby adlaiff6 » Fri Jun 22, 2007 7:27 am UTC

Subversion works.

Depending on what language you use, sometimes there are built-in (unintended) crutches you can use to help you design. For example, in C++, assuming you're working with one .h/.cc pair per object, you can create all your .h files first (only declarations, no definitions) and make sure they make sense, and then go back and write the .cc file for each one. Alternatively, you can just write all your design ideas down in whatever format you find best. I've personally found that keeping a notebook (of graph paper, it's easier to draw things if you decide you want to) nearby is best. When you start opening programs and picking tools to draw with and figuring out what this menu does oh my that sounds cool what does it do?...you tend to lose track of what it was you originally wanted to write down. I prefer pen and paper because I can get anything I think up written down quickly. I also end up with thirty different versions of my design, all conflicting and constantly changing. I don't worry about it because I assume that with constant improvement, as long as I don't loop, I'm going to reach perfection in the limit. YMMV.

tl;dr version:
Use pen and paper to design. Use your language's built-in separation of design and implementation to keep you organized.
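The same declarations-first trick translates to Java as interfaces: write the pure declarations, sanity-check that they fit together, then fill in the implementations later. Everything below is an invented illustration:

```java
// Step 1: declarations only - does the design make sense before any bodies exist?
interface Canvas {
    void drawLine(int x1, int y1, int x2, int y2);
}

interface Tool {
    void apply(Canvas canvas, int x, int y);
}

// Step 2: come back later and write the "definitions".
class ConsoleCanvas implements Canvas {
    public void drawLine(int x1, int y1, int x2, int y2) {
        System.out.printf("line (%d,%d)-(%d,%d)%n", x1, y1, x2, y2);
    }
}

class PencilTool implements Tool {
    public void apply(Canvas canvas, int x, int y) {
        canvas.drawLine(x, y, x, y); // a pencil dot as a degenerate line
    }
}

public class DesignFirstDemo {
    public static void main(String[] args) {
        new PencilTool().apply(new ConsoleCanvas(), 3, 4); // prints: line (3,4)-(3,4)
    }
}
```

If step 1 doesn't compile, or a Tool turns out to need something Canvas can't say, you learn it before writing a single method body.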
3.14159265... wrote:What about quantization? we DO live in a integer world?

crp wrote:oh, i thought you meant the entire funtion was f(n) = (-1)^n
i's like girls u crazy

Manix
Posts: 2
Joined: Fri Jun 22, 2007 4:59 pm UTC

Re: Any experience with working on larger projects?

Postby Manix » Fri Jun 22, 2007 5:27 pm UTC

circumlocuted wrote:I love coding.
Now that I am starting to get into larger projects I am running into some major pitfalls.
Either I can't keep track of all of the different members of my classes, and I end up writing redundant functions, or I lose track of what I am doing.
I try to make visual diagrams and map out my program logic, but it seems like I am spending most of my time keeping myself from getting derailed.
Is it normal to have this much overhead when working on a large project?

Does anyone have any advice for me?

Also, I think it's probably time that I start using some sort of source management system.
None of this was ever taught to me in any of my computer science classes, and I never bothered to learn about it, so where should I start?
Right now I am mostly using NetBeans because I like many of its features. Should I use NetBeans for my source management?

Any advice would be greatly appreciated.


IIRC, NetBeans is just a visual Java editor.

For some tips, look around on the web for stuff on software engineering. The problems you face are actually quite common, especially since, in the real world, you spend more time working on hand-me-down code.

Off the bat, I can say yes, go for version control. Free ones are Subversion (SVN) and CVS. SVN is the successor to CVS, and is better in many ways (basing revision numbers on the tree instead of the file is amazing).

Also, spend time on design, as someone mentioned. Good design saves so much time on programming and bug fixes, it's not even funny, since better organization = faster bug fixes.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Re: Any experience with working on larger projects?

Postby EvanED » Sat Jun 23, 2007 1:43 am UTC

Manix wrote:Off the bat, I can say yes, go for version control. Free ones are Subversion (SVN) and CVS. SVN is the successor to CVS, and is better in many ways (basing revision numbers on the tree instead of the file is amazing).


At this point, I would go so far as to say that if you're using CVS for a new project, you're stupid.

I'm being deliberately inflammatory here, and I don't 100% believe it; you could probably come up with some reason to use CVS, but I think you would have to try pretty hard. CVS is, IMO, broken.

There are other models of source control that you can use (for instance, Linus Torvalds feels that the CVS model is fundamentally broken, and that the Subversion people are "idiots" because their motto is to do CVS right, and there is no way to do that), but I think it's fine for at least some things, and SVN is the way to go if you accept that model.

adlaiff6
Posts: 274
Joined: Fri Nov 10, 2006 6:08 am UTC
Location: Wouldn't you rather know how fast I'm going?

Postby adlaiff6 » Sat Jun 23, 2007 3:12 am UTC

SVN is probably the best (also factoring in support here) free versioning system out there.

There are some, like Perforce (used at my current job), that are pay-only, and may or may not be better.

Rysto
Posts: 1460
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby Rysto » Sat Jun 23, 2007 3:16 am UTC

I'd prefer git or mercurial to SVN. Torvalds is right: for a large software project, the centralized model is broken.

moquel
Posts: 9
Joined: Fri Jun 08, 2007 12:49 pm UTC

Postby moquel » Mon Jun 25, 2007 7:12 am UTC

Rysto wrote:I'd prefer git or mercurial to SVN. Torvalds is right: for a large software project, the centralized model is broken.


Use darcs. I have no real reason to claim it's better than any other non-centralized revision control system, except that it was written in Haskell. That sort of thing should be encouraged. :)

torne
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby torne » Mon Jun 25, 2007 1:09 pm UTC

moquel wrote:Use darcs. I have no real reason to claim it's better than any other non-centralized revision control system, except that it was written in Haskell. That sort of thing should be encouraged. :)

I have several reasons :)

1) It's interactive by default, which is great for both people new to VCS and experienced people who are tired/hungover/stoned. Much more likely to record the right changes. It also can undo just about everything, including reverts. :)

2) Its crazy patch algebra does often make merging easier, and rarely makes it worse (the problem being the case where the merge takes exponential time for the algorithm to churn through, which results in darcs hanging for all intents and purposes).

3) It supports a great model for OSS-type development - 'darcs send' can send patch bundles via email in a suitable form to be directly applied to another darcs repo. An email handler script at the far end can check a GPG signature on the bundle and if it's signed by one of the known contributors, automatically apply it to the 'common' repository. Otherwise, you bounce it to the -devel mailing list for people to take a look at.

The script can also do test merges to make sure the patches don't require manual intervention to integrate, or even entire build-and-test cycles of the resulting code (and if they fail, reply to the sender and/or -devel list with the output). This is possible with other VCSes too, I'm sure, but darcs makes it very easy to do. This way, no remote write access to the repository is required.

4) Darcs only requires an HTTP server for read access, with no special configuration - just stuff the repository somewhere the web server can read and anyone can clone that repository.

warhorse
Posts: 203
Joined: Fri Mar 09, 2007 6:42 pm UTC
Location: Möbius Strip

Postby warhorse » Mon Jun 25, 2007 5:38 pm UTC

Did you write a specification before writing code? When I was in school, projects were small enough that getting to code hacking right away and tacking on features was the most efficient method.

Now that I'm in the real world, I've learned that this technique doesn't work for long lived projects.

I like to have a design document for each class/module. Each document discusses what the module does, any major data structures that get passed in or out, and any functions that are used to interact with the module. There should be a minimum number of functions, so aggregate when you can (instead of get_x() and get_y(), have get() and pass in flags to indicate what you want).

In my experience, writing code that isn't inherently confusing means that the design separated the tasks into individual units that are well enough isolated that one can test each module meaningfully. The result is a small number of modules that communicate with only a small number of other modules.

I think of it like an assembly line. There is a module to handle the input, be it a network packet, interrupt, or user input. This module's sole purpose is to pack up the input and pass it on to the next module. The next few modules along the assembly line might display something to the user, collect data needed to send a reply, and generate the reply (3 separate modules). Each module has one major working function and then whatever convenient helper functions one might want.

This way, if you're looking at code in a particular module, you can say things like "I know where the input came from and what should be in it", "I know what this function needs to do to generate the result that the next module needs", and "this particular module is the only code in the program that operates on a particular aspect of the system." The last bit there sounds like it can help your problem of redundant functions because if, say, you have only one module that interacts with the GUI, then you know all of your GUI functions can be contained in there. Then if the user clicks on something that generates a network packet, have the GUI module invoke the network module so that you don't have network related functions replicated in more than one module.

Of course, you'll need some globally accessible helper functions, but they should be limited to manipulators of data structures that get passed between modules or things like wrappers for commonly used functions.
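The assembly line described above might look something like this in Java (all module names invented; each stage has one working method and only hands off to the next):

```java
// Each module implements one stage of the line and knows nothing about the others.
interface Stage {
    String process(String input);
}

class InputModule implements Stage {    // packs up the raw input
    public String process(String raw) { return raw.trim(); }
}

class ReplyModule implements Stage {    // generates the reply
    public String process(String packet) { return "ACK:" + packet; }
}

class Pipeline {
    private final Stage[] stages;

    Pipeline(Stage... stages) { this.stages = stages; }

    String run(String input) {
        String current = input;
        for (Stage s : stages) current = s.process(current); // hand off down the line
        return current;
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        Pipeline p = new Pipeline(new InputModule(), new ReplyModule());
        System.out.println(p.run("  ping  ")); // ACK:ping
    }
}
```

When something is wrong with a reply, there is exactly one module to look in, and each stage can be tested with a hand-made input string.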

/me wonders if any of this makes sense :)
It's OK to be social, just don't tell anyone about it.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Postby EvanED » Mon Jun 25, 2007 9:52 pm UTC

warhorse wrote:There should be a minimum number of functions, so aggregate when you can (instead of get_x() and get_y(), have get() and pass in flags to indicate what you want).


I disagree here. Having get_x() and get_y() functions allows the compiler to do more checking for you (there's no chance for an invalid flag) and the combined method doesn't deal well with multiple types. What if you're writing a person class, and a person has a name and age. You can't have one get() function, because it can't return both a string and an int (unless you're using unions, which gives you even MORE potential for an ugly error and, if you're using a built-in union in C++ as opposed to something like boost::any, removes the possibility of returning a non-POD type), so what do you do? Have a get_string(STRING_FIELD what) and a get_int(INT_FIELD what) function?

I also dispute that your suggestion actually helps at all. If there are x get_blah() functions, there are still x things your get() functions can do if you combine them, so the interface isn't actually any simpler.

If you run some sort of static analysis tool over your code, having separate functions can also lead to better results.
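To make the objection concrete, here's a hypothetical sketch of the separate-getters style using the person example from the post: the types themselves do the checking, so there is no flag to get wrong at runtime and no union needed to return both a String and an int.

```java
public class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Two small, typed accessors: the compiler checks every call site.
    public String getName() { return name; }
    public int getAge() { return age; }

    public static void main(String[] args) {
        Person p = new Person("Ada", 36);
        // int n = p.getName();  // would not compile: the types do the checking
        System.out.println(p.getName() + " is " + p.getAge());
    }
}
```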

The rest of what you say seems pretty good though.

iw
Posts: 150
Joined: Tue Jan 30, 2007 3:58 am UTC

Postby iw » Tue Jun 26, 2007 4:51 am UTC

EvanED wrote:
warhorse wrote:There should be a minimum number of functions, so aggregate when you can (instead of get_x() and get_y(), have get() and pass in flags to indicate what you want).


I disagree here.


Above all: be consistent in what you do.

warhorse
Posts: 203
Joined: Fri Mar 09, 2007 6:42 pm UTC
Location: Möbius Strip

Postby warhorse » Tue Jun 26, 2007 3:09 pm UTC

EvanED wrote:
I disagree here.


Point taken. When I thought of the get_x() stuff I was working on a network application, so each get() in that program results in an RPC call. In that case, stuffing it all into one call saves extra round-trips.
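The round-trip argument can be sketched like this (purely illustrative; a call counter stands in for the network cost): one call returns an immutable snapshot carrying every field at once, instead of paying one round-trip per field.

```java
public class BatchedGetDemo {
    /** Plain value object returned by one "remote" call. */
    static final class Snapshot {
        final int x, y;
        Snapshot(int x, int y) { this.x = x; this.y = y; }
    }

    static class RemoteCounter {
        int roundTrips = 0;

        // One call, one simulated round-trip, every field at once.
        Snapshot get() {
            roundTrips++;
            return new Snapshot(10, 20);
        }
    }

    public static void main(String[] args) {
        RemoteCounter remote = new RemoteCounter();
        Snapshot s = remote.get(); // a single round-trip instead of one per field
        System.out.println(s.x + "," + s.y + " in " + remote.roundTrips + " call(s)");
    }
}
```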

coldie
Posts: 4
Joined: Thu Jun 07, 2007 1:52 pm UTC
Location: Canberra

Re: Any experience with working on larger projects?

Postby coldie » Tue Jun 26, 2007 9:19 pm UTC

circumlocuted wrote:I love coding.
...
I try to make visual diagrams and map out my program logic, but it seems like I am spending most of my time keeping myself from getting derailed.
Is it normal to have this much overhead when working on a large project?


Methinks you want a whiteboard.

If you are finding yourself writing redundant functions or losing track of what you are doing, you have to take a step back and start planning how you do things rather than just doing them. It may feel like you are just wasting your time, especially if you are working on your own, but don't believe for a second that any time you spend thinking about what you are doing is wasted.

Find some way of documenting what's already in your system at a high level that you are comfortable with. Follow through by documenting what will be in your system.

Source control, unit tests, "configuration management"... These are associated with large projects. But I think design is what you crave. UML is worth learning if you want a rigid, consistent way of documenting your designs. And the Gang of Four book really helps get you thinking at a higher level.

iw
Posts: 150
Joined: Tue Jan 30, 2007 3:58 am UTC

Re: Any experience with working on larger projects?

Postby iw » Wed Jun 27, 2007 12:04 am UTC

coldie wrote: And the Gang of Four book really helps get you thinking at a higher level.

Be sure to get the right message out of this book, though. The Design Patterns book shouldn't be seen as an Aristotle-like categorization of all possible Patterns. The idea you should come away with is that using design patterns (as a general term) helps make your software more uniform, and the patterns in the book are just examples. I think you should use Design Patterns as a book to enrich your vocabulary rather than restrict it (as seems to happen to many a coder I've seen online). (See http://perl.plover.com/yak/design/)

circumlocuted
Posts: 104
Joined: Thu Jun 21, 2007 8:17 am UTC

Postby circumlocuted » Wed Jun 27, 2007 1:42 am UTC

I often do write out a map of my program on paper, with all the functions I plan to implement and everything.
The problem is I often don't know exactly what is required of my program until I actually start *writing* it, and then I want to make changes midway through.

I guess everyone has that problem.

warhorse
Posts: 203
Joined: Fri Mar 09, 2007 6:42 pm UTC
Location: Möbius Strip

Postby warhorse » Wed Jun 27, 2007 2:32 am UTC

circumlocuted wrote:I often do write out a map of my program on paper, with all the functions I plan to implement and everything.
The problem is I often don't know exactly what is required of my program until I actually start *writing* it, and then I want to make changes midway through.

I guess everyone has that problem.


This is a tricky problem. I think that the problem stems from requirements changing when the code is half written (say, from a customer's demand). However if you're writing your own program then I argue that you shouldn't start coding until you know what you want it to do!

Maybe you can try breaking your task up into subtasks and then first write the parts whose requirements probably won't change. That way, if something changes, then it will only be in code that hasn't been written yet.

circumlocuted
Posts: 104
Joined: Thu Jun 21, 2007 8:17 am UTC

Postby circumlocuted » Wed Jun 27, 2007 3:00 am UTC

warhorse wrote:
circumlocuted wrote:I often do write out a map of my program on paper, with all the functions I plan to implement and everything.
The problem is I often don't know exactly what is required of my program until I actually start *writing* it, and then I want to make changes midway through.

I guess everyone has that problem.


This is a tricky problem. I think that the problem stems from requirements changing when the code is half written (say, from a customer's demand). However if you're writing your own program then I argue that you shouldn't start coding until you know what you want it to do!

Maybe you can try breaking your task up into subtasks and then first write the parts whose requirements probably won't change. That way, if something changes, then it will only be in code that hasn't been written yet.


Of course I wouldn't start work on a project until I have it completely mapped out on paper, but sometimes, as I am getting my hands dirty implementing my project, I will see things from a point of view that I didn't at the design stage, and find a much more efficient way of doing something.

For example, I was working on an MS Paint clone for a school project, and when I was 80% done, it hit me that I could save a ton of memory and processing time, but I would have to tear out most of what I had done.
Maybe it's because I can be really obsessive about efficiency, but I ended up working right up until the deadline to implement my program as efficiently as possible.

evilbeanfiend
Posts: 2650
Joined: Tue Mar 13, 2007 7:05 am UTC
Location: the old world

Postby evilbeanfiend » Wed Jun 27, 2007 7:55 am UTC

Rysto wrote:I'd prefer git or mercurial to SVN. Torvalds is right: for a large software project, the centralized model is broken.


that really depends on other things than size imho. for a software company centralised repository is perfectly fine no matter how big a project it is.

torne
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby torne » Wed Jun 27, 2007 11:42 am UTC

evilbeanfiend wrote:that really depends on other things than size imho. for a software company centralised repository is perfectly fine no matter how big a project it is.

My experience working at software companies that use centralised repositories leads me to disagree. Multi-tier integration (you submit your changes to your team's branch, someone submits that to a higher level integration branch, etc until it gets to some main tree) was/is used everywhere I've worked and doing that in a centralised VCS is, largely, crap. I couldn't count how many hours I've lost to doing this stuff that wouldn't've been needed in another system.

evilbeanfiend
Posts: 2650
Joined: Tue Mar 13, 2007 7:05 am UTC
Location: the old world

Postby evilbeanfiend » Wed Jun 27, 2007 12:07 pm UTC

the point i was trying to make was not that centralised always works for a company, just that the size of the project is not the sole consideration on choosing between vcs. for starters it depends on what you mean by size of a project i.e. large number of developers or large code base? certainly a large code base being maintained by a few developers will work perfectly fine with a centralised vcs. it depends what sort of systems you are working with as well as the size (granted on a small project almost anything suffices)

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Postby EvanED » Wed Jun 27, 2007 1:16 pm UTC

torne wrote:
evilbeanfiend wrote:that really depends on other things than size imho. for a software company centralised repository is perfectly fine no matter how big a project it is.

My experience working at software companies that use centralised repositories leads me to disagree. Multi-tier integration (you submit your changes to your team's branch, someone submits that to a higher level integration branch, etc until it gets to some main tree) was/is used everywhere I've worked and doing that in a centralised VCS is, largely, crap. I couldn't count how many hours I've lost to doing this stuff that wouldn't've been needed in another system.


Can someone explain why something like git would help here? I watched an hour-long Google Tech Talk Linus did on git and I still don't know. I can sorta understand the benefits of doing it for an open source project, but for a company I don't see the benefit.

Where's the flow that improves with git?

|333173|3|_||3
Posts: 124
Joined: Wed Jun 13, 2007 9:41 am UTC
Location: Adelaide, SA, Aus

Postby |333173|3|_||3 » Thu Jun 28, 2007 6:18 am UTC

One thing I have found to be helpful is drawing the class hierarchy, using the data model diagrams (at least in a simplified form), which means that you know where everything needs to be, and where the method code belongs.
The other thing which helps in Java is making sure that your Javadoc comments are linked to the main Java API, so that a search of the API will show both your methods and standard library versions.
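For what that Javadoc linking looks like in practice, here's a small invented example using {@link} to cross-reference the standard API; running the javadoc tool with a link to the JDK docs turns these references into hyperlinks alongside your own methods:

```java
import java.util.Collections;
import java.util.List;

public class Sorter {
    /**
     * Sorts a copy of the given list, leaving the original untouched.
     * Delegates to {@link Collections#sort(List)}; see also {@link List}.
     *
     * @param input the list to copy and sort
     * @return a new sorted list
     */
    public static List<Integer> sorted(List<Integer> input) {
        List<Integer> copy = new java.util.ArrayList<>(input);
        Collections.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(sorted(List.of(3, 1, 2))); // [1, 2, 3]
    }
}
```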
The voices in my head tell me that I should write something here. Unfortunately, they won't tell me what to write.

taggedunion
Posts: 146
Joined: Fri Jul 06, 2007 6:20 am UTC
Location: BEHIND YOU

Postby taggedunion » Fri Jul 06, 2007 4:47 pm UTC

I suppose a summary of my own ideas, plus all the excellent ones from people who already commented, is:

1) Write up/prototype/stub out your modules/functions/classes/etc. beforehand. While programming, remember to keep things as modular & orthogonal as circumstances allow.

2) Keep a notebook AND a whiteboard to scribble stuff on. Whiteboards are especially helpful for figuring out the references in data structures.

3) Use a Version Control System. People here are complaining about how some suck, some don't, blah blah. The wisdom holds here that something is better than nothing, and nothing's more frustrating than overwriting or accidentally rm -rf-ing part or the whole of your source tree and having to start from scratch. I use SVN myself; I leech off the server on Google Code for the most part, and I've heard darcs is good. GNU also has a simple VCS that doesn't require a server, but I don't recall its name -- just Google it.

4) If you wish, use a nice IDE like Eclipse which keeps track of all of your classes and methods and all that fun stuff.

5) Make a schedule for yourself, like "I should finish such-and-such feature on such-and-such date". Remember those weight loss tips -- don't give yourself huge, unattainable goals -- just get a portion of the GUI working, debug a troublesome function, make the CLI insult the user upon invalid input, etc. Don't get too far ahead of yourself, either. If you don't keep schedule, don't panic, just set things back and continue to work. Start with the small tasks first, because finishing them gives you the confidence and the mental warmup to do the harder ones. Finally, schedule some time every day, even if just a half-hour, to at least think about the project. One never has spare time, so you have to explicitly allocate it.

If all of this seems like a lot... well, it is. I'm working on a big project myself (in C, no less!) and I have a hard time with a number of these, mostly the schedule. I keep a notebook with modules & their functions, global variables, etc., and I find that helpful.

Now, if only I could follow my own advice about keeping a schedule! :P
Yo tengo un gato en mis pantelones.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Postby EvanED » Sat Jul 07, 2007 12:58 am UTC

taggedunion wrote:GNU also has a simple VCS that doesn't require a server, but I don't recall its name -- just Google it.


Just to make clear: SVN doesn't need a server either.

djn
Posts: 610
Joined: Mon May 07, 2007 1:33 pm UTC
Location: Oslo, Norway

Postby djn » Sat Jul 07, 2007 1:14 am UTC

taggedunion wrote:GNU also has a simple VCS that doesn't require a server, but I don't recall its name -- just Google it.



Git, I guess, or did you mean Arch?

taggedunion
Posts: 146
Joined: Fri Jul 06, 2007 6:20 am UTC
Location: BEHIND YOU

Postby taggedunion » Sat Jul 07, 2007 3:26 am UTC

djn wrote:
taggedunion wrote:GNU also has a simple VCS that doesn't require a server, but I don't recall its name -- just Google it.



Git, I guess, or did you mean Arch?


Probably Arch.

ToLazyToThink
Posts: 83
Joined: Thu Jun 14, 2007 1:08 am UTC

Postby ToLazyToThink » Wed Jul 18, 2007 5:04 am UTC

EvanED wrote:
torne wrote:
evilbeanfiend wrote:that really depends on other things than size imho. for a software company centralised repository is perfectly fine no matter how big a project it is.

My experience working at software companies that use centralised repositories leads me to disagree. Multi-tier integration (you submit your changes to your team's branch, someone submits that to a higher level integration branch, etc until it gets to some main tree) was/is used everywhere I've worked and doing that in a centralised VCS is, largely, crap. I couldn't count how many hours I've lost to doing this stuff that wouldn't've been needed in another system.


Can someone explain why something like git would help here? I watched an hour-long Google Tech Talk Linus did on git and I still don't know. I can sorta understand the benefits of doing it for an open source project, but for a company I don't see the benefit.

Where's the flow that improves with git?


The problem as I've seen it is this:

A lot of projects you end up maintaining, or even enhancing, have been neglected over the years or were junk to begin with. There tends to be a lot of coupling between modules.

So you end up with situations where you need to share changes with other developers in your team, but those changes aren't stable enough for the central repository. They may break functionality others are working on, you may be unsure the feature can be completed by deadline, or PHBs could be collecting stupid metrics.

On every large project I've worked on so far, developers have ended up sharing changes that aren't "ready" for the central repository out of band, usually by passing around zip/tar files of the changes. I've lost a week of changes by accidentally deleting a directory whose changes weren't "ready" for the central repository.

With distributed source control each developer can have their own repository, or even create temporary repositories for experiments/merges/etc. You can merge (even unstable changes) with your fellow developers at will. This way the changes stay under source control, instead of scattered through emails and shared drives.

Of course I'm not sure it would actually play out that way, since I've never had a chance to use distributed source control at work. I'm sure they'd wrap the process up in enough red tape to destroy any benefits.
:cry:

torne
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby torne » Wed Jul 18, 2007 11:37 am UTC

Yup, that's exactly it. Passing changes around that aren't ready for integration, or at least, not integration all the way to the top.

Managing a tree of branches like that is possible in many centralised VCSes, but they're usually crap at it, and it ends up being a lot of work. It's also usually very hard to cherrypick changes in centralised VCSes (though this applies to many decentralised ones too, which is why darcs is love: it all just works there. *grin*)

