ackan wrote: This is the problem with mixing low-level code with high-level code.
Low-level code can often be faster, smaller, better... but high-level code is "pretty."
Unfortunately, high-level code is far easier to optimize than low-level code... hence the whole point of getting rid of goto and slapping the "EVIL" tag on it.
You want to argue about GOTO? I'm game, although it's a dead issue. Nobody cares anymore.
Thing is... our high-level (well, mid-level) code has not radically evolved, whereas the hardware has.
We want more complex code running on more powerful hardware... but don't have "acceptable" replacements for low-level commands (i.e., continue, break, goto).
But all this is rather moot... "mainstream programming" is no longer populated by people who care about how well their software runs, only that it compiles. Like the "64-bit revolution" -- which I consider to be mostly about people too lazy to optimize their memory footprint [in the domestic context, of course] -- "common practice" creates programmers who, while never violating the sacred "goto" taboo, write quite horrific code.
The way I see it, they had problems which were very hard to solve. So instead of trying to solve those hard problems, they redefined the problem to give themselves problems which were not only very hard to solve but also very hard to tell how good their solutions were!
So for example, in the old days you had people who wanted to use their special skills to make things faster. But if you use special tricks to make your code faster, in a few short years there will be faster hardware available that your code will no longer be so fast on. So how much were your special skills worth? To keep the speed advantage somebody has to rewrite your code every few years.
Instead, they decided that the most important thing was to make it easy for somebody else to rewrite your code in a few years. The more expensive it is to maintain your code, the less valuable it is. Worse, if it's hard to understand when you're done, how bad will it be after a few rounds of revision by other people? So the important thing is to make it easy to maintain.
So here you are writing your code, and how will they find out how hard it is to maintain it? They'll find that out a few years down the road when you are gone and somebody else is maintaining it. A lot of good that does you! But when they ask why it's taking you so long to produce working code, you can tell them that you're doing it a special way that will be easier to maintain, and see if they agree to that. More likely they'll want it to work first, and then they'll want a maintainable version later. But languages and skills are often oriented around that....
I mean, goto is NOT BAD... we replaced many of the instances where people would normally use goto... but so long as we stick with this structure, we will still have instances where people feel goto could make things so much easier. Hence the structure is the greater fault, having not progressed to fill the voids that removing goto created.
Put it this way -- it's typically easier if you have your high-level code organized into a series of low-level pieces. Call a low-level routine that does one thing. Then call a low-level routine that does a second thing. Etc. Each time you return to the place in the source code that you are looking at right now, so it makes sense to you. If you are using a routine exactly once, you might as well just put it inline at the point you'd GOTO it rather than GOTO it. But if you use it from two places, then you can GOTO it from both of them -- and now the shared code needs some way to decide which of the two places to continue running from, which is likely to mean a conditional GOTO at the end. Often easier if you just call the routine and return.
But sometimes that's less efficient. When the last thing a routine does is call another routine, the callee returns to you and then you immediately return yourself -- two returns when just one would do. OK, no big deal. If your optimizing compiler sees that it's the last call from this routine with nothing after it, it can substitute a GOTO for that last call. Very simple rule: if there's something left to do in the current routine after the call, call-and-return; if there's nothing left to do, GOTO. That gets most of the advantages of GOTO without the programmer having to pay any attention.
Another time that GOTO can be useful is when people just want to bail out of some complicated situation, for example when there has been an error. Just GOTO the code that handles that error, and let it sort out the mess. Various languages have a catch-throw pair of commands that handles this circumstance. It handles a lot of the mess inside its own simple structure, and you can handle as much more as you want to. It works better than GOTO for that circumstance.
So I don't think that GOTO would solve a lot of problems. The issue for me is more that people try to hide complexity in half-baked ways. Ideally you would handle low-level stuff in ways that are very simple, that simply map what actually goes on, onto something that's conceptually simple. Hide the details that high-level code can ignore completely, keep the high level calls to it simple, and still have the high level calls match up well to what's actually going on so that people who use the routines will instinctively have a feel for everything that could cause problems.
Instead, often the calls to low-level stuff match up better with the preconceptions of high-level programmers than with what is actually going on. So they have it nice and easy as long as things work, but when there's a problem they're clueless.
I don't know how to solve that. It would help if all the code except the lowest-level stuff was written in the same language. But in reality it's often considered necessary to glue together some sort of Frankenstein's monster out of C, perl, TCL, scheme, and python. You just have to live with it, since it often seems easier in the short run to use code that works from a variety of sources than to reinvent wheels hoping to make them easier to use.
The various gimmicks that were supposed to make it better, probably have not. For example, OO had some promise when the idea was that you hired a few geniuses to build the objects and then you could hire programmers of average intelligence to use those objects to do mundane maintenance. But in practice it seems like you get programmers of average intelligence who make and change objects whenever they see the need, and the things which were supposed to impose structure etc just add overhead while the complexity grows....
I don't expect this to get sorted out any time soon. When I was a kid listening to computer science students talk, they argued about how COBOL was the dominant programming language because so many companies had such a big investment in it. And their view was usually along the lines of "That is really a negative investment. It costs them more to maintain it in COBOL than it would cost to rewrite it in C." Because C was then considered the perfect language to do anything. Since then there have been lots of different best languages to replace COBOL, and yet how will a manager know that a new approach will work better than what he's doing until he has already committed to something that can get him fired? If he keeps doing what he's doing the results are at least predictable.
It's a puzzle. If you try to be conservative you'll probably be using something that's been obsolete for decades, that costs a lot to run. If you try to keep up with the times, there's no point working hard to get stuff right because it will all be replaced in a few years anyway. You can't win for losing.
The Law of Fives is true. I see it everywhere I look for it.