In most programming languages != is more "correct", as ^ involves a cast to int and back.
I think "most" is a great exaggeration. In C bools are ints, so I can't see any objection. In most other languages I can think of offhand, ^ is either overloaded for bools or outright illegal for bools. In a language like C++ where you're casting to ints... well, I agree it's ugly... although I still think it's more readable. Is there actually any real issue with this other than elegance? I do use != in C++, but I tend to think it's only a matter of pedantry on my part and I might be better off using ^.
I also think that a==b is way clearer than !(a^b). ... I almost have to mentally construct the truth table to make sense of the xor one.
I think that's somewhat of an exaggeration too, but I do agree that a==b is way better. That was the only thing jaap wrote that I took issue with (although he did say he'd probably use == for a simple case like this).
You, sir, name? wrote:
Programmers deal with simple if-statements all the time, and as such have very strong intuition of how they work. There very obviously are no gotchas hidden in the first variant of the code. I can immediately tell what it does for any value of isRunFooMode and hasSomeProperty(X), because I've seen code like that thousands upon thousands of times, multiple times a day most days.
But to make sense of the second variant, with its XNOR, I need to stop and think about what the code does, and even then it's far less intuitive. Every time I read the function, I'm like "Let's see, if isRunFooMode is true, and hasSomeProperty(X) is true, then the XOR is false, but it's negated, so we return true...", and then I sit trying to squeeze an entire XNOR truth table into my thought process, making it pretty crowded in terms of actually figuring out whether the code does what I think it does. This is because I have no intuitive model of XNOR: I encounter explicit XNOR logic at the very most once or twice a year.
The == option is slightly less bad in that regard, as it's also deeply ingrained in the programmer's intuition. The ternary operator (pred?yes:no) is also a viable option for feeble-fingered programmers.
As I mentioned above, I'm totally with you on the a==b. (I didn't bring that up originally because it wasn't really relevant to the point I was making.)
Still, I'd claim even with the !(a^b), the single return statement is probably better and certainly not worse.
I agree in the example in question, the two return statements aren't particularly problematic. I use multiple exit points regularly myself, so I'm not some kind of structured programming nazi. But I do think there's always a cost. It's always going to make it harder to reason about code and increase the chance of errors. Only a little in this case, admittedly.
But I think you're massively overstating the difficulties in reading a simple logical expression. I agree you have to think about !(a^b)* and that's always a bad thing. But we're only talking about a second here. It's a cost, but it's even less than the cognitive overhead of splitting up your return statement.

* Really, that's not even true. If you have the identity !(a^b) ≡ a==b ingrained, you'll be able to read it without pausing. Admittedly, I don't. I'm really lazy about this stuff. I just think it through each time. I do think it's worthwhile, though: it increases the fluency with which you read logic and keeps your mind free of distractions.

edit: okay, I didn't really take on what you're saying. You're serious about having trouble reading the expression. Fair enough, but that's the problem, not the expression itself: you should get used to it. I think it's important to be able to use logical notation fluently. If you can't... well, you'll be forced to replace simple declarative expressions with chains of verbose imperative instructions. It's kind of like taking a simple math formula and replacing it with a paragraph of contorted English. It might be more readable to some people, but you'd never recommend it. The proper advice is "learn to read the notation".
The intuitive model for xor is just "exactly one of these is true" (a secondary intuition is "these two truth values are different"). Negating it you get xnor = "both or neither are true" ("the two truth values are the same"). If you remember the intuitive meaning of xor, the negation will be very quick (you do it before you consider the actual values).
If you start actually using logical expressions more, you'll find it's virtually no effort at all. In fact, it'll reduce the load on your concentration (just like any good mathematical notation). Hell, you'll probably start to use it to write notes to yourself because it's easier to read.