I'm with Team Literal-Should-Be-Literal, mainly for the same reasons as Chenille. Yes, one can almost always tell the meaning from context, but for any language there should be a continuum of context-dependency to context-irrelevance.
Suppose we had a word which meant "yes" and another which meant "yes or no" but not a word which unambiguously meant "no". Assuming that other negative words existed, then we would still be able to communicate, but it would be a bit harder, and in some contexts very awkward.
I can imagine an alternate universe in which people kept coming up with words like "no" and "not" which started out as negatives but (due to some fascinating quirks of human psychology) gradually became generic intensifiers, so that if you were at a restaurant and you didn't want the soup, you would have to say "No, I really don't want – as in, it is false and untrue that I want – the soup." This is where we are with "literal".
It should be noted that technically, no one uses "literally" to mean "figuratively". Otherwise, the following exchange would make sense:
"He wanted to kill me!"
"Oh my God! Did you go to the police?"
"What? No, I meant that literally."
Instead, "literally" is used as a generic intensifier, where the figurative context is being taken for granted by the speaker. This can imply an unimaginative presumption about reality, as in "Well, this situation would never be true for reals, so no matter what intensifier I use, it will still be clear that I'm being figurative." What's the poor sap whose friend literally laughed his ass off supposed to say? (Yes, I know that there are okay answers to that, but still.)
Anyway, I don't believe all grammar pedantry is created equal. "Language evolves" is actually a decent argument for tolerating change; it's just that sometimes, the change is in a direction of less value (whether that value is usefulness, beauty, elegance, fun, etc). But "it's just the rule" or "it's just what the word means" is a terrible argument.
An example of a grammar peeve I definitely do not have is the modification of "unique". If "unique" is unmodifiable, then everything and nothing is unique, and the word is pointless. On the other hand, if we can say somewhat unique, very unique, a little more unique, etc, then unmodified "unique" actually makes sense.
Here's another analogy: suppose that some people insisted that "big" could never be modified. Then one could argue that everything is big except for quarks, and also nothing is big except for the universe, since every other object you can name is "big" relative to some things and "not big" relative to others. Saying "that dog is big" would be meaningless, because all dogs are bigger than a paramecium, and thus all dogs are "big". Conversely, if bigness is understood as coming in degrees, then "That dog is big" makes sense – we're saying that the dog is somewhere in the upper range of bigness for dogs.
With unmodifiable "unique", saying "That dog is unique" is meaningless even though the reference class is clearly "dogs", because this means that only one dog can be unique at any given time. Whereas the way I see it, you're actually saying "in the upper range of uniqueness for dogs". (Although it helps to specify why the dog is unique, unlike with the more straightforward word "big".)
Rant over.
I wonder if I'm the only one who holds those two positions at the same time. Am I literally unique?