
Posted: Sun Oct 30, 2011 6:07 pm UTC
{I screwed up and mistakenly called associativity "transitivity"; I am talking about associativity. I HAVE CHANGED THE TITLE OF THE THREAD TO REFLECT THIS, but have left my errors intact in my post.}

I need a symbol for a "generic" operator; I don't know what the mathematical standard for that is, so for now I am going to use the "@" character as a symbol for a "generic" operator.

In my math classes, one has proved transitivity by proving that (a@b)@c=a@(b@c); this does prove transitivity over 3 operands, but can one prove that transitivity over 3 operands implies transitivity over any number of operands? To use 4 operands as an example, there are 5 or 6 possible cases to be proven:

((a@b)@c)@d
a@(b@(c@d))
(a@(b@c))@d
a@((b@c)@d)

The last case(s) is (a@b)@(c@d). For all the operations considered in my math classes, the operator is "memory-less"; i.e., (a@b) has the same result whether it is evaluated before or after (c@d) [and likewise for (c@d)], which would make (a@b)@(c@d) a single case, resulting in 5 cases to be proven. However, it is possible to imagine that the operator is a function in a computer program, which means that the operation could have a memory, meaning that the result of (a@b) could depend on whether it is evaluated before or after (c@d) [and likewise for (c@d)], which would make (a@b)@(c@d) 2 cases, resulting in 6 cases to be proven.
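The case counting above can be checked mechanically. Here is a small Python sketch (the helper name `groupings` is my own) that builds every way of parenthesizing a sequence of operands, folds each with a given binary operation, and collects the distinct results:

```python
def groupings(vals, op):
    """Every value obtainable by fully parenthesizing vals and
    combining adjacent pieces with the binary operation op."""
    if len(vals) == 1:
        return {vals[0]}
    results = set()
    for i in range(1, len(vals)):           # split point of the outermost op
        for left in groupings(vals[:i], op):
            for right in groupings(vals[i:], op):
                results.add(op(left, right))
    return results

# With an associative op (string concatenation), all 5 groupings agree:
print(groupings(("a", "b", "c", "d"), lambda x, y: x + y))  # {'abcd'}

# With a non-associative op (subtraction), the groupings diverge:
print(groupings((1, 2, 3, 4), lambda x, y: x - y))
```

For 4 operands there are 5 groupings (the Catalan number C_3), and with subtraction they produce 4 distinct values, since a@(b@(c@d)) and (a@(b@c))@d happen to coincide for these inputs.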

There are two things to consider. One is that I have only given the cases for 4 operands, but I am asking about proving that transitivity over 3 operands implies transitivity over any number of operands.

The other is that while it may be provable that transitivity over 3 operands implies transitivity over any number of operands for a "memory-less" operation, the same may not be true for an operator with a memory. Since I have never been in a situation where I needed to prove transitivity of an operation with a memory, I am far more interested in whether transitivity over 3 operands implies transitivity over any number of operands for a "memory-less" operation than for an operation with a memory.

Posted: Sun Oct 30, 2011 6:55 pm UTC
Just to clear up your notation: you're talking about associativity, not transitivity. Transitivity is a property of a relation that has some similarities, but it's nice to keep the distinction. Operators are associative. (To give another example of things that look the same but have different words: x@y = y@x would mean that the operator is commutative, but x~y implying y~x would mean that the relation ~ is symmetric.) @ is a good ASCII symbol for a generic operator -- you'd probably tend toward * on a blackboard, but that's too easily confused with multiplication in a computer context like this.

As to your question, yes. This is known as generalized associativity, and it does follow from associativity by strong induction (as opposed to the ordinary induction we normally use). As I recall, most people handwave over the proof because the details detract from the central argument. The basic concept is that you assume that you can "perturb" any expression that is made up of fewer than n iterations of @ without affecting its value, and imagine two expressions with n iterations of @. So if you think about the "outermost" @'s in the two expressions, you see that you can perturb the left and right sides however you want by the induction hypothesis and turn them into expressions of the form (x@y)@z and x@(y@z), which are also equal by the induction hypothesis. You can find the whole proof in any decent book on abstract algebra, but it has to go on for pages to formally describe that process.

I don't know what you mean by "memoryless" operations. In general, if the value of a@b depends on something other than the values of a and b, then all bets are off mathematically.
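To make "all bets are off" concrete, here is a hypothetical Python sketch (the class `MemoryOp` is my own invention) of an "operation" whose result depends on how many times it has been called. Evaluating the two halves of (a@b)@(c@d) in different orders then gives different answers:

```python
class MemoryOp:
    """A hypothetical '@' whose result depends on how many times it has
    been called -- an operation with memory, not a true binary operator."""
    def __init__(self):
        self.calls = 0
    def __call__(self, a, b):
        self.calls += 1
        return (a + b) * self.calls   # result leaks hidden state

op = MemoryOp()
ab = op(1, 2)            # (a@b) evaluated first
cd = op(4, 8)
order1 = op(ab, cd)      # (a@b)@(c@d), left pair first

op = MemoryOp()          # fresh state
cd = op(4, 8)            # (c@d) evaluated first this time
ab = op(1, 2)
order2 = op(ab, cd)      # same expression, different evaluation order

print(order1, order2)    # prints 81 54 -- the "same" expression differs
```

Since the value of a@b is not a function of a and b alone, this is not a binary operator in the mathematical sense, which is exactly why the associativity machinery doesn't apply to it.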

Posted: Sun Oct 30, 2011 7:05 pm UTC
A "binary operation with a memory" isn't a function of two arguments, and hence isn't a binary operator. (It is also a function of previous operations, so that state is one of its arguments)

For your last example, if we have (a@b)@(c@d), and A@(B@C) = (A@B)@C, we substitute (a@b) for A, c for B, and d for C and get: ((a@b)@c)@d

And yes, the general proof would be induction-based. The trickiest part would be formally expressing what you mean by "n-ary associativity". The approach I'd try is to demonstrate that, given 3-way associativity, all binary expression trees are equivalent to the tree with no grandchildren on the left.
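That tree-normalization approach can be sketched in Python (the representation and the name `normalize` are my own): expressions are nested 2-tuples, and the only rewrite used is the single 3-operand law (x@y)@z -> x@(y@z), which drives every tree to the fully right-associated form, i.e. the tree whose left children are all leaves:

```python
def normalize(expr):
    """Rewrite a binary expression tree into fully right-associated form
    a@(b@(c@...)) using only the local rewrite (x@y)@z -> x@(y@z),
    i.e., only 3-operand associativity. Leaves are strings; internal
    nodes are 2-tuples (left, right)."""
    if isinstance(expr, str):
        return expr
    left, right = expr
    if isinstance(left, tuple):            # (x@y)@z  ->  x@(y@z)
        x, y = left
        return normalize((x, (y, right)))
    return (left, normalize(right))        # left is a leaf: recurse right

# All five groupings of a@b@c@d collapse to one canonical tree:
trees = [((("a", "b"), "c"), "d"),
         ("a", ("b", ("c", "d"))),
         (("a", ("b", "c")), "d"),
         ("a", (("b", "c"), "d")),
         (("a", "b"), ("c", "d"))]
print({normalize(t) for t in trees})   # a single canonical form
```

Each rewrite step is an application of the 3-operand axiom, so if @ is associative, every step preserves the value, and any two parenthesizations share the same canonical form -- which is the heart of the induction argument.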

Posted: Sun Oct 30, 2011 7:45 pm UTC
Tirian wrote:I don't know what you mean by "memoryless" operations. In general, if the value of a@b depends on something other than the values of a and b, then all bets are off mathematically.

Yakk wrote:A "binary operation with a memory" isn't a function of two arguments, and hence isn't a binary operator. (It is also a function of previous operations, so that state is one of its arguments)

I.e., I'd have to prove generalized associativity separately for an operation with memory; treating the memory as an additional argument would be one way that might make that easier/possible.

Tirian wrote:As to your question, yes. This is known as generalized associativity, and it does follow from associativity by strong induction (as opposed to the general one we normally use). As I recall, most people handwave over the proof because the details detract from the central argument. ... You can find the whole proof in any decent book on abstract algebra, but it has to go on for pages to formally describe that process.

However, if the operation is "memory-less", one can prove in the general case that associativity over 3 operands implies generalized associativity. My idea was that if one can prove it for the general case, one doesn't have to prove it in each specific case where associativity over 3 operands is proven. It would seem, though, that the actual proof is well beyond the level at which the idea of associativity is introduced, which is why I have not actually seen it proven.

Posted: Sun Oct 30, 2011 8:13 pm UTC
The way we proved generalized associativity in my real analysis 1 class was by induction on a few different cases. We first proved that "the insertion of any one pair of parentheses does not change the value of an n-fold product."

Our base case is for 3-fold products, which is given as an axiom (at least for real numbers): a_1*(a_2*a_3) = (a_1*a_2)*a_3 = a_1*a_2*a_3

Assume it's true that you can insert a pair of parentheses into an n-fold product, and show it's true for an (n+1)-fold product.
There were 3 cases we considered. First off, here's our definition of an (n+1)-fold product: [a_1*a_2*...*a_n]*a_(n+1). It's recursively defined: an (n+1)-fold product is an n-fold product multiplied on the right by the (n+1)th term.
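That recursive definition is exactly a left fold, which a short Python sketch (the name `nfold` is mine) makes explicit:

```python
from functools import reduce

def nfold(xs, op):
    """The recursively defined n-fold product: an (n+1)-fold product is
    the n-fold product of the first n terms, combined on the right with
    the (n+1)th term. This is a left fold."""
    if len(xs) == 1:
        return xs[0]
    return op(nfold(xs[:-1], op), xs[-1])

print(nfold([1, 2, 3, 4], lambda a, b: a * b))    # 24
# functools.reduce folds the same way, left to right:
print(reduce(lambda a, b: a * b, [1, 2, 3, 4]))   # 24
```

The induction then shows that inserting any one pair of parentheses into this left-folded product leaves its value unchanged.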

The 3 cases we considered were:
case 1) the inserted pair of parentheses opens before the first term a_1.
case 2) the inserted pair of parentheses closes after the last term a_(n+1).
case 3) both the opening and closing parentheses appear "inside" the product.

Details are left out. I didn't really like the proof too much, but it got the job done.

Try googling "generalized associativity" to see if you can find a better one. Good luck!