## "Oh no! We forgot how to say... math... stuff!"

For the discussion of math. Duh.

Moderators: gmalivuk, Moderators General, Prelates

saus
Posts: 176
Joined: Fri Nov 02, 2007 12:19 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

moiraemachy wrote:Sorry to necro this thread, but... I have to get this off my chest. Index notation is reversed! It should be columns X rows (a row vector should be an n x 1 matrix) in order to obey cartesian coordinates.

+1
This would have prevented so much brain pain when learning linear algebra. Whenever I see m x n I have to say in my head "rows, then columns" and be very careful about it. If it were horizontal size x vertical size, the order we're so used to, maybe it'd be less of a struggle.

z4lis
Posts: 767
Joined: Mon Mar 03, 2008 10:59 pm UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

moiraemachy wrote:Sorry to necro this thread, but... I have to get this off my chest. Index notation is reversed! It should be columns X rows (a row vector should be an n x 1 matrix) in order to obey cartesian coordinates.

I'm not sure what you mean by "obey cartesian coordinates", but think about it this way: with the current notation, an n x m matrix times an m x l matrix gives an n x l matrix. The m's inside "cancel". With your way, an m x n matrix times an l x m matrix would give an l x n matrix, which seems kind of yucky to me. Related to this, when multiplying matrices, rather than the nice formula

$[AB]_{ij} = \sum_{k=1}^n A_{ik}B_{kj}$

where we just stick a k in the middle and sum, we would get the formula

$[AB]_{ij} = \sum_{k=1}^n A_{kj}B_{ik}$

and we have to put the k's on the outside and flip the i and j and ewww. So the choice of specifying rows first instead of columns is a result of our convention on how to multiply matrices.
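The dimension "cancellation" can be checked directly; a quick Python sketch (the `matmul` helper and the example matrices are mine, purely illustrative):

```python
# Standard convention: an n x m matrix times an m x l matrix gives n x l.
# [AB]_{ij} = sum_k A_{ik} B_{kj} -- the inner m "cancels".

def matmul(A, B):
    n, m = len(A), len(B)              # A is n x m, B is m x l (rows x columns)
    assert len(A[0]) == m, "inner dimensions must match"
    l = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(l)]
            for i in range(n)]

A = [[1, 2, 3], [4, 5, 6]]             # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]           # 3 x 2
C = matmul(A, B)                       # 2 x 2: the 3's cancelled
assert C == [[4, 5], [10, 11]]
```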
What they (mathematicians) define as interesting depends on their particular field of study; mathematical analysts find pain and extreme confusion interesting, whereas geometers are interested in beauty.

moiraemachy
Posts: 190
Joined: Wed Jan 04, 2012 9:47 pm UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

By "more cartesian", I mean that we're used to saying horizontal stuff X vertical stuff, in that order. It's (x, y) because x is horizontal. We say "base times height", not the reverse. It's "length X height X width" when talking about the size of something. Index notation is the only place it's not like that.

I don't think the convention was made with matrix multiplication in mind... my guess is that our matrix notation comes from spreadsheets and tables, in which the number of rows is generally the better indication of size (when you add stuff to your table, you are supposed to add a row, not a column), so the rows X columns convention gives you a fuzzy big-endian feeling.

You could simply reverse how matrix multiplication is done, though I'm still undecided about the pros and cons of this approach. I like the fact that stacking successive linear transformations on top of a column vector would be xABC, not CBAx.
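The xABC ordering already works under the current rules if you keep vectors as rows; a small sketch (the `matmul` helper and toy matrices are my own examples):

```python
def matmul(A, B):
    # plain rows-x-columns matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = [[1, 0]]            # a row vector (1 x 2)
A = [[0, 1], [1, 0]]    # swap the two coordinates
B = [[2, 0], [0, 2]]    # scale by 2
# xAB reads left to right in application order: A first, then B
y = matmul(matmul(x, A), B)
assert y == [[0, 2]]
```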

Cosmologicon
Posts: 1806
Joined: Sat Nov 25, 2006 9:47 am UTC
Location: Cambridge MA USA
Contact:

### Re: "Oh no! We forgot how to say... math... stuff!"

z4lis wrote:
moiraemachy wrote:It should be columns X rows in order to obey cartesian coordinates.
...With your way, it'd be an m x n matrix times an l x m matrix gives an l x n matrix, which seems kind of yucky to me.... and we have to put the k's on the outside and flip the i and j and ewww. So the choice of specifying rows first instead of columns is a result of our convention on how to multiply matrices.

No, your two problems both go away, and it looks exactly like the current convention, if you define matrix multiplication as dotting a column of the first matrix with a row of the second matrix (rather than a row of the first with a column of second like we do now).
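A numeric check of this point (the column-first storage and all helper names are my own): store each matrix as a list of columns, and define the product by dotting a column of the first factor with a row of the second. The summation code then reads exactly like the familiar formula, and the product composes in application order:

```python
# Column-first storage: M[i][k] is the entry in column i, row k
# (a hypothetical layout for the columns-x-rows convention).

def newmul(A, B):
    # dot COLUMN i of the first factor with ROW j of the second --
    # yet the summation reads exactly like the familiar formula
    return [[sum(A[i][k] * B[k][j] for k in range(len(A[0])))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matmul(A, B):
    # ordinary rows-x-columns product, for comparison
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def to_colmajor(M):
    # reinterpret a rows-x-columns matrix in the column-first layout
    return [[M[r][c] for r in range(len(M))] for c in range(len(M[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
# under the reversed convention, "A times B" composes as A-then-B,
# i.e. it equals the ordinary product BA:
assert newmul(to_colmajor(A), to_colmajor(B)) == to_colmajor(matmul(B, A))
```

So the code is the same summation and the inner sizes still "cancel", just read columns-first.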

Qlexander
Posts: 2
Joined: Mon May 28, 2012 6:51 pm UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

I'd quite like to see postfix notation for functions, i.e. (x)f instead of f(x). That way, compositions would make sense: '(x)f g' would mean "Apply f, then g".
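The '(x)f g' idea can be mimicked in Python with a tiny wrapper (the `Val` class and the `|` spelling are my own invention, not an established notation):

```python
class Val:
    """Wrap a value so '(x)f g' can be written Val(x) | f | g."""
    def __init__(self, x):
        self.x = x
    def __or__(self, f):
        return Val(f(self.x))

f = lambda x: x + 1
g = lambda x: 2 * x

assert (Val(3) | f | g).x == 8   # (3)f g : first f, then g
assert g(f(3)) == 8              # the same thing in prefix notation
```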

Also, a "convergent sequence" should mean a sequence that's Cauchy. Everyone always seems to get confused when they hear that (1, 1/2, 1/3, 1/4, ...) is not convergent in (0,1). It would be much nicer to say "It's convergent, but the limit isn't in the space". (After all, a Cauchy sequence in a space X always converges in the completion of X.)

Mapar
Posts: 129
Joined: Wed Jun 16, 2010 11:26 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

Qlexander wrote:I'd quite like to see postfix notation for functions, i.e. (x)f instead of f(x). That way, compositions would make sense: '(x)f g' would mean "Apply f, then g".

Also, a "convergent sequence" should mean a sequence that's Cauchy. Everyone always seems to get confused when they hear that (1, 1/2, 1/3, 1/4, ...) is not convergent in (0,1). It would be much nicer to say "It's convergent, but the limit isn't in the space". (After all, a Cauchy sequence in a space X always converges in the completion of X.)

I read (f ° g) as "f after g", as in "f is applied after g". Helps keep things in order.
Hi.

Yakk
Poster with most posts but no title.
Posts: 11103
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: "Oh no! We forgot how to say... math... stuff!"

Cauchy requires a notion of distance, while convergence only requires a notion of closeness. What name would you give the convergence criterion?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Max™
Posts: 1792
Joined: Thu Jun 21, 2012 4:21 am UTC
Location: mu

### Re: "Oh no! We forgot how to say... math... stuff!"

The_Spectre wrote:Pi would be 6.283... . A full circle of radians. e^(i Pi) = 1, and all sorts of wonderful things.

http://www.math.utah.edu/~palais/pi.pdf

I'd like to cosign this... get it?

Oh, can we redesign the numerals while we're at it?

I did this once for my dyslex/calc gf, started fiddling around with it, wound up adapting it to base 16!
Last edited by Max™ on Sat Jun 23, 2012 6:30 pm UTC, edited 1 time in total.
mu

f5r5e5d
Posts: 104
Joined: Tue May 08, 2012 3:22 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

could the Gibbs cross product and "axial vectors"/"pseudovectors" stay lost? The better replacement: the Grassmann exterior product, Clifford multivectors, bivectors, and explicitly taking the dual of a (3-D) bivector to get the normal vector (the dual relation only works in 3-D).

It clarifies a lot, it scales to higher dimensions, and many tensors have multivector representations - why not start with math that scales nicely to higher-dimensional, coordinate-free physics?

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

Again, unified syntax for operators and functions.

Lack of an operator shouldn't specify multiplication. It should specify concatenation. So if x=[1,2] and y=[3,4,5], then xy equals [1,2,3,4,5]. Concatenating reals doesn't really make sense, but "concatenation" could be extended to a number of cases: vectors, matrices (an l by n and an m by n become an (l+m) by n), sets (union), and functions (gf becomes f(g(x))).
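A sketch of how such a dispatch might look (the `concat` helper and its type tests are hypothetical, following the cases listed above):

```python
# One way to read "adjacency means concatenation" (hypothetical dispatch):

def concat(a, b):
    if isinstance(a, list) and isinstance(b, list):
        # lists/vectors append; lists-of-rows of equal width also cover
        # the matrix case (l-by-n atop m-by-n gives (l+m)-by-n)
        return a + b
    if isinstance(a, set) and isinstance(b, set):
        return a | b                      # sets: union
    if callable(a) and callable(b):
        return lambda t: b(a(t))          # "ab": apply a, then b
    raise TypeError("no concatenation defined for these operands")

x, y = [1, 2], [3, 4, 5]
g = lambda t: t + 1
f = lambda t: 2 * t

assert concat(x, y) == [1, 2, 3, 4, 5]
assert concat({1, 2}, {2, 3}) == {1, 2, 3}
assert concat(g, f)(3) == 8               # gf becomes f(g(x))
```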

Rename "real" and "imaginary" numbers to "parallel" and "perpendicular".

I vote 12 as the default base. 30 (2 * 3 * 5) seems too large (900 entries in the multiplication table) and 6 too small (memorable phone number space would be just 279,936).

Edited - section removed due to misunderstanding of math terminology

skeptical scientist wrote:No. Equality is not an operator, and you can't write a b = for a = b. You just... can't.
You're right: it's a two-place function. It takes two self-comparables and returns a boolean value. I would like to see pure math have separate declarative and descriptive equalities. Though unlike programming, it'd give "=" to descriptive, since it's used as such in most statements in most proofs I've seen. I'd also eliminate the word "let" from every declarative equality. So "Let f(x) = x^2 - 3x where x within R" would become "f(x within R) ¥ x^2 - 3x", with a better declarative equality symbol than "¥". Also "within reals" would be implied if the type of x isn't stated.

I like postfix for one place functions. So for "( (x) g ) f" the information is in the same order as used for computation.
Infix for 2 place, keep parentheses as a disambigifier. Functions with no arguments are treated as first class functions and are themselves arguments.
Three place functions are badly specified and common ones should be restructured to two place. So
Integral (a,b,f(x)) would become
f Integral (a range b )
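A sketch of the two-place version ("f Integral (a range b)" rendered as a call taking the integrand and a range; the `integral` name, the tuple-for-range spelling, and the midpoint rule are my own, purely illustrative):

```python
def integral(f, bounds, n=10000):
    # a simple composite midpoint rule over the range (a, b)
    a, b = bounds
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

result = integral(lambda x: x * x, (0.0, 1.0))
assert abs(result - 1/3) < 1e-6   # integral of x^2 from 0 to 1 is 1/3
```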

We also have conventions where a,b,c... are constants; f,g,h... are functions; x,y,z... are variables; S,T,U... are sets; etcetera. We should have symbols for the main groups that we can append to the names when we want to be explicit.
Last edited by Quizatzhaderac on Fri Mar 28, 2014 8:06 pm UTC, edited 3 times in total.
The thing about recursion problems is that they tend to contain other recursion problems.

Yakk
Poster with most posts but no title.
Posts: 11103
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: "Oh no! We forgot how to say... math... stuff!"

Quizatzhaderac wrote:With respect to relational databases "one-to-one" means bijective, "many-to-one" means injective, "one-to-many" surjective. Using "one-to-one" as injective seems just plain wrong to me.

What?

No really. What?

"many-to-one" means "many input values map to one output value". This is not what injective means. In the DB world, do they actually think that "many-to-one" means injective?

"one-to-many" means that a given value maps to a set of outputs.

"one-to-one" means that each input maps to one output, and that output is not mapped to by anything else.

"surjective" means that every element in the target space can be reached by something in the source space under this function.

"injective" means that each input maps to one output, and that output is not mapped to by anything else.

"bijective" means that each input maps to one output, and that output is not mapped to by anything else, and each element in the output space can be reached by some element in the input space.

All "injective" functions can be made bijective by restricting your output space to the image of the transformation.
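These definitions can be checked mechanically on finite functions; a sketch (the helper names are mine):

```python
def is_injective(f, domain):
    # distinct inputs give distinct outputs
    outs = [f(x) for x in domain]
    return len(set(outs)) == len(outs)

def is_surjective(f, domain, codomain):
    # every element of the target space is reached
    return {f(x) for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

double = lambda x: 2 * x
dom = {0, 1, 2}
assert is_injective(double, dom)
assert not is_surjective(double, dom, {0, 1, 2, 3, 4})
# restrict the output space to the image and it becomes bijective:
image = {double(x) for x in dom}
assert is_bijective(double, dom, image)
```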
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

gmalivuk
GNU Terry Pratchett
Posts: 26577
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

### Re: "Oh no! We forgot how to say... math... stuff!"

Yeah, I think the confusion between the two things one-to-one sometimes means comes from the fact that every injective function is bijective with its image.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

Yakk wrote:This is not what injective means.

Sorry, you're right. It'd been a while since formal set theory and I got my terms wrong.
The thing about recursion problems is that they tend to contain other recursion problems.

Macbi
Posts: 941
Joined: Mon Apr 09, 2007 8:32 am UTC
Location: UKvia

### Re: "Oh no! We forgot how to say... math... stuff!"

Adopt the convention of only writing down expressions that are equal to zero (if you want to mention some other expression then you can put it in quote marks). Then redefine "=" as a very low precedence subtraction sign. Then we could just write e^(i pi) + 1, for example. Moving stuff from one side of the equation to another is revealed as just pushing a "-" sign into a bracket.
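The convention is easy to mimic: a bare expression asserts "equals zero", and "=" compiles to subtraction. A toy check (the `holds` helper and its tolerance are my own):

```python
import cmath

def holds(expr, tol=1e-12):
    # in this convention, writing an expression asserts it equals zero
    return abs(expr) < tol

# "e^(i pi) + 1" as a bare statement:
assert holds(cmath.exp(1j * cmath.pi) + 1)
# "3 + 4 = 7" becomes the expression (3 + 4) - 7:
assert holds((3 + 4) - 7)
assert not holds(1)
```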
Indigo is a lie.
Which idiot decided that websites can't go within 4cm of the edge of the screen?
There should be a null word, for the question "Is anybody there?" and to see if microphones are on.

Dason
Posts: 1311
Joined: Wed Dec 02, 2009 7:06 am UTC
Location: ~/

### Re: "Oh no! We forgot how to say... math... stuff!"

Macbi wrote:Adopt the convention of only writing down expressions that are equal to zero (if you want to mention some other expression then you can put it in quote marks). Then redefine "=" as a very low precedence subtraction sign. Then we could just write e^(i pi) + 1, for example. Moving stuff from one side of the equation to another is revealed as just pushing a "-" sign into a bracket.

Maybe I'm failing to see why you think this would be a good idea... but I don't like it.
double epsilon = -.0000001;

eSOANEM
:D
Posts: 3652
Joined: Sun Apr 12, 2009 9:39 pm UTC
Location: Grantabrycge

### Re: "Oh no! We forgot how to say... math... stuff!"

Quizatzhaderac wrote:Lack of an operator shouldn't specify multiplication. It should specify concatenation. So if x=[1,2] and y=[3,4,5], then xy equals [1,2,3,4,5]. Concatenating reals doesn't really make sense, but "concatenation" could be extended to a number of cases: vectors, matrices (an l by n and an m by n become an (l+m) by n), sets (union), and functions (gf becomes f(g(x))).

Except now your definition is inconsistent depending on how you view matrices.

If you consider them as linear transformations, then concatenating them leads to normal matrix multiplication, whereas you have suggested some sort of augmentation. Clearly this is a bad situation: a natural convention that is ambiguous to the point of unusability is far worse than an arbitrary but clear one.

I don't think it's a bad thought; I just think you need to think it through more. Lack of a specified operator between operators should indicate concatenation (with a consistent order of precedence), but it is not clear what to do with things which are not operators. As matrices can represent transformations, which are operators, it makes sense to treat all matrices this way, and so matrix multiplication is the most natural form of concatenation. Because scalars, by their name, scale vectors, matrices, etc., it seems clear that when placed next to such objects the implied operation should be scalar multiplication; extend this to include two scalars placed next to each other and you recover an identical convention to the current one (except that all operators nest in the same direction).

In short, the current system is not as arbitrary as it seems, but can be derived fairly simply from some idea of "concatenation" and a few simple extensions.
my pronouns are they

Magnanimous wrote:(fuck the macrons)

cyanyoshi
Posts: 392
Joined: Thu Sep 23, 2010 3:30 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

Something that has bugged me for a while is that there are a few conflicting conventions for defining spherical coordinates. In my vector calc class, our convention was:

x = r * cos(θ) * sin(φ)
y = r * sin(θ) * sin(φ)
z = r * cos(φ)

or something like that (I may have mixed up φ and θ). You can imagine standing with your arm (length "r") straight up, facing the x-axis. Then bring your hand down by angle φ in a salute and rotate your body counterclockwise by angle θ. There are several conventions you can take for spherical coordinates, rearranging the sines and cosines, as in geographic-esque coordinates or whatever the heck they use in physics, but the convention I find most appealing comes from hyperspherical coordinates:

x = r * cos(θ)
y = r * sin(θ) * cos(φ)
z = r * sin(θ) * sin(φ)

There are a few things I like about this. For starters, when the third variable φ is zero, this reduces to plain old polar coordinates in 2 dimensions. When both angles are zero, this likewise reduces to the number line along x. Secondly, the Cartesian coordinates x, y, and z just match up better to r, θ, and φ, respectively. You start a distance r along the x-axis. Then you rotate toward the y-axis by angle θ, followed by loosely rotating toward the z-axis (specifically by rotating around the x-axis). This also preserves orientation of the axes when r>0, which is nice. Lastly, this convention readily generalizes to higher dimensions like with regular hyperspherical coordinates (though I personally like to avoid indices unless it is especially convenient to do so).
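For comparison, the two conventions side by side, with the claimed reductions checked (the function names are mine):

```python
import math

def spherical_class(r, theta, phi):
    # the vector-calc convention quoted above
    return (r * math.cos(theta) * math.sin(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(phi))

def hyperspherical(r, theta, phi):
    # the hyperspherical-style convention
    return (r * math.cos(theta),
            r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi))

# phi = 0 reduces to plain polar coordinates (in the x-y plane):
x, y, z = hyperspherical(2.0, math.pi / 3, 0.0)
assert abs(x - 2.0 * math.cos(math.pi / 3)) < 1e-12
assert abs(y - 2.0 * math.sin(math.pi / 3)) < 1e-12
assert abs(z) < 1e-12
# both angles zero reduces to the number line along x:
assert hyperspherical(5.0, 0.0, 0.0) == (5.0, 0.0, 0.0)
```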

Also, there is a nifty little way to write a quaternion in spherical coordinates that I found:

exp(exp(φi) * θk) * r * i

This is rather concise and illustrates the pseudovector-ness of quaternions. Go out r along i, rotate by θ about k, then rotate by φ about i. I'd better stop before I get too off-topic...

Max™
Posts: 1792
Joined: Thu Jun 21, 2012 4:21 am UTC
Location: mu

### Re: "Oh no! We forgot how to say... math... stuff!"

I cosign the hyperspherical coordinate convention as well.
mu

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

eSOANEM wrote:
Quizatzhaderac wrote:Lack of an operator shouldn't specify multiplication. It should specify concatenation. So if x=[1,2] and y=[3,4,5], then xy equals [1,2,3,4,5]. Concatenating reals doesn't really make sense, but "concatenation" could be extended to a number of cases: vectors, matrices (an l by n and an m by n become an (l+m) by n), sets (union), and functions (gf becomes f(g(x))).

Except now your definition is inconsistent depending on how you view matrices.

If you consider them as linear transformations, then concatenating them leads to normal matrix multiplication, whereas you have suggested some sort of augmentation. Clearly this is a bad situation: a natural convention that is ambiguous to the point of unusability is far worse than an arbitrary but clear one.

I don't think it's a bad thought; I just think you need to think it through more. Lack of a specified operator between operators should indicate concatenation (with a consistent order of precedence), but it is not clear what to do with things which are not operators. As matrices can represent transformations, which are operators, it makes sense to treat all matrices this way, and so matrix multiplication is the most natural form of concatenation. Because scalars, by their name, scale vectors, matrices, etc., it seems clear that when placed next to such objects the implied operation should be scalar multiplication; extend this to include two scalars placed next to each other and you recover an identical convention to the current one (except that all operators nest in the same direction).

In short, the current system is not as arbitrary as it seems, but can be derived fairly simply from some idea of "concatenation" and a few simple extensions.

I'll certainly agree my ideas need more thought (I assume none of us are actually ready to replace all mathematical notation). In the case of matrices, I'd say we need separate notation for the matrix as data and the matrix as a function. So A would be a matrix and ¶A would be the function based off A. Personally, skipping the construction of the function from the matrix via an implicit operation seems sloppy; just because a matrix can represent a transformation doesn't mean it must always do so.

However, I would say that if new students didn't intuitively think augmentation was the natural concatenation for matrices and appending the natural concatenation of lists, you simply couldn't use an implicit operator on them.

Anyway, my "concatenation" of functions is new, not my concatenation of lists or strings. As I see it, the intuitive appeal of "concatenation as the default operator" is that concatenating the symbols also concatenates the referents.
The thing about recursion problems is that they tend to contain other recursion problems.

eSOANEM
:D
Posts: 3652
Joined: Sun Apr 12, 2009 9:39 pm UTC
Location: Grantabrycge

### Re: "Oh no! We forgot how to say... math... stuff!"

Quizatzhaderac wrote:I'll certainly agree my ideas need more thought (I assume none of us are actually ready to replace all mathematical notation). In the case of matrices, I'd say we need separate notation for the matrix as data and the matrix as a function. So A would be a matrix and ¶A would be the function based off A. Personally, skipping the construction of the function from the matrix via an implicit operation seems sloppy; just because a matrix can represent a transformation doesn't mean it must always do so.

However, I would say that if new students didn't intuitively think augmentation was the natural concatenation for matrices and appending the natural concatenation of lists, you simply couldn't use an implicit operator on them.

Anyway, my "concatenation" of functions is new, not my concatenation of lists or strings. As I see it, the intuitive appeal of "concatenation as the default operator" is that concatenating the symbols also concatenates the referents.

Treating transformations and matrices as separate entities is going to get very messy: in order to maintain some non-arbitrary sense in this system without introducing ambiguity, you need to be able to unambiguously distinguish between matrices in the abstract sense and transformations, which introduces extra notation and makes the resulting system less elegant. If you want elegance and a lack of arbitrariness and ambiguity, then matrices and transformations must be treated the same.

If your matrix is simply an array of values and doesn't have any of the other structure associated with matrices, then thinking of it as an ordered set makes sense, and augmentation would be the natural operation to perform. However, such an object would not be a matrix, which comes with additional structure such that any group of matrices is isomorphic to some group of transformations. It is this extra structure, and the existence of an isomorphism between matrices and linear transformations, which makes it more reasonable to treat them as transformations than as ordered sets.

And concatenating functions is not a new idea at all. fgh(x) is (where f(x), g(x) and h(x) have been previously defined) a fairly common abbreviation for f(g(h(x))). As I said, what to do with functions, operators, transformations etc. is easy, what to do with passive objects (such as matrices when not acting on other object as operators) is where it becomes tricky and, as I've argued, it's hard to see a reasonable way to determine what "concatenation" means in all such contexts without producing something identical to current convention (except with consistent direction of nesting).
my pronouns are they

Magnanimous wrote:(fuck the macrons)

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

eSOANEM wrote:If your matrix is simply an array of values and doesn't have any of the other structure associated with matrices.... however such an object would not be a matrix, which comes with additional structure such that any group of matrices is isomorphic to some group of transformations. It is this extra structure, and the existence of an isomorphism between matrices and linear transformations, which makes it more reasonable to treat them as transformations than as ordered sets.

It appears you're using a consistent definition of "matrix". Kudos, that is a good thing. Unfortunately, it seems I have a consistently different definition than yours. I default to the programming definition of matrix, which is "an array of same-length arrays, or an array of same-dimensioned matrices, or an array of references to one of the previous two". In programming languages, square brackets are generally reserved for this, with () brackets not used for collections, to make the difference more easily read.

Now, for a context where you only write an array of arrays of reals for its "extra structure", it makes sense both to define and use it in a way where that extra structure is central. In my example (if x=[1,2] and y=[3,4,5], then xy equals [1,2,3,4,5]) the square brackets were intended to indicate arrays (or totally ordered sets if you prefer). But for a physicist doing vector transformations it would make sense to say "let A be the linear transformation [[1,2][3,4]]". I'm not a physicist, so "[[1,2][3,4]]" doesn't suggest a linear transformation to me.

As for "If you want elegance and a lack of arbitrariness and ambiguity then matrices and transformations must be treated the same": even restricting to pure math, it seems pretty arbitrary to me to assume a matrix is used for a linear transformation instead of representing a graph, or a system of linear equations, or anything else. If you allow isomorphism of referents to be equality of symbols, you're always going to have some arbitrariness when you invoke some property the referents are not isomorphic on. Also, since computability is important to me, I consider distinguishing isomorphs to be worthy of extra notation.

In short: I say you can let A be a list, a linear transformation, a system of linear equations, or whatever. But you have to pick one and note if/when you switch. So you could say: let A = ¶[[3]], let B = ¶[[4]], let x = (1,2,3), let y = (4,5), using () to denote an ordered set. AB would equal ¶[[12]], xy = (1,2,3,4,5), and xB would be "wait, what operation are you doing between those two?"
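A sketch of the data/function split (`linmap` is my spelling of ¶; the 1x1 example follows the post):

```python
def linmap(A):
    # "¶A": promote an array-of-rows A to a linear map on column vectors
    def apply(v):
        return [sum(row[k] * v[k] for k in range(len(v))) for row in A]
    return apply

A = linmap([[3]])
B = linmap([[4]])
AB = lambda v: A(B(v))           # composing the two functions...
assert AB([1]) == [12]           # ...acts like ¶[[12]], as in the post

x, y = (1, 2, 3), (4, 5)
assert x + y == (1, 2, 3, 4, 5)  # while ordered tuples just concatenate
```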
Last edited by Quizatzhaderac on Thu Jun 04, 2015 5:20 pm UTC, edited 3 times in total.
The thing about recursion problems is that they tend to contain other recursion problems.

eSOANEM
:D
Posts: 3652
Joined: Sun Apr 12, 2009 9:39 pm UTC
Location: Grantabrycge

### Re: "Oh no! We forgot how to say... math... stuff!"

Ah, I spotted a flaw in my logic. I was thinking only of square matrices. Other matrices can be made to be isomorphic to linear transformations by augmenting them with zeroes to be square but then you're not using the original matrices.

Hmmm... This does lead to problems. Concatenating ordered tuples (or sets generally, for that matter) should lead to augmentation (or union for sets); this is clear. However, as [1,2,3,4,5] is a matrix (albeit one you'd probably expect to be treated as a vector instead), if the operation should depend solely on the object then it should be augmentation for all matrices.

As I've said before, this seems like it would be horrendously messy because of the situation with linear transformations (and so the convention would depend on whether "A" referred to a transformation or to the matrix being used to do calculations with that transformation). From a physics point of view, I quite like abuses of notation that reinforce calculational aids (I write the curl of a vector field as a cross product, I use Leibniz's notation for derivatives and integrals and then treat it as a fraction, and I write inverse trig functions with a superscript -1), so it seems most natural to treat the actual object (the transformation) the same as the calculational aid; having square matrices augment but transformations apply in turn does not sit easy with me.
my pronouns are they

Magnanimous wrote:(fuck the macrons)

Yakk
Poster with most posts but no title.
Posts: 11103
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: "Oh no! We forgot how to say... math... stuff!"

eSOANEM wrote:Ah, I spotted a flaw in my logic. I was thinking only of square matrices. Other matrices can be made to be isomorphic to linear transformations by augmenting them with zeroes to be square but then you're not using the original matrices.

Um, what? All rectangular matrices can be treated as linear transformations. Just non-square ones go between spaces of different dimension.

...

And concatenation is an obscure operation on matrices, while multiplication is useful ridiculously often. In general, concatenation isn't all that interesting of an operation. Making it "cheaper" seems relatively pointless.

Rings are common. Making a+bc take fewer symbols has payoff all over the place. Making adjacency be concatenation is a bad idea, outside of maybe some computer programming cases.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

eSOANEM
:D
Posts: 3652
Joined: Sun Apr 12, 2009 9:39 pm UTC
Location: Grantabrycge

### Re: "Oh no! We forgot how to say... math... stuff!"

Yakk wrote:
eSOANEM wrote:Ah, I spotted a flaw in my logic. I was thinking only of square matrices. Other matrices can be made to be isomorphic to linear transformations by augmenting them with zeroes to be square but then you're not using the original matrices.

Um, what? All rectangular matrices can be treated as linear transformations. Just non-square ones go between spaces of different dimension.

I was thinking about this. I used transformation poorly. What I meant were transformations preserving the dimension (although augmenting with zeroes cheats a bit here really).

With that in mind, I stand by my original logic that as matrices under multiplication are isomorphic to transformations under concatenation, it makes more sense to have the natural operation of matrices be multiplication rather than augmentation which, as Yakk says, is not often used.

This however does lead to inconsistencies as an ordered tuple could be interpreted as a matrix so you'd need to distinguish the two notationally (such as using different brackets).

...

That reminds me: I'd stop all this nonsense about whether matrices use square brackets or parentheses. I'd pick one and stick with it; the other can go to ordered tuples. Vectors would use the same brackets as matrices.
my pronouns are they

Magnanimous wrote:(fuck the macrons)

Posts: 36
Joined: Mon May 28, 2012 7:11 pm UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

I don't know if it's been said already, but to me the gamma function should be put out of common use and substituted with the Pi function used by Gauss, which to my understanding is more elegant: the integral for it is simpler, and it matches up with the factorial function. It just seems more natural to use. However, I am not familiar with many formulas involving the gamma function, so perhaps in the end it makes up for it somehow...

cyanyoshi
Posts: 392
Joined: Thu Sep 23, 2010 3:30 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

A few common reasons are that the pole at zero rather than negative one is somehow more elegant, or that the recursion formula looks cleaner [compare Γ(n+1)=n*Γ(n) to (n+1)!=(n+1)*n!]. But then there's also the matter of some other formulas looking prettier, particularly the one involving the beta function.

The ubiquitous factorial function exactly matches up with the pi function, which is equal to the popular gamma function shifted by one unit. This is just absurd. Somehow, one function over the course of mathematical history has split apart into three functions that are essentially the same (apart from the domains). This is similar to the e^x vs. exp(x) situation, where the same function can be understood multiple ways. (In this case, the number "e" literally multiplied by itself x times vs. the more general power series 1 + x + x^2/2 + ... + x^n/n! + ...) These two interpretations are used interchangeably, so why can't it be the same for n factorial? Making specialized functions for high-level math a bit uglier is a small price to pay for being consistent with THE factorial (or pi) function. Practically speaking, I've encountered the factorial function a lot more than the gamma function in my schooling as an undergrad non-math major. If I had used the gamma function all along, Taylor series, combinatorics, etc. would have been less appealing.

It seems to be kind of like the π vs. τ debate, but if τ were in widespread use already.
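For what it's worth, the shift is easy to check with Python's standard library (math.gamma and math.factorial; the power-series comparison below just truncates at 40 terms, which is plenty for small x):

```python
import math

# The Pi function is Pi(n) = gamma(n + 1), which matches n! directly.
for n in range(6):
    assert math.gamma(n + 1) == math.factorial(n)

# exp(x): "e multiplied by itself x times" vs. the power series
# 1 + x + x^2/2 + ... + x^n/n! + ... -- the two agree.
x = 2.5
series = sum(x**k / math.factorial(k) for k in range(40))
assert abs(series - math.exp(x)) < 1e-9
```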

Max™
Posts: 1792
Joined: Thu Jun 21, 2012 4:21 am UTC
Location: mu

### Re: "Oh no! We forgot how to say... math... stuff!"

Ugly math doesn't go far?
mu

Cosmologicon
Posts: 1806
Joined: Sat Nov 25, 2006 9:47 am UTC
Location: Cambridge MA USA
Contact:

### Re: "Oh no! We forgot how to say... math... stuff!"

cyanyoshi wrote:the recursion formula looks cleaner [compare Γ(n+1)=n*Γ(n) to (n+1)!=(n+1)*n!].
I can't speak to the overall question of Gamma vs Pi, but I don't like this. I don't want a formula for (n+1)!, I want a formula for n!, which is how it's usually defined. So the comparison you should be making is Γ(n)=(n-1)*Γ(n-1) vs. n! = n*(n-1)!.

Posts: 36
Joined: Mon May 28, 2012 7:11 pm UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

cyanyoshi wrote:A few common reasons are that the pole at zero rather than negative one is somehow more elegant, or that the recursion formula looks cleaner [compare Γ(n+1)=n*Γ(n) to (n+1)!=(n+1)*n!]. But then there's also the matter of some other formulas looking prettier, particularly the one involving the beta function.

Although I am not familiar with the beta function, I realized that if you add one to the x and y values of the beta function, for lack of a better word, the integrals look nicer and B(x,y) becomes (x!*y!)/(x+y+1)! (I think). That isn't quite as clean as the standard beta function's equivalent using the gamma function, but it may be worth it for the nicer integrals and for matching the nice factorial function. The recurrence relation could also be n! = n*(n-1)!, which is just as clean as the one for the gamma function you have. And poles at all negative integers are perhaps more elegant than poles at all non-positive integers. The reason some formulas look better with the gamma function could be that those formulas were designed around the commonly-used gamma function.
Still, the most compelling reasons for the change are that the Pi function has a more elegant integral definition and that it matches up with the beloved factorial function.
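For what it's worth, the shifted identity can be sanity-checked numerically; here's a quick sketch (beta here is just a local helper built from the standard gamma-based definition, not a library call):

```python
import math

def beta(a, b):
    # Standard beta function via gamma: B(a, b) = gamma(a) * gamma(b) / gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Shifted identity from the post: B(x + 1, y + 1) = x! * y! / (x + y + 1)!
for x in range(6):
    for y in range(6):
        lhs = beta(x + 1, y + 1)
        rhs = math.factorial(x) * math.factorial(y) / math.factorial(x + y + 1)
        assert math.isclose(lhs, rhs, rel_tol=1e-12)
```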

PM 2Ring
Posts: 3664
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

### Re: "Oh no! We forgot how to say... math... stuff!"

cyanyoshi wrote:A few common reasons are that the pole at zero rather than negative one is somehow more elegant, or that the recursion formula looks cleaner [compare Γ(n+1)=n*Γ(n) to (n+1)!=(n+1)*n!]. But then there's also the matter of some other formulas looking prettier, particularly the one involving the beta function.

The ubiquitous factorial function exactly matches up with the pi function, which is equal to the popular gamma function shifted by one unit. This is just absurd. Somehow, one function over the course of mathematical history has split apart into three functions that are essentially the same (apart from the domains).
[etc]

I tend to agree that it's a bit silly. I never forget that gamma is factorial shifted by one, but I have to admit that I don't always remember which way it's shifted.

Another important formula involving the gamma function is the one for the Riemann zeta function (see http://en.wikipedia.org/wiki/Riemann_zeta_function#The_functional_equation), and it'd look OK using the Pi function instead.

FWIW, (n+1)! = n*(n! + (n-1)!); this recursive definition is also satisfied by the subfactorial function; see http://en.wikipedia.org/wiki/Subfactorial#Counting_derangements.
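Both functions can be checked against that shared recursion with a short sketch (the subfactorial here is implemented from the derangement recursion !n = n*!(n-1) + (-1)^n on the linked page):

```python
import math

def subfactorial(n):
    # Number of derangements: !n = n * !(n-1) + (-1)^n, with !0 = 1
    if n == 0:
        return 1
    return n * subfactorial(n - 1) + (-1) ** n

# Factorial and subfactorial both satisfy a(n+1) = n * (a(n) + a(n-1))
for f in (math.factorial, subfactorial):
    for n in range(1, 12):
        assert f(n + 1) == n * (f(n) + f(n - 1))
```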

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

Yakk wrote:And concatenation is an obscure operation on matrices, while multiplication is useful ridiculously often. In general, concatenation isn't all that interesting of an operation. Making it "cheaper" seems relatively pointless.

Rings are common. Making a+bc take less symbols has payoff all over the place. Making adjacency be concatenation is a bad idea, outside of maybe some computer programming cases.

That's a fair point, I'll admit there's a good chance I'm the only one fussy enough to not want the most common operator to be the default one.

Anyway what should we do about brackets? I think we might actually want to go beyond the three types we currently use (Does any pure math use <> brackets?). We currently use them for:
1. sets (unordered)
2. lists (fully ordered)
3. clarifying order of operations
4. specifying function invocation
5. vectors
6. matrices
7. ranges
8. types (maybe programming only)
9. asides (technically prose, not math notation)
Please correct me if I missed any. Some of these can obviously be expressed in terms of others; the function invocation (for example) is just a list of arguments.

I'd say one set of brackets should be for order of operations only. This has been tried in programming for readability and I think it has worked well.

One for lists, and lists of lists, where the order written is intended, and without assuming extra semantics.

One for things that are written out like lists, but are not lists: like linear transformations, points in permutation space, graphs, linear systems, etc. You state (or let context imply) that A is a linear transformation or whatever at the start and just do all your calculations with the isomorphic representations. So eSOANEM could use this the way he currently uses [] brackets. Sets could be considered a special case of this, but I think most would consider them important enough to demand their own pair of brackets.

My previous function notation would eliminate the need for parens on one-place ( 4 f ) and two-place functions ( 4 f 3 ). I suppose if someone really needed a three-or-more-place function they could explicitly use a list of arguments.

Ranges are a whole 'nuder bag of meerkats. Ceteris paribus it would be nice to have universally matching brackets, but the current notation is sufficient and changing it involves examining most everything about specifying ranges.
The thing about recursion problems is that they tend to contain other recursion problems.

Yakk
Poster with most posts but no title.
Posts: 11103
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: "Oh no! We forgot how to say... math... stuff!"

This is far from pure math (quite applied!), but it is math, and it uses <> brackets:
http://en.wikipedia.org/wiki/Bra-ket_notation

In pure math, < ., . > is a standard notation for an inner product.

In effect, given a specific sub-domain of mathematics, a different set of operators and notation is appropriate. This is similar to having namespaces in a program, but imagine if every program in the universe lived in the same application, and they only avoided running into each other's symbology via the namespace mechanism.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

Hmm... can the inner product have potentially more than two arguments? If not I'd say it can be done without using up a set of brackets.

So as I count we've got 1) a pair of brackets for order of operations, 2) a pair for lists, and 3) a pair for not-a-list but represented isomorphically to one.
For several different contexts people would want subtypes of two and three with special semantics. Now, if we want to determine how many types of brackets we need we'd need to know the worst case for needing the most types of different mathematical collections at once.
The thing about recursion problems is that they tend to contain other recursion problems.

PM 2Ring
Posts: 3664
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

### Re: "Oh no! We forgot how to say... math... stuff!"

Since we're re-inventing mathematical notation from the ground up, we could eliminate brackets used to indicate precedence by getting rid of all infix operators. My preference is for RPN, but I could live with prefix notation.

FWIW, in PostScript (which is an RPN programming language), the opening brackets [ (for array construction) and << (for dictionary construction) are synonyms for the mark operator; the object type is determined by the closing ] or >>. In other words, the "flavour" of the opening bracket is not significant, although I suppose that using matching bracket flavours does improve readability at the cost of doubling the number of symbols required to denote different types of grouping.
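As a sketch of why fixed arity makes pure postfix unambiguous, here's a minimal RPN evaluator for binary arithmetic (the token handling is mine, not PostScript's):

```python
import operator

# Minimal RPN (postfix) evaluator: operands push, binary operators pop two.
# It only works because every operator here has a fixed arity of two --
# exactly the property that breaks down for variable-argument functions.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()          # right operand was pushed last
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    (result,) = stack                # a well-formed expression leaves one value
    return result

# (3 + 4) * 2 needs parens in infix, but in RPN it's just:
assert eval_rpn("3 4 + 2 *".split()) == 14.0
```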

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

Pure math tends to use first-class functions, which can be ambiguous with post-fix notation. For instance "ʃ_a^b f(x)" would be "a b f ʃ" with post-fix. The "a b f" part usually means you're applying f to a and b. You could say it's context sensitive, but having to read from right to left, then left to right, seems to defeat the point of using post-fix.

At any rate, even if it's not technically necessary, at some point you're going to want a disambigufier for readability. "x2^2 3/*3x*-5+x3-/" is unambiguous, but it's still hard on the eyes.
Last edited by Quizatzhaderac on Thu Aug 21, 2014 3:30 pm UTC, edited 1 time in total.
The thing about recursion problems is that they tend to contain other recursion problems.

PM 2Ring
Posts: 3664
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

### Re: "Oh no! We forgot how to say... math... stuff!"

Quizatzhaderac wrote:Pure math tends to use first-class functions, which can be ambiguous with post-fix notation. For instance "ʃ_a^b f(x)" would be "a b f ʃ" with post-fix. The "a b f" part usually means you're applying f to a and b. You could say it's context sensitive, but having to read from right to left, then left to right, seems to defeat the point of using post-fix.

At any rate, even if it's not technically necessary, at some point you're going to want a disambigufier for readability. "x2^2 3/*3x*-5+x3-/" is unambiguous, but it's still hard on the eyes.

Ok. And pure postfix is only unambiguous when the number of arguments that a function takes is fixed, if the number of args can vary then some kind of delimiter is required. With your integral example, I'd class the limits as arguments of the integration operator, rather than as args of the function to be integrated, so I'd write it as f a b ʃ or maybe even f [a b] ʃ to make the integration limits obvious.

I agree that long streams of RPN can get heavy on the eyes, even when you're very familiar with RPN, but I find that judicious use of spacing can help. When writing PostScript code I often break long expressions up over several lines. And I guess a comma (or similar) could be used to provide hints to the reader to assist them with breaking up big expressions. The comma would have no significance, it'd just be another flavour of white space.
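To illustrate, the mark mechanism can be sketched in a few lines of Python (a toy model, not real PostScript semantics: '[' plays the role of the mark operator and ']' collects everything down to it, so groups need no fixed arity):

```python
# Toy model of PostScript's mark mechanism: '[' pushes a sentinel and ']'
# pops everything down to it, so a group can hold any number of items
# without the evaluator needing to know an arity in advance.
MARK = object()

def run(tokens):
    stack = []
    for tok in tokens:
        if tok == "[":
            stack.append(MARK)
        elif tok == "]":
            items = []
            while stack[-1] is not MARK:
                items.append(stack.pop())
            stack.pop()                          # discard the mark itself
            stack.append(list(reversed(items)))  # restore left-to-right order
        else:
            stack.append(tok)
    return stack

# Nested groups work for free:
assert run(["[", "a", "[", "b", "]", "]"]) == [["a", ["b"]]]
```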

Quizatzhaderac
Posts: 1656
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

### Re: "Oh no! We forgot how to say... math... stuff!"

PM 2Ring wrote:Ok. And pure post-fix is only unambiguous when the number of arguments that a function takes is fixed, if the number of args can vary then some kind of delimiter is required. With your integral example, I'd class the limits as arguments of the integration operator, rather than as args of the function to be integrated, so I'd write it as f a b ʃ or maybe even f [a b] ʃ to make the integration limits obvious.

I agree that the limits are really arguments to the integration operator. Maybe in your example the brackets would be better around the whole quadruplet. By making the function the first argument you've ruled out (in post-fix world) a and b being arguments to f, but ambiguity could still come from the external context. For instance if we had (infix) k·ʃ_a^b f(x), that would become k f a b ʃ * without delimitation. If we used standard parens we'd probably want k ( f a b ʃ ) *.

As to variable-argument functions, I think that one can actually be ignored for pure math. I don't think I've ever actually seen function overloading outside of programming. Pretty much anything that even looks like it is a function taking a list, set, vector, or some-such.
The thing about recursion problems is that they tend to contain other recursion problems.

Sizik
Posts: 1230
Joined: Wed Aug 27, 2008 3:48 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

Quizatzhaderac wrote:I agree that the limits are really arguments to the integration operator. Maybe in your example the brackets would be better around the whole quadruplet. By making the function the first argument you've ruled out (in post-fix world) a and b being arguments to f, but ambiguity could still come from the external context. For instance if we had (infix) k·ʃ_a^b f(x), that would become k f a b ʃ * without delimitation. If we used standard parens we'd probably want k ( f a b ʃ ) *.

As to variable-argument functions, I think that one can actually be ignored for pure math. I don't think I've ever actually seen function overloading outside of programming. Pretty much anything that even looks like it is a function taking a list, set, vector, or some-such.

It's less ambiguous if you include the dx in there (xd in postfix if you consider d to be a function on x, instead of dx being a symbol), since that usually acts as the "ending bracket" for integrals.
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

cjquines
Posts: 61
Joined: Thu Jul 21, 2011 5:30 am UTC

### Re: "Oh no! We forgot how to say... math... stuff!"

Everything would still be base 10, but whatever. Except that numbers will have much shorter names. Compare:
One, two, three, four, five, six, seven, eight, nine
And
Un, Bi, Tri, Quad, Pen, Sex, Sept, Oct, Non

Another thing is that there will be no numbers after the decimal point. People will learn with fractions instead. This is to prepare for the fraction lessons in the future. Algebra will also start using Greek letters instead of regular ones for variables. Oh, and it will be called "cjquines".

eSOANEM
:D
Posts: 3652
Joined: Sun Apr 12, 2009 9:39 pm UTC
Location: Grantabrycge

### Re: "Oh no! We forgot how to say... math... stuff!"

cjquines wrote:Everything would still be base 10, but whatever. Except that numbers will have much shorter names. Compare:
One, two, three, four, five, six, seven, eight, nine
And
Un, Bi, Tri, Quad, Pen, Sex, Sept, Oct, Non

Other than for seven, these are no shorter in terms of speech (or writing as they can just be written 1, 2, 3, 4 etc.). Changing seven to sept (or probably actually sep or set for ease of pronunciation) isn't a bad call though.

cjquines wrote:Another thing is that there will be no numbers after the decimal point. People will learn with fractions instead. This is to prepare for the fraction lessons in the future.

How would you give approximate values? How would you compare errors between measured values? This is a terrible idea. Decimals are fundamentally necessary for any sort of science or statistics, and removing them would make maths a purely academic pursuit which could never have practical benefits (rather than an academic pursuit which has a tendency to find practical uses and so produce practical benefits).

cjquines wrote:Algebra will also start using Greek letters instead of regular ones for variables. Oh, and it will be called "cjquines".

There aren't enough Greek letters to apply this nicely to science (where Latin and Greek letters are both used and there's already a huge number of symbols in use for several things). Unless you simply mean to replace the "default" symbols for variables in algebra, when kept separate from any sort of physical quantities (so "x", "y" and "z"), with Greek letters (which wouldn't mess anything up, just not really change anything), this is a terrible idea which would make science harder to do (or we'd probably just borrow other symbols, like Cyrillic or Coptic or one of the kanas).
my pronouns are they

Magnanimous wrote:(fuck the macrons)