
### Re: Math: Fleeting Thoughts

Posted: **Fri Jul 24, 2015 2:21 pm UTC**

by **The Great Hippo**

I've been reading graph theory -- the definition of adjacency keeps giving me trouble. Everything I read describes adjacency as one of two things: Vertices connected by an edge, or vertices that share an edge. However, both 'connected' and 'shared' confuse me when it comes to directed edges:

If X and Y are connected by the undirected edge e, X is adjacent to Y, and Y is adjacent to X. However: If e is a directed edge that goes from X to Y -- while it's correct that X is adjacent to Y, is it necessarily true that Y is adjacent to X? Is this considered 'sharing' an edge? Is Y connected to X, even though you cannot get to X from Y?

Also: If a vertex has an edge loop (an edge that connects a vertex to itself), is that vertex adjacent to itself?

### Re: Math: Fleeting Thoughts

Posted: **Fri Jul 24, 2015 2:45 pm UTC**

by **jaap**

The Great Hippo wrote:I've been reading graph theory -- the definition of adjacency keeps giving me trouble. Everything I read describes adjacency as one of two things: Vertices connected by an edge, or vertices that share an edge. However, both 'connected' and 'shared' confuse me when it comes to directed edges:

If X and Y are connected by the undirected edge e, X is adjacent to Y, and Y is adjacent to X. However: If e is a directed edge that goes from X to Y -- while it's correct that X is adjacent to Y, is it necessarily true that Y is adjacent to X? Is this considered 'sharing' an edge? Is Y connected to X, even though you cannot get to X from Y?

Also: If a vertex has an edge loop (an edge that connects a vertex to itself), is that vertex adjacent to itself?

I'm no expert on this, but I don't think I've seen "share an edge" ever used in directed graphs. For example, the Wikipedia article on directed graphs never uses that phrase.

Also, the only places on that page where connected is used are the first sentence (where it defines a directed graph in terms of an undirected graph) and the section about weakly/strongly connected graphs. It doesn't use connected to describe two nodes, and instead uses the terms successor/predecessor. It doesn't even use adjacent to describe two nodes, though you could argue that the use of adjacency matrix implies that adjacency goes only one way along a directed edge.

I would say that a node with a loop (directed or not) is indeed adjacent to itself.

### Re: Math: Fleeting Thoughts

Posted: **Fri Jul 24, 2015 2:55 pm UTC**

by **Flumble**

From MathWorld:

Formally, an adjacency relation is any relation which is irreflexive and symmetric.

In other words: you can only speak of "adjacent vertices a and b" when a and b are different vertices and the edges between them are pointing both ways (or the graph is undirected).

Connectivity, as far as I can tell, is reserved for graph components and "shared" should be banned from graph theory entirely.

pseudo-edit: semi-ninja'd
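The MathWorld definition quoted above can be sanity-checked mechanically. A minimal sketch (the function name and toy vertex labels are mine, not from any post):

```python
# Checking the MathWorld definition: an adjacency relation is any relation
# that is irreflexive and symmetric. Vertex labels here are invented.

def is_adjacency_relation(vertices, relation):
    """`relation` is a set of ordered pairs over `vertices`."""
    irreflexive = all((v, v) not in relation for v in vertices)
    symmetric = all((b, a) in relation for (a, b) in relation)
    return irreflexive and symmetric

undirected = {("X", "Y"), ("Y", "X")}  # an undirected edge gives both pairs
one_way = {("X", "Y")}                 # a directed edge X -> Y gives only one
loop = {("Z", "Z")}                    # a loop is reflexive

print(is_adjacency_relation({"X", "Y"}, undirected))  # True
print(is_adjacency_relation({"X", "Y"}, one_way))     # False: not symmetric
print(is_adjacency_relation({"Z"}, loop))             # False: not irreflexive
```

Note that under this strict definition a loop fails the irreflexivity test, which is exactly where it parts ways with the intuition that a looped vertex is adjacent to itself.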

### Re: Math: Fleeting Thoughts

Posted: **Thu Jul 30, 2015 11:32 pm UTC**

by **Elmach**

I'm pretty sure that the term "shared" is an informal term (as is "connected" in that post) used to describe vertex adjacency, and is thus only seen with that meaning in introductory textbooks.

Unless that is the technical term for two vertices having their sets of edges nondisjoint.

Which doesn't seem to have much use.

### Re: Math: Fleeting Thoughts

Posted: **Tue Sep 15, 2015 11:36 am UTC**

by **mosgi**

Fleeting thought: Let's say you have some probability distribution X, but you can't sample it directly. Instead, you can only sample the sum of N independent variables, each with distribution X. How would you estimate, say, the variance of X based on that?

Context: I've written a (very basic) benchmarking program for some code I've been working on recently. I don't need to know the answer for this project, but it came up as something that would be useful in the future.

### Re: Math: Fleeting Thoughts

Posted: **Tue Sep 15, 2015 12:32 pm UTC**

by **doogly**

Can you look at M different sums of size N?

### Re: Math: Fleeting Thoughts

Posted: **Tue Sep 15, 2015 1:35 pm UTC**

by **Flumble**

For uncorrelated variables, the variance of the sum is equal to the sum of the variances.^{[citation here]} So if your N independent variables have the same distribution, you simply divide the total variance by N. This should also hold for the mean^{[citation needed]}, while the deviation is divided by √N.

If it's a weigh(t)ed sum, my guess is that you also have to multiply by the weights. (The variance and mean look too linear for that to require exponents or whatever.)
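A quick numerical check of the divide-by-N claim, as a sketch; the uniform distribution, the seed, and the sample sizes are arbitrary choices, not from the thread:

```python
import random

# If S = X_1 + ... + X_N with the X_i i.i.d., then Var(S) = N * Var(X) and
# E[S] = N * E[X], so dividing the sample statistics of the sums by N
# estimates the statistics of X itself.

random.seed(42)
N = 10         # summands per observed sample
M = 100_000    # number of observed sums

def sample_sum():
    # X ~ Uniform(0, 1): true mean 1/2, true variance 1/12 ≈ 0.0833
    return sum(random.random() for _ in range(N))

sums = [sample_sum() for _ in range(M)]
mean_S = sum(sums) / M
var_S = sum((s - mean_S) ** 2 for s in sums) / (M - 1)

print(mean_S / N)  # ≈ 0.5    (estimate of E[X])
print(var_S / N)   # ≈ 0.0833 (estimate of Var(X))
```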

### Re: Math: Fleeting Thoughts

Posted: **Tue Sep 15, 2015 2:28 pm UTC**

by **mosgi**

doogly wrote:Can you look at M different sums of size N?

What do you mean by that? If it's just "Can you sample M different values from the sum distribution", then yeah, you can do that.

Flumble wrote:For uncorrelated variables, the variance of the sum is equal to the sum of the variances.^{[citation here]} So if your N independent variables have the same distribution, you simply divide the total variance by N. This should also hold for the mean^{[citation needed]}, while the deviation is divided by √N.

If it's a weigh(t)ed sum, my guess is that you also have to multiply by the weights. (The variance and mean look too linear for that to require exponents or whatever.)

That's simpler than I was expecting. Now that you mention it, I do remember that coming up when I took probability. It was just long enough ago that I forgot.

I'm pretty sure the mean works that way too. Not sure about weighted sums, but I don't need that for this case.

### Re: Math: Fleeting Thoughts

Posted: **Wed Sep 16, 2015 4:10 pm UTC**

by **cyanyoshi**

Flumble wrote:For uncorrelated variables, the variance of the sum is equal to the sum of the variances.^{[citation here]} So if your N independent variables have the same distribution, you simply divide the total variance by N. This should also hold for the mean^{[citation needed]}, while the deviation is divided by √N.

If it's a weigh(t)ed sum, my guess is that you also have to multiply by the weights. (The variance and mean look too linear for that to require exponents or whatever.)

Variance has some lovely properties, but it's not linear. For example, var(a*X) = a^{2}*var(X). It looks linear only because the variables you are adding together are independent.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 1:35 pm UTC**

by **xyweasel**

I never really liked the number 5.

To me, 1 is barely a number, and 2 is just two. Both are still really cool, don't get me wrong, but the first number-y number is 3 for me, and I like that because three points define a plane. 4 is a power of 2, and I like 2 because it's the only even prime.

And then there's 5. It's just there, acting like it owns the place.

It gets a special little bump on number pads. It bullies all the other numbers that end in 5 so that they can't be prime. It's hard for children to draw. It's hard for most people to draw a nice-looking pentagon by hand. Atomic number 5 belongs to boron, and this number is really borong.

Anyway, that's just a very fleeting thought about an integer. It doesn't really matter.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 1:40 pm UTC**

by **doogly**

And 5 dimensions is where topology starts to get boring again. Snooooooze.

(Though the fact that it's boring is interesting. Paradox!!?!!???)

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 3:41 pm UTC**

by **Xanthir**

Platonic solids are just so weird. "Infinite" of them in 1 and 2 dimensions. Five in 3 dimensions. Six in 4 dimensions - an analog of all the 3d ones, and then a new 24-volumed one. Then for 5+, back down to three forever.

That little 4d bump is just the weirdest thing.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 4:26 pm UTC**

by **doogly**

Four is just the wackiest place. I am so glad we live here.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 4:27 pm UTC**

by **Xanthir**

Here in 3d, where we have knots.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 6:26 pm UTC**

by **Xenomortis**

You can have knots in higher dimensions, you're just not usually "knotting" S^{1}.

### Re: Math: Fleeting Thoughts

Posted: **Fri Sep 18, 2015 6:33 pm UTC**

by **doogly**

And you get to do fun surgery, there's lots of goodies I don't understand.

### Re: Math: Fleeting Thoughts

Posted: **Sat Sep 19, 2015 3:13 pm UTC**

by **xyweasel**

Xanthir wrote:Then for 5+, back down to three forever.

Man, 5 thinks it can tell everyone what to do. Fuck 5.

### Re: Math: Fleeting Thoughts

Posted: **Sun Sep 20, 2015 3:07 am UTC**

by **Dason**

xyweasel wrote:Xanthir wrote:Then for 5+, back down to three forever.

Man, 5 thinks it can tell everyone what to do. Fuck 5.

I heard 5 saying some pretty harsh things about your mother the other day too. As if you needed any more reason to hate it!

### Re: Math: Fleeting Thoughts

Posted: **Sun Sep 20, 2015 3:12 am UTC**

by **xyweasel**

Dason wrote:I heard 5 saying some pretty harsh things about your mother the other day too. As if you needed any more reason to hate it!

I'd fight back, but I heard 5's best friend is 10, and he's like twice the size he is.

### Re: Math: Fleeting Thoughts

Posted: **Sun Sep 20, 2015 9:52 am UTC**

by **Flumble**

xyweasel wrote:Dason wrote:I heard 5 saying some pretty harsh things about your mother the other day too. As if you needed any more reason to hate it!

I'd fight back, but I heard 5's best friend is 10, and he's like twice the size he is.

Think of him as just a 10. You can get him down with any bigger 10 – even a proof that there are infinitely many 10s larger than him (and with infinitely many more prime factors) will make him run away.

5 is great though, as you can draw pentagrams with them to summon the most wicked beings and to put ~~pants~~ ~~pents~~ hexes on people.

### Re: Math: Fleeting Thoughts

Posted: **Mon Sep 21, 2015 12:54 pm UTC**

by **Xenomortis**

Flumble wrote:5 is great though, as you can draw pentagrams with them to summon the most wicked beings and to put ~~pants~~ ~~pents~~ hexes on people.

Surely hexes require the aid of 5's next door neighbour?

### Re: Math: Fleeting Thoughts

Posted: **Mon Sep 21, 2015 1:57 pm UTC**

by **cyanyoshi**

I never thought I would be so upset about someone saying bad things about the number 5. It's funny how human brains work.

### Re: Math: Fleeting Thoughts

Posted: **Mon Sep 21, 2015 6:54 pm UTC**

by **Flumble**

Xenomortis wrote:Flumble wrote:5 is great though, as you can draw pentagrams with them to summon the most wicked beings and to put ~~pants~~ ~~pents~~ hexes on people.

Surely hexes require the aid of 5's next door neighbour?

That's what I was thinking too, but then I remembered – actually I asked etymonline – that a hex has nothing to do with sixes, but all with Hexe, German(ic) for a witch. See also Five-Minute Comics: Part 2 (The Comicking), in which Darth Vader is drawing a pentagram to hex a non-believer.

### Re: Math: Fleeting Thoughts

Posted: **Sat Feb 06, 2016 6:58 am UTC**

by **Lothario O'Leary**

Hackfleischkannibale wrote:@ PM2Ring: Thanks! And you are right that it was Gauss who found the sum over the first 100 integers at the age of six, but:

His point was to one-up the legend of Euler's youth and show how you could sum up the first hundred tenth powers in under a minute by hand.

That is a different story, and I'm sure it was a Bernoulli who is said to have found the answer in six and a half minutes.

I'm pretty sure it's the first *thousand* tenth powers, and I'm definitely sure it's *seven and a half* minutes (or, as it's usually quoted, "half a quarter hour"). But yes, Bernoulli (whichever one came up with the Bernoulli numbers, which are used in this calculation), not Gauss, and certainly not Euler.

I tried to repeat the quick calculation back when I first got told about Bernoulli's feat. It took me about 9 minutes, and I'm pretty sure I messed up the digits somewhere on the way.

Incidentally, the Gauss story (the one with the triangular numbers) happened independently to (similarly aged) Kolmogorov over a century later. There are contemporary accounts of Kolmogorov's case, however (unlike the original Gauss story, which apparently can't even agree on whether there were 40 or 100 numbers to be summed).

On a separate topic, I once tried to calculate the square root of 9 3/4. It looks like... well, you could probably pass it off as exactly 3.1225 in a floating point handler test (like in comic 217), and - unless your team is savvy enough to realize that square roots don't work that way - the result would be similar.

(And no, I've never found any remotely decent mathematical explanation why the triangular heck it looks like that, either.)
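For what it's worth, the near-miss is easy to see numerically (a quick check, nothing more):

```python
import math

# sqrt(9 3/4) is deceptively close to the terminating decimal 3.1225,
# which would make 9.75 the square of a four-decimal number. It isn't,
# but only barely.

r = math.sqrt(9.75)
print(r)            # 3.1224989991...
print(3.1225 ** 2)  # ≈ 9.75000625, just a hair over 9.75
```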

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 6:39 am UTC**

by **cyanyoshi**

Probably not worth its own thread, but what is the smallest number for which nobody knows whether it's prime or composite? I just want to compare this to the scale of the largest known prime number.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 6:59 am UTC**

by **Eebster the Great**

Correct me if I'm wrong, but aren't there ongoing exhaustive prime searches? I would expect that number to change frequently.

You ought to be able to find out the number of digits of the current frontier though. I can't get a good grasp on what that might be. 15? 20? 50? 100? 1000?

E: Well there's a difference between "nobody knows" and "nobody has ever checked." There is only a very small amount of memory available in the world, maybe a few zettabytes. So even if all of it was used to store a list of primes to check any given number against, at about 74 bits per integer (god knows how you would encode it), you could store only about the first 10^21 primes, the greatest of which would be roughly 10^22 or 10^23. No person would know all the numbers on the list obviously, but it would be recorded in a human-accessible format.

But I don't know what the largest actual exhaustive list of primes stored anywhere is.

E2: If human accessibility is irrelevant, you could compress it further by recording only the difference between one prime and the next. Not sure how much that saves.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 7:10 am UTC**

by **Xanthir**

It's not an interesting number; the numbers we've factored definitively are just a product of computing power + somebody having time to do it. That said, I'm not sure what it is.

The magnitude difference between smallest unknown and largest known prime is enormous, because of the special-case primes (like Mersenne primes, the eternal record-holders) that we have specialized super-fast algorithms for.

ETA: If MathWorld is up-to-date, the earliest instance of the longest-known prime gap involves a 17-digit prime, so we've exhaustively tested to at least that large.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 7:17 am UTC**

by **Eebster the Great**

It's not just the super-fast algorithms, it's also the attention paid. There is a big difference between checking if a specific number is prime and checking if that number and every positive number less than it is prime.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 7:33 am UTC**

by **Xanthir**

Yeah, but the "generic numbers we pay attention to" (because of RSA contests/etc) also happen to be dwarfed by the "special-format numbers we pay attention to", as a result of the super-fast factorizing methods. ^_^

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 7:48 am UTC**

by **Xanthir**

Eebster the Great wrote:E2: If human accessibility is irrelevant, you could compress it further by recording only the difference between one prime and the next. Not sure how much that saves.

Should save quite a bit. Again using this MathWorld article, the largest gap between primes up to 17 digits is only 1220. At that point the *average* gap should be around 40, so you can go pretty far just using something similar to UTF-8 encoding, using a single byte to store gaps < 127 (or, since all gaps are even, storing half the gap instead) and occasionally using two bytes to store larger ones.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 3:23 pm UTC**

by **Eebster the Great**

You would need at least one bit then to indicate whether the gap was to be represented by one or two bytes (or really 7 or 15 bits). But anyway, it's kind of a useless list, because to use the list to tell if a given number is prime, you would have to start from the beginning and keep adding until you reached or exceeded that number.

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 4:09 pm UTC**

by **jaap**

Eebster the Great wrote:You would need at least one bit then to indicate whether the gap was to be represented by one or two bytes (or really 7 or 15 bits). But anyway, it's kind of a useless list, because to use the list to tell if a given number is prime, you would have to start from the beginning and keep adding until you reached or exceeded that number.

Obviously you'd have a system similar to that used in video codecs (i-frames and p-frames), essentially splitting it into several short lists to limit the length of your linear search. Exactly how many lists depends on the relative importance of memory usage and search time.
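A sketch of that checkpointing idea (the names and the chunk size are invented here): keep the gap list as-is, but record an absolute prime every hundred gaps. A lookup then binary-searches the checkpoints and replays at most one chunk of gaps, instead of scanning from 2.

```python
import bisect

CHUNK = 100  # gaps per "i-frame"; tune for memory vs. search time

def sieve(limit):
    """Plain Eratosthenes sieve returning the primes below `limit`."""
    is_p = [True] * limit
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit, i):
                is_p[j] = False
    return [i for i, p in enumerate(is_p) if p]

primes = sieve(100_000)
checkpoints = primes[::CHUNK]                       # the "i-frames"
gaps = [b - a for a, b in zip(primes, primes[1:])]  # the "p-frames"

def is_prime(n):
    """Replay gaps from the nearest checkpoint at or below n."""
    if n < 2 or n > primes[-1]:
        raise ValueError("outside the stored range")
    i = bisect.bisect_right(checkpoints, n) - 1
    p = checkpoints[i]
    g = i * CHUNK  # index into `gaps` of the gap following checkpoint p
    while p < n and g < len(gaps):
        p += gaps[g]
        g += 1
    return p == n

print(is_prime(99991))  # True (largest prime below 100000)
print(is_prime(99987))  # False (divisible by 3)
```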

### Re: Math: Fleeting Thoughts

Posted: **Mon Feb 29, 2016 5:04 pm UTC**

by **Xanthir**

Yup. Here's Xanthir's Reasonably Efficient Gap Encoding Scheme (XREGES):

1. 2 and 3 are assumed. This will start listing the gaps from 3 onwards.

2. Because all the gaps are even, we'll actually be encoding the halfGap.

3. Write out the halfGap in binary.

4. Determine how many bytes you'll need by dividing the number of bits it uses by 7, rounding up. (If the answer is > 8, this is an exceptional case. Skip to the later step for these cases.)

5. In the first output byte, set the high (bytes - 1) bits to 1, then the next bit to 0. In the remaining bits of this byte, and the remaining output bytes, write the halfGap directly.

6. Occasionally, whenever you feel is appropriate, encode an "i-frame": write out a byte of all 1s, then encode the next prime into however many subsequent bytes it takes. Use only the lower 7 bits of each byte, leaving the high bit 0. When finished, write out another all-1s byte.

7. For exceptional cases (gap would need more than 8 bytes to encode), write out the prime as an i-frame instead.

This will, on average, use only a single byte per prime up to about e^2^8, or ~10^{111}. It'll then use two bytes per prime up to about e^2^15, then three bytes up to e^2^22, etc. The occasional large gap will just use a few more. It doesn't break down (requiring an i-frame for most primes) until around e^2^57 or about 10^10^17, at which point it just degenerates to listing the primes directly, with a 2-byte overhead per prime. (Luckily that overhead is insignificant at that point, as each prime will take about 10 petabytes apiece to write out.)

### Re: Math: Fleeting Thoughts

Posted: **Tue Mar 01, 2016 12:50 am UTC**

by **Eebster the Great**

Well, I wouldn't worry beyond e^{256}, since there are pi(e^{256}) ≈ li(e^{256}) ≈ 6 × 10^{108} prime numbers less than e^{256} requiring (in this scheme) about as many bytes to store, which exceeds the storage capacity of the observable universe by many orders of magnitude. So based on that, with efficient storage, we need close to 1 byte per prime. For one megabyte, we could store the gaps between all primes up to about 15 million. For one gigabyte, we get somewhere on the order of 20 billion (though that uses the weaker approximation pi(x) ~ x/ln(x)). For one terabyte, 30 trillion. For one petabyte, 40 quadrillion.

I don't know what the largest list of this sort is. I don't know how it is compressed or how far it goes. But based on these numbers and based on economics, it is unlikely one exists with more than a petabyte of storage. So it is unlikely that the smallest integer with unknown primality is much greater than 10^16, though it could be far less.

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 11:17 am UTC**

by **PM 2Ring**

Here's a simple way to store a table of primes in a fairly compact form. All primes greater than 5 are obviously coprime to 2, 3, and 5 and hence are coprime to 2*3*5 = 30. IOW, all primes > 5 are congruent to one of (1, 7, 11, 13, 17, 19, 23, 29) mod 30. Note there are 8 numbers in that set, so we can record the primality of all numbers in a block of 30 with the bits in a single byte.

There are 455,052,511 primes < 10^{10}. My method needs 333,333,333 bytes to cover that range, which is significantly smaller than the 455,052,511 bytes required using a 1 byte per prime scheme.
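A sketch of that mod-30 wheel in Python (the bit order and the 1-bit-means-prime convention are arbitrary choices here):

```python
# One byte covers each block of 30 numbers, with the 8 bits flagging the
# residues coprime to 30; 2, 3, and 5 are handled as special cases.

RESIDUES = (1, 7, 11, 13, 17, 19, 23, 29)
BIT = {r: i for i, r in enumerate(RESIDUES)}

def wheel_table(limit):
    """Build the packed primality table for numbers below `limit`."""
    # ordinary sieve first (just for this sketch)
    is_p = [True] * limit
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit, i):
                is_p[j] = False
    table = bytearray((limit + 29) // 30)
    for n in range(7, limit):
        if is_p[n] and n % 30 in BIT:
            table[n // 30] |= 1 << BIT[n % 30]
    return table

def is_prime(n, table):
    if n in (2, 3, 5):
        return True
    r = n % 30
    if r not in BIT:  # shares a factor with 30
        return False
    return bool(table[n // 30] & (1 << BIT[r]))

table = wheel_table(10_000)
print(len(table))             # 334 bytes cover everything below 10000
print(is_prime(9973, table))  # True
print(is_prime(9991, table))  # False (9991 = 97 * 103)
```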

FWIW, I wrote some C code 20 years ago on a 10 MB Amiga which creates and reads a set of files that encode primes in this fashion. To generate the primes it uses a segmented sieve. I still have a set of ten 10,000,000 byte files covering the primes up to 3 billion on my hard drive, but I don't use it much these days, since for such small primes a segmented sieve in Python is quite fast even on this old 2GHz machine. However, I wrote a version in Python last month that just uses a simple Eratosthenes sieve; you can see it on StackOverflow. (My Python code indicates a prime with a 1 bit; my old C code uses the opposite convention.)

These files can be compressed using standard data compression algorithms, with compression improving as the mean gap size increases. My file covering the primes under 300,000,000 compresses to 7,408,524 bytes using xz on standard settings; the file covering 2.7 billion to 3 billion compresses to 6,768,060 bytes.

The Prime Pages site has various lists of primes. Eebster the Great's question is discussed on the "What is the longest list of primes?" page:

Chris K. Caldwell wrote:It is often wondered what is the longest list of consecutive primes, starting at two, that has ever been found? Sometimes it is asked in a different manner: "what is the smallest number *n* such that it is not known whether or not *n* is prime?" (Of course there are infinitely many primes, so there is no theoretical limit to the length of such a list.)

Perhaps the longest lists ever calculated (but not all stored) are those corresponding to the maximal prime gap (and twin prime constant) projects. See Nicely's lists. At the time I last updated this page, these projects had found (but not stored) all the primes up to 10^{18}, but not yet to 10^{19}.

FWIW, Dr. Thomas R. Nicely is the person who "discovered" the notorious Pentium division bug during his prime gaps research, although it later transpired that Intel were already aware of this bug but were keeping it quiet...

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 4:50 pm UTC**

by **f5r5e5d**

is/are there good symbols, ways to write "equal up to a sign" in an equation?

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 5:00 pm UTC**

by **doogly**

= abs( ) or = +/- ?

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 5:04 pm UTC**

by **Flumble**

Equality up to sign basically means that the magnitudes/absolute values/squares of both sides are equal. So |a| = |b|. (or for norms in general: ‖a‖ = ‖b‖)

Another way I sometimes see is a = ±b.

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 8:32 pm UTC**

by **f5r5e5d**

abs and norm seem inhibiting, and squaring isn't always going to be safe when I want to derive/"prove" an equality in Geometric Algebra - which, like complex-valued equations, needs conjugates for norms and has nilpotents.

I'm still struggling with the algebraic manipulations as it is - I suppose "+/-" is simply an undetermined scalar, and I understand the rules for scalars in GA.

I guess the "geometric content" of the problem formulation could have told me that the vectors I want to compare have opposite orientations, and I should just use the correct signs from the start.

When would an equals sign with a question mark above it be used?

### Re: Math: Fleeting Thoughts

Posted: **Wed Mar 02, 2016 9:09 pm UTC**

by **doogly**

When you are asking, is this equal? Then you can do some operations as if it were, and then maybe reduce it to a form where it is clear. You could then cover your tracks and use = or /= from the get go, but maybe you want to be open and honest about what you knew at the outset of a derivation. Some people might act this way.

Also, geometric algebra is gross.