
Re: Deliberately bad algorithms

Posted: Mon Jun 22, 2015 10:44 pm UTC
by hotaru
Derek wrote:And if you need two posts on a forum, hell if you need even one post, to explain how your single line of code works, your code is unreadable and should be rewritten.

well, that's kind of the point of using "fix". with "fix", you just need to point people who don't get it to an existing explanation of "fix", and then it's pretty obvious what the code does.

Re: Deliberately bad algorithms

Posted: Tue Jun 23, 2015 4:30 am UTC
by Derek
hotaru wrote:
Derek wrote:And if you need two posts on a forum, hell if you need even one post, to explain how your single line of code works, your code is unreadable and should be rewritten.

well, that's kind of the point of using "fix". with "fix", you just need to point people who don't get it to an existing explanation of "fix", and then it's pretty obvious what the code does.

"Fix" is not the source of confusion in that code.

Re: Deliberately bad algorithms

Posted: Tue Jun 23, 2015 6:33 am UTC
by hotaru
Derek wrote:
hotaru wrote:
Derek wrote:And if you need two posts on a forum, hell if you need even one post, to explain how your single line of code works, your code is unreadable and should be rewritten.

well, that's kind of the point of using "fix". with "fix", you just need to point people who don't get it to an existing explanation of "fix", and then it's pretty obvious what the code does.

"Fix" is not the source of confusion in that code.

then what is? "fmap"? "=<<"?

Re: Deliberately bad algorithms

Posted: Tue Jun 23, 2015 7:50 am UTC
by phlip
Well, let me put it this way: when I read it the first time (as a very novice-level Haskell programmer), I had to sit down with a pen and paper for a while and figure out exactly what "fmap . flip id" actually meant... you're passing higher-level functions into other higher-er level functions and it all gets a bit brain-bending for someone who hasn't had a long time of experience with that stuff. Eventually I figured out that what you meant was "\x -> fmap ($ x)", which is just so much easier to read. Maybe if you'd used "($)" instead of "id", I'd have been able to figure it out sooner, but figuring out what "flip id" meant (how do you flip a one-argument function?) was only one of the small stumbling blocks to deciphering this.

Then, we get to (=<<). Monads are weird at the best of times, for a new user, and the function monad in particular I'd worked with exactly zero times before I looked at this. And even then you're being tricksy with partial application and whatnot.

I eventually gave up at this point and started attacking it from the other side - the type signature of the final thing was [[a] -> a] -> [a] (for the list functor), and given the "fix" in there, I just guessed the rest, that the [a] that was being used as a parameter was the same as the [a] that was the result, and tried it, and it seemed to match my guess. So that's what I posted. I was lucky that I'd already spent the down-payment of time in learning how "fix" works, or I'd have been even more lost.

It was only later that I actually deciphered what (=<<) means for the function monad, as something akin to the S combinator (another thing that I'd hardly class as "intuitive")... so when the "fix" does essentially:

Code: Select all

loeb = (fmap . flip id) =<< loeb
what it's actually doing is making it so that the parameter to "loeb" gets passed to both sides:

Code: Select all

loeb fs = (fmap . flip id) (loeb fs) fs

which simplifies to:

Code: Select all

loeb fs = fmap ($loeb fs) fs
which is... again... just so much simpler to read.

Maybe all this stuff comes as second nature for someone who's been working with Haskell for years... but for me it took quite some time to be able to decipher.
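For anyone who wants to check that derivation, here's the readable form as a self-contained snippet, with a small example of my own (not from the thread) that loads straight into GHCi:

Code: Select all

-- The readable form derived above: each element of the container is a
-- function of the final result, and loeb lazily ties the knot.
loeb :: Functor f => f (f a -> a) -> f a
loeb fs = fmap ($ loeb fs) fs

-- A tiny spreadsheet-flavoured example: cell 0 is the constant 3,
-- cell 1 is "cell 0 plus 1".
cells :: [[Int] -> Int]
cells = [const 3, succ . (!! 0)]

main :: IO ()
main = print (loeb cells)  -- prints [3,4]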

Re: Deliberately bad algorithms

Posted: Thu Jun 25, 2015 2:51 pm UTC
by benneh
Yakk wrote:Strange. I'm getting merely n! from that:

Code: Select all

Let S(n) be the cost of sorting n elements.

Let S(n,m) be the cost where the first element is m away from the right place.
S(n) <= S(n, n-1)

S(1) = 1
S(n,0) <= 1 + 2S(n-1) (the case where head(list)<=min(tail(list)))
S(n,k) <= 1 + S(n-1) + S(n, k-1) (otherwise)

So S(n) <= n + (n+1) S(n-1)

which works out to (n+1)! no?

So the n=6 case should be only 7 times slower than the n=5 case.

What did I miss?

The test I ran was on the algorithm in the spoiler:
benneh wrote:

Code: Select all

import Prelude hiding (minimum)

import Control.Applicative (liftA2)
import Data.List (permutations, sort)

stupidesterSort :: (Ord a) => [a] -> [a]
stupidesterSort as = find' (\ bs -> isSorted bs && bs `elem` permutations as) (listPower (length as) as)

listPower :: Int -> [a] -> [[a]]
listPower n as = foldr (liftA2 (:)) [[]] (replicate n as)

isSorted :: (Ord a) => [a] -> Bool
isSorted [] = True
isSorted [_] = True
isSorted (a:as) = a <= minimum as && isSorted as

minimum :: (Ord a) => [a] -> a
minimum = head . sort

find' :: (a -> Bool) -> [a] -> a
find' condition = head . filter condition

Re: Deliberately bad algorithms

Posted: Tue Jun 30, 2015 7:03 pm UTC
by mousewiz

Code: Select all

int floor(float f) {
    int i;
    for (i = 0; i <= f; i++) ;
    return i - 1;
}


Negative floors need not apply.

Re: Deliberately bad algorithms

Posted: Tue Jun 30, 2015 7:06 pm UTC
by Yakk
All algorithms are improved by recursion:

Code: Select all

int floor(float f, int guess = 0);
int ceil(float f, int guess = 0);

int ceil(float f, int guess) {
    if (f < 0.) return -floor( -f, guess );
    if (guess >= f) return guess;
    int i;
    for (i = guess; i + 1 < ceil(f, i + 1); i++) ;
    return i + 1;
}
int floor(float f, int guess) {
    if (f < 0.) return -ceil( -f, guess );
    if (guess > f) return guess - 1;
    int i;
    for (i = guess; i < floor(f, i + 1); i++) ;
    return i;
}

Re: Deliberately bad algorithms

Posted: Thu Jul 02, 2015 2:45 am UTC
by Isaac Hill
I haven't used real programming languages in years, so this is based on MATLAB scripting. To sort a list of positive integers, first, find the maximum value:

Code: Select all

M = input(1);
for i = 2:length(input)
    if input(i) > M,  M = input(i); end
end

Then create a new array, and fill it with the values that appear in the input:

Code: Select all

output = [];
for i = 1:M
   for j = 1:length(input)
      if input(j) == i, output = [output i]; end
   end
end

The order doesn't just depend on your list size, but also the maximum value. If your input is [2 100 1], it takes 2 iterations of the first part to determine the max value, then 300 iterations of the second part to check each value from 1 to 100 to see if it's in the 3 item list. Plus, you're dynamically allocating memory since the size of output keeps changing.
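Translating that idea into a single Haskell list comprehension (my sketch, not Isaac Hill's code; assumes a non-empty list of positive integers):

Code: Select all

-- For every candidate value from 1 up to the maximum, emit one copy per
-- occurrence in the input: O(M*n) time for maximum M and length n.
slowCount :: [Int] -> [Int]
slowCount xs = [ v | v <- [1 .. maximum xs], x <- xs, x == v ]

main :: IO ()
main = print (slowCount [2, 100, 1])  -- [1,2,100], after 300 inner checks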

Re: Deliberately bad algorithms

Posted: Thu Jul 09, 2015 8:06 pm UTC
by You, sir, name?

Code: Select all

int multiplyByTwo(int i) {
    for (int j=0;;j++) {
        double val = pow(2*cos(i)*sin(i),2) + pow(cos(j),2);
        if (fabs(val-1) < 0.0000001) {
            return i < 0 ? -j : j;
        }
    }
}

Re: Deliberately bad algorithms

Posted: Fri Jul 17, 2015 3:52 am UTC
by Dinosaur!
You, sir, name? wrote:

Code: Select all

int multiplyByTwo(int i) {
    for (int j=0;;j++) {
        double val = pow(multiplyByTwo(cos(i)*sin(i)),2) + pow(cos(j),2);
        if (abs(val-1) < 0.0000001) {
            return i < 0 ? -j : j;
        }
    }
}


Fixed that for ya :D

Re: Deliberately bad algorithms

Posted: Fri Jul 17, 2015 4:47 am UTC
by Wildcard
Dinosaur! wrote:
You, sir, name? wrote:

Code: Select all

int multiplyByTwo(int i) {
    for (int j=0;;j++) {
        double val = pow(multiplyByTwo(cos(i)*sin(i)),2) + pow(cos(j),2);
        if (abs(val-1) < 0.0000001) {
            return i < 0 ? -j : j;
        }
    }
}


Fixed that for ya :D

You need a base for an inductive method. The way you changed this one, it will just recursively call forever and not return the correct answer...no good. "Deliberately bad" refers to writing *correct* algorithms that are extremely, mind-bogglingly inefficient, whether in terms of time or space (or both).

My favorite example from this thread so far, is:
benneh wrote:For a maximum balance of simplicity and inefficiency:

Code: Select all

sort(list):
    if length(list) < 2:
        return list
    else:
        if head(list) <= minimum(tail(list)):
            return head(list) followed by sort(tail(list))
        else:
            return sort(tail(list) followed by head(list))

minimum(list):
    return head(sort(list))

What I find impressive is that with *that* much superfluous recursion, the algorithm is still finite and correct. I haven't done a formal analysis of the time complexity but would love to see one. :)

Re: Deliberately bad algorithms

Posted: Sun Jul 19, 2015 3:30 am UTC
by lightvector
Wildcard wrote:My favorite example from this thread so far, is:
benneh wrote:For a maximum balance of simplicity and inefficiency:

Code: Select all

sort(list):
    if length(list) < 2:
        return list
    else:
        if head(list) <= minimum(tail(list)):
            return head(list) followed by sort(tail(list))
        else:
            return sort(tail(list) followed by head(list))

minimum(list):
    return head(sort(list))

What I find impressive is that with *that* much superfluous recursion, the algorithm is still finite and correct. I haven't done a formal analysis of the time complexity but would love to see one. :)


Let n be the length of the list, and T(n) the worst case cost of sorting that length of list. Worst case, the recursion will run n times rotating the list before "head(list) <= minimum(tail(list))" is satisfied, and each rotation costs T(n-1) via the call to "minimum" - everything else is constant time or trivial in comparison, including "length" at the start - and then will call sort one more time for a total of n+1 calls to a smaller sort.

So T(n) is O((n+1) * T(n-1)).
So T(n) is O((n+1)!)
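
Unrolling that recurrence explicitly (my own arithmetic, using the bound just derived):

Code: Select all

T(n) <= (n+1) T(n-1)
     <= (n+1) n T(n-2)
     <= ...
     <= (n+1) n (n-1) ... 3 T(1)
     =  ((n+1)!/2) T(1)

so T(n) = O((n+1)!), matching Yakk's earlier estimate.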

I don't know if you can obtain a slightly tighter bound. It seems to me that with the way the rotation works, you can't hit the worst case number of rotations all the time within the recursive calls.

Re: Deliberately bad algorithms

Posted: Sun Jul 19, 2015 10:21 am UTC
by You, sir, name?
lightvector wrote:I don't know if you can obtain a slightly tighter bound. It seems to me that with the way the rotation works, you can't hit the worst case number of rotations all the time within the recursive calls.


Stirling's approximation? ;-)

Re: Deliberately bad algorithms

Posted: Sun Jul 19, 2015 2:49 pm UTC
by Flumble
lightvector wrote:I don't know if you can obtain a slightly tighter bound. It seems to me that with the way the rotation works, you can't hit the worst case number of rotations all the time within the recursive calls.

Counting all the recursive calls to sort, I got the sequence 1, 3, 11, 47, 227, 1215, 7107, 44959, 305091, 2206399.*
It's equivalent to A035009, which seems to grow 'only' exponentially.
[image: plot of the call counts alongside A035009]

Compared to a factorial, it immediately diminishes:
[image: plot of the call counts divided by n!]

It seems to be squeezed between e^x and e^(x^2). No guarantees though, because the sequences stop after only a couple dozen numbers.


*I tried all permutations of lists of length < 7, and each time the reversed list took the longest. So, assuming a reversed list always takes the longest, I performed the sort on reversed lists, getting to 10 elements before that too took too long.
Spoiler:

Code: Select all

import Prelude hiding (minimum, maximum)
import Data.List hiding (sort)

printLn = mapM_ print

tryAll len = map logSort $ permutations [1..len]
   where logSort list = show list ++ ": " ++ show (snd $ sort list) ++ " iterations"

tryReverse = [snd $ sort $ reverse [1..n] | n <- [1..]]

maximum = maximumBy (\a b -> compare (snd $ sort a) (snd $ sort b))


--forgive me for not using a monad
sort list = sort'(list,0)
   where
      minimum (list,n) = (head sorted,n')
         where
            (sorted,n') = sort'(list,n)

      sort' ([],n) = ([],n+1)
      sort' ([element],n) = ([element],n+1)
      sort' ((first:rest),n) = if first <= min then (first:sortedTail,nTail) else (sortedRot,nRot)
         where
            (min,n') = minimum (rest,n)
            (sortedTail, nTail) = sort'(rest,n')
            (sortedRot , nRot ) = sort'(rest++[first],n')

Re: Deliberately bad algorithms

Posted: Mon Jul 20, 2015 3:40 am UTC
by Derek
Wildcard wrote:My favorite example from this thread so far, is:
benneh wrote:For a maximum balance of simplicity and inefficiency:

Code: Select all

sort(list):
    if length(list) < 2:
        return list
    else:
        if head(list) <= minimum(tail(list)):
            return head(list) followed by sort(tail(list))
        else:
            return sort(tail(list) followed by head(list))

minimum(list):
    return head(sort(list))

What I find impressive is that with *that* much superfluous recursion, the algorithm is still finite and correct. I haven't done a formal analysis of the time complexity but would love to see one. :)

Isn't that algorithm wrong? If head(list) is greater than minimum(tail(list)), but less than maximum(tail(list)), won't this algorithm incorrectly place head(list) at the end of the final list?

Re: Deliberately bad algorithms

Posted: Mon Jul 20, 2015 3:48 am UTC
by ConMan
Derek wrote:
Wildcard wrote:My favorite example from this thread so far, is:
Isn't that algorithm wrong? If head(list) is greater than minimum(tail(list)), but less than maximum(tail(list), won't this algorithm incorrectly place head(list) at the end of the final list?

You're looking at the return statement wrong. It's saying to return sort(tail followed by head), meaning that it puts the head at the end of the list then tries to sort the *entire* list again. So it will keep recursing on that statement until it finally finds the smallest element on the list, then keeps that at the start of the list, and sorts the rest.

Re: Deliberately bad algorithms

Posted: Mon Jul 20, 2015 4:36 am UTC
by phlip
If you replace the "minimum" definition with something more reasonable, then the sort function ends up behaving essentially like a selection sort. Rotating the list until the smallest value is at the start, then moving on to the next sublist. Inefficient, but not embarrassingly so. It's when "minimum" is in turn defined in terms of "sort" that it suddenly starts becoming ridiculous.
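To make that concrete, here's a sketch of mine (not phlip's code) of the same rotate-until-the-smallest-is-in-front idea with an honest linear-time minimum; the cost drops to roughly O(n^3):

Code: Select all

-- benneh's rotating sort, but with Prelude's linear-time minimum instead
-- of the recursive one. Each position now costs at most n rotations, each
-- paying an O(n) scan, for roughly O(n^3) overall.
saneSort :: Ord a => [a] -> [a]
saneSort (x:xs@(_:_))
  | x <= minimum xs = x : saneSort xs       -- smallest is in front; keep it
  | otherwise       = saneSort (xs ++ [x])  -- rotate the head to the back
saneSort xs = xs                            -- empty or singleton: done

main :: IO ()
main = print (saneSort [5, 1, 4, 2, 3])  -- [1,2,3,4,5]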

Re: Deliberately bad algorithms

Posted: Mon Jul 20, 2015 5:05 am UTC
by Derek
ConMan wrote:You're looking at the return statement wrong. It's saying to return sort(tail followed by head), meaning that it puts the head at the end of the list then tries to sort the *entire* list again. So it will keep recursing on that statement until it finally finds the smallest element on the list, then keeps that at the start of the list, and sorts the rest.

Ah, yeah, that's exactly what I did. I missed that closing paren.

Re: Deliberately bad algorithms

Posted: Mon Jul 20, 2015 12:36 pm UTC
by Yakk
Is there a way to use the recursive structure of Conway arrows in an algorithm that seems to be trying to do something sensible, like sorting?

Re: Deliberately bad algorithms

Posted: Thu Jul 30, 2015 1:25 am UTC
by quintopia
I had this idea today for finding the arithmetic mean of a list without ever averaging more than two of its elements at a given time. It's pretty obvious that it works, but I have no idea what its worst case running time is. I'm sure it's at least O(n²b) (where b bits of precision are used in the mantissa), but it's not clear to me how to compute it exactly. Some help?

(As a side note, if you want an algorithm that's actually terrible, you could simply repeatedly replace two randomly chosen elements with their average, stopping when all elements are equal. My goal here was to optimize the basic algorithm as much as possible to see how bad the "fundamental idea" was, rather than concoct the absolute worst recursive random over-working algorithm ever.)

Code: Select all

# -*- coding: utf-8 -*-
from __future__ import division

def bubble_avg(L):
    if len(L)==1:
        return L[0]
    while True:
        t = (L[0] +L[-1])/2
        inserted = False
        for i in range(0,len(L)-1):
            if not inserted and L[i+1]<=t:
                L[i]=L[i+1]
            if inserted:
                L[i],t = t,L[i]
            if L[i+1]>t and not inserted:
                L[i]=t
                inserted=True
        L[-1]=t
        if L[0]==L[-1]:
            return L[0]
           
           
           
test = [1, 2, 4, 8, 16, 32, 64]
test.sort()
print bubble_avg(test)
print float(sum(test))/len(test)


Here's another way to do basically the same thing. I'm pretty sure it's slower: even though it converges in half as many passes, each pass requires a full sort. In this case, though, Timsort will have a lot of monotonic runs to work with and will return very rapidly.

Code: Select all

# -*- coding: utf-8 -*-
from __future__ import division

def sort_avg(L):
    while True:
        for i in range(len(L)//2):
            t = (L[i]+L[-(i+1)])/2
            L[i] = t
            L[-(i+1)] = t
        L.sort()
        if L[0]==L[-1]:
            return L[0]
           
test = [1, 2, 4, 8, 16, 32, 64]
print sort_avg(test)
print float(sum(test))/len(test)

Slowest Sorting Algorithms that Terminate

Posted: Mon Aug 15, 2016 6:55 pm UTC
by Shufflepants
For some reason I'm interested in really slow sorting algorithms. We've all heard about terrible ones like Bogo Sort, which has an average running time of O(n!) but is not guaranteed to terminate. There are others, such as Slow Sort, which has a worse-than-polynomial running time and yet doesn't really do any unnecessary work.

My own idea for a really slow one is what I call Turing Sort.

Turing Machines can be explicitly enumerated starting with a machine with 1 state, then all the machines with 2 states, and so on. So the idea for the sorting algorithm is to simulate Turing Machines.

Here's the rough pseudo-code:

Code: Select all

int i = 0;
while (true) {
  initialize Turing Machine i and add it to the list
  for each Turing Machine in the list {
    simulate one step
    if the machine has halted {
      if its tape now holds the sorted list, return it
      otherwise, remove this machine from the list
    }
  }
  i++
}


Because for any given string there is some Turing machine which can produce it given a blank tape, we guarantee that this will eventually terminate even though many of the machines being simulated never will.
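
For the curious, here's a runnable sketch of the dovetailing loop itself (mine, not Shufflepants' original). The Turing machine is abstracted to a function from a step budget to Maybe its final tape, and the toy machines at the bottom are hypothetical stand-ins; a real implementation would enumerate and step genuine machine encodings:

Code: Select all

import Data.List (sort)

-- Round k instantiates machine k and gives every machine so far a budget
-- of k steps; halted machines are simply re-checked each round rather than
-- removed. 'm k' is Just the output tape if machine m halts within k steps.
turingSort :: Ord a => [Int -> Maybe [a]] -> [a] -> [a]
turingSort machines xs =
  head [ out | k <- [0 ..]                  -- round k
             , m <- take (k + 1) machines   -- machines instantiated so far
             , Just out <- [m k]            -- halted within k steps?
             , sorted out                   -- tape holds a sorted list...
             , sort out == sort xs ]        -- ...that permutes the input
                                            -- (yes, the check itself sorts;
                                            -- cf. Tub's reply below)
  where
    sorted ys = and (zipWith (<=) ys (drop 1 ys))

-- Toy stand-ins: machine i halts after i steps and outputs the i-th
-- length-3 list over [1..3].
main :: IO ()
main = print (turingSort toyMachines [3, 1, 2])  -- [1,2,3]
  where
    allLists    = sequence (replicate 3 [1 .. 3]) ++ repeat []
    toyMachines = [ \k -> if k >= i then Just (allLists !! i) else Nothing
                  | i <- [0 ..] ]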

I estimated that this algorithm has an approximate running time of n! (the number of strings of the correct length) times 2^k (the number of Turing Machines with k states, where k will somehow depend on n). I'm sure several more factors are needed, including the constant that represents the proportion of Turing Machines that halt (which is uncomputable), because you keep processing steps for all the machines that have fewer states than the one that eventually produces the sorted list but never halt.

As a funny side note, this algorithm also finds you the Turing Machine that minimizes some function of number of states and number of steps to produce the sorted list; such efficiency!

If any one has any better estimates on the average running time or any of their own really slow running sorting algorithms that are guaranteed to terminate, I'd love to hear them.

Re: Slowest Sorting Algorithms that Terminate

Posted: Mon Aug 15, 2016 8:38 pm UTC
by Soupspoon
I think the 'slowest possible terminating' algorithm has to always move just enough elements in just the right ways, per iteration, to get a net movement towards 'sorted' (and towards zero 'unsortedness') by the lowest possible (non-zero) amount. The reason being that any algorithm that can increase the 'unsort' assessment in any given iteration can just resort/reunsort/reresort ad infinitum and enter into a never-finishing fugue (ditto a neutral sort that unsorts one element for every one sorted).

This makes me think that O(n!) sounds about right as a limit (a cycle needed for every element, and for every displacement from 'sorted' that element can have within the constraints imposed by the previously misplaced/displaced elements). In reality, a sort of a reverse(sort(list)) will (with the most deliberately unintelligent method, so described) have to move individual elements something like 2*(n!/2); but any non-negative 'net' sorting movement of two elements reduces the unsorted metric by two positions, so that's back to half n! (still O(n!), though).

So I'm wondering about shifting trios. Can arbitrary elements A, B and C have an unsortedness shift of -1, -1 and +1...? I can't get round in my head how I could actually accomplish that (nor, for any odd number 2n+1, make n+1 elements lose minimal unsortedness whilst the remaining n gain unsortedness), and I don't think that's possible. Two microsortings can balance with a single two-unit microunsorting, but that's a net zero, so not allowed for our terminating algorithm. Per cycle, I think that an even (including zero and negative, if we hadn't disallowed them for the above reasons) reduction of unsortedness is the only possible outcome (until zero is reached), but someone will surely be able to correct me on that.

Unless we 'cheat' and define a meta-cycle of iterations (of arbitrary length?) that in total produce a net reduction of unsortedness by the minimum amount (2?), but only by summing positive, negative and zero motions, doing 'wasted' shuffling along the way towards minimum unsortedness. But if that's "list=reverse(list); list=reverse(list); list=minimallysort(list)", then if at any point our list is actually reverse(fullysorted(list)), at the ⅓rd point we have list=fullysorted(list) in this example, and it surely can't count as a unit of iteration if its status as fully sorted isn't even tested for and termination of the sort process sought. And it doesn't affect the Order of the sort, anyway.

Or... What about an algorithm that always generates every possible (re-)ordering of a list, and then proceeds to test pairs of generated lists to produce the 'most sorted'? That would be an O(n!) process, stage one, followed by an O(n) process (howsoever the pairs are compared, bracketing or 'winner stays on against the next to be tested'). Is that O(n!+n)? My head still hurts from trying to create a non-even reduction of unsortedness to zero, so someone will tell me if that's not actually O(n!) in all but name. It's been too long...

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 1:03 am UTC
by Xanthir
O(n!+n) is equal to O(n!), so if all you care about is the big-O notation, you're covered either way.

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 1:55 am UTC
by Soupspoon
Thought so, but my head was swimming.

(So much that I got it drastically wrong in my head. After(/during) creating n! reorderings, there are n!-1 pairs to whittle down. So, it's 2n!, which is back to O(n!)... Yes?)

Anyway, that doesn't change much of what I wrote. I shall leave the implementation up to others, but I'd suggest an iterator. I don't have any handy programming tools on this tablet, but mixing coding styles a bit, here's some freehand pseudocode:

Code: Select all

nextPermute = iteratePermute(list); // exercise for the reader: returns a function producing the next unique recombination
bestTest = &nextPermute();
bestDisorder = disorderMetric(bestTest);
while (newTest = &nextPermute()) {
  newDisorder = disorderMetric(newTest);
  if (newDisorder < bestDisorder) { // we're not bothered about keeping all 'equally but differently sorted' examples,
                                    // but <= would be ~linearly more intensive?
    bestDisorder = newDisorder;     // yes, recalculating this alongside newDisorder each time would be wasteful,
                                    // but that isn't the objective
    bestTest = newTest;
  }
}
return bestTest; // from whatever container function (unstated) was made to enclose this...

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 11:48 am UTC
by Tub
Shufflepants wrote:

Code: Select all

      check to see if what the machine has produced on its tape is the sorted list, if it is return it

I love how the fastest algorithms for checking set equality of both lists involve... sorting both lists.

About the upper bound of the runtime complexity: enumerating a turing machine by some bitwise encoding is great for saying that you have 2^n turing machines with n bits, but that doesn't help us figure out what these turing machines can do.
So I'll be using a turing machine defined by 5-tuples (state, symbol, new symbol, new state or terminate, tape movement), and we will iterate by the number of tuples required to define the machine.
How many machines with t tuples are there? There are at most as many states as tuples, there are exactly n symbols, and tape movement is (left, hold, right), so we have t^2 * n * (n+1) * 3 possible tuples on a t-tuple machine with n symbols, thus:
s(t, n) := (t^2 * n * (n+1) * 3)^t possible machines with t tuples over n symbols.[1]

With this definition, for a list of n elements, there is a trivial turing machine with n tuples that outputs a permutation of that list after n steps, like this:
(1, *, 1, 2, right)
(2, *, 2, 3, right)
...
(n, *, n, terminate, hold)
where * is whatever symbol the empty tape contains.
In other words, by the time this machine has been instantiated and run for its n steps, the algorithm must have terminated.

Now we can calculate how many machines get instantiated before our trivial winner is found:
sum(i from 1 to n) (i^2*n^2)^i

Add n iterations to actually run the machine, and note that the whole loop requires O(n^2) work for its n iterations[2]; in theory we now have everything we need to establish an upper bound.

Eyeballing the thing seems to lead to something like O(n^n), so you might actually have created something worse than O(n!).


[1] Many of those machines will have unreachable states, and we might find a more efficient iteration, but we're interested in an upper bound.
[2] Assuming no turing machine is ever removed from the list, though I would be surprised if removal of the terminated machines would lower the complexity class. If you actually have a useful bound on the fraction of turing machines that terminate after a set amount of steps, I have a suspicion that you've gotten terribly close to computing the non-computable busy beaver numbers.

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 12:08 pm UTC
by jaap
Tub wrote:Eyeballing the thing seems to lead to something like O(n^n), so you might actually have created something worse than O(n!).

I don't think so. Because of Stirling's approximation of n!, I think O(n!) is about the same as O(n^n).

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 2:04 pm UTC
by Shufflepants
Tub wrote:2 Assuming no turing machine is ever removed from the list, though I would be surprised if removal of the terminated machines would lower the complexity class. If you actually have a useful bound on the fraction of turing machines that terminate after a set amount of steps, I have a suspicion that you've gotten terribly close to computing the non-computable busy beaver numbers.


What I referred to, but not by name, in my original post: Chaitin's constant (https://en.wikipedia.org/wiki/Chaitin%27s_constant), a non-computable constant that represents the proportion of machines that halt for a given encoding.
I didn't expect the removal of machines to lower the complexity class as far as Big O is concerned; it would only be a constant factor, but a very interesting constant!

My true dream is to come up with a sorting algorithm whose running time grows as a non-computable function. But I suspect that any such algorithm would not be guaranteed to terminate; otherwise we'd have a way to calculate the function, e.g. by sorting all lists up to a certain size and measuring the running time.

Re: Slowest Sorting Algorithms that Terminate

Posted: Tue Aug 16, 2016 4:20 pm UTC
by Tub
jaap wrote:
Tub wrote:Eyeballing the thing seems to lead to something like O(n^n), so you might actually have created something worse than O(n!).

I don't think so. Because of Stirling's approximation of n!, I think O(n!) is about the same as O(n^n).

n! = 1 * 2 * 3 * ... * n
n^n = n * n * n * ... * n
O(n^n) is at least as large as O(n!). In fact, the ratio n^n / n! grows without bound (roughly e^n by Stirling), making O(n^n) its own class.
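
A quick numeric check of mine: the ratio n^n / n! for small n, which by Stirling behaves like e^n / sqrt(2*pi*n):

Code: Select all

-- Print (n, n^n / n!) for n = 1..8; the ratio grows roughly like e^n.
main :: IO ()
main = mapM_ print
  [ (n, fromIntegral (n ^ n) / fromIntegral (product [1 .. n]) :: Double)
  | n <- [1 .. 8] :: [Integer] ]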

Shufflepants wrote:What I referred to, but not by name in my original post: https://en.wikipedia.org/wiki/Chaitin%27s_constant a non-computable constant that represents the proportion of machines that halt for a given encoding.

I see. But that'll only tell you whether they halt, not when. It's useless for proving a better upper bound on the loop - after all, it's entirely possible that your winning turing machine is the first one to halt.

Re: Slowest Sorting Algorithms that Terminate

Posted: Fri Aug 26, 2016 1:51 pm UTC
by Zowayix
Has worstsort been mentioned yet? Given any computable function, it's supposed to generate a sorting algorithm that is guaranteed to terminate, with a running time worse than that function.

Re: Deliberately bad algorithms

Posted: Sun Oct 02, 2016 2:11 pm UTC
by BlackSails
What about

Code: Select all

k = 1
While the list isn't sorted
      Pick a random turing machine of size k
      Calculate BB_k
      Run the turing machine on the list (just pick an encoding, don't see why it would matter) until it either halts or has run for BB_k steps
      If the turing machine outputs a sorted list, return the list
      k++

Re: Deliberately bad algorithms

Posted: Sun Oct 02, 2016 2:30 pm UTC
by Tub
BlackSails wrote:

Code: Select all

      Calculate BB_k

Can you please provide an implementation for this line that terminates after a bounded, finite amount of steps?

Re: Deliberately bad algorithms

Posted: Sun Oct 02, 2016 8:45 pm UTC
by BlackSails
Sure, just takes specialized hardware. You put a mathematician in a box and then query the box.

Re: Deliberately bad algorithms

Posted: Sun Oct 02, 2016 8:49 pm UTC
by Soupspoon
A Chinese mathematician? Or a feline one?

Re: Deliberately bad algorithms

Posted: Sun Oct 02, 2016 10:01 pm UTC
by korona
Unless the mathematician has infinite handwriting speed (or some other method to process arbitrarily long proofs in bounded time), he cannot compute BB_k either. :D

Re: Deliberately bad algorithms

Posted: Mon Oct 03, 2016 2:19 pm UTC
by Yakk
For some k she cannot calculate BB_k.

But the bound on which the mathematician certainly cannot calculate BB_k is pretty high.

However, to make the algorithm more likely to terminate, we should have the mathematician both calculate BB_k and try to prove it cannot be calculated.

If it cannot be calculated, the mathematician should come up with a new axiomatic framework where it can be calculated, then she should continue.

Re: Deliberately bad algorithms

Posted: Mon Oct 03, 2016 2:41 pm UTC
by Flumble
If you put a mathematician in a box, you can be sure the mathematician will terminate within a week, so that only adds a constant.


Different approach:

Code: Select all

1. Send the computing device into a supermassive black hole
2. Sort the list using quicksort

Re: Deliberately bad algorithms

Posted: Mon Oct 03, 2016 8:42 pm UTC
by Tub
Anyone who claims that replacing a machine with a human will somehow avoid the problems of uncomputability hasn't understood the problems.

Yakk wrote:For some k she cannot calculate BB_k.

For all but a finite number of k, she cannot calculate BB_k. The busy beaver function is a bit more strict in its uncomputability, because knowing BB_a for any a allows you to compute all BB_b with b < a.

For BlackSails' algorithm, that means that the loop can only run a finite number of iterations. But even if it knew all BB_k, it is not guaranteed to terminate. The chance for it to terminate approaches 0 as list length increases.

It doesn't sort, it doesn't terminate, but at least it's slow. One out of three. :D

Yakk wrote:But the bound on which the mathematician certainly cannot calculate BB_k is pretty high.

What bound do you mean? For any k, there's at least one TM (or mathematician) which outputs BB_k, so there's no theoretical upper bound k for which BB_k cannot be known.

For all practical purposes, something as low as BB_10 is too large to be known or uttered or written down by anything that fits a human-sized box.

Flumble wrote:Different approach:

Code: Select all

1. Send the computing device into a supermassive black hole
2. Sort the list using quicksort

So.. uh.. now how do I read the sorted list? I kinda needed that.

(If you tell me I can't, then your algorithm doesn't work. If you tell me to jump after the device, then from my perspective it isn't slow at all.)

Re: Deliberately bad algorithms

Posted: Mon Oct 03, 2016 9:57 pm UTC
by Flumble
Tub wrote:So.. uh.. now how do I read the sorted list? I kinda needed that.

If you care about the result, why do you use a deliberately bad algorithm? :roll:

Anyway, you have to wait until the SMBH has evaporated to retrieve the computing device. I don't know how much time there is inside a black hole, but hopefully the computer is done once you retrieve it.

Re: Deliberately bad algorithms

Posted: Tue Oct 04, 2016 6:10 am UTC
by Xanthir
Tub wrote:
Yakk wrote:But the bound on which the mathematician certainly cannot calculate BB_k is pretty high.

What bound do you mean? For any k, there's at least one TM (or mathematician) which outputs BB_k, so there's no theoretical upper bound k for which BB_k cannot be known.

The upper bound of possible knowability is the size of the smallest UTM, no? If you knew BB for that size, you could solve the halting problem generally.

Re: Deliberately bad algorithms

Posted: Tue Oct 04, 2016 1:02 pm UTC
by Wildcard
Flumble wrote:If you care about the result, why do you use a deliberately bad algorithm? :roll:


Just to remind you: "Deliberately bad" refers to writing correct algorithms that are extremely, mind-bogglingly inefficient, whether in terms of time or space (or both). But kudos for entertaining imaginativeness. :)

My favorite from this thread is still benneh's, which I've translated to Python:

Code: Select all

def sort(list):
    if len(list) < 2:
        return list
    else:
        if list[0] <= minimum(list[1:]):
            return list[0:1] + sort(list[1:])
        else:
            return sort(list[1:] + list[0:1])

def minimum(list):
    return sort(list)[0]


I also asked about the time complexity analysis of this algorithm on the Computer Science Stack Exchange a while back; it might make for interesting reading for those here. (I gave credit to benneh, of course.)

Just for fun, here is the same code dropped into a wrapper, to run it on reverse-sorted lists and measure the time it takes:

Code: Select all

$ cat bad_sort.py
#!/usr/bin/python

import sys

def sort(list):
  if len(list) < 2:
    return list
  else:
    if list[0] <= minimum(list[1:]):
      return list[0:1] + sort(list[1:])
    else:
      return sort(list[1:] + list[0:1])

def minimum(list):
  return sort(list)[0]

mylist = list(range(int(sys.argv[1])))
mylist.reverse()

print(sort(mylist))
$ for n in {1..15}; do time -p ./bad_sort.py "$n"; done
[0]
real 0.03
user 0.02
sys 0.00
[0, 1]
real 0.02
user 0.02
sys 0.00
[0, 1, 2]
real 0.03
user 0.02
sys 0.00
[0, 1, 2, 3]
real 0.02
user 0.02
sys 0.00
[0, 1, 2, 3, 4]
real 0.03
user 0.02
sys 0.00
[0, 1, 2, 3, 4, 5]
real 0.03
user 0.02
sys 0.00
[0, 1, 2, 3, 4, 5, 6]
real 0.04
user 0.03
sys 0.00
[0, 1, 2, 3, 4, 5, 6, 7]
real 0.09
user 0.08
sys 0.00
[0, 1, 2, 3, 4, 5, 6, 7, 8]
real 0.48
user 0.47
sys 0.00
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
real 3.34
user 3.33
sys 0.01
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
real 27.03
user 27.01
sys 0.01
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
real 226.84
user 226.78
sys 0.06
^C