Deliberately bad algorithms

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Deliberately bad algorithms

Postby korona » Tue Oct 04, 2016 4:24 pm UTC

Xanthir wrote:
Tub wrote:
Yakk wrote:But the bound on which the mathematician certainly cannot calculate BB_k is pretty high.

What bound do you mean? For any k, there's at least one TM (or mathematician) which outputs BB_k, so there's no theoretical upper bound k for which BB_k cannot be known.

The upper bound of possible knowability is the size of the smallest UTM, no? If you knew BB for that size, you could solve the halting problem generally.

I don't think that is true. For BB_k you look at all k-state TMs (let's say we're only looking at 2 symbols in our alphabet) and determine which of these TMs is able to write the most 1s on the tape (starting from an empty input tape!). I don't see how knowing BB_k for a k such that there is a UTM with <= k states would help you solve the halting problem.

For the mathematician in the box: Keep in mind that there are no "short" (i.e. with length bounded by any computable function in k) proofs for BB_k = n: If there were such a function f, a TM could just enumerate all strings of length <= f(k) and check whether any of them is a valid proof of BB_k = n. Even the output n (as well as log(n), the size of its representation on a TM's tape) grows faster than any computable function.
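
To make that concrete, here is a Python sketch of the TM that would exist if such an f did. The checker proof_value is hypothetical (it stands in for proof verification in some fixed formal system), as is the alphabet; this is illustrative only:

Code: Select all

from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz ()=<>+-*0123456789"  # any fixed finite alphabet

def compute_bb(k, f, proof_value):
    # proof_value(s, k) is a *hypothetical* checker: it returns n if the
    # string s is a valid proof of "BB_k = n" in some fixed formal system,
    # and None otherwise. Proof checking is computable, so if proof lengths
    # were bounded by a computable f, this search would compute BB_k --
    # contradicting its uncomputability.
    for length in range(f(k) + 1):
        for chars in product(ALPHABET, repeat=length):
            n = proof_value("".join(chars), k)
            if n is not None:
                return n
    return None  # unreachable if f really bounds the proof length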

Tub
Posts: 319
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Deliberately bad algorithms

Postby Tub » Tue Oct 04, 2016 6:02 pm UTC

One can "know" a very high busy beaver number with very little information: you just need a UTM and the winning turing machine. Both should fit inside the box until BB(10^30) or something.
But to actually calculate the number (as required if you wish to use it in a sorting algorithm), you need a tape big enough to run the winning TM, and that tape won't fit the box for anything past BB(10).

I think what Xanthir is trying to do is this:
* produce a k-state UTM
* determine BB_k
* Solve the halting problem for a Turing machine T: Simulate T on the UTM for BB_k steps. If the UTM hasn't halted by then, T doesn't halt either.
Doesn't work, unfortunately, because BB_k is the maximum number of steps when running the UTM on an empty tape. If you provide a tape that already contains an encoding of T, then BB_k is no longer an upper bound.

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Tue Oct 04, 2016 6:39 pm UTC

That doesn't "know" it. You have the number, but you don't know it is the BB number.

To know a BB number, you have to have a proof that no halting TM (on an empty input) produces a longer run.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

ConMan
Shepherd's Pie?
Posts: 1632
Joined: Tue Jan 01, 2008 11:56 am UTC
Location: Beacon Alpha

Re: Deliberately bad algorithms

Postby ConMan » Tue Oct 04, 2016 10:24 pm UTC

However, I think you can borrow some of the UTM idea to get a nicely terrible algorithm.

  • Start from s = 0.
  • Choose k, l, m, n such that k + l + m + n = s (this enumeration is sketched below).
  • Generate all Turing machines with at most k states, together with all input lists of length at most l and maximum value m.
  • Run each machine for n steps on each input.
  • For each machine, check the following:
  1. Has it halted for every given input?
  2. If so, has it correctly sorted every given input?
  3. If so, was one of the given inputs the list we needed to sort?
  • If a machine meets all three of those criteria, then we have the sorted list.
  • Otherwise, select a different set of k, l, m, n with the same sum.
  • If all possible combinations have been tried, add 1 to s and start again.
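
A rough Python sketch of the "choose k, l, m, n" bookkeeping, purely for illustration (generating and simulating the Turing machines is the expensive part, and is left as a comment):

Code: Select all

def quadruples(s):
    # every (k, l, m, n) of non-negative integers with k + l + m + n == s
    for k in range(s + 1):
        for l in range(s + 1 - k):
            for m in range(s + 1 - k - l):
                yield k, l, m, s - k - l - m

# the outer search just walks s = 0, 1, 2, ... and tries each quadruple
for s in range(3):
    for k, l, m, n in quadruples(s):
        pass  # generate machines with <= k states, run each for n steps, etc.
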
pollywog wrote:
Wikihow wrote:* Smile a lot! Give a gay girl a knowing "Hey, I'm a lesbian too!" smile.
I want to learn this smile, perfect it, and then go around smiling at lesbians and freaking them out.

pogrmman
Posts: 337
Joined: Wed Jun 29, 2016 10:53 pm UTC
Location: Probably outside

Re: Deliberately bad algorithms

Postby pogrmman » Tue Oct 04, 2016 10:43 pm UTC

I know it's not really an algorithm, but I just fixed the shittiest code implementation I've ever written.

So, I created a webapp on pythonanywhere to make an animation of the most recent radar images from NOAA. I knew that the files were saved under a certain filename.
Spoiler:
http://radar.weather.gov/ridge/RadarImg/$IMGTYPE/$STATION/$STATION_$DATE_$TIME_$IMGTYPE.gif for those that are curious
However, the problem was that I didn't think there was a directory listing. They don't make the images available over FTP, so I had to use HTTP.

To circumvent this issue, I originally made a loop that started at the current time and counted backwards. I tried literally every possible image URL, waited for a response, and did this until I had however many images were needed.

Obviously, this made the loading times unbearably slow, which defeated the whole point of the webapp (which was to be a simple, light radar interface without any extra crap).

Recently, I found out that they DO have directory listings, so now my code does the smart thing: it parses the listing and gets the most recent images from that. I've managed to cut the loading time by an order of magnitude (or maybe more).


Anyway, that's the deliberately bad "algorithm" I implemented -- it's just about the worst way to fetch the images that I could think of!
I imagine they thought I was trying to mount a tiny-scale DoS attack each time I ran my webapp in the past...

Derek
Posts: 2154
Joined: Wed Aug 18, 2010 4:15 am UTC

Re: Slowest Sorting Algorithms that Terminate

Postby Derek » Fri Oct 28, 2016 12:09 am UTC

Shufflepants wrote:For some reason I'm interested in really slow sorting algorithms. We've all heard about terrible ones like Bogo Sort that has average running time of O(n!), but is not guaranteed to terminate. There are others such as Slow Sort which has a greater than polynomial running time and doesn't really admit any unnecessary work.

My own idea for a really slow one is what I call Turing Sort.

Turing Machines can be explicitly enumerated starting with a machine with 1 state, then all the machines with 2 states, and so on. So the idea for the sorting algorithm is to simulate Turing Machines.

Here's the rough pseudo-code:

Code: Select all

int i=0;
while(true) {
  initialize Turing Machine i and add it to the list
  for each Turing Machine in the list {
    simulate one step
    if the machine has halted {
      check to see if what the machine has produced on its tape is the sorted list, if it is return it
    } else {
      remove this machine from the list
    }
  }
}


Because for any given string there is some Turing machine which can produce it given a blank tape, we guarantee that this will eventually terminate even though many of the machines being simulated never will.

I estimated that this algorithm has an approximate running time of n! (the number of strings of the correct length) times 2^k (the number of Turing Machines with k states, where k will somehow depend on n). I'm sure there need to be several more factors, including the constant that represents the proportion of Turing Machines that halt (which is uncomputable), because you continue to process steps for all machines which have fewer states than the one that eventually produces the sorted list but do not halt.

As a funny side note, this algorithm also finds you the Turing Machine that minimizes some function of number of states and number of steps to produce the sorted list; such efficiency!

If anyone has any better estimates on the average running time, or any of their own really slow-running sorting algorithms that are guaranteed to terminate, I'd love to hear them.

I think your pseudocode has bugs. First of all, you never increment i. Second, I don't understand why you would remove machines from the list; just because they haven't halted yet doesn't mean they won't halt. Here's my attempt to fix the pseudocode:

Code: Select all

for(int i = 0; true; i++) {
  initialize Turing Machine i and add it to the list
  for each Turing Machine in the list {
    simulate one step
    if the machine has halted {
      check to see if what the machine has produced on its tape is the sorted list, if it is return it
    }
  }
}


I believe the runtime of this algorithm is O((i + t(n))^2), where t(n) is the asymptotically fastest time that a Turing machine can sort a list of size n in whatever model you've chosen, and i is the Goedel number of the first Turing machine that achieves this runtime.

My reasoning for this is that the algorithm is guaranteed to terminate when machine i has run for t(n) steps. We can visualize the total work done by this algorithm as a graph of machines 0 to infinity and the number of steps that have been executed on them. After n iterations of the outer for loop, machine 0 will have run for n steps, machine 1 for n-1 steps, machine 2 for n-2 steps, ... machine n-1 for 1 step, and machine n and onwards for 0 steps. In order to run machine i for t(n) steps we need to run the outer loop i + t(n) times, and summing up all the steps that will have been executed across all machines at this point gives us O((i + t(n))^2) work. The time taken to check that the output of halting machines is sorted is assumed to be negligible compared to the rest of the problem (most machines won't halt, or the output will be found to be unsorted in constant time).
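
The triangular schedule is easy to sanity-check numerically; a toy count of simulated steps (not a simulation of actual machines):

Code: Select all

def total_steps(i, t):
    # after i + t rounds of the outer loop, machine 0 has run i + t steps,
    # machine 1 has run i + t - 1 steps, ..., a triangular total
    rounds = i + t
    return rounds * (rounds + 1) // 2

print(total_steps(5, 100))  # 5565, i.e. O((i + t)^2)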

Since i is a constant, this works out to O(t(n)^2) with a very, very large constant. The smallest possible value of t(n) is O(n*log(n)) (if it were smaller then a comparison sort faster than O(n*log(n)) would be possible), giving O((n*log(n))^2) runtime, which is surprisingly good for a sort that iterates through all Turing machines.

This technique can be generalized to solve any problem in O(t(n)^2) if an algorithm exists to solve the problem in t(n), even if we don't know that algorithm. In particular, it gives us a polynomial time algorithm to solve any NP-hard problem if and only if P=NP.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Deliberately bad algorithms

Postby korona » Fri Oct 28, 2016 3:58 pm UTC

That can be improved to yield a totally impractical and useless but asymptotically optimal algorithm for every problem. Consider a problem that can be solved in O(f(n)) time by a DTM. Now we do:

Code: Select all

for i = 1 to infinity do
    run DTM number i for i * f(n) steps on the original input
    if the DTM halted and produced the correct output
        halt
done

What a nice O(n log n) sorting algorithm! Of course the question is: if you know that there is an O(f(n)) algorithm, why don't you just run that algorithm itself? But hey, at least the above can be implemented when you only have an existence proof for your O(f(n)) algorithm. (Why it's optimal: the first DTM that is both correct and fast enough has some fixed index i0, so the total work is at most (1 + 2 + ... + i0) * f(n) = O(f(n)).) Corollary: An upper bound on the worst-case run time of an arbitrary DTM-computable problem always allows you to construct an algorithm satisfying that upper bound.

Shufflepants
Posts: 16
Joined: Wed Jul 20, 2016 3:12 pm UTC

Re: Slowest Sorting Algorithms that Terminate

Postby Shufflepants » Fri Oct 28, 2016 4:08 pm UTC

Derek wrote:
Since i is a constant, this works out to O(t(n)^2) with a very, very large constant. The smallest possible value of t(n) is O(n*log(n)) (if it were smaller then a comparison sort faster than O(n*log(n)) would be possible), giving O((n*log(n))^2) runtime, which is surprisingly good for a sort that iterates through all Turing machines.

This technique can be generalized to solve any problem in O(t(n)^2) if an algorithm exists to solve the problem in t(n), even if we don't know that algorithm. In particular, it gives us a polynomial time algorithm to solve any NP-hard problem if and only if P=NP.


Yeah, there were some bugs; I meant for the "remove from list" to be conditional on "halted but not with the correct output". But you seem to misunderstand one piece: all the Turing Machines being started up aren't actually given the list to sort as input; they're all given blank tapes. So the machine that wins will not be one that has sorted the list so much as one that has produced the list in sorted order from nothing. So the winning machine will quite possibly have only run for O(n) steps, if that makes any sense to say for a machine that is only capable of producing the correct result for a single list. And because of that, I don't think your estimate of the running time makes much sense. And my description can be generalized to solve any NP-Hard problem, but I'm quite sure it won't do it in polynomial time.

What's perhaps more interesting is that you could also generalize this method to find a maximal compression (given a constraint on running time) for the sorted list.

Derek
Posts: 2154
Joined: Wed Aug 18, 2010 4:15 am UTC

Re: Slowest Sorting Algorithms that Terminate

Postby Derek » Fri Oct 28, 2016 4:16 pm UTC

Shufflepants wrote:Yeah, there were some bugs; I meant for the "remove from list" to be conditional on "halted but not with the correct output". But you seem to misunderstand one piece: all the Turing Machines being started up aren't actually given the list to sort as input; they're all given blank tapes. So the machine that wins will not be one that has sorted the list so much as one that has produced the list in sorted order from nothing. So the winning machine will quite possibly have only run for O(n) steps, if that makes any sense to say for a machine that is only capable of producing the correct result for a single list. And because of that, I don't think your estimate of the running time makes much sense. And my description can be generalized to solve any NP-Hard problem, but I'm quite sure it won't do it in polynomial time.

What's perhaps more interesting is that you could also generalize this method to find a maximal compression (given a constraint on running time) for the sorted list.

I see, yes, that is a big difference. So it's essentially using Turing machines as a random list generator (with a large chance of failing to produce any list at all) and running until it finds the correct list. Yeah, the runtime of that would be pretty awful.

Shufflepants wrote:So the winning machine will quite possibly have only run for O(n) steps

This is possible in mine as well. The O(n*log(n)) machine only provides an upper bound for the runtime, but for any given input any of the Turing machines may output the correct answer earlier. This is also why we can't use my algorithm to find the optimal algorithm.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Slowest Sorting Algorithms that Terminate

Postby korona » Fri Oct 28, 2016 4:37 pm UTC

Shufflepants wrote:
Derek wrote:
Since i is a constant, this works out to O(t(n)^2) with a very, very large constant. The smallest possible value of t(n) is O(n*log(n)) (if it were smaller then a comparison sort faster than O(n*log(n)) would be possible), giving O((n*log(n))^2) runtime, which is surprisingly good for a sort that iterates through all Turing machines.

This technique can be generalized to solve any problem in O(t(n)^2) if an algorithm exists to solve the problem in t(n), even if we don't know that algorithm. In particular, it gives us a polynomial time algorithm to solve any NP-hard problem if and only if P=NP.


Yeah, there were some bugs; I meant for the "remove from list" to be conditional on "halted but not with the correct output". But you seem to misunderstand one piece: all the Turing Machines being started up aren't actually given the list to sort as input; they're all given blank tapes. So the machine that wins will not be one that has sorted the list so much as one that has produced the list in sorted order from nothing. So the winning machine will quite possibly have only run for O(n) steps, if that makes any sense to say for a machine that is only capable of producing the correct result for a single list. And because of that, I don't think your estimate of the running time makes much sense. And my description can be generalized to solve any NP-Hard problem, but I'm quite sure it won't do it in polynomial time.

That is actually quite hard to analyze. In the uniform cost model the algorithm's running time is certainly unbounded (as a function of n): assume that there is some bound f(n) on the number of steps the algorithm needs to sort a list of size n. Consider the one-element list (x), where x is chosen to differ from the output of each of the first f(1) TMs. Then the algorithm needs more than f(1) steps to sort the list. Contradiction. In a per-bit cost model such an analysis is much harder (is it possible at all?).

Shufflepants wrote:What's perhaps more interesting is that you could also generalize this method to find a maximal compression (given a constraint on running time) for the sorted list.

I doubt that this is true; I'm not even sure how to rigorously interpret this sentence.

Derek
Posts: 2154
Joined: Wed Aug 18, 2010 4:15 am UTC

Re: Deliberately bad algorithms

Postby Derek » Fri Oct 28, 2016 6:40 pm UTC

korona wrote:That can be improved to yield a totally impractical and useless but asymptotically optimal algorithm for every problem. Consider a problem that can be solved in O(f(n)) time by a DTM. Now we do:

Code: Select all

for i = 1 to infinity do
    run DTM number i for i * f(n) steps on the original input
    if the DTM halted and produced the correct output
        halt
done

What a nice O(n log n) sorting algorithm! Of course the question is: if you know that there is an O(f(n)) algorithm, why don't you just run that algorithm itself? But hey, at least the above can be implemented when you only have an existence proof for your O(f(n)) algorithm. Corollary: An upper bound on the worst-case run time of an arbitrary DTM-computable problem always allows you to construct an algorithm satisfying that upper bound.

That's good, I hadn't thought of it. Though it requires you to know f(n) ahead of time, since it's a value in the algorithm. Is there an algorithm that can do better than O(f(n)^2) when f(n) is not known?

hotaru
Posts: 1024
Joined: Fri Apr 13, 2007 6:54 pm UTC

Re: Deliberately bad algorithms

Postby hotaru » Sun Oct 30, 2016 12:21 pm UTC

compress.sh:

Code: Select all

#!/bin/sh

if [ -z "$1" ]
then
  echo "Usage: $0 filename" >&2
  exit 1
fi

# "compress" a file down to its byte count plus a SHA-256 hash
wc -c "$1"
sha256sum -b "$1"


uncompress.sh:

Code: Select all

#!/bin/sh

if [ -z "$1" ]
then
  echo "Usage: $0 filename" >&2
  exit 1
fi

read size name < "$1"

# brute force: write random bytes of the right size until the hash checks out
until sha256sum -c "$1" 2>/dev/null >/dev/null
do
  dd if=/dev/urandom of="$name" bs=1 count="$size" 2>/dev/null
done

Code: Select all

factorial = product . enumFromTo 1
isPrime n =
  factorial (n - 1) `mod` n == n - 1

Tub
Posts: 319
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Deliberately bad algorithms

Postby Tub » Sun Oct 30, 2016 1:45 pm UTC

Yeah, that compression is going to be a bit lossy for anything longer than 32 bytes (the length of the hash). It will most likely produce a hash collision instead of the original input.

hotaru
Posts: 1024
Joined: Fri Apr 13, 2007 6:54 pm UTC

Re: Deliberately bad algorithms

Postby hotaru » Sun Oct 30, 2016 2:14 pm UTC

Tub wrote:Yeah, that compression is going to be a bit lossy for anything longer than 32 bytes (the length of the hash). It will most likely produce a hash collision instead of the original input.

more likely anyone who runs it will give up long before that happens.

Code: Select all

factorial = product . enumFromTo 1
isPrime n =
  factorial (n - 1) `mod` n == n - 1

phlip
Restorer of Worlds
Posts: 7543
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

Re: Deliberately bad algorithms

Postby phlip » Sun Oct 30, 2016 10:30 pm UTC

Surely, for consistency with other command-line single-file compression utils, this should by default delete the original file after compression...

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

DeGuerre
Posts: 46
Joined: Mon Feb 04, 2008 6:41 am UTC

Re: Deliberately bad algorithms

Postby DeGuerre » Mon Nov 21, 2016 7:05 am UTC

I don't know if this counts, but the algorithm with the worst complexity that I've ever implemented was Tarski's algorithm for quantifier elimination in real closed fields.

To explain what I mean by an algorithm having bad complexity: the complexity class O(2^N) is called EXPTIME. The complexity class O(2^(2^N)) is called DEXPTIME or 2-EXPTIME. You can similarly define 3-EXPTIME, and so on. The union of the classes n-EXPTIME for all finite integers n is known as ELEMENTARY.

If N is the number of quantifiers to be eliminated, Tarski's algorithm is in the class NONELEMENTARY. I'm just going to let that sink in for a moment.

To be fair to Tarski, his algorithm was an existence proof, and DEXPTIME algorithms to solve the problem are now known. However, Tarski's algorithm is much easier to implement, and even though it is not feasible to run it on any problem where N>1, it works just fine on problems where N=1. That's all I needed for the problem I had to solve at the time.

EDIT

By the way, if you're curious about what the problem is, and don't care to read a monograph to find out, here's the elevator pitch version.

So you should know that the theory of Peano arithmetic (natural numbers with addition and multiplication) is undecidable, as proven by Gödel. What you may not know (but probably aren't surprised to learn) is that if you remove multiplication, the theory (which is the theory of Presburger arithmetic) is decidable.

What Tarski showed is that the theory of real closed fields (real numbers with all the field operations, both equality and inequality) is decidable. This seems weird, because it seems like real numbers should be a superset of the integers. But the reals don't have an equivalent of things like prime numbers. If it helps, consider that rational or real linear programming is much easier to solve than integer linear programming. It's the discrete nature of the integers which makes the problem harder.

In fact, Tarski showed that real closed fields have an even stronger property: quantifier elimination. If you have a statement which includes a quantifier, then it can be converted into an equivalent statement which doesn't have that quantifier.

For example, consider the statement:

∃x. ax² + bx + c = 0


This is equivalent to the following statement, which doesn't contain the existential quantifier:

(a = 0 ∧ b = 0 ∧ c = 0) ∨ (a = 0 ∧ b ≠ 0) ∨ (a ≠ 0 ∧ b² ≥ 4ac)


Tarski's algorithm performs this transformation.

Derek
Posts: 2154
Joined: Wed Aug 18, 2010 4:15 am UTC

Re: Deliberately bad algorithms

Postby Derek » Wed Nov 30, 2016 5:13 am UTC

DeGuerre wrote:What you may not know (but probably aren't surprised to learn) is that if you remove multiplication, the theory (which is the theory of Presburger arithmetic) is decidable.

No, that statement is very surprising. It's not at all obvious why multiplication is necessary to prove Godel's Incompleteness Theorem. You have to dig into the details to find that multiplication is necessary in the Godel encoding of statements (and the encoding of primitive recursive functions in particular). In fact, I'd say that it seems obvious that multiplication could be defined in Presburger arithmetic using its axiom schema of induction (it can't be, but I don't have a good explanation as to why).

Mike Rosoft
Posts: 63
Joined: Mon Jun 15, 2009 9:56 pm UTC
Location: Prague, Czech Republic

Re: Deliberately bad algorithms

Postby Mike Rosoft » Sun Dec 25, 2016 2:26 pm UTC

A classical bad algorithm for Fibonacci numbers:

Code: Select all

ulong Fibonacci(uint x)
{
  if(x==0) return 0;
  else if(x==1) return 1;
  else return Fibonacci(x-1)+Fibonacci(x-2);
}


Okay, it computes the value exactly by definition, but...

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Sun Dec 25, 2016 11:48 pm UTC

That is not trying.

Code: Select all

#include <vector>

int fib(int x){
  if (x==0 || x==1) return 1;
  std::vector<int> cache(x);
  for(int i = 0; i < x; ++i)
    cache[i] = fib(i);
  return cache[x-1]+cache[x-2];
}

Look, I added a cache! That makes it faster, right?

(no)

Sadly, it isn't much slower than the naive dumb one. Exponential is hard to beat with more dumbness.
Last edited by Yakk on Fri Dec 30, 2016 5:02 am UTC, edited 2 times in total.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

PM 2Ring
Posts: 3620
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

Re: Deliberately bad algorithms

Postby PM 2Ring » Wed Dec 28, 2016 7:29 am UTC

Mike Rosoft wrote:A classical bad algorithm for Fibonacci numbers:

Code: Select all

ulong Fibonacci(uint x)
{
  if(x==0) return 0;
  else if(x==1) return 1;
  else return Fibonacci(x-1)+Fibonacci(x-2);
}


Okay, it computes the value exactly by definition, but...

That algorithm got a mention back on page 1. Also see Xanthir's sig. Actually, it's not bad if you give it a memoizing cache, but I agree there are better ways to calculate Fibonacci numbers. :)
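
For instance, a minimal memoized sketch in Python (functools.lru_cache does the caching, which turns the exponential double recursion linear):

Code: Select all

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # the same recursion, but each value is computed only once
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # 2880067194370816120, instantly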

Xanthir
My HERO!!!
Posts: 5228
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Deliberately bad algorithms

Postby Xanthir » Thu Jan 05, 2017 11:49 pm UTC

Hey, my sig is linear time!
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Wildhound
Posts: 248
Joined: Fri Jan 28, 2011 3:34 pm UTC
Location: Lost in Time.

Re: Deliberately bad algorithms

Postby Wildhound » Fri Mar 24, 2017 11:37 am UTC

What about a sort whose run time is based not just on the size of the array, but on the size of the values? I call it IterSort, and it's horrifying.

Code: Select all

public static int[] iterSort(int[] unsorted) {
   
   int[] result = new int[unsorted.length];
   int count = 0;      
   
   for(int i = Integer.MIN_VALUE; true; i++) {
      
      for(int j = 0; j < unsorted.length; j++) {
         
         if(unsorted[j] == i) {
            result[count] = i;
            count++;
         }
      }
      
      if(count == unsorted.length)
          break;
   }
   
   return result;
}


Or in pseudocode:

Code: Select all

Start at the minimum value and iterate toward infinity.
   Check each element of the array against the current iterator value. If equal:
      Insert the current iterator value into the next slot of the result array.
   If the array has been filled, break.
Return result array.
Now and forever, a staunch TimeKeeper amongst heretics.

OTC Android Live Wallpaper

Flumble
Yes Man
Posts: 1951
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Deliberately bad algorithms

Postby Flumble » Fri Mar 24, 2017 3:53 pm UTC

...but that's merely O(n^2). Or O(O) (pronounced "uh-oh!") if you try to sort values of an unbounded type. :P (yes, you can sort it in finite time if your longest element has n bits and you only iterate over values of 1 to n bits, but you won't sort [[2..],[1..]])

>-)
Posts: 512
Joined: Tue Apr 24, 2012 1:10 am UTC

Re: Deliberately bad algorithms

Postby >-) » Sun Mar 26, 2017 10:12 pm UTC

i learned this algorithm for finding a path from vertices s to t in a graph in class recently:

Code: Select all

//is there a path of length at most k from s to t?
if k <= 1, return whether s == t or (s, t) is an edge
loop over vertices w in V:
    recursively compute if there is a path from s to w with length at most ceil(k/2)
    recursively compute if there is a path from w to t with length at most ceil(k/2)
    if yes to both, return true
return false


then just run this algorithm with k set to the number of vertices in the graph

this algorithm runs in n^(log n) time, which is not even polynomial, so i think it qualifies for this thread
however, it happens to be the most efficient known algorithm for the path problem in terms of space -- it uses O(log^2 n) extra space.
(the undirected case can be done with O(log n) space)
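
Here's a runnable Python sketch of that recursion (essentially Savitch's theorem), with the base case spelled out; adj maps each vertex to its successor set:

Code: Select all

def reach(adj, s, t, k):
    # is there a path of length at most k from s to t?
    if k <= 1:
        return s == t or t in adj[s]
    half = (k + 1) // 2  # ceil(k/2)
    return any(reach(adj, s, w, half) and reach(adj, w, t, half)
               for w in adj)

# usage: 0 -> 1 -> 2 -> 3
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(reach(adj, 0, 3, len(adj)))  # True

# recursion depth is O(log k), and each frame stores one vertex counter
# (log n bits), hence the O(log^2 n) space -- at the price of n^(log n) time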

Tub
Posts: 319
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Deliberately bad algorithms

Postby Tub » Mon Mar 27, 2017 8:04 am UTC

Interesting tradeoff. Though I have yet to meet a problem that requires checking existence of a path without actually returning the path.
>-) wrote:it uses O(log^2 n) extra space. (the undirected case can be done with O(log n) space)

Really? It would seem that the call stack is at most O(log n) calls deep, and each stack frame stores O(1) information. And the algorithm as written should work in both directed and undirected versions. What am I missing?

Flumble
Yes Man
Posts: 1951
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Deliberately bad algorithms

Postby Flumble » Mon Mar 27, 2017 2:50 pm UTC

Tub wrote:It would seem that the call stack is at most O(log n) calls deep, and each stack frame stores O(1) information.

The loop iterator also has size log n, so I think that's where the square comes from.

>-)
Posts: 512
Joined: Tue Apr 24, 2012 1:10 am UTC

Re: Deliberately bad algorithms

Postby >-) » Mon Mar 27, 2017 3:05 pm UTC

Yes, that is indeed where the extra log factor comes from. And it does work for both directed and undirected versions -- I was just mentioning at the end that in the undirected case, there is an even more efficient algorithm which runs in just log n space.

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Mon Mar 27, 2017 8:05 pm UTC

Why would a vector iterator need log(n) space?

Btw, log^2 is awkward notation.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

>-)
Posts: 512
Joined: Tue Apr 24, 2012 1:10 am UTC

Re: Deliberately bad algorithms

Postby >-) » Mon Mar 27, 2017 8:38 pm UTC

because a binary counter going from 1 to n needs at least log n bits. I'm not sure how C++ iterators are implemented, but they are probably just 64-bit datatypes, in which case they do not actually solve this problem correctly for graphs with more than 2^64 vertices

ive always written it as log^2 n as opposed to (log n)^2 for two reasons. first, on paper it's written down as log^2 ... and second because it's two fewer characters

Derek
Posts: 2154
Joined: Wed Aug 18, 2010 4:15 am UTC

Re: Deliberately bad algorithms

Postby Derek » Sat Apr 01, 2017 12:12 am UTC

Yakk wrote:Why would a vector iterator need log(n) space?

Btw, log^2 is awkward notation.

Indexing a collection of size n requires at least log_2(n) space. It doesn't matter whether the index is a counter, a pointer, or some more complicated structure.

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Sat Apr 01, 2017 4:29 pm UTC

>-) wrote:because a binary counter going from 1 to n needs at least log n bits. I'm not sure how C++ iterators are implemented, but they are probably just 64-bit datatypes, in which case they do not actually solve this problem correctly for graphs with more than 2^64 vertices

ive always written it as log^2 n as opposed to (log n)^2 for two reasons. first, on paper it's written down as log^2 ... and second because it's two fewer characters

But log log n is also a possibility. I was trying to work out how you'd justify that. It was challenging. ;)
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

>-)
Posts: 512
Joined: Tue Apr 24, 2012 1:10 am UTC

Re: Deliberately bad algorithms

Postby >-) » Sat Apr 01, 2017 4:40 pm UTC

I guess it's just convention really, since i've only ever seen log log n written as log log n

The Great Hippo
Swans ARE SHARP
Posts: 6862
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Deliberately bad algorithms

Postby The Great Hippo » Sun Apr 02, 2017 9:03 am UTC

Works in Python 3.x:

Code: Select all

import sys



def dude_wheres_my_sort(seq):
  module = sys.modules[__name__]
  my_stuff = module.__dict__
  gross_stuff = ("exit", "quit", "input")
  for stuff in my_stuff:
    if stuff in gross_stuff: continue
    maybe_THIS_sorts = my_stuff[stuff]
    try:
      i_dunno = maybe_THIS_sorts(seq)
    except Exception as apparently_not:
      continue
    if i_dunno == looks_sorted_to_me(seq):
      return i_dunno
 

def looks_sorted_to_me(seq):
  return sorted(seq)
What?

There's gotta be a sort function in there somewhere, right?


EDIT: Yeah, yeah, OKAY, you caught me; I cheated by using 'sorted' to check my result. FINE, MOM -- I'll do it right:

Code: Select all

import sys
import ctypes

Py_ssize_t = \
    hasattr(ctypes.pythonapi, 'Py_InitModule4_64') \
    and ctypes.c_int64 or ctypes.c_int

class PyObject(ctypes.Structure):
    pass

PyObject._fields_ = [
    ('ob_refcnt', Py_ssize_t),
    ('ob_type', ctypes.POINTER(PyObject)),
]

class SlotsProxy(PyObject):
    _fields_ = [('dict', ctypes.POINTER(PyObject))]


def patchable_builtin(klass):
    name = klass.__name__
    target = klass.__dict__

    proxy_dict = SlotsProxy.from_address(id(target))
    namespace = {}

    ctypes.pythonapi.PyDict_SetItem(
        ctypes.py_object(namespace),
        ctypes.py_object(name),
        proxy_dict.dict,
    )
    return namespace[name]



def sequence_sort_thyself(seq):
  for name in sys.modules:
    module = sys.modules[name]
    for key in module.__dict__:
      try:
        cls = module.__dict__[key].__class__
      except AttributeError:
        continue
      try:
        sort_method = cls.__dict__["sort"]
      except KeyError:
        continue
      builtin_cls_dict = patchable_builtin(type(seq))
      builtin_cls_dict["why_dont_you_have_this"] = sort_method
      seq.why_dont_you_have_this()
      return seq

sort_this = [5, 4, 3, 2, 1]

sequence_sort_thyself(sort_this)
THERE! I found a "sort" method somewhere, shoved it into your dumb sequence object, then called the method and returned the object! Are you happy, now!?

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Sun Apr 02, 2017 12:11 pm UTC

The Great Hippo wrote:THERE! I found a "sort" method somewhere, shoved it into your dumb sequence object, then called the method and returned the object! Are you happy, now!?

I dunno, assuming just because a method is called sort it sorts seems bad.

You should check that it returns the same for every permutation of the source sequence, *and* that its result is the highest voted among sort methods with this property, *and* that it preserves length (by counting). (Checking length only is an optimization over checking that the elements are ascending, as length just requires counting, not comparisons.)
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

The Great Hippo
Swans ARE SHARP
Posts: 6862
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Deliberately bad algorithms

Postby The Great Hippo » Sun Apr 02, 2017 1:24 pm UTC

Yakk wrote:I dunno, assuming just because a method is called sort it sorts seems bad.

You should check that it returns the same for every permutation of the source sequence, *and* that its result is the highest voted among sort methods with this property, *and* that it preserves length (by counting). (Checking length only is an optimization over checking that the elements are ascending, as length just requires counting, not comparisons.)
You're right; I shouldn't presume that just because a method is called "sort", it's actually going to sort my sequence. That's deeply prejudiced of me; I apologize.

But what you're describing sounds kind of hard, so... instead, why don't we just try all the methods? Not just the ones on our sequence -- I mean, maybe they forgot to put a sort method on it, right? So let's be safe: We'll use every one.

EEEEVEEEEERYYYYYOOOOONE

Code: Select all

import sys

bad_methods = ("clear", "__call__", "pop", "__init__")

def bring_me_everyone(seq):
  copy_seq = type(seq)(seq)
  while copy_seq == seq:
    seq = EVEEEERYYYYYYOOOOOONNNNNNNE(seq)
  return seq


def EVEEEERYYYYYYOOOOOONNNNNNNE(seq):
  for name in dict(sys.modules):
    module = sys.modules[name]
    for key, value in module.__dict__.items():
      if hasattr(value, "__class__"):
        for method_name, method in value.__class__.__dict__.items():
          if method_name in bad_methods: continue
          try:
            method(seq)
          except:
            continue
  return seq


sort_this = [5, 4, 3, 2, 1]

print(bring_me_everyone(sort_this))
(This actually does work... maybe. Sometimes.)

The Great Hippo
Swans ARE SHARP
Posts: 6862
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Deliberately bad algorithms

Postby The Great Hippo » Sun Apr 02, 2017 1:36 pm UTC

(I also briefly dabbled with a function that loads every method it finds in every module into the sequence's class (even if it's a built-in), then calls them -- but it didn't show much promise. On the flip-side, though, you end up with one class that contains every method! Who needs inheritance?)

phlip
Restorer of Worlds
Posts: 7543
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

Re: Deliberately bad algorithms

Postby phlip » Sun Apr 02, 2017 2:00 pm UTC

(Very rough pseudocode...)

Code: Select all

for path in sys.path:
  for file in os.walk(path):
    __import__(file)

for o in gc.get_objects():
  if isinstance(o, types.FunctionType):
    try:
      o(seq)
      etc
Every function. Who cares where it came from.

Also, to go back a step... I've never seen "log² x" to mean "(log x)²"... I've only ever seen that notation for trig functions, sin²x etc. I guess because functional powers of trig functions aren't particularly common (except for that whole "cos(cos(cos(...))) is a weird useless constant" thing) while (sin x)² comes up quite frequently, and sin²x lets you skip the parens, even if it's kinda an abuse of notation it makes sense. On the other hand, log(log x) is a thing that happens on rare occasion, while (log x)² doesn't come up so frequently that we're hurting for an abbreviation for it...

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Sun Apr 02, 2017 5:47 pm UTC

And everyone (who studies algorithmic complexity) should know about log^*.

Because the story behind it showing up in an analysis is funny.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Flumble
Yes Man
Posts: 1951
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Deliberately bad algorithms

Postby Flumble » Sun Apr 02, 2017 6:00 pm UTC

phlip wrote:Also, to go back a step... I've never seen "log² x" to mean "(log x)²"... I've only ever seen that notation for trig functions, sin²x etc. I guess because functional powers of trig functions aren't particularly common (except for that whole "cos(cos(cos(...))) is a weird useless constant" thing) while (sin x)² comes up quite frequently, and sin²x lets you skip the parens, even if it's kinda an abuse of notation it makes sense. On the other hand, log(log x) is a thing that happens on rare occasion, while (log x)² doesn't come up so frequently that we're hurting for an abbreviation for it...

Ugh, I hate the use of superscript on a function for exponentiation instead of application.
However, it was pretty unambiguous in >-)'s algorithm, because the k/2 recursion entails a complexity of at least O(log n).


Yakk wrote:Because the story behind it showing up in an analysis is funny.

Do you have a link for us uninitiated?

Yakk
Poster with most posts but no title.
Posts: 11049
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Deliberately bad algorithms

Postby Yakk » Sun Apr 02, 2017 6:09 pm UTC

It was in one of the complexity bibles. Some problem that everyone thought was n log n, but nobody could prove it. If you measured its growth rate, it was n log n.

Turns out it was n log n log^* n. Because you cannot experimentally distinguish log^* n from a constant.
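
For the uninitiated: log* n, the iterated logarithm, counts how many times you apply log before the value drops to 1. A quick Python illustration of just how slowly it grows:

Code: Select all

from math import log2

def log_star(x):
    # number of times log2 must be applied before the value is <= 1
    n = 0
    while x > 1:
        x = log2(x)
        n += 1
    return n

print(log_star(65536))       # 4
print(log_star(2 ** 65536))  # 5 -- you really can't tell it from a constant
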
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

