0287: "NP-Complete"

 Posts: 1
 Joined: Mon Jul 09, 2007 5:18 pm UTC
Fiddly wrote:I thought the knapsack problem is maximizing utility subject to capacity constraints. The problem described here seems a bit different.
necroforest wrote:I'm surprised nobody has pointed this out: the problem isn't NP-Complete, since it's not a decision problem, therefore not in NP.
A friend of mine noticed that this isn't the knapsack problem because of the lack of utility. I then realized that this is just a problem of making exact change for $15.05 where the "coins" are mixed fruit, french fries, etc. How many nerd points do we get?
exsustumicist wrote:A friend of mine noticed that this isn't the knapsack problem because of the lack of utility. I then realized that this is just a problem of making exact change for $15.05 where the "coins" are mixed fruit, french fries, etc. How many nerd points do we get?
Well, essentially those two problems are the same, only with the coins you know for sure for any total amount, there's a combination of coins that will add up to that amount. With a knapsack you just try to approximate it as closely as possible, and if you're lucky you'll fill it completely, which happens to be the case in this comic.
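For anyone who wants to check the thread's answers, here is a quick brute-force sketch in modern Python (not any poster's original code; the prices are the comic's appetizer prices, kept in cents so the comparison is exact):

```python
from itertools import product

# Appetizer prices in cents (from the comic), in menu order.
prices = [215, 275, 335, 355, 420, 580]
target = 1505  # $15.05

# No solution can use more than 7 of any item (7 * 215 = 1505 already),
# so counts 0..7 per item are exhaustive.
solutions = [combo for combo in product(range(8), repeat=len(prices))
             if sum(k * p for k, p in zip(combo, prices)) == target]

print(solutions)
```

It finds exactly the two orders discussed later in the thread: seven mixed fruits, or one mixed fruit, two hot wings and one sampler plate.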

 Posts: 2
 Joined: Mon Jul 09, 2007 6:56 pm UTC

 Posts: 2
 Joined: Mon Jul 09, 2007 7:08 pm UTC
General Solution in Python
Here's my general solution in python:
http://www.enigmacurry.com/2007/07/09/m ... nchbreak/
This comic was way too much fun!
Interesting
Thanks for the Python solution, it was interesting to see someone with Python experience code an answer. It will take me quite a while to decode what yours does. I have virtually no Python (or programming) experience, so my solution came out quite different:
It's probably horribly inefficient, but it worked, which I was happy to see!
Code: Select all
a=0;b=0;c=0;d=0;e=0;f=0
# Prices in cents; the comic's target is $15.05 = 1505 cents.
while f < 10:
    if 215*a+275*b+335*c+355*d+420*e+580*f == 1505:
        print a,",",b,",",c,",",d,",",e,",",f,"."
    # Odometer-style counter: each count rolls over into the next.
    if a < 9:
        a = a+1
    else:
        a = 0
        if b < 9:
            b = b+1
        else:
            b = 0
            if c < 9:
                c = c+1
            else:
                c = 0
                if d < 9:
                    d = d+1
                else:
                    d = 0
                    if e < 9:
                        e = e+1
                    else:
                        e = 0
                        if f < 9:
                            f = f+1
                        else:
                            f = 10

 Posts: 3
 Joined: Wed Oct 11, 2006 12:58 pm UTC
General solution?
My python solution:
Code: Select all
total = 1505
options = (215, 275, 335, 355, 420, 580)
cache = {0: set([(0,)*len(options)])}
def orders(q):
if q not in cache: cache[q] = set(v[:o]+(v[o]+1,)+v[o+1:] for o,j in enumerate(options) if q-j >= 0 for v in orders(q-j))
return cache[q]
print orders(total)
Re: General solution?
RichardSmith wrote:My python solution:Code: Select all
total = 1505
options = (215, 275, 335, 355, 420, 580)
cache = {0: set([(0,)*len(options)])}
def orders(q):
if q not in cache: cache[q] = set(v[:o]+(v[o]+1,)+v[o+1:] for o,j in enumerate(options) if q-j >= 0 for v in orders(q-j))
return cache[q]
print orders(total)
Wow. Your solution is *way* cleaner than mine.
Code: Select all
appprice = [2.15, 2.75, 3.35, 3.55, 4.20, 5.80]
appcount = [0, 0, 0, 0, 0, 0]
parties = 10
pricetarget = 15.05
results = []

def testres(target, results):
    sum = 0
    for i in range(len(appcount)):
        sum += appcount[i]*appprice[i]
    if abs(sum - target) < 0.01:
        results += [appcount[:]]

def inctest(level, maxlevel, target, results):
    for i in range(parties+1):
        appcount[level] = i
        if level != maxlevel:
            inctest(level+1, maxlevel, target, results)
        else:
            testres(target, results)

inctest(0, len(appcount)-1, pricetarget, results)
print results

 Posts: 3
 Joined: Wed Oct 11, 2006 12:58 pm UTC
Re: General solution?
Annirak wrote:Wow. Your solution is *way* cleaner than mine.
It's also a lot faster ;) It's O(nc) (where n == number of menu items, c == total cost). Solving NP-complete problems in polynomial time is fun.
ptveite wrote:I agree, it's like the antimatter/matter of the restaurant world. You bring healthy food and mozzarella sticks onto the same table and they annihilate each other.
No dude, the fruit can cover for the mozz stix, they go down to the stomach together and the canteloupe will be all "it's cool, he's with me"
Nobody for the Mitch Hedberg reference? <sigh>
 Yakk
 Poster with most posts but no title.
 Posts: 11129
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
necroforest wrote:I'm surprised nobody has pointed this out: the problem isn't NP-Complete, since it's not a decision problem, therefore not in NP.
There is an implicit decision problem: is there a set of dishes that satisfies the criteria?
The actual order is the hint that allows you to verify the problem in P time.
Alright. What does NP-complete mean?
P time (or just P) is the set of problems solvable in polynomial time.
NP time (or just NP) is the set of problems for which, given some hint that "describes" the solution, you can verify the solution in polynomial time.
An NP-hard problem is one into which you can transform any NP problem in polynomial time.
An NP-complete problem is an NP-hard problem that is also in NP.
It is not known whether P equals NP, but most people believe they are not equal.
Note that every P problem is in NP. If a P problem was found to be NP-complete, it would prove that P = NP.
So the NP-complete problems are the interesting problems in NP.
NP stands for "Nondeterministic Polynomial", and refers to running the problem on a nondeterministic Turing machine and finding the solution in P time.
However, given the execution path that solves the problem on the NDTM, you can verify the solution on a DTM in P time, so it is easier to think of NP problems as efficiently verifiable problems rather than nondeterministically solvable ones.
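Concretely, the "hint" for the comic's problem is just the order itself, and checking it is a handful of multiplications and additions. A minimal sketch in Python (illustrative, not from the thread; names are made up):

```python
def verify(order, prices, total):
    """Check a claimed order (the certificate) in linear time."""
    return sum(k * p for k, p in zip(order, prices)) == total

# Seven mixed fruits at $2.15 each add up to exactly $15.05 (in cents here).
print(verify((7, 0, 0, 0, 0, 0), (215, 275, 335, 355, 420, 580), 1505))  # True
```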
...
As for the problem, take the GCD of the prices (which is at least 0.05), and you have a 301-entry dynamic programming problem.
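A sketch of that dynamic program in modern Python (not Yakk's code; the GCD of the prices and the total works out to a nickel, giving table indices 0..301):

```python
from math import gcd
from functools import reduce

prices = [215, 275, 335, 355, 420, 580]  # cents
target = 1505

g = reduce(gcd, prices + [target])  # a nickel: every price is a multiple of 5
steps = [p // g for p in prices]
goal = target // g                  # 301, so the table runs over 0..301

# reachable[t] is True when some multiset of items sums to exactly t nickels.
reachable = [False] * (goal + 1)
reachable[0] = True
for t in range(1, goal + 1):
    reachable[t] = any(t >= s and reachable[t - s] for s in steps)

print(reachable[goal])
```

This runs in time proportional to the number of items times the number of table entries, which is where the O(nc) claim above comes from.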
 Graham's Number
 Posts: 33
 Joined: Mon Jul 09, 2007 9:27 pm UTC
Re: The Haskell answer
chessguy wrote:mwn3d wrote:make sure you dont forget the barbecue sandwich as an option....$6.55.
That's not an appetizer. Look at the menu: that's clearly a separate section. There are only 2 solutions:
Prelude> let g = [0..10] in [[a,b,c,d,e,f] | a <- g, b <- g, c <- g, d <- g, e <-
 g, f <- g, abs(a*2.15+b*2.75+c*3.35+d*3.55+e*4.2+f*5.8-15.05)<0.001]
[[1.0,0.0,0.0,2.0,0.0,1.0],[7.0,0.0,0.0,0.0,0.0,0.0]]
That is:
1 Mixed Fruit, 2 Hot Wings, 1 sampler plate, (EDIT: already mentioned) or
7 Mixed Fruit (already mentioned)
I had to register to give this answer
I'm really sorry about this, but I have no idea what you are on about here. I can see why those are solutions, but I have no idea how you deduced that they were the only two using that. Could someone please explain this (bearing in mind I don't even have a GCSE in Maths yet, although I am determined to learn more than the dratted course has in it)? Is this the sort of thing Mathematica is good for?
P.S. This looks like the perfect sort of forum for me. I love the fact that there are so many hidden jokes that most people would not even realise were there...
 BugEyed Earl
 Posts: 6
 Joined: Fri Apr 27, 2007 4:17 am UTC
 Location: Boston
 Contact:
ptveite wrote:ptveite wrote:I agree, it's like the antimatter/matter of the restaurant world. You bring healthy food and mozzarella sticks onto the same table and they annihilate each other.
No dude, the fruit can cover for the mozz stix, they go down to the stomach together and the canteloupe will be all "it's cool, he's with me"
Nobody for the Mitch Hedberg reference? <sigh>
Your second post beat me to it. =/
I tip my hat to you, sir.
jlintern wrote:Ren wrote:
"Oh no, not again."
That was my favorite part.
Am I the only one who noticed that the customer is trying to get the server to apply the traveling salesman problem to the six tables he needs to traverse as quickly as possible?
The head in the hands bit was my favourite too. And I did notice the traveling salesman reference. But then I got distracted writing something to solve the problem.
oblivimous wrote:Reading up on the travelling salesman reminded me of something I've wondered for a while: What method do Google maps, MapQuest, etc. use to find the fastest possible route?
Something I might be able to answer, as I've already implemented pathfinding algorithms (though not on Google's scale, obviously). I would like to apologize in advance for this rant, especially to mathematicians, who will be horrified at how I treat their lovely algorithms.
You wouldn't use Dijkstra, or any deterministic method, on a graph of the complexity of real-world road networks. The problem space is just too large. Usually, as a first swipe you'd use A*, which very briefly is 'Dijkstra with economical preference': instead of visiting all paths in a depth-first or breadth-first search, at each iteration you look at all the edges you could take next, and compute a path cost for each. Then you go for the cheapest route, and repeat until you reach your goal, or reach a termination point (too many iterations, "can't get any closer than this" ...). If, while you construct a particular path, a route you'd discarded previously suddenly becomes cheaper than the one you've been on so far, you start exploring that one instead, always keeping to the cheapest overall path until you reach the goal.
Yes, my explanation sucks, and Wikipedia or the intarweb most likely have an intelligible one.
The critical part of A* is computing the cost, which is usually done by summing the cost of each edge that forms the path. The cost of each edge is a heuristic function, and is the hard part to get right. If the heuristic is too soft, the algorithm will tend to "wander" a lot instead of homing in on the right path quickly and with little branching/backtracking. If the heuristic is terminally bad (cost of 0 for all edges), A* devolves into its worst case performance, equivalent to Dijkstra's algorithm.
For a vehicle route planner heuristic:
- You'd start by considering whether travelling along an edge actually brings you closer to your target;
- You'd then factor the travel time into the heuristic: given the speed limit on that edge, and its length, it is more or less costly in time to take it, so that is a factor in selecting routes;
- Then you'd also add real-time or recorded statistical data about traffic on that edge at the planned time of travel, to account for rush hours and route around known bottlenecks;
- Then you get even fancier by adding knowledge of traffic lights at intersections, which add a cost to traversing a particular node in the graph;
- Traffic code considerations can also be factored in: for example, I know that in California it is legal to make a right turn at an intersection, even if the light is red. You could use that to reduce the cost of a right turn at intersections that are in California (which any decent geo database will tell you with relative ease, once you've defined California) relative to left turns or going straight on (as an aside, I understand that UPS have done this for the route planners their delivery vans use, and have cut fuel consumption by a non-negligible amount across the board by avoiding long idling at traffic lights);
- You can also give the heuristic a little memory by giving it the previous edges in the route, to implement general policies such as "if you're on a freeway, it is better to stay on it longer, rather than jumping on and off" (by adding an extra cost to moving between a "freeway" edge type and an "off-ramp" edge type, so that you only leave the freeway when you start badly overshooting your target);
- You can add or remove constants to the cost depending on user-selected options (for example, add an infinite cost to taking an interstate when you're on foot, or an elevation cost to a road sloping up if you're on a bicycle).
With all these, and likely countless other factors considered, you end up with a heuristic function that is the sum of many weighted sub-heuristics:
h(edge) = A*distance_to_target_after(edge) + B*travel_time_along(edge) + C*vehicle_is_african_swallow() + ...
For the correct value(s) of [A,B,C,...], A* will find paths optimally. Now the pathfinding problem is transformed into a numerical optimization problem: find the best value(s) of [A,B,C,...] such that the resulting A* heuristic finds paths in as little time as possible. And for this problem, you apply yet more heuristic algorithms, but offline, using supercomputers or grid computing: simulated annealing, tabu search, genetic algorithms (my personal favorite; watching a simulation of evolution by natural selection is really, really, really cool)... Test each member of the population of potential solutions being considered with a few hundred/thousand/tens of thousands of test paths, score based on the computation times that each candidate comes up with in the test paths, and repeat for a couple of months/years. Voila, tuned A*!
To be fair, the tuning isn't actually perfect until you've applied the numerical optimization testing against all possible paths, but Google, Mapquest and the others aren't in the business of pleasing mathematicians, they want a pragmatic workable method that doesn't take until the heat death of the universe to find a good enough route between Heathrow airport and Southend. And to that end, some or all of the heuristics I suggested may be too costly in processor time for too little benefit. I don't know.
And then I imagine you could layer levels of meta above that. For example, every year near where I live there is a huge rock festival. I suspect that during this time, mapping services get many thousands of requests for "how do I get from the nearest train station to the festival ground?". If there is software in place that can notice this trend in people looking for the same path, the solution can be cached for a while to take pressure off the routing backend.
And once you have all that, you remember that A* is terribly memory inefficient on large routes, and reimplement a memory bounded variant, which likely throws your heuristic finetuning out the window. Ain't maps fun?
Oh, and if I said stupid things and you know better, please do correct me, of course. It's now half past one in the morning, and I ranted on for somewhat longer than I'd planned to, and seem to have ended up sounding more authoritative than I actually am.
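For anyone curious what the A* loop described above actually looks like, here is a minimal sketch in Python on a made-up toy grid (the grid, the unit edge costs and the Manhattan-distance heuristic are all invented for illustration, and are nothing like a real road network):

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A*: always expand the node with the lowest (cost so far + heuristic)."""
    frontier = [(heuristic(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best.get(nxt, float('inf')):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None, float('inf')

# Toy example: a 5x5 grid with a wall in column x == 2, open only at y == 4.
wall = {(2, 0), (2, 1), (2, 2), (2, 3)}

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5 and (x + dx, y + dy) not in wall]

# Unit step cost; Manhattan distance to the goal is the admissible heuristic.
path, length = a_star((0, 0), (4, 0), neighbors, lambda a, b: 1,
                      lambda p: abs(p[0] - 4) + abs(p[1] - 0))
print(length)
```

With an admissible heuristic like this one, the first time the goal comes off the frontier the path found is optimal, which is the underestimation point discussed further down the thread.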
Re: The Haskell answer
Graham's Number wrote:but I have no idea what you are on about here. I can see why those are solutions, but I have no idea how you deduced that they were the only two using that.
What chessguy posted was actually a Haskell program. The first two lines from "let g = ..." until "...<0.001]" are the code (the "Prelude>" bit is just the Haskell interpreter prompt).
The next line (containing the two solutions) is the answer that Haskell spit back out at him.
Basically, his program is asking haskell to "give me all solutions (a,b,c,d,e,f) where each of a, b, c, d, e and f are pulled from the set g = (0 .. 10) = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10), such that the absolute value of a*2.15 + b*2.75 + ... + f*5.8 - 15.05 is within some tolerance (specifically 0.001) of zero." That is, a is the number of mixed fruits and a*2.15 is the total cost thereof; b is the number of fries and b*2.75 the total cost thereof, and so on. After summing up all the costs of the items purchased and subtracting the original amount (15.05), an answer within a 10th of a penny of 0 (to account for rounding issues) implies a valid solution and Haskell outputs the corresponding tuple (actually, list, but in this case the difference is unimportant).
It's basically a bruteforce solver, and since Haskell only returned two answers, chessguy concluded that they were the only valid ones.
I'd like to add to what Danderson said with my personal experience that Google does not provide optimal results. Now that they let you drag the line around to define sorts of "midpoints" that you'd like to traverse, you can see that the offered path is often suboptimal.
Go ahead and try it: Choose your home and, as a destination, some place which you frequently go to. Google will pick out a path that heavily emphasizes main roads and directional simplicity; but you have probably found a shorter/faster way to get from A to B.
I think it might be an interesting application of a successiveapproximations algorithm, though Google obviously doesn't use one.
 SecondTalon
 SexyTalon
 Posts: 26531
 Joined: Sat May 05, 2007 2:10 pm UTC
 Location: Louisville, Kentucky, USA, Mars. HA!
 Contact:
just want to ask but did any of the non mathematician/programmers etc try and work it out with pen/paper?
i did it by:
15.05 - 3.55 = 11.50 (3.55 being the biggest non-00 order, thus the first to try)
11.50 - 5.80 = 5.70 (5.80 being the biggest 00 order, and so the next to try)
5.70 - 3.55 = 2.15 (3.55 being the next available number)
didn't even use a calculator or anything!
preemptive edit: actually looking at it, it seems very similar to that 'different sizes boxes into uniform lorries' algorithm my maths-loving friend showed me.
I liked it when googlemaps would suggest that you swim 1,300 some miles across the atlantic ocean to reach europe. It doesn't do that anymore....
If you are visually impaired or cannot otherwise read this confirmation code, then you probably can't see this message either so it's questionable why we put it here.
Someone already pointed this out: the problem in the strip is SUBSET-SUM.
Remboooo wrote:Well, essentially those two problems are the same, only with the coins you know for sure for any total amount, there's a combination of coins that will add up to that amount. With a knapsack you just try to approximate it as closely as possible, and if you're lucky you'll fill it completely, which happens to be the case in this comic.
ptveite wrote:I agree, it's like the antimatter/matter of the restaurant world. You bring healthy food and mozzarella sticks onto the same table and they annihilate each other.
No dude, the fruit can cover for the mozz stix, they go down to the stomach together and the canteloupe will be all "it's cool, he's with me"
Mitch Hedberg, may he rest in peace.
danderson wrote: ... a long spiel ...
That was really helpful and in depth. no one's complained yet. I know in my brains when i need to get to someplace i've NEVER been that is in an URBAN area i will use my delorme mapping software (it heavily weights all heavy traffic areas and will generally get you there faster)
but when i am going to the boonies or similar place i will generally trust google because delorme weights interstates really high, and highways lower. Driving on a winding 2 lane POS highway at 2 AM is a nightmare. Google doesn't make me do that
Last edited by genewitch on Tue Jul 10, 2007 5:31 am UTC, edited 1 time in total.

 Posts: 4
 Joined: Fri Jun 29, 2007 9:33 pm UTC
 Location: Houma, LA
 Contact:
Tempoz wrote:I liked it when googlemaps would suggest that you swim 1,300 some miles across the atlantic ocean to reach europe. It doesn't do that anymore....
Yea, when Google says, "[w]e could not calculate driving directions between New York City, New York and Paris, France," it really isn't much fun at all.
I miss swimming for 1,000+ miles.
rockmovement1 wrote:Tempoz wrote:I liked it when googlemaps would suggest that you swim 1,300 some miles across the atlantic ocean to reach europe. It doesn't do that anymore....
Yea, when Google says, "[w]e could not calculate driving directions between New York City, New York and Paris, France," it really isn't much fun at all.
I miss swimming for 1,000+ miles.
i think it was an easter egg around april 1st of this year. did anyone catch the RFC this year? i should go look it up.

 Posts: 4
 Joined: Fri Nov 10, 2006 9:28 pm UTC
wocket wrote:mwn3d wrote:make sure you dont forget the barbecue sandwich as an option....$6.55.
It's not actually an option, since it is not under the "Appetizers" heading, but rather the "Sandwiches" heading.
An appetizer is simply a meal that you eat before a meal, but in words that make more sense, ya know? If you're like me, a nice steak or two would make for a pretty good appetizer... *drools*
I also really don't see why you can't have both mixed fruit and mozzarella sticks... They're BOTH delicious, and that's exactly how I remember buying food back in my day.
danderson wrote:The critical part of A* is computing the cost, which is usually done by summing the cost of each edge that forms the path. The cost of each edge is a heuristic function, and is the hard part to get right. If the heuristic is too soft, the algorithm will tend to "wander" a lot instead of homing in on the right path quickly and with little branching/backtracking. If the heuristic is terminally bad (cost of 0 for all edges), A* devolves into its worst case performance, equivalent to Dijkstra's algorithm.
Actually, now that I've had a little sleep, this is incorrect, or at least inaccurate. There are 2 interesting bands for the heuristic, and 2 critical points of interest. The heuristic can either underestimate or overestimate the cost of a given path, compared to some "ideal" cost. The worst case of underestimation is having a constant cost of zero all the time, in which case A* is equivalent to Dijkstra's algorithm. Then, as the heuristic approaches the ideal point, the search will get faster and faster, and is proven to find a shortest path through the graph, not just a "good enough" solution.
However, if the heuristic overestimates the cost of paths or does not assign the correct relative costs to paths, A* is no longer guaranteed to find a shortest path, as it may find that a suboptimal route is cheaper than the optimal route.
So, in regular classroom implementations, you're told that you should always make your heuristic underestimate, or suffer the consequences. The problem with complex heuristics that take into account more than just distance is that it isn't that easy to find the sweet spot, so you're very likely to overshoot the perfect values, maybe even intentionally, to force the use of a path that appears-suboptimal-but-you-know-better (say, routing around a traffic jam).
That would be why mapping services don't always return the path that your brains and experience have discovered to be the shortest. Their heuristics, while very finely tuned, can't match your brain's navigation computer and local experience.
 UmbrageOfSnow
 Not Fully Human
 Posts: 354
 Joined: Sun Apr 22, 2007 7:06 am UTC
 Location: Insomnia Island
 Contact:
Re: The Haskell answer
Graham's Number wrote:
I'm really sorry about this, but I have no idea what you are on about here. I can see why those are solutions, but I have no idea how you deduced that they were the only two using that. Could someone please explain this (bearing in mind I don't even have a GCSE in Maths yet, although I am determined to learn more than the dratted course has in it)? Is this the sort of thing Mathematica is good for?
Yes, this is exactly what Mathematica is good for. They weren't shown to be the only solutions through a mathematical proof, but by trying every possibility. "If brute force isn't working, you aren't using enough."
I could have done it much faster if I'd had Mathematica or MATLAB on my home computer at the time, but I didn't; they are good for this kind of thing. Instead I did it in C++. Since everyone else is posting their code, here is mine (ugly):
Code: Select all
#include<iostream>
#include<cmath>
#include<cstdlib>
#include<ctime>
using namespace std;

int const bound = 10;
double const PRECISION = 0.01;

int main() {
    clock_t start, stop;
    start = clock();
    double a[6];
    a[0] = 2.15;
    a[1] = 2.75;
    a[2] = 3.35;
    a[3] = 3.55;
    a[4] = 4.2;
    a[5] = 5.8;
    for(int i0=0;i0<bound;i0++)
    for(int i1=0;i1<bound;i1++)
    for(int i2=0;i2<bound;i2++)
    for(int i3=0;i3<bound;i3++)
    for(int i4=0;i4<bound;i4++)
    for(int i5=0;i5<bound;i5++) {
        if(abs((i0*a[0]+i1*a[1]+i2*a[2]+i3*a[3]+i4*a[4]+i5*a[5]) - 15.05) <= PRECISION) {
            cout << i0 << " " << i1 << " " << i2 << " " << i3 << " " << i4 << " " << i5 << " ";
            cout << i0*a[0]+i1*a[1]+i2*a[2]+i3*a[3]+i4*a[4]+i5*a[5] << endl;
        }
    }
    stop = clock();
    cout << "Time: " << ((double)(stop - start))/((double)CLOCKS_PER_SEC) << " seconds" << endl;
    return EXIT_SUCCESS;
}
yellie wrote:Confession: I just had to look up the word ubiquitous because I kept seeing it all over the place and had no idea what it meant.
Actually, the O(nc) solution is not a polynomial time solution to an NP-complete problem. It's quite clearly an exponential time solution in the size of the input, because c is represented in decimal notation: if c were represented in unary, this wouldn't be an issue, but the problem would no longer be NP-complete.
Also, multiplying prices by twenty is clearly much more efficient in terms of time and space than multiplying by one hundred. In fact, I almost ran the Knapsack algorithm by hand (clearly faster than coding it up) before I realized there was a much better programming paradigm for this problem: I'm talking about IRP. Several minutes after reading the comic, I had a complete list of solutions to the problem.
The joke here is that he doesn't realize that the waiter has already precalculated the solutions to the Traveling Waiter Problem for all possible combinations of tables in the restaurant. That's why the girlfriend is holding her head in shame.
 This Message Sponsored By The Society For Pedantic Correctness 

 Posts: 3
 Joined: Wed Oct 11, 2006 12:58 pm UTC
notzeb wrote:Actually, the O(nc) solution is not a polynomial time solution to an NP-complete problem.
I disagree, on technical grounds (the best kind of grounds). It's clearly a polynomial-time algorithm (in n and c). And it solves an NP-complete problem. My maths jokes are too obscure for xkcd forums? Wow.
notzeb wrote:Also, multiplying prices by twenty is clearly much more efficient in terms of time and space than multiplying by one hundred.
Not for my algorithm. My first implementation did in fact have a gcd step, but I realised that it actually makes no difference to the performance. Had I used bottom-up DP instead of memoization, you'd be right.
notzeb wrote: This Message Sponsored By The Society For Pedantic Correctness 
This one too ;)
 necroforest
 Posts: 195
 Joined: Tue Apr 10, 2007 3:46 pm UTC
 Contact:
Yakk wrote:There is an implicit decision problem: is there a set of dishes that satisfies the criteria?
The actual order is the hint that allows you to verify the problem in P time.
But they're asking for the optimization/function problem: bring me the order that satisfies the criteria, not "does the order exist." For any optimization/function problem, there's always a corresponding decision problem, so the statement "there is an implicit decision problem" is a tautology. Unless I misunderstood what you meant.
 Sprocket
 Seymour
 Posts: 5951
 Joined: Mon Mar 26, 2007 6:04 pm UTC
 Location: impaled on Beck's boney hips.
 Contact:
ishikiri wrote:just want to ask but did any of the non mathematician/programmers etc try and work it out with pen/paper?
i did it by:
15.05 - 3.55 = 11.50 (3.55 being the biggest non-00 order, thus the first to try)
11.50 - 5.80 = 5.70 (5.80 being the biggest 00 order, and so the next to try)
5.70 - 3.55 = 2.15 (3.55 being the next available number)
didn't even use a calculator or anything!
preemptive edit: actually looking at it, it seems very similar to that 'different sizes boxes into uniform lorries' algorithm my maths-loving friend showed me.
Oh I tried it. For about ten minutes.
General solution
I'm guessing the general solution would be to a problem like the following:
given a set of n items with associated prices {p1, p2, ..., pn} and a target price t, find all positive integer solutions for coefficients {a1, a2, ..., an} such that (a1 * p1 + a2 * p2 + ... + an * pn = t).
I suppose you could generalise further (including negative numbers, rational numbers, etc.), but my gut feeling is that those generalisations would result in more trivial solutions.
Edit: whoops, forgot about the 50% tip. Would that be included only if the general solution were correct?
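A hypothetical direct solver for that formulation, in modern Python (the function name and structure are made up; prices must be integers, e.g. cents, so the equality test is exact):

```python
def exact_orders(prices, total):
    """All non-negative integer coefficient tuples (a1, ..., an)
    with a1*p1 + ... + an*pn == total."""
    if not prices:
        return [()] if total == 0 else []
    first, rest = prices[0], prices[1:]
    # Try every feasible count k of the first item, then recurse on the rest.
    return [(k,) + tail
            for k in range(total // first + 1)
            for tail in exact_orders(rest, total - k * first)]

# The comic's menu, in cents:
print(exact_orders([215, 275, 335, 355, 420, 580], 1505))
```

For the comic's menu this enumerates the same two orders found elsewhere in the thread; the forgotten 50% tip would just change the target total.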