If shipment cost is reasonably low, just deliver predetermined items (e.g. those recommended by Amazon) to customers, have them pay for what they consumed and return the rest.
Or, if storage cost at the customer's site is low and the items do not perish, leave the items they didn't consume there. Later they may find them useful and pay for them. This is actually the model employed by some medical drug resellers in Japan.
He took 5 times the number of steps God would. So I propose a new metric, the god index: the ratio of the number of steps God would take to the number of steps he actually takes.
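A tiny sketch of that proposed metric (the function name and scale are my own invention):

```python
def god_index(steps_taken: int, gods_steps: int) -> float:
    """Hypothetical 'god index': God's step count divided by yours.

    An index of 1.0 means you matched the optimal solution; taking
    5x as many steps as God would gives an index of 0.2.
    """
    return gods_steps / steps_taken

print(god_index(100, 20))  # 5x God's steps -> 0.2
```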
Many speedcubers hit a plateau after a number of years and get bored, true, but it's a deep subject. Every solve is different and there is endless opportunity for optimization and improvisation. And the physical dexterity aspect just adds to the fun. It would get boring only if you never improved or learned anything new. Definitely more satisfying than Sudoku or Solitaire.
If you find Sudoku boring you just need to add more constraints. For example, try solving one by considering some ordering on the entries (say, Left-to-Right then Top-to-Bottom) and only allow yourself to fill in entries in that order... then time yourself!
Sudoku is very boring. If I wanted to spend my time solving small instances of NP-complete problems I might as well go out and solve TSPs over and over again.
It's the perfect thing to do while you're watching videos or listening to podcasts.
I can't really watch anything or listen to a podcast while I'm coding. But there is still stuff I really want to watch: This Week in Startups, some Mixergy shows, etc.
But only sitting there watching videos can be very boring, so it's good to have something to do with your hands.
Also, there are so many formulas, techniques, etc. You gradually gain a greater understanding of how the cube works.
I recommend it. Great hobby. But don't be one of those losers that tries to impress a whole party by solving one, it's really lame to cubists.
A good friend of mine hangs out with the 7x7 cube champion of the United States. He plays with various kinds of Rubik's cubes, and from what I gather cubing can also get rather addictive.
So, they've brute-forced the problem and they now know that there are no positions that need more than 20 moves, but that does not get you closer to developing an algorithm that gives you those 20 moves (edit: or however many that configuration takes).
I think you're confused -- they went the other way around, using the algorithm to find the optimal number.
Search techniques that can quickly solve any cube with a small (near-optimal) number of moves have been known for a while, mainly due to the work of Kociemba: http://www.jaapsch.net/puzzles/compcube.htm#kocal The techniques used are standard AI tree/graph search algorithms with lots of Rubik's cube-specific optimizations.
A few years ago, these methods became good enough to solve almost any cube quickly within 20 moves (which was conjectured to be God's number). So the algorithm as well as a fast implementation already existed. Here's Kociemba's page http://kociemba.org/cube.htm and here's an iPhone app with a neat twist: you can photograph your physical cube to solve it http://www.wired.com/epicenter/2009/01/iphone-app-solv/
What these guys did was to make further optimizations and run it on a cluster to search through all possible position sets. As they say, they can solve about 4000 positions/s (in 20 moves or less) on a single machine.
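For intuition (this is not Kociemba's actual two-phase method): plain BFS over the cube's ~4.3x10^19 states is hopeless on memory, so cube searches typically use iterative deepening, which stores only the current path. A generic depth-capped sketch:

```python
def search_within(start, is_solved, moves, budget=20):
    """Iterative-deepening DFS: finds a shortest solution of length
    <= budget while storing only the current path (O(depth) memory)."""
    def dfs(state, limit, path):
        if is_solved(state):
            return path
        if limit == 0:
            return None
        for name, step in moves.items():
            found = dfs(step(state), limit - 1, path + [name])
            if found is not None:
                return found
        return None

    for limit in range(budget + 1):  # deepen: 0, 1, ..., budget moves
        result = dfs(start, limit, [])
        if result is not None:
            return result
    return None  # nothing within the move budget
```

Real solvers prune with pattern-database heuristics (IDA*) and, in Kociemba's case, split the search into two phases; this only shows the depth-capped skeleton.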
Your original question was whether they have an algorithm to solve in <= 20 moves or not. I answered that.
Now you changed your question to whether they can solve each position optimally or not. Fine.
Why don't you re-read the article? They state that they can solve random positions optimally at a rate of 0.36/sec, in the same table where they say 3,900/sec for solving in 20 moves or less.
I clarified my question because I think that it wasn't clearly stated, and I marked my edit to make sure that it was clear that I did so (unlike others here).
As for the re-reading: they don't solve the positions optimally, they brute-force them, so as far as I can see they're not in possession of a god algorithm. They're using a highly optimized search strategy, and that's a different beast.
A 'god' algorithm would take the faces as input and produce the minimal sequence of moves without a search component. Each configuration would be processed to give you the next move without evaluation of 'wrong' moves or back-tracking.
That would be the 'god' algorithm. Anything else is (highly) optimized search.
They seem to have two different algorithms. One, given a position, gives you the optimal solution, the fewest moves to solve that position. This is the God Algorithm. And a different, much faster, algorithm that, given a position, finds a solution with 20 moves or less - let's call it a Mortal Algorithm. (It does not always give you the optimal solution with the fewest moves, just one with 20 moves or less.) My understanding is that they have just run the Mortal Algorithm against every position to prove that it can solve every position in 20 moves or less. Their God Algorithm is much too slow to run against every position.
According to Wikipedia a God Algorithm only has to be 'practical': find the optimal solution to a position with a sensible amount of processing. So it can use search and back-tracking. We could call your algorithm that works without search and back-tracking a God God Algorithm, because it would be an optimal sequence of processor moves that finds the optimal sequence of cube moves. A God God Algorithm would be awesome, but a much more difficult thing to create. I imagine it would run orders of magnitude faster than their God Algorithm.
(PS. I don't think we can entirely rule out the possibility that the God God Algorithm might contain some small amount of search or back-tracking.)
Ok, that sounds like a sensible summary of the positions.
If I understand you correctly a god algorithm is allowed to produce the required result using whatever strategy, including partial brute forcing/searching, backtracking in order to arrive at its solution as long as it does so in a reasonable time, so is not 'brute force' per se.
The reason that didn't sit right with me (I accept your Wikipedia quote as to what constitutes a god algorithm) is that for me a 'god' algorithm would imply there is no better one, since with omniscience you don't need to make any false moves, and I could not imagine a true god algorithm being allowed to make false moves.
So you're looking for an algorithm that produces the optimal result heuristically, without making any false moves, instead of using the convenient lookup table that God has available to him? Sounds very NP to me.
They explicitly state that they did not generate the "optimal number of moves". You say: "They state that they can solve random positions optimally."
Those two statements contradict each other. However: AFAIK the existing algorithm, which they used, has not been proven to solve the problem optimally. It has now been shown that it can solve any position in at most 20 moves, but there's still the chance it solves a position in 20 moves that could optimally be solved in 19. Because the upper bound of 20 moves coincides with the proven lower bound of 20 moves, we know God's number. That does not mean that this must be God's algorithm.
"can" is conditional, so the two statements do not contradict. They can solve random positions optimally at 0.36 positions per second. But their goal was to lower the upper bound on God's number to 20, not to find all the optimal solutions. So what they did do, for all starting positions, was find good solutions, which they can solve at a rate of 3,900 positions per second.
True, they indeed do not contradict each other. I hadn't yet heard that there actually was an algorithm that could find optimal solutions, but I admit it's been a long time since I was interested in the issue.
Because of the time difference between generating a solution and generating the optimal solution, I'm guessing the algorithm brute-forces the optimal solutions, which explains part of the confusion: I was thinking of an algorithm that generates the optimal solution directly.
I have the feeling you're not a programmer. An algorithm that generates the optimal solution for a given configuration is trivial. Do a breadth-first search of the results of each move until you get to a solved cube. Ta da, you're done!
Now, if you want to do it fast, that's a different story.
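The trivial version, sketched on a made-up 6-sticker puzzle with two invented moves (not a real cube model): breadth-first search from the scrambled state is optimal by construction, it just doesn't scale.

```python
from collections import deque

# Toy model: a state is a tuple of 6 "stickers"; each move is a
# permutation (these two moves are invented for illustration).
MOVES = {
    "A": (1, 2, 0, 3, 4, 5),   # 3-cycle of the first three stickers
    "B": (0, 1, 2, 4, 5, 3),   # 3-cycle of the last three stickers
}

SOLVED = (0, 1, 2, 3, 4, 5)

def apply_move(state, perm):
    return tuple(state[i] for i in perm)

def optimal_solution(start):
    """BFS from the scrambled state: the first solution found is a
    shortest one, so it is optimal by construction -- but the
    frontier grows exponentially with depth."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == SOLVED:
            return path
        for name, perm in MOVES.items():
            nxt = apply_move(state, perm)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # scramble not reachable from the solved state

print(optimal_solution(apply_move(SOLVED, MOVES["A"])))  # ['A', 'A']
```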
An algorithm that could find the optimal solution in practice. For instance, in 0.36s, as in the article. A theoretical algorithm that finds the optimal solution is trivial; you don't need to be a programmer to come up with that one.
As far as I can tell, this is simply a (or more appropriately, The) re-discovery of the upper bound on Kociemba's algorithm. The authors apparently did not search for optimal solutions (which is not strictly necessary anyway, since we have a lower bound), which you can assume is equivalent to stopping Kociemba's algorithm when a solution of fewer than 21 steps is found, for all intents and purposes.
True. I was referring to the cumulative effort since Reid proved that the upper bound of Kociemba's algorithm is 29 moves. I apologize for any misunderstandings, but I cannot edit that post now.
I'll make an educated guess... the advancement is knowing that if you build a machine that can brute-force 6 sides x 2 turn directions, you can stop searching once a 20th move is not enough to solve the cube, and in the worst case you will check (6 x 2)^20 move sequences in total. I think that counts as test cases :P
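Taking the commenter's branching factor at face value (real cube move counts differ), the worst case works out to roughly 10^21 sequences:

```python
branching = 6 * 2   # 6 faces x 2 turn directions, as stated above
depth = 20          # God's number

worst_case = branching ** depth
print(f"{worst_case:.3e} move sequences")  # on the order of 10^21
```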
With some handwaving it could be about $82.5K (computation spread over 1 year) or $125K (computation done in 1 month).
total cost was 35 CPU years (Intel Nehalem, 4-core, 2.8GHz)
1 EC2 compute unit is about 1 GHz Xeon
let's assume 1 Google unit = 12 EC2 units (3x4 cores)
High-CPU Extra Large instance = 20 EC2 units
35*12 = 420 EC2 unit years = 5040 EC2 unit months
= 21 HiCPU XL instances for 1 year
= 252 HiCPU XL instances for 1 month
Putting this into AWS calculator [1] yields:
$82,491.36 (21 HiCPU XL reserved instances for 1 year)
$125,435.52 (252 HiCPU XL on-demand instances for 1 month)
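The instance counts above can be reproduced with a few lines (the dollar figures come from the AWS calculator, so they aren't recomputed here; the 12-units-per-machine figure is the assumption stated above):

```python
cpu_years = 35            # reported: 35 CPU years of computation
units_per_machine = 12    # assumed: 1 Google machine ~ 12 EC2 compute units
units_per_instance = 20   # High-CPU Extra Large = 20 EC2 compute units

unit_years = cpu_years * units_per_machine    # 420 EC2 unit years
unit_months = unit_years * 12                 # 5040 EC2 unit months

print(unit_years // units_per_instance)    # 21 instances running for 1 year
print(unit_months // units_per_instance)   # 252 instances running for 1 month
```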
Edit: It could actually be a bit more; Nehalems seem to be faster than my first estimate.
Plugging in EC2 computing unit comparison from Cluster Compute instance (which has 2x quadcore Nehalem), 1 Google unit can be 16.75 EC2 units.
This gives:
$114K for 1 year of 29 reserved HiCPU XL instances
$166K for 1 year of 18 reserved cluster compute instances
$173K for 1 month of 348 on-demand HiCPU XL instances
$246K for 1 month of 210 on-demand cluster compute instances
-----
Edit2: Dedicated server hosting would be much cheaper. For example, from Hetzner [2] it would cost just about 26K EUR (= $34K) to rent 35 dedicated quadcore Nehalem servers for 1 year.
I'd agree with you had they used a weasel-word like "virtually", but "essentially" seems appropriate in this case:
adverb: used to emphasize the basic, fundamental, or intrinsic nature of a person, thing, or situation
If you didn't actually iterate through every position - but rather a subset that provided equivalent coverage by eliminating symmetries - then you have, in essence, solved every position.
This is cool and all, but couldn't that computing power have gone to greater use? There is a ton of medical research going on that could have been helped by Google's donation.
Hmm... I'm sure someone with more experience will chime in, but I recall reading in a Scientific American years ago that removing a particular piece from the cube then reinserting it wrong could yield a non-solvable configuration.
So I think your statement is only true if you define 'possible' and 'reachable' as synonymous, making it tautological. There are still 'cubes' that look plausible, with the right number of faces of each color and valid corner and edge pieces, that are still unsolvable.