Comparing the quarter-turn metric and the "standard" metric is very interesting.
I'm fascinated that for both, the number of positions at each distance starts to decrease again beyond a certain point (which is much more pronounced in the quarter-turn table). Intuitively, it isn't obvious to me why this would be so, but it's nerd-sniped me into thinking about how combinations work in general and the geometry of the Rubik's Cube in particular.
Here's an analogy: take a point on the sphere and consider the set of points on the sphere that are distance d from that point. The size of that set increases as d goes from 0 to 1/4 * circumference, then decreases as d goes to 1/2 * circumference, and you end up with just the antipodal point.
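A quick numerical version of that analogy (my own sketch, in Python): on a unit sphere, the points at geodesic distance d from a fixed point form a circle of circumference 2π·sin(d), which grows until d is a quarter of the great-circle circumference and then shrinks to zero at the antipode.

```python
import math

# On a unit sphere, the points at geodesic distance d from a fixed point
# form a circle of circumference 2*pi*sin(d). The great-circle
# circumference is 2*pi, so d runs from 0 to pi (half the circumference).
def circle_size(d):
    return 2 * math.pi * math.sin(d)

for d in [0, math.pi / 4, math.pi / 2, 3 * math.pi / 4, math.pi]:
    print(f"d = {d:.2f} -> set size {circle_size(d):.3f}")
# Grows up to d = pi/2 (a quarter of the circumference), then shrinks
# back to 0 at the antipodal point.
```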
There's probably some general argument that you'll always see this with any sufficiently nice compact metric space.
It also works for the Hamming distance of bit strings. For an n-bit string there are C(n,k) ways of achieving a Hamming distance of k: for example, there is only 1 string at distance 0 (the original string) and 1 string at distance n (the complement of the original string), and the counts in between first rise and then fall, following the rows of Pascal's triangle.
This could also be viewed as, for example, the distance along edges from one corner of an n-dimensional hypercube to the other corners. There is 1 corner where you don't move at all, 1 corner where you move in every dimension, and the largest number of corners is reached when you move in half of the dimensions.
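As a sanity check (my own sketch, not from the parent), brute-forcing the corners of a 6-dimensional hypercube reproduces the binomial counts:

```python
from itertools import product
from math import comb

n = 6
counts = [0] * (n + 1)
for corner in product([0, 1], repeat=n):
    # Hamming distance from the all-zeros corner = number of 1 coordinates
    counts[sum(corner)] += 1

print(counts)  # [1, 6, 15, 20, 15, 6, 1] -- the row C(6, k) of Pascal's triangle
assert counts == [comb(n, k) for k in range(n + 1)]
```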
I agree with your intuition that this phenomenon will apply to a whole lot of contexts and situations.
Feels to me like this is required of any 'lattice'
(partially ordered set with a unique supremum and infimum for every subset).
Notably, any 'cross-section' of a lattice (a set A such that if x is in A and x > y, then y is in A) has a 'surface' (the set of all points x in A such that for all y in A we have not y > x). The surface needs to start small (1 element) at the cross-section containing only the 'infimum' of the entire lattice.
Similarly, the surface needs to end small (1 element again) at the cross-section that is the entire lattice.
In between, it will generally be larger. I think there might be some 'convexity' results that can be proven about the surfaces of increasing sequences of cross-sections, given specific kinds of lattices.
It feels to me like a cartesian lattice would always have this convexity property. (By convexity I mean that the surface starts off increasing, might be constant for a while, and then decreases.)
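To make that guess concrete, here's a small check (my own sketch): take a cartesian lattice to be a product of chains {0,...,m}, ordered coordinatewise. For the cross-section {p : sum(p) <= r}, the surface is exactly the set of points with coordinate sum r (for r below the top), so counting points of each rank shows how the surface sizes evolve:

```python
from itertools import product

# Cartesian lattice: a product of chains {0,...,m_i}, ordered
# coordinatewise. The surface of the downset {p : sum(p) <= r} is the
# set of points with coordinate sum exactly r, so the rank counts below
# are the surface sizes of an increasing sequence of cross-sections.
dims = (3, 3, 3)  # a 4 x 4 x 4 grid of lattice points
ranks = [0] * (sum(dims) + 1)
for p in product(*(range(m + 1) for m in dims)):
    ranks[sum(p)] += 1

print(ranks)  # [1, 3, 6, 10, 12, 12, 10, 6, 3, 1]
# Increases, stays flat in the middle, then decreases -- the
# conjectured 'convexity' (unimodality) for this lattice.
```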
Cool subject, makes me wish I was doing a PhD in mathematics.
For reference, what you call a 'cross section' is sometimes called an ideal or a downset. I can't quite make out your definition of 'surface', but, if it is meant to read "the set of all points x in A such that for all y in A we have not y > x", then it is the set of maximal elements of A. I'm not an expert on posets, but, if I've got the terminology right, your notion of convexity sounds like you want to deal with a ranked poset with unimodal h-vector.
If we take the definition to refer to points that are exactly distance d, rather than points that are within distance d, I think the original formulation is correct.
> Intuitively, it isn't obvious to me why this would be so, but it's nerd-sniped me into thinking about how combinations work in general and the geometry of the Rubik's Cube in particular.
My guess would be that this can be hand-waved by looking at it as a sort of discrete space. A solved cube is a single point in that space (let's assume we already normalize for orientation etc.). The moves we can make on a cube connect each point in the space to a set of adjacent points, and the length of the shortest path from A to B is the space's metric. We know that the maximum distance from the origin/solution point is 20. Call the set of points at a given distance n S_n (like "stratum").

To explain the distribution we only need to look at paths that go from S_n to S_n+1 or to S_n-1, because "lateral moves" are never relevant to shortest/optimal paths. For small n the number of points grows: each stratum we move away from the solution gives us more possible moves that go even further away. For S_20 there are no moves that go further away, for S_19 each position has at most one move that goes further away, and so on. Essentially, I think the cap on the path length at the edge reduces the number of positions you can reach in those strata. Close to the center, a random move tends to bring you farther away, so there have to be more points farther away; close to the edge, a random move either doesn't move you at all (stratum-wise) or brings you closer to the center, so there have to be fewer points as you approach the edge.
The really interesting thing however is how these two balance each other to place most points in S_17 and S_18.
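The stratum idea is easy to test on a toy puzzle (my own sketch; permutations of 4 items with adjacent swaps stand in for the cube's move graph, since the cube's ~4.3x10^19 states are too many to enumerate here). A breadth-first search from the "solved" state gives the size of each S_n:

```python
from collections import deque

# Toy stand-in for the cube's move graph: states are permutations of
# 4 items, and a move swaps two adjacent items. BFS from the solved
# state assigns each state its distance, i.e. its stratum S_n.
start = (0, 1, 2, 3)

def moves(p):
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        yield tuple(q)

dist = {start: 0}
queue = deque([start])
while queue:
    p = queue.popleft()
    for q in moves(p):
        if q not in dist:
            dist[q] = dist[p] + 1
            queue.append(q)

strata = [0] * (max(dist.values()) + 1)
for d in dist.values():
    strata[d] += 1

print(strata)  # [1, 3, 5, 6, 5, 3, 1] -- grows, peaks, then shrinks back to 1
```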
Wouldn't a solved cube actually be several possible points in that space? My intuition suggests that the center squares on each side could be rotated somehow, but I haven't learned the skill of solving Rubik's Cubes, so I'm not sure if that's true.
In the usual setting, the various orientations of the centers are considered irrelevant because you can't tell them apart. So those points in the configuration space are considered the same point.
There is such a thing as a cube with marked centers, sometimes called a supercube, where the difference does matter. These are harder to solve, and I believe God's number is still unknown in that setting.
> I'm fascinated that for both, the number of positions at each distance starts to decrease again beyond a certain point (which is much more pronounced in the quarter-turn table). Intuitively, it isn't obvious to me why this would be so, but it's nerd-sniped me into thinking about how combinations work in general and the geometry of the Rubik's Cube in particular.
Off to Wikipedia!