Land is still the primary source of economic value. And I don't think land was getting fairly allocated between all the peasants during the late 1300s.
It wasn't, but you had the opposite of the effect we have now. Right now the top 1% capture a lot of the additional wealth being created, while the bottom 99% stay mostly the same or improve more slowly. So yes, that means we are headed toward a more unequal world, even though the situation improves for everyone.
Now turn that around: kill off a lot of people from the bottom 90% while the top 10% survive at higher rates. Suddenly the world looks way more fair, even though the people at the bottom still have nothing.
It has to join the result of each previous join to the next one, and with a lot of joins this can get out of hand.
We have a lot of joins building our final fct_orders model from intermediate tables, and it looks like this:
from foo
left join bar on bar.common_id = foo.common_id
left join baz on baz.common_id = foo.common_id
left join qux on qux.common_id = foo.common_id
left join waldo on waldo.common_id = foo.common_id
So waldo joins to the result of the qux join, which joins to the result of the baz join, and so on... I call it a "staircase join", as that's what it looks like in the Snowflake profiler.
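A minimal, self-contained sketch of the shape above, using Python's sqlite3 with toy tables (the table and column names just mirror the snippet; the data is made up). The point is that each left join consumes the result of the previous one, so the planner evaluates join(join(join(join(foo, bar), baz), qux), waldo):

```python
import sqlite3

# Toy tables named after the snippet above, all keyed on common_id.
conn = sqlite3.connect(":memory:")
for t in ("foo", "bar", "baz", "qux", "waldo"):
    conn.execute(f"create table {t} (common_id int, {t}_val text)")
    conn.executemany(f"insert into {t} values (?, ?)",
                     [(1, f"{t}-1"), (2, f"{t}-2")])

# The staircase: each left join is applied to the accumulated result
# of all the joins before it, one step per table.
rows = conn.execute("""
    select foo.common_id, bar.bar_val, waldo.waldo_val
    from foo
    left join bar   on bar.common_id   = foo.common_id
    left join baz   on baz.common_id   = foo.common_id
    left join qux   on qux.common_id   = foo.common_id
    left join waldo on waldo.common_id = foo.common_id
    order by foo.common_id
""").fetchall()
print(rows)  # [(1, 'bar-1', 'waldo-1'), (2, 'bar-2', 'waldo-2')]
```

With two rows this is harmless; the profiler staircase only starts to hurt as row counts and the number of steps grow.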
Well, part of the benefit is rapid development; it's mind-boggling how quickly someone can stand up a dbt project and begin to iterate on transforms. Using Python/SQL/JSON (at small/medium scales) keeps the data stack consistent and lowers the barrier to entry. No reason to prematurely optimize when your bottleneck is the modeling and not the actual data volume.
dbt and ELT in general are such a game-changer for allowing rapid iteration on business logic by data analysts; the feedback loop feels much more like "normal" software engineering compared to legacy systems.
Still not sure whether this is serious or not, but it's not really infrastructure as SQL; it's infrastructure as database records, which is stateful and defeats the point.
Ahh, gotcha. I appreciate the response, as I wasn't aware of that notation, and even then I can't think of a time I've used a cross join. Not sure which syntax I'd use personally.
They're good for getting rates on small datasets. Think (select grouper, count(1) from data group by grouper) cross join (select count(1) from data). I think I've mostly used them in interviews, tbh.
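The pattern sketched above can be run end to end; here's a small demo with Python's sqlite3, using a hypothetical table "data" with a "grouper" column. Each group's count is cross-joined against the single-row grand total to get a rate:

```python
import sqlite3

# Hypothetical table "data": one row per event, tagged with a grouper.
conn = sqlite3.connect(":memory:")
conn.execute("create table data (grouper text)")
conn.executemany("insert into data values (?)",
                 [("a",), ("a",), ("a",), ("b",)])

# Rate per group = per-group count / total count. The cross join is safe
# here because the right side is a single-row aggregate.
rows = conn.execute("""
    select g.grouper, g.n * 1.0 / t.total as rate
    from (select grouper, count(1) as n from data group by grouper) g
    cross join (select count(1) as total from data) t
    order by g.grouper
""").fetchall()
print(rows)  # [('a', 0.75), ('b', 0.25)]
```

On engines that support them, a window function (count(1) over ()) gets the same rate without the second scan, but the cross-join form works everywhere.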