Services at Uber are pretty much all stateless Go or Java executables, running on a central shared Mesos cluster per zone, exposing and consuming Thrift interfaces. There is one service mesh, one IDL registry, one way to do routing. There is one managed Kafka infrastructure with opinionated client libraries. There are a handful of managed storage solutions. There is one big Hive where all the Kafka topics and datastores are archived, and one big Airflow (fork) operating the many thousands of pipelines computing derived tables. Almost all Java services now live in a monorepo with a unified build system. Go services are on their way into one. Stdout and stderr go to a single log aggregation system.
At the business/application level, it's definitely a bazaar rather than a cathedral, and the full graph of RPC and messaging interactions is certainly too big and chaotic for any one person to understand. But services are not that different from each other and run on pretty homogeneous infrastructure. It takes pretty strong justification to take a nonstandard dependency, like operating your own RDBMS instance or using an AWS service directly, although that does happen when the standard in-house stuff is insufficient. Even within most services you will find a pretty consistent set of layers: handlers, controllers, gateways, repositories.
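To make that layering concrete, here's a minimal Go sketch of the shape. All the names (TripHandler, PricingGateway, and so on) are hypothetical, made up for illustration, not actual Uber code:

    package main

    import (
    	"context"
    	"fmt"
    )

    // Repository layer: persistence access, no business logic.
    type TripRepository interface {
    	Save(ctx context.Context, trip string) error
    }

    // Gateway layer: a typed client for another service's RPC interface.
    type PricingGateway interface {
    	Quote(ctx context.Context, trip string) (int, error)
    }

    // Controller layer: business logic, composed from gateways and repositories.
    type TripController struct {
    	pricing PricingGateway
    	trips   TripRepository
    }

    func (c *TripController) Book(ctx context.Context, trip string) (int, error) {
    	price, err := c.pricing.Quote(ctx, trip)
    	if err != nil {
    		return 0, err
    	}
    	return price, c.trips.Save(ctx, trip)
    }

    // Handler layer: the transport-facing entry point; decodes the request,
    // delegates to the controller, encodes the response.
    type TripHandler struct{ ctrl *TripController }

    func (h *TripHandler) HandleBook(ctx context.Context, trip string) (string, error) {
    	price, err := h.ctrl.Book(ctx, trip)
    	if err != nil {
    		return "", err
    	}
    	return fmt.Sprintf("booked %s at %d", trip, price), nil
    }

    // Trivial stubs so the sketch runs end to end.
    type memRepo struct{}

    func (memRepo) Save(ctx context.Context, trip string) error { return nil }

    type fixedPricing struct{}

    func (fixedPricing) Quote(ctx context.Context, trip string) (int, error) { return 1250, nil }

    func main() {
    	h := &TripHandler{ctrl: &TripController{pricing: fixedPricing{}, trips: memRepo{}}}
    	resp, err := h.HandleBook(context.Background(), "trip-123")
    	fmt.Println(resp, err) // booked trip-123 at 1250 <nil>
    }

The point is the uniform shape: swap the gateway stub for a real Thrift client and the repository for one of the managed stores, and the structure stays the same from service to service.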
Generally, software architecture solves for non-functional requirements rather than for functionality; product managers organize the business-level functionality.
+ all stateless Go or Java executables
+ running on a central shared Mesos cluster per zone
+ one service mesh
+ one IDL registry,
+ one way to do routing
+ one managed Kafka infrastructure
- handful of managed storage solutions
+ one big Hive where all the Kafka topics and datastores are archived,
+ one big Airflow (fork) operating the many thousands of pipelines computing derived tables.
+ Almost all Java services now live in a monorepo with a unified build system.
+ Go services are on their way into one.
+ Stdout and stderr go to a single log aggregation system.
= +11 singular/unified things, forming a single, larger system.
"
It takes pretty strong justification to take a nonstandard dependency ...
Even within most services you will find a pretty consistent set of layers ...
"
Maybe I'm misunderstanding, but how in the world do you get 'bazaar' out of this?