
Depending on what angle you come at this from, you could say that DOMA groups services into clusters ("domains"), as Uber has done here, or that services always should have been domain-driven, and DOMA welcomes the networking layer inside the bounded context as well.

I've spent a lot of time trying to understand when a microservices architecture makes sense, what the caveats are, and what philosophy one should take to building services. All the material I've read seems to point in the direction of services being ideally coupled to domain boundaries.

It seems to me that Uber's services proliferated beyond the framing of bounded contexts, and DOMA is their attempt to rein it back in. I think it's an excellent strategy, and arguably a very good approach for other companies that find themselves in this position.

I don't think DOMA is a good place to stay, though. The network should only be tolerated as long as it provides benefits that outweigh the costs. "Monolith" is not synonymous with "poor design". Seeing that these enclaves of services sitting within a domain depend on each other in the way the OP describes, it really makes me think they'd find further benefits by expelling the network from each domain.



> The network should only be tolerated as long as it provides benefits that outweigh the costs.

I agree. There should be no need to use network calls to enforce interface boundaries if you have a cohesive bounded context.

Or to be snarky about it, welcome to 2004 Uber! Eric Evans sends his regards!
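
To make that concrete, here's a minimal sketch (hypothetical names, not Uber's code) of enforcing a boundary with a plain in-process interface instead of an RPC: callers of the "billing" context see only an abstract interface, and the boundary is enforced at review/compile time rather than over the network.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical "billing" bounded context: other contexts see only this
# interface, never the internals behind it. No network hop required.
@dataclass(frozen=True)
class Invoice:
    invoice_id: str
    amount_cents: int

class BillingContext(ABC):
    @abstractmethod
    def invoice_for_trip(self, trip_id: str) -> Invoice: ...

# The concrete implementation lives inside the context's own module;
# everything outside depends only on the abstract interface above.
class InMemoryBilling(BillingContext):
    def __init__(self) -> None:
        self._fares = {"trip-1": 1250}

    def invoice_for_trip(self, trip_id: str) -> Invoice:
        return Invoice(invoice_id=f"inv-{trip_id}",
                       amount_cents=self._fares[trip_id])

billing: BillingContext = InMemoryBilling()
print(billing.invoice_for_trip("trip-1").amount_cents)  # a function call, not an RPC
```

Swapping the in-process implementation for a remote one later is a change inside the boundary, not to its callers.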


Unfortunately, picking and enforcing bounded contexts is hard work in a big organization, even with strong code review processes.

There will always be people who don’t want to respect where the boundaries are drawn (not in a constructive, “it could be better” kind of way, but in a “but this works, too” kind of way). If a group of such people get together, microservices compartmentalize their capacity to drag down the ship, so to speak. I think this risk compartmentalization is a benefit that must be weighed against the costs (in terms of latency, maintenance of shared libs, OpenTracing, etc.). These days the costs are vanishing, as the tools are quite good and becoming easier to manage.

All that said, if you’re a small team of senior engineers working with a shared mental model, a single binary with internally-bounded contexts works really well and I agree with you, having seen it done well.


I was lucky enough to be in the right place, at the right time, to lead a group that scaled this approach across half a dozen different development teams.

Fortunately everyone bought into the architecture, and respected the boundaries. Not everyone was senior, and not all the code was great, but we adopted the viewpoint that so long as the bad code is in the right spot (and not talking to things it shouldn’t) everything would be ok in the end. And it was.

Half a dozen teams working in one codebase was definitely pushing it though, and the need to scale much beyond that would have definitely required some service-level compartmentalization to keep the ship from sinking, as you said.

Even then, it would still be a far cry from the “microservices should be small enough to re-write in 2 weeks” approach.


It’s important to respect the boundaries, but also have the flexibility to change them as the business changes.

I’ve seen many times where people were afraid to change boundaries because they assumed the first person got the architecture exactly right.


If the programming language is compiled and modules are distributed as binaries across teams, teams have no choice but to comply with the modularity.


I'm a big fan of the modular monolith pattern. I usually make domain modules as independent as possible and invert dependencies for web and persistence layers. If you design well you can break off domain-based services whenever the advantages warrant it.
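
A minimal sketch of that inversion (hypothetical names; just illustrating the pattern): the domain module owns the persistence "port" it needs, and the infrastructure layer implements it as an adapter, so a domain can later be broken off into a service without touching domain code.

```python
from typing import Protocol

# Domain module: defines the persistence interface ("port") it needs,
# with no dependency on any web or database framework.
class OrderRepository(Protocol):
    def save(self, order_id: str) -> None: ...
    def exists(self, order_id: str) -> bool: ...

def place_order(repo: OrderRepository, order_id: str) -> bool:
    # Hypothetical domain rule: reject duplicate orders.
    if repo.exists(order_id):
        return False
    repo.save(order_id)
    return True

# Infrastructure module: an adapter implementing the port. Swapping this
# for a real database -- or a remote service later -- leaves the domain untouched.
class InMemoryOrders:
    def __init__(self) -> None:
        self._ids: set[str] = set()

    def save(self, order_id: str) -> None:
        self._ids.add(order_id)

    def exists(self, order_id: str) -> bool:
        return order_id in self._ids

repo = InMemoryOrders()
print(place_order(repo, "o-1"), place_order(repo, "o-1"))  # True False
```

Because the dependency arrow points from infrastructure to domain, "breaking off a domain-based service" is mostly a matter of writing a new adapter.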


If you ever have to start a big company from scratch, this is how to do it.

There aren’t any major drawbacks to this model when a business is young (first couple years). The downsides appear when you have different parts of your application with very different load requirements.

It also takes a lot of discipline to write code this way. Without strict code review and more experienced hands, the bounded contexts fall apart.

One of the advantages of the microservices model is it limits the damage people can do. :)

It forces a bounded context on a team of engineers and says “hey, play in this sandbox and follow these SLAs. If your internal designs are awful, good luck.”


It mostly hides the damage: instead of a code base no one understands, you have network traffic no one understands.


I’ll take the latter any day, with good patterns for aggregating into Graylog. Having had to triage production issues in a multi-application SaaS environment, the latter has always been easier for me. Don’t get me started on troubleshooting someone else’s crazy event queues.


This is really interesting to me: I’ve always found dealing with code preferable to dealing with network communication. It might be because the languages I work with (common lisp, Clojure and Scala/Java/Kotlin) all have excellent code navigation abilities.


That would make sense to me in a perfect world; I would prefer it too. This company I worked for was a leader in its industry and many verticals. It had legacy apps from the 80s still running through today, probably 30 or so SaaS-based applications, written in many different languages, many using internal services and queues to communicate with each other, with hundreds if not thousands of B2B integrations pumping millions of requests through their portal at any given time. (They also maintained their own data centers.) Anyway, given my experience troubleshooting message queuing and jumping into new code bases all the time, I was more like a blend of SRE / Dev / IT / Product manager. (Yeah, I know.) Since I worked across the SDLC, I always found it easiest to establish the problem statement and the behavior around it, then the expected behavior, then dive into Graylog with a unique piece of information that should be logged and trace it from there. To each their own. Unfortunately, with architecture this way I commonly see "segmentation" between Support/Ops/Dev where a problem can end up in limbo. That's where I would hop in.


If you have a monorepo, you get both! (You can grep for logging messages in the services you’re calling).


What would be even better is some way to use swagger/graphql to jump from frontend api calls to backend code


As far as I can tell from reading Evans' DDD, there's nothing forbidding network calls inside a service. For a trivial example, you make network calls from your API to your DB. And also to your Redis cache, and then if your service runs async/periodic tasks, to your task queue, say your RMQ & Celery instances.

So to me the OP reads like "we're coming up with some new terminology for a bounded context, and also defining how those contexts should be allowed to layer in order to simplify/control failure modes".

The layering stuff is more interesting than Uber's rediscovering bounded contexts, though it's definitely interesting that they have come into agreement with Evans (and the rest of the DDD community) on the "Service == Bounded Context" principle.


> As far as I can tell from reading Evans' DDD, there's nothing forbidding network calls inside a service.

You're quite right.

> [...] defining how those contexts should be allowed to layer [...]

This, however, seems to go against the spirit of things. There is a consistent "ubiquitous language" within a bounded context, where domain terms are concrete and unambiguous. (Or rather, the context disambiguates the language.) The concept of "layered contexts" seems to neuter the concept. Does each layer successively disambiguate the one above? Or does it add new terms that didn't exist?

The layering here sounds much more technically-motivated than domain-oriented. And my argument is that the networking internal to their "domains" is largely an artifact of having built DOMA out of a plethora of disorganized microservices. Doubtless there will be some necessary networking remaining, as you remarked on, such as between processors and databases. But the origin here suggests most of it is left over from what came before.


Playing devil's advocate for a second, I'm wondering if, at Uber's scale (thousands of microservices, perhaps that means hundreds of bounded contexts after applying DOMA?), the observations that make hexagonal/layered architectures a useful design within a single service become relevant at the system level.

I'm not sure how many systems have been built using DDD with hundreds of interacting bounded contexts, but I suppose I could believe that _some_ structure would be beneficial. (If you know of any case studies here I'd love to hear of them, I've not actually seen anything published on this topic.)

In general the concept of an "infrastructure bounded context" seemed a bit weird to me from my understanding of DDD, but then I thought about Kubernetes, and you could make a case that it is an example of such a bounded context; it has its own ubiquitous language, etc. It would be weird for your infrastructure to have any understanding of the domain objects running on top of it, so a hierarchy makes sense.

Likewise if you have BFFs (backends for frontends) for your different API clients; the domain services underneath them could be abstracted away from things like REST, if all your internal services use gRPC (for example). You could consider this the UI layer in DDD's layered architecture.

I'm struggling to come up with more sensible layers than that, though; in DDD there are the Application and Domain layers, and I don't really see how you'd pull "Application" vs. "Domain" bounded context layers together in a way that made sense.

> But the origin here suggests most of it is left over from what came before.

I'd certainly agree with this -- it seems like lots of the intra-BC complexity is excessive compared to what you'd get if you built your services with a BC in mind from the start.

I don't think I'd emulate their intra-BC structure, it's only the inter-BC organization that I think has any merit for other systems (and even then I'm not fully convinced yet).



