Containers, VM's, physical servers, WASM programs, Kubernetes, and countless other technologies fill niches. They will become mature, boring technologies, but they'll be around, powering all the services we use. We take mature technologies like SQL or HTTP for granted, but once upon a time they were the new hotness and people argued about their suitability.
It sounds like another way to distribute software has entered the chat, and it'll be useful for some people, and not for others.
I've been building APIs for a long time, using gRPC and HTTP/REST (we'll not go into CORBA or DCOM, because I'll cry). To that end, I've open-sourced a Go library for generating your clients and servers from OpenAPI specs (https://github.com/oapi-codegen/oapi-codegen).
I disagree with the way this article breaks down the options. There is no difference between OpenAPI and REST; it's a strange distinction to draw. OpenAPI is a way of documenting the behavior of your HTTP API. You can express a RESTful API using OpenAPI, or something completely random; it's up to you. The purpose of OpenAPI is to have a schema language that describes your API for tooling to interpret, so in concept it's similar to the Protocol Buffer files used to specify gRPC protocols.
gRPC is an RPC mechanism for sending protos back and forth. When Google open sourced protobufs, they didn't open source the RPC layer, called "stubby" at Google, which is what made protos really great. gRPC is not stubby, and it's not as awesome, but it's still very efficient at transport, and fairly easy to extend and hook into. The problem is, it's a self-contained ecosystem that isn't as robust as mainstream HTTP libraries, which give you all kinds of useful middleware like logging or auth. You'll be implementing lots of these yourself with gRPC, particularly if you are making RPC calls across services implemented in different languages.
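The hook points do exist, to be fair; here's a minimal sketch of a logging interceptor with grpc-go (the setup is a placeholder, and anything beyond logging, like auth, metrics, or retries, you'd still be writing per language):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

// loggingInterceptor wraps every unary RPC and logs the method and duration.
func loggingInterceptor(
	ctx context.Context,
	req any,
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (any, error) {
	start := time.Now()
	resp, err := handler(ctx, req) // invoke the actual RPC handler
	log.Printf("method=%s duration=%s err=%v", info.FullMethod, time.Since(start), err)
	return resp, err
}

func main() {
	// Attach the interceptor when constructing the server.
	srv := grpc.NewServer(grpc.UnaryInterceptor(loggingInterceptor))
	_ = srv // register your services and call srv.Serve(listener) here
}
```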
To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol. With an HTTP API, you can make calls to it via curl or your own code without having the OpenAPI description, so it's a "softer" binding. This fact alone makes it easier to work with and debug.
There is a distinction between (proper) REST and what this blog calls "OpenAPI". But the thing is, almost no one builds a true, proper REST API. In practice, everyone uses the OpenAPI approach.
The way REST was defined by Roy Fielding in his 2000 Ph.D. dissertation ("Architectural Styles and the Design of Network-based Software Architectures"), it was supposed to allow web-like exploration of all available resources. You would GET the root URL, and the 200 OK response would provide a set of links that would allow you to traverse all available resources provided by the API (it was allowed to be hierarchical, but everything had to be accessible somewhere in the link tree). This was supposed to allow discoverability.
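In net/http terms, a hypothetical root resource might look something like this (the link names and the `_links` shape are made up for illustration; the dissertation doesn't prescribe JSON or any particular key):

```go
package main

import (
	"encoding/json"
	"net/http"
)

// root is the only URL a client needs to know; everything else is reached by
// following links from here.
func root(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]any{
		"_links": map[string]string{
			"self":   "/",
			"orders": "/orders", // each order would in turn link to its own sub-resources
			"users":  "/users",
		},
	})
}

func main() {
	http.HandleFunc("/", root)
	http.ListenAndServe(":8080", nil)
}
```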
In practice, everywhere I've ever worked over the past two decades has just used POST resource_name/resource_id/sub_resource/sub_resource_id/mutation_type, or PUT resource_name/resource_id/sub_resource/sub_resource_id depending on how that company handled the idempotency issues that PUT creates, with all of those being magic URLs assembled by the client with knowledge of the structure (often defined in something like Swagger/OpenAPI), lacking the link traversal from root that was a hallmark of Fielding's original work.
Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.
I tend to prefer RESTish rather than RESTful since RESTful almost suggests attempting to implement Fielding's ideas but not quite getting there. I think the subset of approaches that try and fail to implement Fielding's ideas is an order of magnitude (or two) smaller than those who go for something that is superficially similar, but has nothing to do with HATEOAS :-).
REST is an interesting idea, but I don't think it is a practical one. It is too hard to design tools and libraries that help/encourage/force the user to implement HATEOAS sensibly, easily, and consistently.
While it is amazing for initial discovery to have everything presented for the developer's inspection, in production it ends up requiring too many network round-trips to actually traverse from root to /resource_name/resource_id/sub_resource_name/sub_resource_id, or an already verbose transaction (everything is serialized and deserialized into strings!) becomes gigantic if you don't make it hierarchical and just drop every URL into the root response.
This is why everyone just builds magic URL endpoints, and hopefully also includes OpenAPI/Swagger documentation for them so the developer can figure it out. And then keeps the documentation up to date as they add new sub_resource endpoints!
> Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.
Yes, exactly. I've never actually worked with any group who had actually implemented full REST. When working with teams on public interface definitions, I've personally tended to use the Richardson Maturity Model[0] and advocated for what it calls 'Level 2', which is what I think most of us find canonical and in line with the principle of least surprise for a RESTful interface.
> There is no difference between OpenAPI and REST, it's a strange distinction.
That threw me off too. What the article calls REST, I understand to be closer to HATEOAS.
> I've open sourced a Go library for generating your clients and servers from OpenAPI specs
As a maintainer of a couple pretty substantial APIs with internal and external clients, I'm really struggling to understand the workflow that starts with generating code from OpenAPI specs. Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.
This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code. It's not perfect, but it's a 95% solution that works with both Echo and Gin. So when we need to stand up a new endpoint and allow the front end to start coding against it ASAP, the workflow looks like this:
1. In a feature branch, define the request and response structs, and write an empty handler that parses parameters and returns an empty response.
2. Generate the docs and send them to the front end dev.
Now, most devs never have to think about how to express their API in OpenAPI. And the docs will always be perfectly in sync with the code.
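Step 1 above might look roughly like this, assuming Echo (the struct and route names here are invented for illustration):

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

type CreateWidgetRequest struct {
	Name string `json:"name"`
}

type CreateWidgetResponse struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// createWidget parses parameters and returns an empty response for now;
// the real logic lands after the generated docs are agreed on.
func createWidget(c echo.Context) error {
	var req CreateWidgetRequest
	if err := c.Bind(&req); err != nil {
		return echo.NewHTTPError(http.StatusBadRequest, err.Error())
	}
	return c.JSON(http.StatusOK, CreateWidgetResponse{})
}

func main() {
	e := echo.New()
	e.POST("/widgets", createWidget)
	e.Logger.Fatal(e.Start(":8080"))
}
```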
That's conceptually true, and yet if the hundreds of code generators don't support Your Favorite OAPI Feature™ then you're stuck, whereas in the opposite direction, unless your framework is braindead it's going to at least support some mapping from your host language down to the OAPI spec. I seriously doubt the output is pretty, and my life experience is that it will definitely not be bright enough to have #/components reuse, but it's also probably closer to 30 seconds to run $(go generate something) than to launch an OAPI editor and take on a second job.
I'd love an OAPI compliance badge (actually, what I'm probably complaining about is the tooling's support for JSON Schema) so one could readily know which tools were conceived in a hackathon, worked fine for that purpose, and should be avoided for real work.
This comes down to your philosophical approach to API development.
If you design the API first, you can take the OpenAPI spec through code review, making the change explicit, forcing others to think about it. Breaking changes can be caught more easily. The presence of this spec allows for a lot of work to be automated, for example, request validation. In unit tests, I have automated response validation, to make sure my implementation conforms to the spec.
Iteration is quite simple, because you update your spec, which regenerates your models but doesn't affect your implementation. It's then on you to update your implementation; that part can't be automated without fancy AI.
When the spec changes follow the code changes, you have some new worries. If someone changes the schema of an API in the code and forgets to update the spec, what then? If you automate spec generation from code, what happens when you express something in code which doesn't map to something expressible in OpenAPI?
I've done both, and I've found that writing code spec-first constrains what you can do to what the spec can express, which in turn lets you use all kinds of off-the-shelf tooling to save you time. As a developer, my most precious resource is time, so I am willing to lose some generality by going spec-first in order to leverage the tooling.
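The mechanical part of that loop can be as small as a generate directive next to the handlers. This is a sketch assuming oapi-codegen; the exact command path, flags, and file names are assumptions to check against the project's README:

```go
package api

// Regenerate models and the server interface whenever api.yaml changes.
// (Command path and flag spelling are assumptions; see the oapi-codegen README.)
//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -generate types,server -package api -o api.gen.go api.yaml
```

Running `go generate ./...` after every spec change then keeps the generated models in lockstep with the YAML, while your handler implementations stay untouched.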
In my part of the industry, a rite of passage is coming up with one's own homegrown data pipeline workflow manager/DAG execution engine.
In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator that scans an annotated server codebase, probably bundled with a client codegen tool as well. I know I've written one (mine too was a proper abomination) and it sounds like so have a few others in this thread.
> In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator
Close, it's writing custom client and server codegen that actually have working support for oneOf polymorphism and whatever other weird home-grown extensions there are.
> Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.
This is why I have never used generators to generate the API clients, only the models. Consuming an HTTP-based API is just a one-line function nowadays in the web world, if you use e.g. react / tanstack query or write some simple utilities. The generated clients are almost never good enough. That said, replacing the generator templates is an option in some of the generators; I've used the official openapi generator for a while, which has many different generators, but I don't know if I'd recommend it because the generation is split between Java code and templates.
I'm scratching my head here. HATEOAS is the core of REST. Without it and the uniform interface principle, you're not doing REST. "REST" without it is charitably described as "RESTish", though I prefer the term "HTTP API". OpenAPI only exists because it turns out that developers have a very weak grasp on hypertext and indirection, but if you reframe things in a more familiar RPC-ish manner, they can understand it better as they can latch onto something they already understand: procedure calls. But it's not REST.
> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code.
This is against "interface first" principle and couples clients of your API to its implementation.
That might be OK if the only consumer of the API is your own application, since in that case the API is really just an internal implementation detail. But even then, once you have to support multiple versions of your own client, it becomes difficult not to break them.
I don't see why it couples clients to the implementation.
Effectively, there's no difference between writing the code first and updating the OpenAPI spec, and updating the spec first and then doing some sort of code gen to update the implementation. The end state of the world is the same.
In either case, modifications to the spec will be scrutinized to make sure there are no breaking changes.
Yeah this is the way, I mean if the spec already exists it makes sense to go spec-first. I went spec-first last time I built an API because I find most generators to be imperfect or lacking features; going spec-first ensured that the spec was correct at least, and the implementations could do the workarounds (e.g. type conversions in Go) where necessary.
That is, generate spec from code and your spec is limited to what can be expressed by the code, its annotations, and the support that the generator has. Most generators (to or from openapi) are imperfect and have to compromise on some features, which can lead to miscommunication between clients/servers.
Whether the OpenAPI spec is authored by a human or a machine, it's the same YAML at the end of the day, so why would one approach be more brittle or break your clients more than the other?
The oapi-codegen tool the OP put out (which I use) solves this by emitting an interface, though. OpenAPI has the concept of operation names (which also have a standard pattern), so your generated code is simply implementing operation names. You can happily rewrite the entire spec and, provided the operation names are the same, everything will still map correctly, which solves the coupling problem.
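Roughly the shape that generated interface takes (illustrative only, not the tool's literal output; the exact signatures depend on the target framework and config):

```go
package api

import "net/http"

// ServerInterface is what the generator emits from the spec: one method per
// operation ID, so the spec can change freely as long as the IDs survive.
type ServerInterface interface {
	// ListPets corresponds to operationId: listPets in the spec.
	ListPets(w http.ResponseWriter, r *http.Request)
	// GetPetById corresponds to operationId: getPetById.
	GetPetById(w http.ResponseWriter, r *http.Request, petId string)
}

// Server is your hand-written implementation of the generated interface.
type Server struct{}

func (s Server) ListPets(w http.ResponseWriter, r *http.Request)                 {}
func (s Server) GetPetById(w http.ResponseWriter, r *http.Request, petId string) {}
```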
I'm piggybacking on the OpenAPI spec as well to generate a SQL-like query syntax along with generated types which makes working with any 3rd party API feel the same.
This way, you don't have to know about all the available gRPC functions or the 3rd party API's RESTful quirks, while retaining built-in documentation and access to types.
> To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol.
That's not quite true. You can build an OpenAPI description based on the JSON serialization of Protobufs and serve it via Swagger. gRPC itself also offers built-in reflection (and a nice grpcurl utility that uses it!).
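Turning reflection on in a Go server is essentially a one-liner; a minimal sketch (service registration is elided):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	reflection.Register(srv) // exposes service descriptors over the wire
	// Then: grpcurl -plaintext localhost:50051 list
	log.Fatal(srv.Serve(lis))
}
```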
Buggy/incomplete OpenAPI codegen for Rust was a huge disappointment for me. At least with gRPC, some languages are first-class citizens. Of course, generated code has some ugliness. It's also kind of sad that HTTP/2 traffic can be flaky due to bugs in network hardware.
Random UUIDs are super useful when you have distributed creation of UUIDs, because you avoid conflicts with very high probability and don't rely on your DB to generate them for you, and they also leak no information about when or where the UUID was created.
Postgres is happier with sequential IDs, but keeping Postgres happy isn't the only design goal. If you need randomness, it does well enough for all practical purposes.
> Postgres is happier with sequence ID's, but keeping Postgres happy isn't the only design goal.
It literally is the one thing in the entire stack that must always be happy. Every stateful service likely depends on it. Sad DBs means higher latency for everyone, and grumpy DBREs getting paged.
Postgres is usually completely happy enough with UUIDv4. Overall architecture (such as allowing distributed ID generation, if relevant) is more important than squeezing out that last bit of performance, especially for the majority of web applications that don't work with 10 million+ rows.
If your app isn’t working with billions of rows, you really don’t need to be worrying about distributed anything. Even then, I’d be suspicious.
I don’t think people grasp how far a single RDBMS server can take you. Hundreds of thousands of queries per second are well in reach of a well-configured MySQL or Postgres instance on modern hardware. This also has the terrific benefit of making reasoning about state and transactions much, much simpler.
Re: last bit of performance, it’s more than that. If you’re using Aurora, where you pay for every disk op, using UUIDv4 as PK in Postgres will approximately 7x your IOPS for SELECTs using them, and massively (I can’t quantify it on a general basis; it depends on the rest of the table, and your workload split) increase them for writes. That’s not free. On RDS, where you pay for disk performance upfront, you’re cutting into your available performance.
About the only place it effectively doesn’t matter except at insane scale is on native NVMe drives. If you saturate IOPS for one of those without first saturating the NIC, I would love to see your schema and queries.
Fair point. You can still use monotonic IDs with these, via either interleaving chunks to each DB, or with a central server that allocates them – the latter approach is how Slack handles it, for example.
Listen, I didn't make the title up, I just grabbed onto it from the SRE world because I love databases.
There are some pragmatic differences I've found, though - generally, DBAs are less focused on things like IaC (though I know at least one who does), SLIs/SLOs, CI/CD, and the other things often associated with SRE. So DBRE is SRE + DBA, or a DB-focused SRE, if you'd rather.
> Random UUID's are super useful when you have distributed creation of UUID's, because you avoid conflicts with very high probability and don't rely on your DB to generate them for you
See Snowflake IDs for a scheme that gives you the benefits of random UUIDs but is strictly increasing. It's really UUIDv7, but it fits in your bigint column. No entropy required.
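For reference, a toy Snowflake-style generator looks something like this (the 41/10/12 bit split and the epoch are the commonly cited layout, but treat them as assumptions you'd tune):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const customEpochMs = int64(1_600_000_000_000) // arbitrary custom epoch

// Snowflake packs milliseconds-since-epoch, a machine ID, and a per-millisecond
// sequence into one int64 that fits a bigint column and sorts by creation time.
type Snowflake struct {
	mu       sync.Mutex
	machine  int64 // 0..1023
	lastMs   int64
	sequence int64 // 0..4095
}

func (s *Snowflake) Next() int64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	now := time.Now().UnixMilli() - customEpochMs
	if now == s.lastMs {
		s.sequence = (s.sequence + 1) & 0xFFF // stay within 12 bits
		if s.sequence == 0 {
			for now <= s.lastMs { // sequence exhausted: wait for the next millisecond
				now = time.Now().UnixMilli() - customEpochMs
			}
		}
	} else {
		s.sequence = 0
	}
	s.lastMs = now
	return now<<22 | s.machine<<12 | s.sequence
}

func main() {
	gen := &Snowflake{machine: 42}
	fmt.Println(gen.Next(), gen.Next())
}
```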
I'm a systems nerd, and I found working with it quite challenging, but rewarding. It's been many years, but I still remember a number of the challenges. SPE's didn't have shared memory access to RAM, so data transfer was your problem to solve as a developer, and each SPE had 256k of RAM. These things were very fast for the day, so they'd crunch through the data very quickly. We double-buffered the RAM, using about 100k for data, while simultaneously using the other 100k as a read buffer for the DMA engine.
That was the trickiest part: getting the data in and out of the thing. You had 6 SPE's available to you (2 were reserved by the OS), and keeping them all filled was a challenge because it required nearly optimal usage of the DMA engine. Memory access was slow, something over 1000 cycles from issuing the DMA until data started coming in.
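Not SPU code, obviously, but the double-buffering pattern itself translates anywhere; a rough Go analogy of "crunch one buffer while the next transfer is in flight" (all names invented):

```go
package main

import "fmt"

// fetch stands in for the asynchronous DMA transfer into a fixed buffer.
func fetch(chunk int, buf []byte) <-chan []byte {
	done := make(chan []byte, 1)
	go func() {
		for i := range buf {
			buf[i] = byte(chunk)
		}
		done <- buf
	}()
	return done
}

// process stands in for the compute kernel running over a filled buffer.
func process(buf []byte) int {
	sum := 0
	for _, b := range buf {
		sum += int(b)
	}
	return sum
}

func main() {
	bufs := [2][]byte{make([]byte, 100_000), make([]byte, 100_000)}
	pending := fetch(0, bufs[0])
	for chunk := 0; chunk < 8; chunk++ {
		ready := <-pending // wait for the in-flight transfer
		if chunk+1 < 8 {
			pending = fetch(chunk+1, bufs[(chunk+1)%2]) // start filling the other buffer
		}
		fmt.Println("chunk", chunk, "sum", process(ready)) // compute overlaps the next fetch
	}
}
```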
Back then, C++ was all the rage and people did their various C++ patterns, but because the space for code was so limited, we just hand-wrote the code that ran on the SPU's, which didn't match the rest of the engine, so it ended up gluing together two dissimilar codebases.
I miss the cleverness required back then, but I don't miss the complexity. Things are so much simpler now that game consoles are basically PCs with PC-style dev tools. Also, as much as I complain about the PS3, at least it wasn't the PS2.
Yep, all valid. When I started on it we had to do everything ourselves. But by the time I did serious dev on it, our engine team had already built vector/matrix libraries that worked on both PPU and SPU and had a dispatcher that took care of all the double buffering for me.
Indeed, anyone who mastered the parallelism of the PS3 bettered themselves and found the knowledge gained applied to the future of all multi-core architectures. Our PC builds greatly benefited from the architecture changes forced on us by the PS3.
JWTs are perfectly fine if you don't care about session revocation, and their simplicity is an asset. They're easy to work with, and lots of library code is available in pretty much any language. The validation mistakes of the past have at this point been rectified.
Not needing a DB connection to verify means you don't need to plumb DB credentials or identity-based auth into your service. Simple.
Being able to decode a token to see its contents really aids debugging; you don't need to look in the DB. Simple.
If you have a lot of individual services which share the same auth system, you can manage logins into multiple apps and APIs really easily.
That article seems to dislike JWTs, but they're just a tool. You can use them in a simple way that's good enough for you, or you can overengineer a JWT-based authentication mechanism, in which case they're terrible. Whether or not to use them doesn't really depend on their nature, but rather on your approach.
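A minimal verification sketch, assuming the github.com/golang-jwt/jwt/v5 library and an HMAC-signed token (key management and any claim checks beyond the built-in ones are up to you):

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	secret := []byte("shared-secret") // placeholder; load from your secret store
	tokenString := "eyJ..."           // placeholder for the incoming bearer token

	token, err := jwt.Parse(tokenString,
		func(t *jwt.Token) (any, error) { return secret, nil },
		jwt.WithValidMethods([]string{"HS256"}), // pin the algorithm: the classic JWT mistake
	)
	if err != nil || !token.Valid {
		log.Fatal("reject the request")
	}
	claims, _ := token.Claims.(jwt.MapClaims)
	fmt.Println("subject:", claims["sub"])
}
```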
You are confusing simplicity (it's easy to understand and straightforward to implement safely) with convenience (I have zero understanding of how it works and couldn't implement it securely if my life depended on it, but someone already wrote a library and I'm just going to pretend all risk is taken care of when I use it).
It's not difficult to implement JWTs; the concept is simple. However, with authentication code the devil is in the details, and that's true for any approach, whether it's JWTs, opaque API tokens, or whatever. There are many, many ways to make a mistake which allows a bypass. Simple concepts can have complex implementations. A JWT is simply a bit of JSON that's been signed by someone you trust. There are many ways to get that wrong!
Convenience, when it comes to auth, is also usually the best path, and you need to be careful to use well-known and well-tested libraries.
I had to solve a similar problem years ago, during the transition from fixed function to shaders, when shaders weren't as fast or powerful as today. We started out with an ubershader approximating the DX9/OpenGL 1.2 fixed functions, but that was too slow.
People in those days thought of rendering state as being stored in a tree, like the transform hierarchy, and you ended up having unpredictable state at the leaf nodes, sometimes leading to a very high permutation of possible states. At the time, I decomposed all possible pipeline state into atomic pieces, e.g., one light, fog function, texenv, etc. These were all annotated with inputs and outputs, and based on the state graph traversal, we'd generate a minimal shader for each particular material automatically, while giving old tools the semblance of being able to compose fixed-function states. As in your case, doing this on demand resulted in stuttering, but a single game only has so many possible states; from what I've seen, it's on the order of a few hundred to a few thousand. Once all shaders are generated, you can cache the generated shaders and compile them all at startup time.
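A bare-bones version of the caching half of that idea, with every name invented for illustration: key each generated shader by a signature of the state that produced it, generate on first sight, and persist the cache so the next run can compile everything up front.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// PipelineState is the set of atomic state pieces, e.g. "lights":"2", "fog":"exp2".
type PipelineState map[string]string

// signature hashes the state in a stable order so identical states map to the same key.
func signature(s PipelineState) string {
	keys := make([]string, 0, len(s))
	for k := range s {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, s[k])
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:8])
}

var cache = map[string]string{} // signature -> generated shader source; persist this between runs

func shaderFor(s PipelineState) string {
	sig := signature(s)
	if src, ok := cache[sig]; ok {
		return src
	}
	src := "// generated from state " + sig // stand-in for real codegen from the state atoms
	cache[sig] = src
	return src
}

func main() {
	fmt.Println(shaderFor(PipelineState{"lights": "1", "fog": "linear"}))
}
```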
I wonder if something like this would work for emulating a GameCube. You can definitely compute a signature for a game executable, and as you encounter new shaders, you can associate them with the game. Over time, you'll discover all the possible states, and if they're cached, you can compile all the cached shaders at startup.
Anyhow, fun stuff. I used to love work like this. I implemented 3DFx's Glide API on top of DX ages ago to play Voodoo games on my Nvidia cards, and contributed some code to an N64 emulator named UltraHLE.
> contributed some code to an N64 emulator named UltraHLE
That's a blast from the past. I distinctly remember reading up about UltraHLE way back when, trying it out, and for the first time being able to play Ocarina of Time on my mid-range PC with almost no issues. That was magical.
Ask your neighbors with solar who they used and whether they liked working with them, and also get estimates from lots of local contractors. You will see all kinds of system proposals when you do this. Go with the contractor that you like dealing with who has a good system proposal. A good sign for me was when someone was willing to make changes; e.g., use micro inverters vs. optimizers for my complex roof geometry.
You can't figure out who's good online; interview the local companies. Locals will also get you through the permitting and code compliance process.
Also, don't overlook the "services" category of CL (e.g. https://sfbay.craigslist.org/search/bbb?query=solar%20instal... ). It still carries the same "call a rando" risk as hiring any contractor, but it can be easier than filtering out the higher-budget Yelp businesses. Sometimes they list in the for-sale category, too, as CL is wont to do.
There are companies attempting to recycle them into new batteries, such as Redwood Materials, but from what I know, recycled lithium is more expensive than fresh lithium today.
The problem with used EV batteries is that they've started to degrade, and they degrade in chaotic ways, so you can't offer a predictable product made from old cells. Some cells may have internal shorts, others may have lost some electrolyte, or the electrodes may have degraded. Right now, lithium recovery from used cells is quite primitive. I've tried to reuse used batteries myself for storage, and the unpredictable wear made me give up.
Also, EV batteries, which are optimized for power density, may not be the best choice for home storage, where you want the ability to deep cycle to buffer power usage as the NYT article describes. The NMC cells common in EV's don't like to sit at above 90% state of charge (this cutoff is arbitrary, but > 90% results in fast breakdown), and they don't like to go below 20%, so you have a useful range of 70% of the capacity. You can over-provision by 30% or you can use lithium-iron-phosphate cells, which are less power dense, but much more tolerant of deep cycling.
I set my home up like this a long time ago. I use 100% of my solar and export nothing to the CA grid, thanks to batteries. It wasn't cost-effective given the cost of storage when I set this up, but it's really neat to someone of my nerdy predisposition. My original goal was to have solar-based backup power, because I lose power quite a lot despite living in Silicon Valley, and it's worked great for that too.
It's a nice change for little experimental programs, but production servers need lots of functionality that third party routers offer, like request middleware, better error handling, etc. It's tedious to build these on top of the native router, so convenience will steer people to excellent packages like Gin, Echo, Fiber, Gorilla, Chi, etc.
Honestly, there is a lot of praise for the middleware in these projects, but I recently found out that most of them are unable to parse the Accept and Accept-Encoding headers properly, that is, according to the RFC, with q-weights.
This means that the general perception that "these projects are production-quality but the stdlib isn't" is misleading. If I have to choose between a web framework or library that implements feature X incorrectly and one that doesn't have X at all so I have to write it myself, I will without a doubt choose the latter.
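For the curious, "properly" means roughly this: each Accept entry can carry a q-weight, it defaults to 1, and the entries have to be ranked by it rather than taken in written order. A quick sketch (error handling and specificity tie-breaking elided):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

type accepted struct {
	mediaType string
	q         float64
}

// parseAccept splits an Accept header, reads each q parameter, and sorts by weight.
func parseAccept(header string) []accepted {
	var out []accepted
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		a := accepted{mediaType: strings.TrimSpace(fields[0]), q: 1.0} // q defaults to 1
		for _, p := range fields[1:] {
			if kv := strings.SplitN(strings.TrimSpace(p), "=", 2); len(kv) == 2 && kv[0] == "q" {
				if q, err := strconv.ParseFloat(kv[1], 64); err == nil {
					a.q = q
				}
			}
		}
		out = append(out, a)
	}
	sort.SliceStable(out, func(i, j int) bool { return out[i].q > out[j].q })
	return out
}

func main() {
	// application/json (q=1) should win, then text/html, then */*.
	fmt.Println(parseAccept("text/html;q=0.8, application/json, */*;q=0.1"))
}
```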
Big fan of chi. It is simple and just works. It also matches the http.Handler interface, so for testing and otherwise it just makes life so easy (like using httptest.NewServer: pass it the chi.Mux).
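The kind of test that enables, assuming github.com/go-chi/chi/v5 (the route and handler are placeholders):

```go
package main

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/go-chi/chi/v5"
)

func newRouter() *chi.Mux {
	r := chi.NewRouter()
	r.Get("/ping", func(w http.ResponseWriter, _ *http.Request) {
		io.WriteString(w, "pong")
	})
	return r
}

func TestPing(t *testing.T) {
	srv := httptest.NewServer(newRouter()) // works because *chi.Mux is an http.Handler
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/ping")
	if err != nil || resp.StatusCode != http.StatusOK {
		t.Fatalf("ping failed: %v %v", err, resp)
	}
}
```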
The panics are really annoying. Sometimes, you generate routes dynamically from some data, and it would be nice for this to be an error, so you can handle it yourself and decide to skip a route, or let the user know.
With the panic, I have to write some spaghetti code with a recover in a goroutine.
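One shape that workaround can take: wrap each registration in a small helper with a deferred recover so the panic becomes an error you can log and skip (a sketch against net/http's ServeMux; the helper name is made up):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// tryHandle converts the registration panic into an error for the caller to handle.
func tryHandle(mux *http.ServeMux, pattern string, h http.Handler) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("registering %q: %v", pattern, r)
		}
	}()
	mux.Handle(pattern, h)
	return nil
}

func main() {
	mux := http.NewServeMux()
	for _, p := range []string{"/a", "/a"} { // the second registration would normally panic
		if err := tryHandle(mux, p, http.NotFoundHandler()); err != nil {
			log.Printf("skipping route: %v", err)
		}
	}
}
```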
Many are misunderstanding when the panic happens. It does not happen when the user requests the path, it happens when the path is registered. The user will never arrive at that path to be notified. You will be notified that you have a logic error at the application startup. It can be caught by the simplest of tests before you deploy your application.