Yes, this. One of the things I love about Go is the stdlib capabilities. I try to avoid frameworks, as there tends to be too much "magic" going on that you end up having to figure out when it comes time to troubleshoot. httprouter is bare-bones enough to understand and build on top of, while still adding some value and saving you from writing the boilerplate yourself.
Cool, because this is actually what I ended up with myself in a dummy Go setup of mine (after having reinvented AR's migrations poorly with a Bash script). I'm not really accustomed to Go's ecosystem yet, so:
>and code-generate the basic CRUD Go stuff from the schema
I guess you read schema.rb and output some structs and methods? Because a quick Google search only yielded some Yeoman plugins …
Migrations are a minor pain point for me. The libraries I've used don't support running all pending migrations, only those with a timestamp newer than the most recently applied one. This means we have to finesse the timestamps manually as migrations get merged into the main branch and deployed to various environments. As such, the value offered by the migration library is pretty minimal compared to just remembering to pipe a .sql file directly into the database. Hopefully there's a more robust option out there that I just haven't found yet.
I take the approach of storing the name of each migration that has been run in a metadata table in the database. This means you can run them out of order, or restore from a backup and then apply only the migrations that haven't been run yet, which is handy for testing or adjusting them. With multiple devs it's often the case that we want to run things out of order on production too, so ordering by time doesn't work, and running migrations manually is error-prone.
Code for this is here. It'd be pretty simple to put it in its own little tool, or even a bash script, and there are surely other tools that do something similar. This one is Postgres-only, as it relies on the psql binary to load the SQL, but hopefully it gives you the idea. I haven't bothered developing it further to do things like running a single migration or down migrations, as I haven't needed that so far.
None of the above, sorry. I didn't mean a tool called 'sql migrations'; I meant that I simply use SQL files (rather than a special language or config file that translates to SQL, which is another common approach).
I use something I wrote myself (see the link in my other comment). It's nothing fancy and could be replicated easily: save SQL files for the migrations, run each one if it isn't recorded in the database already, then store the name of the migration that was run in the database.
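The bookkeeping above is easy to sketch as a pure function: diff the migration names recorded in the metadata table against the SQL files on disk, and run whatever's missing, regardless of timestamp order. The function and data below are made up for illustration; the real tool would query the table and shell out to psql.

```go
package main

import (
	"fmt"
	"sort"
)

// pending returns the migration files that have not yet been applied,
// by comparing filenames on disk against names already recorded in the
// metadata table. Selection is by membership, not by timestamp, so a
// migration merged "out of order" still gets picked up.
func pending(applied map[string]bool, files []string) []string {
	var todo []string
	for _, f := range files {
		if !applied[f] {
			todo = append(todo, f)
		}
	}
	sort.Strings(todo) // deterministic run order
	return todo
}

func main() {
	// Names already stored in the metadata table (example data).
	applied := map[string]bool{
		"20170101_create_users.sql": true,
		"20170301_add_index.sql":    true, // newer migration, already run
	}
	// SQL files found on disk. Note the middle one has an older
	// timestamp than the last applied migration but was never run:
	// a timestamp-ordered tool would silently skip it.
	files := []string{
		"20170101_create_users.sql",
		"20170201_add_orders.sql",
		"20170301_add_index.sql",
	}
	fmt.Println(pending(applied, files))
	// Each file in the result would then be piped through psql and its
	// name inserted into the metadata table.
}
```

Restoring from a backup works the same way: whatever names are missing from the restored table simply show up as pending again.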
We've been pretty happy with https://gobuffalo.io so far. We're serving up a Vue.js app with it. We needed a single binary to deploy on a client's server with no external dependencies, and Buffalo fits the bill. It comes with a CLI that runs webpack and does hot reloads on save. It also has a build command that bundles all the assets up to shove into the binary.
The Go standard library has most of what you need for basic API servers. However, as others have said here, some things aren't worth spending time crafting yourself.
Also, for modern Docker/Kubernetes ops environments, you need some additional infrastructure just for table stakes. Here's my standard Nulladmin.com stack:
I put together a little example web app based on a few popular packages, including gorilla/mux. It's not a framework, just a nice starting point for little apps.
I've been using the standard library, httptreemux, and Chi, depending on how large the project is. I've also used Gorilla in the past, and sometimes use Alice to compose handlers. I prefer to stay close to the standard library so as not to get stuck with code that depends too heavily on a third-party library, with no easy way to disentangle that dependency. I've still got nightmares from late-'90s PHP and early-2000s Zope framework disasters... not with a 10-foot pole. That said, when your application gets bigger it makes sense to pull in a good library for route grouping and helper functions; there are so many options that it's not worth your while to roll your own.
And unless you have very specific requirements, the performance is probably not a relevant concern in a real application either.
Same here. I was sceptical about code generation and the whole protobuf-as-a-base thing, but I'm really liking it, especially when you throw OpenTracing into the mix.
We run grpc-web with a big single-page Vue.js app on the frontend. That way, Go is just the API server, and the frontend handles all the routing and such.
We're also planning to go down this path. What's your overall satisfaction with the approach? I saw that grpc-web is marked as alpha, and that the gRPC team itself wants to implement something comparable.
At Grofers, we built a framework using Swagger that we use for building aggregated APIs, and we open-sourced it a couple of days back. It can be found here - https://github.com/grofers/go-codon
Same here. I never really liked the idea of Spring/Django-style frameworks for Go. I'd rather build very small services and mostly use just the standard library, plus perhaps some simple packages for specific tasks.
We use this too. It's not bad if all you're ever doing is returning JSON, but escaping from its `rest.ResponseWriter` to a normal `http.ResponseWriter` is always sad when you need to write out anything else.
That's a pretty good summary of the rest of it for me too... pretty good, but it's a shame it doesn't adhere to the standard library's `http` interfaces.