I recommend against using Zeit. I tried their offering this year and it was too much in beta. I honestly couldn't even get a basic service deployed because my defined dependencies wouldn't install. My web service deployed fine using the serverless offering of another cloud that I won't name, although it was not AWS. As a disclaimer, I have no conflict of interest.
They killed their Docker offering right after they launched it, mostly because it's too difficult/expensive to scale containers, so they forced their entire user base onto the serverless paradigm because it was convenient for them. Their PR story is that it's "better for everyone", except there are use cases (e.g. websocket support) that have remained unsolved for months on their new platform. To give them credit, they didn't fully deprecate Docker-- old customers can keep using it, for an indeterminate amount of time.
Heh, that's precisely what Amazon did early on. "It's difficult to make servers reliably persist at scale, so everyone should learn how to not need persistent servers, which is better for everyone".
Now is hit-and-miss for me. It works great for the most part, but they seem more focused on moving fast than on stability.
At one point they (without any warning that I could see) turned on a CDN for all my deployments that didn't take the Host header into account... so suddenly all the various URLs leading to my landing page generator returned a single customer's page. Annoying.
I’ve been using zeit / now for the past two months to host static sites and it’s been excellent. Haven’t tried a node app yet but their core service seems really easy to use and I haven’t had any issues.
Conceptually one lambda per page (route) sounds super cool, but in practice I suspect it would lead to a ton of cold start wait times for less common routes
Have you actually found cold-start delays to be an issue with serverless services, or are you just speculating? To my knowledge, the cloud provider keeps enough instances hot based on a forecasted demand. I have not found cold-start delays to be a real issue.
AWS doesn’t keep instances warm based on forecasted demand. Each instance handles one request at a time, and instances stay warm once they are started for a predetermined amount of time based on the size of the Lambda environment.
It depends on the stack. You have to be more mindful of cold starts when running Java/Clojure apps, less so with JS and .NET. There are some good benchmarks out there. There are also some easy ways to keep routes warm, although for certain setups I imagine that could be quite involved, though still probably not expensive.
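One common way to keep a route warm, sketched below: a scheduled trigger (e.g. a CloudWatch Events rule firing every few minutes) invokes the function with a marker payload, and the handler returns early before doing real work. The `warmup` marker is just a convention chosen for this sketch, not an AWS feature.

```javascript
// Keep-warm sketch: a scheduled ping keeps the execution environment alive.
// The `warmup: true` field in the event is a convention assumed here.
const handler = async (event) => {
  // Short-circuit scheduled warm-up pings so they cost almost nothing.
  if (event && event.warmup === true) {
    return { warmed: true };
  }

  // ...real request handling would go here...
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'hello' }),
  };
};

exports.handler = handler;
```

The ping itself is nearly free; the cost concern in the comment above comes from multiplying this across many routes.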
I love now.sh -- it's the best developer experience out there, and we really, really tried to use it at my company -- but we kept having latency issues. It was consistently 5-10x the TTFB of Heroku or an EC2 instance (all in the same locations, or very close).
Is the headline benefit really paying only for use? Trading increased complexity & dev effort for this seems like a backwards development when vm/container/heroku-style hosting is so cheap & getting cheaper.
The complexity and dev effort is higher initially because the paradigm is so different from "localhost" development. But once the initial boilerplate and yak-shaving work is done, serverless can actually be easier to manage because you deal at a much smaller functional level.
I have never used "serverless", but how does that differ from say a Django app?
You have the initial boilerplate of setting up the project, then you map URLs to functions. Once the initial setup is done, it's trivial to map new URLs to new functions.
I am genuinely curious as I can't really work out what all the hype is about regarding serverless. Am I missing something?
It downplays/skips over the complexity costs of learning, configuring, debugging, deploying, testing, etc. the big zoo of woven-together cloud services. Ideally you'd also want some dev time and cognitive capacity left over to think about your domain problems...
Yes. In serverless there is no concept of a long-running service, so you can't listen on port 3000 and do routing within the app. Instead you write one function, add a route to it via API Gateway, and then use it in your app. You could probably use parts of the Django library to write code in a Django style, but you can't use it fully. For example: 1) you can't do migrations in serverless the same way; 2) every time your function is invoked, Django will re-initialize its database/model machinery, which could be very slow.
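To make the contrast concrete, here is a minimal sketch of a single-route serverless handler: instead of a Django URLconf dispatching inside one long-running process, each route is its own function that receives an event from the gateway. The event shape follows the API Gateway proxy-integration format; the route and field names are assumptions for illustration.

```javascript
// One route = one function. API Gateway maps e.g. GET /users/{id} to this
// handler; there is no listening socket and no in-process router.
const handler = async (event) => {
  const id = event.pathParameters && event.pathParameters.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: 'missing id' }) };
  }
  // A real function would fetch the user from a data store here.
  return { statusCode: 200, body: JSON.stringify({ userId: id }) };
};

exports.handler = handler;
```

Adding another route means writing another function like this and wiring it up in API Gateway, not editing a central router.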
"But once the initial boilerplate and yak-shaving work is done, serverless can actually be easier to manage because you deal at a much smaller functional level."
Everything you just described in your second comment sounds like it would make things more difficult.
Suppose you have set up everything to get you going with a single route. Now, adding another route/handler is just deploying another function. And you deal with everything at this function level and never the entire "monolith".
You get all the advantages of breaking free from a "monolith": independent iteration, scale, and federated management. Consider a big team that writes hundreds of routes/handlers -- you can easily federate ownership across it.
Not great for solo-dev kinds of projects, I agree.
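The "one route per function" workflow above can be sketched as a deployment config. This is a Serverless Framework-style example with hypothetical service, handler, and path names -- adding a route is adding a function block, and teams can own individual functions independently.

```yaml
# Sketch (Serverless Framework style); all names are hypothetical.
service: example-api

functions:
  getUser:
    handler: users/get.handler
    events:
      - http:
          path: users/{id}
          method: get
  createUser:
    handler: users/create.handler
    events:
      - http:
          path: users
          method: post
```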
This article hits one of my biggest pet peeves -- _tell the reader what you're about_. Presumably, the reason you have this on the Hasura blog is at least in part lead generation, right? And it worked, in that I showed up on Hasura's website never having heard of them before. But the article makes no effort to tell me what Hasura is before throwing me in the deep end. A link wouldn't go amiss, either. (Yes, I know there's one in the header. It's better to have one in the article text too. Links are free, better to have more than fewer.)
For apps that just do basic things like read and write to a database (e.g. DynamoDB) would it be possible to go a stage beyond serverless and have the code that runs in the user's browser talk to the database directly?
Obviously you'd have to be careful about permissions, and integrate with Cognito, but there are REST APIs for talking to AWS services so I'm sure there are use cases where even the lambdas are not necessary.
I don't know what such an architecture would be called, other than "serverlessless".
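A sketch of that "no lambda" pattern, assuming the AWS SDK for JavaScript in the browser with Cognito-vended temporary credentials. The table, key, and identity pool names below are hypothetical; only the parameter-building helper is shown as runnable code, the SDK wiring is illustrated in comments.

```javascript
// The browser calls DynamoDB directly using temporary credentials from a
// Cognito identity pool -- no lambda in between.

// DynamoDB's low-level API uses typed attribute values ({ S: ... } = string).
function buildGetItemParams(tableName, userId) {
  return {
    TableName: tableName,
    Key: { userId: { S: userId } },
  };
}

// In the browser, with the AWS SDK for JavaScript (v2 style), this would be
// wired up roughly like:
//
//   AWS.config.credentials = new AWS.CognitoIdentityCredentials({
//     IdentityPoolId: 'us-east-1:example-pool-id',  // hypothetical
//   });
//   const ddb = new AWS.DynamoDB();
//   ddb.getItem(buildGetItemParams('Profiles', 'user-123'), callback);

module.exports = { buildGetItemParams };
```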
This is exactly the model that Firebase is aiming to support. Cloud Firestore is accessed directly through the various client libraries (Android, iOS, JS). Firestore has a permissions system to control who can access what, based on Firebase Auth user ids, and then when you need little bits of server side stuff you can put it in cloud functions.
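For a flavor of what that permissions system looks like, here is a minimal Firestore security rules sketch (collection and field names are hypothetical) restricting each user's documents to that user's Firebase Auth uid:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the signed-in user can read or write their own document.
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```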
Other than what others mentioned (Cognito with temporary credentials) I think AppSync might be close to what you are looking for
https://aws.amazon.com/appsync/
It has a GraphQL API and supports DynamoDB as a data source
Depending on your threat model, absolutely. Cognito supports issuing temporary IAM credentials, so you can have granular permissions, billing and auditing.
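As a sketch of how granular those permissions can get: AWS documents an IAM condition key, `dynamodb:LeadingKeys`, that restricts a DynamoDB table to rows whose partition key matches the caller's Cognito identity. Account ID and table name below are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Profiles",
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
      }
    }
  }]
}
```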
I debated doing this for a side project but decided it was a little too risky for my use case. For a corporate/intranet thing, though, it'd absolutely be reasonable.
Apart from permissions, which can perhaps be solved by temporary credentials etc., how do you prevent: 1) DDoS -- someone fiddling with the queries to cause a DB outage; 2) scale issues -- browsers can't leverage connection pooling, so I'm not sure the DB can handle that many client connections.
1) DDoS not just at the API Gateway level, but also at the database level. Suppose someone fiddles with the query to return a million rows or some horrible aggregated join -- that can slow down the entire DB. You need to hide the query from the client.
2) Yeah, connection pooling is apparently not relevant for DynamoDB because it is HTTP-based. I wonder how they implement transactions then -- how can I run application code while holding an open transaction?
I think DynamoDB lacks joins. Generally, good point, though I suspect many traditional web app backends are vulnerable to this kind of "crafted high-overhead API call" DoS too. I guess you could throttle calls based on duration using some kind of token-bucket scheme...
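A minimal token-bucket throttle along the lines suggested above: each client gets a bucket that refills at a fixed rate up to a capacity, and expensive calls can be charged more tokens than cheap ones. The injectable clock is just for testability; the parameter names are this sketch's own.

```javascript
// Token bucket: allows bursts up to `capacity`, sustained rate `refillPerSec`.
class TokenBucket {
  constructor(capacity, refillPerSec, now = Date.now) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity; // start full
    this.now = now;         // clock in milliseconds, injectable for tests
    this.last = now();
  }

  // Returns true if the call is allowed, false if it should be throttled.
  // Charge a higher `cost` for calls measured to be expensive.
  take(cost = 1) {
    const t = this.now();
    const elapsedSec = (t - this.last) / 1000;
    this.last = t;
    this.tokens = Math.min(this.capacity,
                           this.tokens + elapsedSec * this.refillPerSec);
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false;
  }
}

module.exports = { TokenBucket };
```

In practice you would keep one bucket per API key or user, and set `cost` from the previous call's measured duration.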
You can’t trust the browser client at all, but setting that aside, it isn’t very reliable. It's much better to queue up commands and then do the real work on a server (what happens when some third-party service is down?).
It bugs me too. When I found the link to the post, the first thing I thought was "I'm going to get a comment about this shitty forced signup". It's a good guide though. In a few hours I went from knowing very little about Next.js (not a React/web beginner though) to getting my hands dirty in the internals.