Looking at the code, it seems the DB is downloaded from S3 when a connection is opened (though I'm not entirely sure whether there's any caching involved) and then uploaded when the connection is closed, overwriting the old copy.
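To make that cycle concrete, here's a rough sketch of the download-on-open / overwrite-on-close pattern as I understand it. This isn't the library's actual code: the names (`fake_s3`, `open_db`, `close_db`) are mine, and an in-memory dict stands in for the real boto3 `get_object`/`put_object` calls so the example is self-contained.

```python
import os
import sqlite3
import tempfile

# Stand-in for an S3 bucket: key -> bytes. The real library would make
# boto3 get_object / put_object calls here instead.
fake_s3 = {}

def open_db(key):
    """Download the DB file from 'S3' to a temp path and open a connection."""
    path = os.path.join(tempfile.mkdtemp(), "db.sqlite")
    if key in fake_s3:
        with open(path, "wb") as f:
            f.write(fake_s3[key])
    return sqlite3.connect(path), path

def close_db(conn, path, key):
    """Commit, close, and upload the whole file, clobbering the old copy."""
    conn.commit()
    conn.close()
    with open(path, "rb") as f:
        fake_s3[key] = f.read()

conn, path = open_db("mydb")
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO items (name) VALUES ('first')")
close_db(conn, path, "mydb")
```

The problem is visible in `close_db`: two Lambdas that opened the same key concurrently will each upload their own full copy, and whoever closes last silently wins.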
It would be awesome if some sort of merge could be done rather than an overwrite. That would probably mean using GUIDs rather than sequential IDs, or having some sort of central blocking ID generator.
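A quick illustration of why GUIDs matter for that (a hypothetical example, not taken from the library): with `INTEGER PRIMARY KEY`, two Lambdas starting from the same snapshot would both assign the next sequential ID and collide on merge, whereas random UUIDs need no coordination at all.

```python
import sqlite3
import uuid

def new_conn():
    # Each connection here plays the role of one Lambda's local copy.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id TEXT PRIMARY KEY, name TEXT)")
    return conn

def insert_item(conn, name):
    item_id = str(uuid.uuid4())  # globally unique, no central counter needed
    conn.execute("INSERT INTO items VALUES (?, ?)", (item_id, name))
    return item_id

# Two writers working from the same snapshot in parallel:
a, b = new_conn(), new_conn()
id_a = insert_item(a, "from-lambda-a")
id_b = insert_item(b, "from-lambda-b")

# Merging their rows later can't collide on the primary key:
merged = new_conn()
merged.execute("INSERT INTO items VALUES (?, ?)", (id_a, "from-lambda-a"))
merged.execute("INSERT INTO items VALUES (?, ?)", (id_b, "from-lambda-b"))
```

The trade-off is losing SQLite's cheap rowid-backed integer keys, which is why a central ID generator is the other option mentioned above.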
I wonder if something could be done with a write-ahead log, effectively implementing an operational transform system. Each Lambda function could download the DB once and then tail the log to keep it up to date. It's a pity that S3 has no append operation...
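One way around the lack of append, sketched below under my own assumptions (this isn't anything the library does): write each log entry as its own S3 object under a zero-padded sequence key, so a reader can LIST the prefix and replay only entries past its cursor. A dict stands in for the S3 prefix here.

```python
import json

# Stand-in for an S3 prefix: one object per log entry, keyed so that
# lexicographic order matches sequence order (log/00000001, log/00000002, ...).
log_bucket = {}

def append_op(seq, op):
    """'Append' by writing a new object; S3 objects themselves are immutable."""
    log_bucket[f"log/{seq:08d}"] = json.dumps(op)

def replay_since(cursor):
    """Return ops newer than `cursor` (would be a LIST call on the prefix in real S3)."""
    ops = []
    for key in sorted(log_bucket):
        seq = int(key.split("/")[1])
        if seq > cursor:
            ops.append((seq, json.loads(log_bucket[key])))
    return ops

# A writer appends operations instead of re-uploading the whole DB:
append_op(1, {"sql": "INSERT INTO items (name) VALUES ('a')"})
append_op(2, {"sql": "INSERT INTO items (name) VALUES ('b')"})

# A Lambda that last saw seq 1 fetches and applies only what's new:
new_ops = replay_since(1)
```

Assigning the sequence numbers without races is the hard part this sketch glosses over; it would need a conditional write or some other coordination point.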
Obviously, the main limiting factor is that each Lambda function needs to download the entire database, so this isn't really suitable for databases larger than a few hundred MB.