> as simple as "with open(...) as f: f.write(data)"
Save where?
With what redundancy?
With what access policies?
With what backup strategy?
With what network topology?
With what storage equipment and file system and HVAC system and...
Without on-prem, saving a file is as simple as s3.put_object()!
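For concreteness, that one-liner expands to roughly this with boto3 (bucket and key names here are invented):

    # A minimal sketch of the S3 call; bucket and key are made up.
    import boto3

    s3 = boto3.client("s3")  # credentials come from the environment or an instance role
    s3.put_object(
        Bucket="my-app-uploads",         # hypothetical bucket
        Key="reports/2024/summary.csv",  # hypothetical key
        Body=b"col1,col2\n1,2\n",
    )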
>> Without cloud, saving a file is as simple as "with open(...) as f: f.write(data)" + adding a record to the DB.
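Sketched out, the on-prem version is about the same amount of code (the path and schema are invented; sqlite just keeps the example self-contained):

    # A minimal sketch of "write the file, record it in the DB".
    # The path and schema below are made up for illustration.
    import sqlite3
    from pathlib import Path

    data = b"col1,col2\n1,2\n"
    path = Path("/srv/files/reports/summary.csv")
    path.parent.mkdir(parents=True, exist_ok=True)

    with open(path, "wb") as f:
        f.write(data)

    with sqlite3.connect("/srv/app.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER)")
        db.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (str(path), len(data)))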
> Save where? With what redundancy? With what access policies? With what backup strategy? With what network topology? With what storage equipment and file system and HVAC system and...
Most of these concerns can be addressed with ZFS[0] running on FreeBSD systems hosted in triple-A data centers.
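Roughly the moving parts, as a sketch (pool, dataset, disk, and host names are all invented):

    # Sketch only: a mirrored pool covers redundancy; snapshot + send cover backups.
    import subprocess

    def sh(cmd: str) -> None:
        subprocess.run(cmd, shell=True, check=True)

    sh("zpool create tank mirror /dev/ada0 /dev/ada1")  # every write lands on both disks
    sh("zfs snapshot tank/files@nightly")               # cheap point-in-time snapshot
    sh("zfs send tank/files@nightly | ssh backuphost zfs recv backup/files")  # off-box copy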
> Save where? With what redundancy? With what access policies? With what backup strategy? With what network topology? With what storage equipment and file system and HVAC system and...
Wow, that's a lot to learn before using S3... I wonder how much it costs in salaries.
> With what network topology?
You don't need to care about this when using SSDs/HDDs.
> With what access policies?
Whichever ones you define in your code; no restrictions, unlike S3. No need to study complicated AWS documentation or navigate multiple consoles (this also costs you salaries, by the way). No risk of leaking files due to misconfigured cloud services.
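In application code that can be as small as this toy sketch (the "users may only read their own directory" rule and the paths are invented):

    # Toy sketch of an access policy living entirely in application code.
    from pathlib import Path

    UPLOAD_ROOT = Path("/srv/files")

    def read_user_file(user_id: str, filename: str) -> bytes:
        path = (UPLOAD_ROOT / user_id / filename).resolve()
        if not path.is_relative_to(UPLOAD_ROOT / user_id):  # blocks ../ traversal
            raise PermissionError("access denied")
        return path.read_bytes()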
> With what backup strategy?
Automatically backed up with the rest of your server data; no need to spend time on this.
>> No risk of leaking files due to misconfigured cloud services.
> One misconfigured .htaccess file, for example, could result in leaking files.
I don't think you're making a compelling case here, since both scenarios result in undesirable exposure. Unless your point is that both cloud services and local file systems can be equally exploited?
It sounds like you’re not at the scale where cloud storage is obviously useful. By the time you definitely need S3/GCS, you have problems just making sure files are accessible everywhere. “Grep” is a ludicrous proposition against large blob stores.
I inherited an S3 bucket where hundreds of thousands of files had been written to the bucket root. Every filename was just a UUID. ls might work after waiting to page through and get every file. To grep, you would need to download 5 TB.
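For anyone who hasn't tried it: "grep" over a bucket like that means paging the listing and downloading every object, roughly like this (bucket name and search string invented):

    # What grepping an S3 bucket actually entails; bucket and pattern are made up.
    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    for page in paginator.paginate(Bucket="inherited-bucket"):  # 1000 keys per page
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket="inherited-bucket", Key=obj["Key"])["Body"].read()
            if b"needle" in body:
                print(obj["Key"])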
It's probably going to be dog slow. I've dealt with HDDs where just iterating through all files and directories took hours, and network storage is going to be even slower at this scale.
You can't ever definitively answer most of those questions on someone else's cloud. You just take Amazon's word for whatever number of nines they claim it has.
Bro, were you off the grid last week? Your questions apply equally to AWS; you just magically handwave them all away, as if AWS/GCP/Azure outages aren’t a thing.