Apparently Persian and Russian accents sound close, which is surprising to say the least. I know people keep getting confused about how Portuguese from Portugal and Russian sound similar, but the Persian case is new to me.
Idea: Farsi and Russian both have a small vowel inventory and no diphthongs, making it hard (and obvious) when speakers attempt English, which is rife with diphthongs and many different vowel sounds.
While Persian has only two diphthongs and 6-8 vowels, other languages of Iran are full of them (e.g. Southern Kurdish speakers can pronounce 12+1 vowels and 11 diphthongs). I'd find it funny if all Iranians spoke English with the Persian accent.
I've found that building my side projects to be "scalable" is a practical side effect of choosing the most cost-effective hosting.
When a project has little to no traffic, the on-demand pricing of serverless is unbeatable. A static site on S3 or a backend on Lambda with DynamoDB will cost nothing under the AWS free tier. A dedicated server, even a cheap one, is an immediate and fixed $8-10/month liability.
The cost to run a monolith on a VPS only becomes competitive once you have enough users to burn through the very generous free tiers, which for many side projects is a long way off. The primary driver here is minimizing cost and operational overhead from day one.
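The break-even point above can be sketched with some rough arithmetic. The prices below are illustrative assumptions (an $8/month VPS, 1M free Lambda requests/month, $0.20 per additional million requests), and the sketch deliberately ignores compute GB-seconds, DynamoDB, and data transfer:

```python
# Rough break-even sketch: at what monthly request volume does a
# serverless backend overtake a fixed-price VPS? All prices here are
# illustrative assumptions, not current AWS rates.

VPS_MONTHLY = 8.00                  # assumed fixed VPS cost, USD/month
LAMBDA_FREE_REQUESTS = 1_000_000    # assumed free-tier requests/month
LAMBDA_PRICE_PER_MILLION = 0.20     # assumed price per 1M requests past the free tier
# (ignores GB-second compute charges, DynamoDB, data transfer, etc.)

def lambda_request_cost(requests_per_month: int) -> float:
    """Request-only monthly Lambda cost under the assumed free-tier model."""
    billable = max(0, requests_per_month - LAMBDA_FREE_REQUESTS)
    return billable / 1_000_000 * LAMBDA_PRICE_PER_MILLION

# Break-even: the free tier plus enough paid requests to reach the VPS price.
break_even = LAMBDA_FREE_REQUESTS + int(VPS_MONTHLY / LAMBDA_PRICE_PER_MILLION * 1_000_000)
print(f"{break_even:,} requests/month")  # tens of millions before the VPS wins
```

Even with these toy numbers, the request-only break-even lands in the tens of millions of requests per month, which is why "a long way off" is a fair description for most side projects.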
> A dedicated server, even a cheap one, is an immediate and fixed $8-10/month liability.
Personally, I am more worried about the liability of an infinitely scalable service sending a huge bill after the fact. The $8-10 "liability" is predictable, like a Netflix subscription.
Data all-rounder with 10 years building everything from low-latency Go microservices to training ML models to large-scale AWS data pipelines. Looking for a senior, autonomous role at a small company/startup.
> It was uncomfortable at first. I had to learn to let go of reading every line of PR code. I still read the tests pretty carefully, but the specs became our source of truth for what was being built and why.
This is exactly right. Our role is shifting from writing implementation details to defining and verifying behavior.
I recently needed to add recursive uploads to a complex S3-to-SFTP Python operator that had a dozen path manipulation flags. My process was:
* Extract the existing behavior into a clear spec (i.e., get the unit tests passing).
* Expand that spec to cover the new recursive functionality.
* Hand the problem and the tests to a coding agent.
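The first two steps above can be sketched as spec tests. The operator, function, and flag names here are hypothetical stand-ins, not the actual codebase; the point is that each test freezes one observed behavior before the agent touches anything:

```python
# Hedged sketch of "extract the existing behavior into a clear spec":
# pin down one path-manipulation behavior of a (hypothetical) transfer
# operator as executable tests before handing the change to an agent.
import posixpath

def build_dest_path(key: str, prefix: str, flatten: bool) -> str:
    """Stand-in for one of the operator's path-manipulation flags."""
    if flatten:
        return posixpath.basename(key)
    return posixpath.join(prefix, key)

# Spec tests: each one records an observed behavior of the old code.
def test_flatten_drops_directories():
    assert build_dest_path("a/b/c.csv", "out", flatten=True) == "c.csv"

def test_prefix_is_prepended_when_not_flattening():
    assert build_dest_path("a/b/c.csv", "out", flatten=False) == "out/a/b/c.csv"
```

Expanding the spec for the new recursive functionality then means adding tests in the same style, so the agent's output can be judged purely against the test suite.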
I quickly realized I didn't need to understand the old code at all. My entire focus was on whether the new code was faithful to the spec. This is the future: our value will be in demonstrating correctness through verification, while the code itself becomes an implementation detail handled by an agent.
> Our role is shifting from writing implementation details to defining and verifying behavior.
I could argue that our main job was always that - defining and verifying behavior. As in, it was a large part of the job. Time spent on writing implementation details has always been on a downward trend via higher-level languages, compilers and other abstractions.
> My entire focus was on whether the new code was faithful to the spec
This may be true, but see Hyrum's Law: with enough users, the observed behavior of a heavily-used system becomes its de facto public interface and specification, with all its quirks and implementation errors. It may be important to keep testing that the clients using the code are also faithful to the spec, and to detect and handle discrepancies.
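The risk can be shown in miniature. In this invented example, a client silently relies on an unspecified quirk (sorted output), so a rewrite that is perfectly faithful to the written spec still breaks it:

```python
# Sketch of "observed behavior becomes the spec": a client depends on
# an accidental ordering guarantee. All names here are invented.

def list_users_v1():
    # Old implementation happened to return names sorted.
    return sorted(["bob", "alice"])

def list_users_v2():
    # New implementation is faithful to the written spec ("return all
    # users") but drops the accidental ordering.
    return ["bob", "alice"]

def client_first_alphabetically(list_users):
    # A client that silently relies on the quirk.
    return list_users()[0]

print(client_first_alphabetically(list_users_v1))  # alice
print(client_first_alphabetically(list_users_v2))  # bob -- quirk reliance exposed
```

A spec-only verification loop passes both versions; only a test that exercises real client expectations catches the discrepancy.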
Claude Plays Pokemon showed that too. AI is bad at deciding when something is "working" - it will go in circles forever. But an AI combined with a human to occasionally course correct is a powerful combo.
If you actually define every inch of behavior, you are pretty much writing code. If there's any line in the PR that you can't instantly grok the meaning of, you probably haven't defined the full breadth of the behavior.
You're not wrong, but it's a "dysfunction" that many successful tech companies have learned to leverage.
The reality is, most engineers spend far less than half their time writing new code. This is where the 80/20 principle comes into play. It's common for 80% of a company's revenue to come from 20% of its features. That core, revenue-generating code is often mature and requires more maintenance than new code. Its stability allows the company to afford what you call "dysfunction": having a large portion of engineers work on speculative features and "big bets" that might never see the light of day.
So, while it looks like a bug from a pure "coding hours" perspective, for many businesses, it's a strategic feature!
I suspect a lot of that organizational dysfunction is related to a couple of things that might be changed by adjusting individual developer coding productivity:
1) aligning the work of multiple developers
2) ensuring that developer attention is focused only on the right problems
3) updating stakeholders on progress of code buildout
4) preventing too much code being produced because of the maintenance burden
If agentic tooling reduces the cost of code ownership, and allows individual developers to make more changes across a broader scope of a codebase more quickly, all of this organizational overhead also needs to be revisited.
I live next to an abandoned building from the Spanish property boom; it's now occupied illegally. The hype is over, yet the consequence stares at me every day. I'm sure it will eventually be knocked down or repurposed, but it would have been better had the misallocation never happened.
I bought a flat during the Spanish property boom. It sat empty for a while, with ~80% of flats in the area vacant; then I had a squatter, now kicked out. Today most of the property is occupied, and the Spanish government is bringing in huge restrictions to ease the property shortage. These things go in cycles. The boom and bust isn't very efficient, but there you go.
The crux of the article is asking whether such a large investment is justified; downplaying it by saying it's only X% of GDP compared to Y doesn't address that question.