> Always worth noting: human depth perception is not just based on stereoscopic vision, but also on focal distance
Also subtle head and eye movements, which a lot of people like to ignore when discussing camera-based autonomy. Your eyes are constantly moving, which shifts your perspective and gives a much better sense of depth through parallax. If you need a better view in a given direction you can turn or move your head. Fixed cameras mounted to a car's windshield can't do either of those things, so you need many more of them at higher resolutions to even come close to the amount of data the human eye can gather.
Follow up: Opus is also great for doing the planning work before you start. You can use plan mode or just do it in a web chat and have them create all of the necessary files based on your explanation. The advantage of using plan mode is that they can explore the codebase in order to get a better understanding of things. The default at the end of plan mode is to go straight into implementation but if you're planning a large refactor or other significant work then I'd suggest having them produce the documentation outlined above instead and then following the workflow using a new session each time. You could use plan mode at the start of each session but I don't find this necessary most of the time unless I'm deviating from the initial plan.
You have to think of Opus as a developer whose job at your company lasts somewhere between 30 to 60 minutes before you fire them and hire a new one.
Yes, it's absurd but it's a better metaphor than someone with a chronic long term memory deficit since it fits into the project management framework neatly.
So this new developer starting today is ready to be assigned their first task. They're very eager to get started, and once they start they will work very quickly, but you have to onboard them. This sounds terrible, but they also happen to be extremely fast at reading code and documentation, they know all of the common programming languages and frameworks, and they have an excellent memory for the hour that they're employed.
What do you do to onboard a new developer like this? You give them a well-written description of your project with a clear style guide and some important dos and don'ts, access to any documentation you may have, and a clear description of the task they are to accomplish in less than one hour. The tighter you can make those documents, the better. Don't mince words; get straight to the point and provide examples where possible.
The task description should be well scoped with a clear definition of done; if you can provide automated tests that verify when it's complete, that's even better. If you don't have tests, you can also specify what should be tested and instruct them to write the new tests and run them.
For every new developer after the first, you need a record of what was already accomplished. Personally, I prefer to use one markdown document per working session whose filename is a date stamp with the session number appended. Instruct them to read the last X log files, where X is however many are relevant to the current task. Most of the time X=1 if you did a good job of breaking the tasks into discrete chunks. You should also have some type of roadmap with milestones; if this file will be larger than 1000 lines, break it up so each milestone is its own document, with a table of contents document that gives a simple overview of the total scope. Instruct them to read the relevant milestone.
Other good practices: tell them to write a new log file after they have completed their task, recording a summary of what they did, anything they discovered along the way, and any significant decisions they made. Also tell them to commit their work afterwards; Opus will write a very descriptive commit message by default (but you can instruct them to use whatever format you prefer). You basically want them to get everything ready for hand-off to the next 60-minute developer.
If they do anything that you don't want them to do again, make sure to record that in CLAUDE.md. Same for any other interventions or guidance you have to provide: put it in that document, and Opus will almost always stick to it unless they end up overfilling their context window.
I also highly recommend turning off auto-compaction. When the context gets compacted they basically just write a summary of the current context which often removes a lot of the important details. When this happens mid-task you will certainly lose parts of the context that are necessary for completing the task. Anthropic seems to be working hard at making this better but I don't think it's there yet. You might want to experiment with having it on and off and compare the results for yourself.
If your sessions are ending up with >80% of the context window used while still doing active development then you should re-scope your tasks to make them smaller. The last 20% is fine for doing menial things like writing the summary, running commands, committing, etc.
People have built automated systems around this like Beads but I prefer the hands-on approach since I read through the produced docs to make sure things are going ok and use them as a guide for any changes I need to make mid-project.
With this approach I'm 99% sure that Opus 4.5 could handle your refactor without any trouble, as long as your classes aren't so enormous that even working on a single one at a time would cause problems with the context window. If they are, you might be able to handle it by cautioning Opus not to read the whole file and to just make targeted edits to specific methods. They're usually quite good at finding and extracting just the sections they need, as long as they have some way to know what to look for ahead of time.
> So it makes me wonder, is embodiment (advanced robotics) 1000x harder than LLMs from an information processing perspective?
Essentially, yes, but I would go further in saying that embodiment is harder than intelligence in and of itself.
I would argue that intelligence is a very simple and primitive mechanism compared to the evolved animal body, and the effectiveness of our own intelligence is circumstantial. We manage to dominate the world mainly by using brute force to simplify our environment and then maintaining and building systems on top of that simplified environment. If we didn't have the proper tools to selectively ablate our environment's complexity, the combinatorial explosion of factors would be too much to model and our intelligence would be of limited usefulness.
And that's what we see with LLMs: I think they model relatively faithfully what, say, separates humans from chimps, but they lack the animal library of innate world understanding which is supposed to ground intellect and stop it from hallucinating nonsense. They're trained on human language, which is basically the shadows in Plato's cave. They're very good at tasks that operate in that shadow world, like writing emails, or programming, or writing trite stories, but most of our understanding of the world isn't encoded in language, except very, very implicitly, which is not enough.
What trips us up here is that we find language-related tasks difficult, but that's likely because the ability evolved recently, not because they are intrinsically difficult (likewise, we find mental arithmetic difficult, but it is not intrinsically so). As it turns out, language is simple. Programming is simple. I expect that logic and reasoning are also simple. The evolved animal primitives that actually interface with the real world, on the other hand, appear to be much more complicated (but time will tell).
Does anyone have a link to a video that uses Claude Code to produce clean, robust code that solves a non-trivial problem (i.e. not tic-tac-toe or a landing page) more quickly than a human programmer can write it? I don’t want a “demo”, I want a livestream from an independent programmer unaffiliated with any AI company and thus not incentivised to hype.
I want the code to have subsequently been deployed in production and demonstrably robust, without additional work outside of the livestream.
The livestream should include code review, test creation, testing, PR creation.
It should not be on a greenfield project, because nearly all coding is not.
I want to use Claude and I want to be more productive, but my experience to date is that for writing code beyond autocomplete, AI is not good enough: it leads to low-quality code that can’t be maintained, or else requires so much hand-holding that it is actually less efficient than a good programmer.
There are lots of incentives for marketing at the grassroots level. I am totally open to changing my mind but I need evidence.
> Headline: OpenAI raises 400 Trillion, proclaims dominion over the delta quadrant
> Top comment: This just proves that it's a bubble. No AI company has been profitable, we're in the era of diminishing returns. I don't know one real use case for AI
It's hilarious how routinely bearish this site is about AI. I guess it makes sense given how much AI devalues siloed tech expertise.
# ASCII RPG
This repo uses Rust + Bevy (0.16.1), multi-crate workspace, RON assets, and a custom ASCII UI. The rules below keep contributions consistent, testable, and verifiable.
## Quick rules (read me first)
- Read/update CURRENT_TASK.md each step; delete when done.
- Build/lint/test (fish): cargo check --workspace; and cargo clippy --workspace --all-targets -- -D warnings; and cargo test --workspace
- Run dev tools: asset-editor/dev.fish; debug via /tmp/ascii_rpg_debug; prefer debug scripts in repo root.
- Logging: use info!/debug!/warn!/error! (no println!); avoid per-frame logs unless trace!.
- ECS: prefer components over resource maps; use markers + Changed<T>; keep resources for config/assets only.
- UI: adaptive content; builder pattern; size-aware components.
- Done = compiles clean (clippy -D warnings), tests pass, verified in-app, no TODOs/hacks.
- If blocked: state why and propose the next viable step.
- Before large refactors/features: give 2–3 options and trade-offs; confirm direction before coding.
## 1) Build, lint, test (quality gates)
- Fish shell one-liner:
- cargo check --workspace; and cargo clippy --workspace --all-targets -- -D warnings; and cargo test --workspace
- Fix all warnings. Use snake_case for functions/files, PascalCase for types.
- Prefer inline rustdoc (///) and unit tests over standalone docs.
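A minimal sketch of that style (the grid_to_screen helper is hypothetical, shown only to illustrate inline rustdoc plus a unit test; the y=0-at-bottom convention matches section 7):

    /// Converts a grid row to a screen row, with y = 0 at the bottom of the screen.
    fn grid_to_screen(y: u32, screen_height: u32) -> u32 {
        screen_height.saturating_sub(y + 1)
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn y_zero_maps_to_bottom_row() {
            assert_eq!(grid_to_screen(0, 15), 14);
        }
    }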
## 2) Run and debug (dev loop)
- Start the app with debug flags and use the command pipe at /tmp/ascii_rpg_debug.
- Quick start (fish):
- cargo run --bin app -- --skip-main-menu > debug.log 2>&1 &
- echo "debug viewport 0 0" > /tmp/ascii_rpg_debug
- echo "ui 30 15" > /tmp/ascii_rpg_debug
- Helper scripts at repo root:
- ./debug.sh, ./debug_keyboard.sh, ./debug_click.sh, ./debug_world.sh
- Logging rules:
- Use info!/debug!/warn!/error! (never println!).
- Don’t log per-frame unless trace!.
- Use tail/grep to keep logs readable.
## 3) Testing priorities
1) Unit tests first (small, deterministic outputs).
2) Manual testing while iterating.
3) End-to-end verification using the debug system.
4) UI changes require visual confirmation from the user.
## 4) Architecture guardrails
- ECS: Components (data), Systems (logic), Resources (global), Events (comm).
- Principles:
- Prefer components over resource maps. Avoid HashMap<Entity, _> in resources.
- Optimize queries: marker components (e.g., IsOnCurrentMap), Changed<T>; see the sketch at the end of this section.
- Separate concerns: tagging vs rendering vs gameplay.
- Resources only for config/assets; not entity collections/relationships.
- UI: Adaptive content, builder pattern, size-aware components.
- Code layout: lib/ui (components/builders), engine/src/frontend (UI systems), engine/src/backend (game logic).
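A minimal sketch of these guardrails (GridPosition and the system are hypothetical, shown only to illustrate the pattern; IsOnCurrentMap follows the marker example above):

    use bevy::prelude::*;

    /// Marker component: carries no data, exists only to narrow queries.
    #[derive(Component)]
    struct IsOnCurrentMap;

    #[derive(Component)]
    struct GridPosition { x: i32, y: i32 }

    /// Visits only entities on the current map whose position changed this frame,
    /// instead of scanning a HashMap<Entity, _> stored in a resource.
    fn redraw_moved_tiles(
        moved: Query<&GridPosition, (With<IsOnCurrentMap>, Changed<GridPosition>)>,
    ) {
        for pos in &moved {
            debug!("redraw tile at ({}, {})", pos.x, pos.y);
        }
    }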
## 5) Completion criteria (definition of done)
- All crates compile with no warnings (clippy -D warnings).
- All tests pass. Add/adjust tests when behavior changes.
- Feature is verified in the running app (use debug tools/logs).
- No temporary workarounds or TODOs left in production paths.
- Code follows project standards above.
## 6) Never-give-up policy
- Don’t mark complete with failing builds/tests or known issues.
- Don’t swap in placeholder hacks and call it “done”.
- If truly blocked, state why and propose a viable next step.
## 7) Debug commands (reference)
- Pipe to /tmp/ascii_rpg_debug:
- debug [viewport X Y] [full]
- move KEYCODE (Arrow keys, Numpad1–9, Space, Period)
- click X Y [left|right|middle]
- ui X Y
- Coordinates: y=0 at bottom; higher y = higher on screen.
- UI debug output lists text top-to-bottom by visual position.
## 8) Dev convenience (asset editor)
- Combined dev script:
- ./asset-editor/dev.fish (starts backend in watch mode + Vite dev)
- Frontend only:
- ./asset-editor/start-frontend.fish
## 9) Tech snapshot
- Rust nightly (rust-toolchain.toml), Bevy 0.16.1.
- Workspace layout: apps/ (game + editors), engine/ (frontend/backend), lib/ (shared), asset-editor/.
Keep changes small, tested, and instrumented. When in doubt: write a unit test, run the app, and verify via the debug pipe/logs.
## 10) Design-first for large changes
- When to do this: large refactors, cross-crate changes, complex features, public API changes.
- Deliverable (in CURRENT_TASK.md):
- Problem and goals (constraints, assumptions).
- 2–3 candidate approaches with pros/cons, risks, and impact.
- Chosen approach and why; edge cases; test plan; rollout/rollback.
- Keep it short (5–10 bullets). Get confirmation before heavy edits.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
Overloading of the term "generate" is probably creating some confused ideas here. An LLM/agent is a lot more similar to a human in terms of its transformation of input into output than it is to a compiler or code generator.
I've been working on a recent project with heavy use of AI (probably around 100 hours of long-running autonomous AI sprints over the last few weeks), and if you tried to re-run all of my prompts in order, even using the exact same models with the exact same tooling, it would almost certainly fall apart pretty quickly. After the first few, a huge portion of the remaining prompts would be referencing code that wouldn't exist and/or responding to things that wouldn't have been said in the AI's responses. Meta-prompting (prompting agents to prepare prompts for other agents) would be an interesting challenge to properly encode. And how would human code changes be represented, as patches against code that also wouldn't exist?
The whole idea also ignores that AI being fast and cheap compared to human developers doesn't make it infinitely fast or free, or put it in the same league of quickness and cheapness as a compiler. Even if this were conceptually feasible, all it would really accomplish is making it so that any new release of a major software project takes weeks (or more) of build time and thousands of dollars (or more) burned on compute.
It's an interesting thought experiment, but the way I would put it into practice would be to use tooling that includes all relevant prompts / chat logs in each commit message. Then maybe in the future an agent with a more advanced model could go through each commit in the history one by one, take notes on how each change could have been better implemented based on the associated commit message and any source prompts contained therein, use those notes to inform a consolidated set of recommended changes to the current code, and then actually apply the recommendations in a series of pull requests.
Ctrl+Alt+Delete, log off, then mash space to cancel the logoff. Kills the security software while leaving the rest of the system running. (Windows provides APIs to prevent this, but nobody writing "security" software uses them; in the Age of AI, I expect this to get even worse.)
The Unix philosophy really comes down to: "I have a hammer, and everything is a nail."
ESR's claptrap book The Art of Unix Programming turns Unix into philosophy-as-dogma, where flaws are reframed as virtues. His book romanticizes history and ignores inconvenient truths. He's a self-appointed and self-aggrandizing PR spokesperson, not a designer, and definitely not a hacker, and he overstates and over-idealizes the Unix way, as well as his own skills and contributions. Plus he's an insufferable, unrepentant racist bigot.
Don't let historical accident become sacred design. Don’t confuse an ancient workaround with elegant philosophy. We can, and should, do better.
Philosophies need scrutiny, not reverence.
Tools should evolve, not stagnate.
And sometimes, yelling at clouds stirs the winds of change.
>In a 1981 article entitled "The truth about Unix: The user interface is horrid" published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering, he focused on how end-users comprehend and form a personal cognitive model of systems—or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy.
Donald A. Norman: The truth about Unix: The user interface is horrid:
>In the podcast On the Metal, game developer Jonathan Blow criticised UNIX philosophy as being outdated. He argued that tying together modular tools results in very inefficient programs. He says that UNIX philosophy suffers from similar problems to microservices: without overall supervision, big architectures end up ineffective and inefficient.
>Well, the Unix philosophy, for example, has been inherited by Windows to some degree even though it's a different operating system, right? The Unix philosophy of having all these small programs that you put together into, like, waves, I think is wrong. It's wrong for today, and it was also picked up by Plan 9 as well, and so -
>It's microservices; microservices are an expression of the Unix philosophy. So the Unix philosophy, I've got a complicated relationship with it. Jess, I imagine you do too, where it's like, I love it, I love a pipeline, I love it when I want to do something that is ad hoc, that is not designed to be permanent, because it allows me - and you were getting into this earlier about Rust for video games and why maybe it's not a fit, in terms of that ability to prototype quickly - the Unix philosophy is great for ad hoc prototyping.
>[...] All this Unix stuff is sort of the same thing, except instead of libraries or crates you just have programs, and then you have your other program that calls out to the other programs and pipes them around, which is as far from strongly typed as you can get. It's like your data coming in a stream on a pipe. Other things about Unix that seemed cool - well, the last point there is just to say we've got two levels of redundancy that are doing the same thing. Why? Get rid of that. Do the one that works, and then if you want a looser version of that, maybe you can have a version of the language that just doesn't type check and use that for your crappy scripts. There it is.
>[...] It went too far. That's levels of redundancy where one of the levels is not very sound, but adds a great deal of complexity. Maybe we should put those together. Another thing about Unix - this is maybe getting more picky - but one of the cool philosophical things was, like, file descriptors: hey, this thing could be a file on disk or I could be talking over the network, isn't it so totally badass that those are both the same thing? In a nerd kind of way, sure, that's great, but actually, when I'm writing software, I need to know whether I'm talking over the network or to a file. I'm going to do very different things in both of those cases. I would actually like them to be different things, because I want to know what things I could do to one that I'm not allowed to do to another, and so forth.
>Yes, and I am of such mixed mind. Because it's like, it is a powerful abstraction when it works, and when it breaks, it breaks badly.
Upstream, virt-v2v supports conversions from VMware to either oVirt (RHV) or KubeVirt (OSV), and we don't plan to drop the oVirt support any time soon.
However, Red Hat now only supports conversions to OSV, since RHV was (sadly) deprecated.
The biggest problem is that oVirt itself has not proven to be a very sustainable open source project. If oVirt dies, we'll likely remove support in v2v. (There's a great start-up opportunity here, for a dull but money-making company that productizes oVirt again.)
To move VMs from RHV to OSV you can just copy the disk image, since the VMs should already have virtio drivers and the qemu guest agent, and be able to boot on any qemu/KVM-based platform. I believe there's some automation for that, but it doesn't involve virt-v2v.
In my last role as a director of engineering at a startup, I found that a project `flake.nix` file (coupled with simply asking people to use https://determinate.systems/posts/determinate-nix-installer/ to install Nix) led to the fastest "new-hire-to-able-to-contribute" time of anything I've seen.
Unfortunately, after a few hires (hand-picked by me), this is what happened:
1) People didn't want to learn Nix, nor did they want to ask me how to make something work with Nix, nor did they tell me they didn't want to learn Nix. In essence, I told them to set the project up with it, which they'd do (and which would be successful, at least initially), but I forgot that I also had to sell them on it. In one case, a developer spent all weekend (of HIS time) uninstalling Nix and making things work using the "usual crap" (as I would call it), all because of an issue I could have fixed in probably 5 minutes if he had just reached out to me (which he did not, to my chagrin). The first time I heard their true feelings on it was when I pushed back on this, because I would have gladly helped... I've mentioned this on various Slacks to get feedback, and people have basically said "you either insist on it and say it's the only supported developer-environment-defining framework, or you will lose control over it" /shrug
2) Developers really like to have control over their own machines (but I failed to assume they'd also want this control over the project dependencies, since, after all, I was the one who decided to control mine with the flake.nix in the first place!)
3) At a startup, execution is everything and time is possibly too short (especially if you have kids) to learn new things that aren't simple, even if better... that unfortunately may include Nix.
4) Nix would also be perfect for deployments... except that there is no (to my knowledge) general-purpose, broadly-accepted way to deploy via Nix, except to convert it to a Docker image and deploy that, which (almost) defeats most of the purpose of Nix.
I still believe in Nix but actually trying to use it to "perfectly control" a team's project dependencies (which I will insist it does do, pretty much, better than anything else) has been a mixed bag. And I will still insist that for every 5 minutes spent wrestling with Nix trying to get it to do what you need it to do, you are saving at least an order of magnitude more time spent debugging non-deterministic dependency issues that (as it turns out) were only "accidentally" working in the first place.
In the USA a startup's market is, essentially, global minus China. In the UK the market is, essentially, the UK.
In the recent past it might have been argued that the market was the EU - but the reality is that the EU's market for services is fractured. The USA has the capital and power to enforce its companies' access to markets everywhere (apart from China); the rest of the world does not.
You should play A Short Hike, especially if you have a Steam Deck.
Other great or lovely short games:
Firewatch
SteamWorld Dig 2 isn't as short, but you can play it in 10-minute chunks and it's cheerful and fun
Backbone (now called Tails Noir, I think) is weird but worthwhile. It starts as a cheerful, beautiful detective mystery and ends as a dark, beautiful existential story.
Hades is also great for 10-minute bursts.
Amanita Design games: Botanicula, Machinarium, Samorost
Years of being bullshitted have taught me to instantly distrust anyone who tells me how many things they do per day. Jobs or customers per day is something to tell your banker, or investors. For tech people it’s per second, per minute, maybe per hour; otherwise it’s self-aggrandizement.
A million requests a day sounds really impressive, but it’s about 12 req/s (1,000,000 / 86,400 ≈ 11.6), which is not a lot. I had a project that needed 100 req/s ages ago. That was considered a reasonably complex problem but not world class, and only because C10k was still an open problem. Now you could do that with a single 8xlarge. You don’t even need a cluster.
10k tasks a day is about 7 per minute (10,000 / 1,440). You could do that with Jenkins.
I think this argument is seductive but wrong because it ignores an invisible elephant in the room: funding.
Decentralization is a business problem, not a technical problem. Engineers tend not to see this because we're engineers and so we see technical problems first.
Usenet had no economic model. All the problems you list are solvable if there were funding available to solve them.
Free volunteer developer work tends to stop at the level of polish with which developers are comfortable, which is usually command line interaction and fairly manual processes. Developers generally have to be paid to develop new features and polish those features for the general audience, which is why there are precious few open source systems used by anyone other than developers.
Those that do exist tend to be subsidized by huge companies for the purpose of "commoditizing your complements" or as tools to herd users into an ecosystem that has upsell opportunities built into it. Examples: Chrome, any open source client for a SaaS service, etc.
Non-profits can fund development to some extent, but the truth is that polished, feature-rich, easy-to-use software is extraordinarily expensive to produce. A system that a developer can create in their spare time might cost millions to render usable to non-developers. Computers are actually very hard to use. We just don't see this because we're accustomed to it. Making them easy to use is a gigantic undertaking, often far more difficult and complex than making something work at the algorithmic level.
Centralized systems with built-in economic models like SaaS or commercial software tend to triumph because they can fund the polish necessary to reach a wider audience. A wider audience means exponentially larger network effects. See Metcalfe's Law.
Cryptocurrency could have offered an alternative model but failed for entirely different reasons: perverse incentives that attract scammers. In crypto by far the most profitable thing to do is build a fake project that can appear just credible enough to attract retail buyers onto whom you can dump your tokens. There is no structural incentive to stick with a project and really develop it because all the money is made up front through the initial offering. This also ruins the ecosystem because "the bad chases away the good." Scammers make legitimate people not want to go anywhere near crypto, transforming the whole ecosystem into a "bad neighborhood."
The answer to this is easier (and harder) than you might think: just don't say anything.
You can get away with quite a bit just by being silent, and for longer than you'd think. A big way that people get away with things for so long is just by not answering questions. Someone says, "Is this [illegal thing] yours" and you say nothing. Now you've got to burn hours and dollars trying to prove someone owns something so that you can go after them.
You'll find domains, web hosts, countries, and employees who are all on board with the same philosophy. When everything requires a subpoena at the highest level to move forward, it can easily take years for anything to happen at all. Some countries are known for having slow legal systems. Stack jurisdictions with slow court systems and you can start with an 18-month window before anything can happen.
You've got a domain in Tonga registered to a company in another country, owned by a large company in another country, owned by a trust in a third country. Often these are small countries with limited resources and archaic or corrupt bureaucracies. And where is it hosted? That's probably another game of connect-the-dots. And the site can change hands, and then you have to start all over again. Are you going to refocus on the new owner, or are you going to spend even more resources trying to track down the former owner?
And any of these entities may lead to nothing more than a mule, fake person, or dead person. Sure, it's someone's fault for having inaccurate records—but who? How long has this been going on? Did they know? Was it intentional? It shouldn't be like this, but it is… what do you do now? Are you going to go after the recordkeeper too?
You can do illegal shit for years or even decades if you just say nothing and respond to no one.
There's a "rachet effect" - BigCorp needs a new CEO, the committee looks for a replacement. Are they looking for an "average" CEO or an above average CEO? Compensation gets decided by a separate committee who use consultants who want to get re-hired. Buffett wrote about this in his 2005 letter to shareholders:
"Too often, executive compensation in the U.S. is ridiculously out of line with performance. That won’t change, moreover, because the deck is stacked against investors when it comes to the CEO’s pay. The upshot is that a mediocre-or-worse CEO – aided by his handpicked VP of human relations and a consultant from the ever-accommodating firm of Ratchet, Ratchet and Bingo – all too often receives gobs of money from an ill-designed compensation arrangement.
Take, for instance, ten year, fixed-price options (and who wouldn’t?). If Fred Futile, CEO of Stagnant, Inc., receives a bundle of these – let’s say enough to give him an option on 1% of the company – his self-interest is clear: He should skip dividends entirely and instead use all of the company’s earnings to repurchase stock.
Let’s assume that under Fred’s leadership Stagnant lives up to its name. In each of the ten years after the option grant, it earns $1 billion on $10 billion of net worth, which initially comes to $10 per share on the 100 million shares then outstanding. Fred eschews dividends and regularly uses all earnings to repurchase shares. If the stock constantly sells at ten times earnings per share, it will have appreciated 158% by the end of the option period. That’s because repurchases would reduce the number of shares to 38.7 million by that time, and earnings per share would thereby increase to $25.80. Simply by withholding earnings from owners, Fred gets very rich, making a cool $158 million, despite the business itself improving not at all. Astonishingly, Fred could have made more than $100 million if Stagnant’s earnings had declined by 20% during the ten-year period.
Fred can also get a splendid result for himself by paying no dividends and deploying the earnings he withholds from shareholders into a variety of disappointing projects and acquisitions. Even if these initiatives deliver a paltry 5% return, Fred will still make a bundle. Specifically – with Stagnant’s p/e ratio remaining unchanged at ten – Fred’s option will deliver him $63 million. Meanwhile, his shareholders will wonder what happened to the “alignment of interests.”"
Recognize that the work life is not the ends, but the means. You're selling your time in exchange for money which then allows you to pursue your personal goals.
Also, enter a state of mind where you watch office politics from the sidelines without getting personally invested in it. Maintain a metaphorical "strategic popcorn reserve".
The moment your boss has a boss, it's just plain easy to hide.
The most important thing to understand about big people structures is that everything is a 'cost center'. Expenses aggregate at your boss; it's irrelevant who is consuming how much within a team. For example, Boaty McBoatyFace could have negotiated a big bonus from his boss, Scrooge McDuck. Scrooge could have 10 people reporting to him, but Boaty's bonus is effectively an expense divided by 10 across Scrooge's team, and appears as an expense per person to Scrooge's boss.
More precisely, it's like:
SELECT SUM(person_expense)
FROM expense_table
GROUP BY manager_name;
Sure, someone could run a query and see that it was not 9 people who ballooned Scrooge's expenses, but only one employee called Boaty. But almost always no one does that (because people who deal with expenses interact through dashboards, not SQL queries); the drill-down sketched below is all it would take.
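For illustration, the drill-down would be something like this (hypothetical schema, matching the made-up expense_table above; person_name is invented for the sketch):
SELECT person_name, SUM(person_expense) AS total_expense
FROM expense_table
WHERE manager_name = 'Scrooge McDuck'
GROUP BY person_name
ORDER BY total_expense DESC;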
I learned this first hand from my ex-manager. Also, most organizations are likely to run queries along these lines:
SELECT SUM(expense)
FROM expense_table
GROUP BY expense_category;
expense_category being things like lunch, project outing, education, etc. Then companies decide to cut down on budgets related to that category. But that targets the common category alone.
Say an expense category was 'bonus' or 'stock grant'. It's very common in most orgs for 2-3 people in a team to eat the whole team's budget and yet be totally invisible. Better still, it looks like the whole team finished the budget.
I'm unlikely to write a book, but here are a few more tidbits that come to mind.
Re the above -- I don't mean to imply that any of this is malicious or even conscious on anyone's part. I suspect it is for a few people, but I bet most people could pass a lie detector test saying that they care about their OKRs and the OKRs of their reports. They really, really believe it. But they don't act on it. Our brains are really good at fooling us! I used to think that corporate politics is a consequence of malevolent actors. That might be true to some degree, but mostly politics just arises. People overtly profess whatever they need to overtly profess, and then go on to covertly follow emergent incentives. Lots of misunderstandings happen that way -- if you confront them about a violation of an agreement (say, during performance reviews), they'll be genuinely surprised and will invent really good reasons for everything (other than the obvious one, of course). It's basically watching Elephant In The Brain[1] play out right in front of your eyes.
Every manager wants to grow their team so they can split it into multiple teams so they can say they ran a group.
When there is a lot of money involved, people self-select into your company who view their jobs as basically to extract as much money as possible. This is especially true at the higher rungs. VP of marketing? Nope, professional money extractor. VP of engineering? Nope, professional money extractor too. You might think -- don't hire them. You can't! It doesn't matter how good the founders are; these people have spent their entire lifetimes perfecting their veneer. At that level they're the best in the world at it. They'll self-select some people who will slip past the founders' psychology. You might think -- fire them. Not so easy! They're good at embedding themselves into the org, they're good at slipping past the founders' radar, and they're high up, so half their job is recruiting. They'll have dozens of cronies running around your company within a month or two.
From the founders' perspective the org is basically an overactive genie. It will do what you say, but not what you mean. Want to increase sales in two quarters? No problem, sales increased. Oh, and we also subtly destroyed our customers' trust. Once the stakes are high, founders basically have to treat their org as an adversarial agent. You might think -- but a good founder will notice! Doesn't matter how good you are -- you've selected world-class politicians who are good at getting past your exact psychological makeup. Anthropic principle!
There's lots of stuff like this that you'd never think of in a million years, but is super-obvious once you've experienced it. And amazingly, in spite of all of this (or maybe because of it?) everything still works!
I’ve been using Emacs for ~15 years now, but for the last few I haven’t recommended it.
I love it, it’s great, and like many others, I tried to move away from it, but there was always something I couldn’t do that I KNEW I could do in Emacs, and it frustrated me so much I kept going back.
But I don’t think it brings a lot of added value. There are many very powerful IDEs and editors which offer great UX and feature discoverability out of the box. Hell, if the IntelliJ toolset allowed me to customize it more deeply with Lua/Lisp/JS/whatever, I’d probably switch in a jiffy.
I compare Emacs to vinyl records or paper books. It requires investment, in many cases it is worse than the competition, and it requires more energy just to be on par. But it is absolutely lovable. The vinyl comparison: records are expensive, heavy, and require a lot of maintenance, but for a specific type of person, their heart skips a beat when they take one from the sleeve.
That’s why people are constantly talking about their Emacs. Same with vim or nvim. I rarely hear people talking with excitement about WebStorm or VS Code.
So yeah, if you’re not into it, just keep in mind that, like some freak who spends their weekends polishing the rims of a dream-come-true 1959 Fiat 500, some of us spend our time with Emacs.
Don’t get bullied into it, don’t get FOMO about it, but please don’t spoil our fun.
Upper management – employee productivity has gone way down in the last couple of years.
Employees – yikes, sounds like we should do something about that. Can you tell us how you are measuring productivity? Is shipping velocity slower? Is our revenue lower than projected? Maybe we need to adjust priorities or roadmaps? Let's come up with a plan to make sensible product and process changes that will help us better hit our targets.
Management – ...
Employees – can we even say for sure that productivity is down?
Management – we can, trust us.
Employees – ok, what can we do about it?
Management – 10% of you are fired.
Employees – that will just make the remaining people less productive.
Management – and everyone has to come in to the office 3 days a week.
Employees – but we were hired as fully remote. Most of us don't even live near an office. What will this accomplish?
Management – this will fix all our problems, trust us.
Employees – but what problems are we trying to fix?
The worst part of this is that a year later these companies will magically declare that all employees are 2.37x more productive now, and WFH was always a mistake. The corporate world/media will eat it up, and so office culture will get even more entrenched.
> I can write C that frees memory properly...that basically doesn't suffer from memory corruption...I can do that, because I'm controlling heaven and earth in my software. It makes it very hard to compose software. Because even if you and I both know how to write memory safe C, it's very hard for us to have an interface boundary where we can agree about who does what.
There's one thing that worries me about EVs: local pollution.
So, ICE cars release CO2, plus local pollutants (NO, CO, etc.), particulate matter, that kind of thing.
EVs obviously don't directly release CO2 nor do they release those local pollutants.
However, EVs are on average much heavier (probably 200-400 kg heavier), which means they probably need bigger tires and wear them out faster. They probably also wear roads faster.
For local pollution, is the extra tire and road wear equal to or worse than an ICE car's tire and road wear plus its exhaust emissions?
You actually know something about the finance industry, not sure this is the thread for you ;) You're absolutely right about the core problem.
I worked for several years on an 'enterprise blockchain' system. We often called it distributed ledger technology because our platform didn't actually use chains of blocks or proof of work, and it didn't have a token or anything like that. It was essentially a type of database but it had a lot of ideas from Bitcoin in it and was blockchainy enough that customers accepted it as such.
I have to admit, at the very start I was skeptical about why so many large companies seemed to want this stuff. Like Tim, I also spent a lot of time talking to staff at these large institutions to understand where they were coming from. Unlike him I didn't work for AWS which is pretty much the exemplar of centralised infrastructure, so maybe it was easier to pick up on some of the more subtle issues involved.
There are a few things to understand about businesses that decided they wanted blockchain:
1. They have complicated inter-firm communication and synchronization needs.
2. They solve those needs today by creating centralized trusted intermediaries (like CLS). This comes with a world of pain and problems, like the obvious "too big to fail" issue that was exposed in 2008, but there are also a lot of less obvious problems, like these institutions immediately becoming stagnant rent seekers who don't innovate.
3. They exist in markets that lack obviously dominant players who can push things forward. Many people in the tech world don't understand this because we rely so heavily on a handful of ultra-profitable, ultra-huge tech firms that spend lots of treasure on giving out freebies and standards setting, but that's abnormal.
So they heard about how Bitcoin can synchronize different companies' views of a real financial object, like a ledger, without a SWIFT-like organization sitting in the middle, and thought, "yes, that sounds like what we need". And they're kind of right, as long as you think in terms of business problems rather than technology!
"Blockchain" is a concept that works for them in the purely abstract social sense because it acts as a neutral rallying point. If one company in a market observes that maybe having a giant specialized too-big-to-fail clearing house like CLS isn't the ideal way to solve atomic transactions, and they go to their partners saying "hey, let's use this cool thing we designed", the answer will always be no, because their partners will say: why should we empower our competitor? Blockchain as a concept is owned by nobody, and the core idea is that it empowers nobody, so it acted as an enabler for conversations that would otherwise never have happened. Whether the final result actually uses proof of work or even has blocks at all isn't so relevant, except in the sense that business owners don't want to get ripped off by people lying to them about what they built.
Tim Bray should really understand this dynamic better than most people because he created XML, which crops up in the enterprise space all over the place, often in ways that aren't really appropriate. Something like protobufs would have been a far better fit but they use XML. Why is that? Well, XML went through a massive hype wave ~20 years ago thanks to the W3C relentlessly pushing this vapourware "semantic web" concept, so for a brief period XML was the future of everything. Again, this created a socially acceptable rallying point at which complex inter-firm business problems could be solved in a relatively decentralized way. The tech got used regardless of merit simply because it was something everyone could agree on and unlike ASN.1 the protocols/tooling was free.
Back to blockchain. The platform I designed for this use case (Corda) was a competitor of the DA platform used in ASX and has done relatively well in the market: it reached number 1 by number of projects using it and, unlike the ASX case, has real deployments that are actually decentralized. Over 90% of Italian banks are using it for inter-bank reconciliation! [1] There are other projects that use it too which seem to be working (I'm not involved in any of them directly and haven't worked on Corda for several years now). You never hear about them on HN because they're too Starship Enterprise to be interesting to the crowd here, but the idea that there are no working blockchain projects isn't actually true.
We managed this partly by walking the tightrope between what people said they wanted (blockchain!) and what they were telling us they actually wanted in customer interviews (what we came to call distributed ledger technology). There was definitely a fair bit of overlap but not enough to just throw Ethereum at a problem and call it solved. What they really needed was a combination of better inter-firm messaging, signing/cryptography, atomic financial transactions without a CLS-style intermediary that actually takes custody of the assets, a robust identity framework, good developer support and training materials etc. Some of this can be found in blockchain related research like BFT algos, others were somewhat solved already, but blending them together into a coherent platform was hard.
There's lots more that could be said about this - some customer projects failed, but the reasons ran the gamut from tech to social/political/business reasons. It wasn't as simple as "blockchain is a scam", which is what Bray seems to be trying to imply here. There actually is a there, there. It's just really, really hard to solve these problems without a hype wave to coordinate and synchronize intent.