I think that XP was the only true agile methodology. Agile just got more and more corrupted over the years through stupidity.
Clearly AI programming allows you to quickly close feedback loops. I don't think everything needs a comprehensive set of unit tests though.
But if people can go back and understand the core concept of XP (which again is about feedback loops to me) and take advantage of LLM-based agent systems to create those tight closed feedback loops, then that will be an advance for software engineering.
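To make that concrete, here's a minimal sketch of the kind of loop I mean, in Python. ask_agent is a hypothetical stand-in for whatever LLM/agent call you actually use; the point is just the shape of the loop:

    # Sketch of a closed feedback loop: run the tests, hand any
    # failures to an agent, let it edit the code, run again.
    import subprocess

    def ask_agent(failure_output: str) -> None:
        # Hypothetical placeholder: send failing test output to an
        # LLM agent that edits the source files in place.
        print("agent would patch the code based on:", failure_output)

    for attempt in range(5):
        result = subprocess.run(["pytest", "-x", "-q"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            break  # loop closed: the tests pass
        ask_agent(result.stdout + result.stderr)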
Starting with XP, in shops that did XP quite intensely, has ruined me: I simply can't stomach working in "SCRUM" shops where a whole pile of stuff is taken as "agile" dogma, most of it just ritualized, meaningless bastardizations of things XP pioneered, turned inside out.
I think the ideal scenario is usually two paired programmers using a shared set of AI agents on the same working branch. It's an ideal feedback loop of paired planning, reviewing, building, and testing.
“Unit tests are small tests, each one exercising a little piece of functionality. The units tested are usually individual methods, but sometimes clusters of methods or even whole objects.”
Kent Beck, Extreme Programming Explained: Embrace Change, 2nd Edition (2004)
You're agreeing with me there: "each one exercising a little piece of functionality". In Beck's approach, a test is added at each step for a small addition of functionality, which is then implemented in the project to make the test pass. These days unit tests are more commonly treated as "every method is a unit to be tested completely", often after the implementation is already there.
I also wonder if this is written from a statically-typed perspective. In dynamic-typing land there are so many more stupid little things that can break, things a compiler would otherwise catch for you.
Either that or tracing/logging/debugging, but other than specific niches like parsing (of specific bug repros) I think integration tests are generally a lot more bang for the buck.
Anyway, if you want to go down a related-but-unrelated rabbit hole, J.R. Thompson's lecture on the Space Shuttle Main Engines is a good one. You can probably watch it at higher speed to smooth out the many, many "uh"s (believe me, it's bad):
I think a more accurate version of this is: unit tests were not only per-method but also per-functionality. This was often called BDD (Behavior-Driven Development), e.g. Ruby's Cucumber. Your intuition here is correct though.
I disagree with the "not only". The idea in XP is to write the test first: http://www.extremeprogramming.org/rules/testfirst.html You don't know how many methods/functions (if any) you're going to add to make it pass, so tests are explicitly per-functionality.
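To illustrate (the names are made up), a test-first test in pytest pins down a piece of behavior before any implementation exists, and says nothing about which methods will eventually make it pass:

    # Written first: specifies functionality ("a discount never pushes
    # a price below zero"), not the shape of the implementation.
    def test_discount_never_goes_below_zero():
        assert apply_discount(price=10.0, discount=15.0) == 0.0

    # Written afterwards, to make the test pass. It could just as well
    # have ended up as several functions or a whole class.
    def apply_discount(price: float, discount: float) -> float:
        return max(price - discount, 0.0)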
There are other models that do that, such as photogrammetry models.
But someone could possibly extend the work so it was a few photos rather than one or many. The way you ask the question makes it sound like you think it was a trivial detail they just forgot about.
There is a fundamental misunderstanding about popularity. People think that popularity is directly related to merit or rationality.
Technical things are largely popular for the same reason non-technical things are popular: trends. In other words, they are popular because other people perceive them to be popular. Humans are herd animals.
async is harder, and it's associated with Node.js/JavaScript, which probably makes it uncool for a certain influential Python subculture.
But FastAPI has basically taken over at this point, and I think people should recognize that this means async IS popular in Python.
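For anyone who hasn't seen it, this is roughly the style FastAPI made mainstream (a minimal sketch; the route and the downstream URL are invented):

    # Minimal async FastAPI sketch: `async def` endpoints let one
    # worker interleave many slow I/O-bound requests while each one
    # awaits. Route and upstream URL are illustrative only.
    import httpx
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/status")
    async def status():
        async with httpx.AsyncClient() as client:
            upstream = await client.get("https://example.com/health")
        return {"upstream_ok": upstream.status_code == 200}

Run it with something like `uvicorn main:app`.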
Very experienced full stack engineer with a focus on AI agents for the past few years. Have built complex agents for things like: real estate cash flow analysis and presentation generation, medical necessity review, grade school math tutoring and illustrated story generation, sales agents, website generation, interactive avatars, etc. Created the MindRoot comprehensive open source agentic application platform.
Also experienced with Node.js and other web and desktop technologies. I've built numerous types of applications end-to-end: TV news production, auctions, GIS decision support, infrastructure as a service, complex database systems, a 3D retro computer programming platform, etc.
I know this is not really what they mean, but in case you want to give AI a literal VM, I have a service that deploys a VM with an open source AI agent on it that can act as a sysadmin. It also gets its own IP address. https://mindroot.io
True, but this advantage applies strictly to AI inference, and only when using the very low-precision FP4 format with sparse matrices.
When using larger number formats and/or dense matrices, the advantage of Thor diminishes considerably.
Also, the 50 TOPS figure is only for the low-power NPU. When the computation is also distributed over the GPU and CPU, you get much more. So for a balanced comparison one has to divide the Thor value by 4 or more and multiply the Ryzen value by a factor that might be around 3 or even more.
The Ryzen CPU is significantly better, and the GPU is about the same size but runs at a much higher clock frequency, so it should also be faster. For anything except AI inference, a Ryzen Max at half the price will offer much more bang for the buck.
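Spelling out that adjustment with rough numbers (both headline figures and both factors are guesses from this thread, not measurements):

    # Back-of-the-envelope version of the adjustment described above.
    # All four numbers are assumptions, not benchmarks.
    thor_headline_tops = 2000.0  # placeholder sparse-FP4 marketing figure
    ryzen_npu_tops = 50.0        # the NPU-only figure cited above

    thor_effective = thor_headline_tops / 4  # dense / larger formats
    ryzen_effective = ryzen_npu_tops * 3     # NPU + GPU + CPU combined

    print(f"Thor ~{thor_effective:.0f} TOPS vs Ryzen ~{ryzen_effective:.0f} TOPS")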
You are forgetting that Thor has a CPU too. So far we haven't seen how good the Ryzen CPU is at matrix operations, which makes the given factors of 4 and 3 questionable at best.
The best thing would be to run a real test. For robotics, however, it's obvious: Thor, with its camera interfaces, is unbeatable. They are used for (stereo) vision and obstacle avoidance. Add to that the software stack.
Seems like a great concept. Hope they are commercially successful.
It reminded me of another geothermal energy idea: dig about 3 or so miles straight down and harvest the heat that is already there. I guess that's a lot harder than making a dirt pile. But maybe it could become practical if there were enough commercial effort and large-scale manufacturing of the equipment.
Kind of brings it full circle though. Why do that kind of project when you can just harvest actual fuel like oil or gas?
I think this stuff can become practical with more scale, widespread manufacturing of equipment, and development of efficient techniques. But it requires you to do a lot of upfront work based on principle rather than the bottom line.
So anyway, again: great idea, because it eliminates a lot of the challenges and costs that come with "Journey to the Center of the Earth" style concepts.
> another geothermal energy idea: dig about 3 or so miles straight down and harvest the heat that is there already
Deep geothermal ought to work. Deep drilling is hard, but it's been done. Eavor-Deep got down far enough to reach 250°C water. [1] That was back in 2023. Not much new since.
The problem seems to be that when you drill into really hot rock, most drilling techniques run into trouble. Rock becomes plastic and clogs things up. The drilling tools have problems with the heat. Progress continues, slowly.
There are these guys trying to drill with microwaves: [1]
On September 4, they're going to do a public demo and try to drill a 100-meter hole.
The other issue is that the steam loses energy as it's pumped over a long distance. Iceland and a few other locations are big users since they are on fault zones with hot rocks relatively close to the surface. Even if deep drilling is demonstrated, it's unclear if geothermal power will become geographically independent.
Forget climate change; the best reason is national security. Russia's war plunged Europe into gas shortages and price hikes, and many countries would love to not be dependent on them. And since WW2, all war has required vast quantities of oil; maybe drones will reduce that a bit, but you still need to move stuff (and people) around a lot. So you need a reliable source of energy in both peacetime and wartime.
For the US, the best reason is sustainable energy. Gas, oil, and coal are not renewable, so you eventually need to adopt a new form of energy. Just transporting it is problematic, with most communities rejecting pipelines. In the meantime you're polluting your local environment and putting workers at risk. Whereas if your energy plan is largely "the sun shines", "the wind blows", and "dirt holds heat", that is ridiculously more sustainable.
The biggest problem we have is we demand too much energy. AI has made this problem way worse. Nuclear is the only thing that's going to fill the gaping chasm of demand.
Sure, if that's what convinces people to do the right thing, so be it. Though the continuation of that thinking tends to cause behavior that's not very friendly w.r.t. climate or environment.