┌──────────────────────────────────────────────────────────────────────────┐
│ how-to-make-a-damn-website.html                                          │
├──────────────────────────────────────────────────────────────────────────┤
│ <title>How to Make a Damn Website</title>                                │
│ <h1>How to Make a Damn Website</h1>                                      │
│                                                                          │
│                                                                          │
│ <p>A lot of people want to make a website but don’t know where to start  │
│ or they get stuck.</p>                                                   │
└──────────────────────────────────────────────────────────────────────────┘
HTML is very forgiving! You can start really simple and work your way up to more complexity when you need it.
Web browsers are indeed forgiving when it comes to incomplete HTML. Some time ago, I did a small experiment to see what minimal HTML is required to display a simple 'Hello' page while adhering to the specification, passing HTML Tidy validation and also satisfying the Nu HTML Checker. As far as I can tell, it is this:
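(A sketch of such a page; judging from the reply below it had an html tag with a language, a meta charset, a title, and a body — the exact lang and charset values here are assumptions:)

    <!DOCTYPE html>
    <html lang="en">
    <meta charset="utf-8">
    <title>Hello</title>
    <body>Hello</body>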
The body tag is unnecessary; Tidy might complain, but that is not in the spec. The meta tag is generally unnecessary (the content encoding should be set by the server in the headers, since it applies to more than just HTML). The html tag is unnecessary if you do not want to declare the language of the document (omitting the language declaration generally only yields a warning).
So I guess the smallest without errors should be:

    <!DOCTYPE html><title>a</title>
And the smallest without errors or warnings should be:

    <!DOCTYPE html><html lang><title>a</title>
And then any content that is not links, scripts, meta tags, etc. will automatically end up within a body, like:
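    <!DOCTYPE html><html lang><title>a</title>
    <p>hello

(The p element lands inside an implied body, which the parser opens automatically at the first piece of flow content.)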
Merging a PR with rebase doesn't lose provenance. You can just keep all the commits in the PR branch. But even if you squash the branch into a single commit and merge (which these tools automate and many people do), it still doesn't lose provenance. The provenance is the PR itself. The PR is connected to a work item in the ticketing system. The git history preserves all the relevant info.
No, the original base is in the commit history. It's just not relevant any more after rebase. It's like your individual keystrokes before a commit are not relevant any more after a commit. They're not lost provenance.
I checked with Gemini 3 Fast and it provided instructions on how to set up a Dev Container or VM. It recommended a Dev Container and gave step-by-step instructions. It also mentioned VM options like VirtualBox and VMware and recommended best practices.
This is exactly what I would have expected from an expert. Is this not what you are getting?
My broader question is: if someone is asking for instructions for setting up a local agent system, wouldn't it be fair to assume that they should try using an LLM to get instructions? Can't we assume that they are already bought in to the viewpoint that LLMs are useful?
The LLM will comment on the average case. When we ask a person for a favourite tool, we expect anecdotes about their own experience: "I liked x, but when I tried to do y, it gave me z issues because y is an unusual requirement."
When the question is asked on an open forum, we expect to get n such answers, and sometimes we'll recognise our own needs in one or two of them that wouldn't be covered by the median case.
I think you're focusing too much on the word 'favourite' and not enough on the fact that they didn't actually ask for a favourite tool. They asked for a favourite how-to for using the suggested options, a Dev Container or a VM. I think before asking this question, if a person is (demonstrably in this case) into LLMs, it should be reasonable for them to ask an LLM first. The options are already given. It's not difficult to form a prompt that can make a reasonable LLM give a reasonable answer.
There aren't that many ways to run a Dev Container or VM. Everyone is not special and different, just follow the recommended and common security best practices.
- Spent fuel is a solved problem, we just store it securely
- Who can be relied upon: who do you rely upon to run your drinking water?
- Failure modes of accidents: have been extensively studied and essentially designed out
- Multiple catastrophic failures: sounds bad until you realize that you can name only two:
1. Chernobyl: old flawed reactor design, basically impossible today, a few unfortunate deaths among first responders in the cleanup, that's it
2. Fukushima: no radiation deaths. You would get a higher dose of radiation flying to Japan to visit Fukushima than from drinking the irradiated leaked water there.
> upwards of $1 trillion if not more.
Where are you getting this number? According to https://cnic.jp/english/?p=6193 it was estimated at JPY 21.5 trillion (roughly USD 150 to 190 billion).
> Spent fuel is a solved problem, we just store it securely
This is simply untrue. Depending on the type and enrichment of the fuel it will need to be actively cooled for some period, possibly decades. After that you can bury it. You need facilities for all of this. You need personnel (done by the NRC currently) to transport and install new fuel, remove old fuel and transport it to suitable sites as well as manage those sites. Before they even make it to storage sites they'll typically be stored onsite or in the reactor for years.
> Who can be relied upon: who do you rely upon to run your drinking water?
Given the current administration, almost nobody. The state of drinking water in places like Flint, MI is a national disgrace. The continued existence of lead pipes that leach lead into drinking water in many places is a national disgrace. The current administration gutting the EPA and engineering the Supreme Court to overturn things like the Clean Air Act and the Clean Water Act are just the cherry on top.
A significant ramp up of nuclear power would necessitate a commensurate ramp up of the NRC in all these capacities.
> Failure modes of accidents: have been extensively studied and essentially designed out
Like I said, hand waved away.
> Where are you getting this number?
Multiple sources [1][2]. Fukushima requires constantly pumping water to cool the core. That water needs to be stored (in thousands of tanks onsite), then processed, and ultimately released back into the ocean, which is itself controversial. Removing the core requires inventing a bunch of technologies that don't exist yet. The decommissioning process itself is something most of us won't live to see the end of [3].
That's $1 trillion and a century for one nuclear plant. Pro-nuclear people will point to the death figure because it suits their argument. It has economically devastated that region, however.
And as for Chernobyl, billions of euros were spent building a sarcophagus for the plant, only to have the integrity of that shield destroyed by a Russian drone.
The issue with spent fuel has to do with the long term (essentially permanent) storage part and is purely political. It's a solved problem except for getting approval for the solution.
The other fuel issues you mention are already dealt with today as a matter of course. It's just the final part that remains up in the air.
You are the one hand waving about failure modes. As with aircraft, as failures have happened we've learned from them. New designs aren't vulnerable to the same things old ones were. All the mishaps have happened with old designs.
Personally I think the anti-nuclear FUD that the climate activists push is unfortunate. We would likely have been close to carbon neutral by now if we'd started building it out in the late 90s.
That said, I'm inclined to agree that solar might be a better option at this point in environments that are suited to it. The batteries still aren't entirely solved but seem to be getting close. In particular, the research into seasonal storage using iron ore looks quite promising to me.
Yes, because others were mostly not affected by the Fukushima disaster despite being in the impact area. Why? Because they took safety precautions. Onagawa was closer to the epicentre, but they built on a high embankment and did not flood or lose power.
Anti-nuclear people conveniently ignore, because it suits their argument, that Japan is restarting their nuclear energy program. They finally understood that there's no other viable option for energy security, price, and achieving decarbonization goals.
> The combination has had a toll on Japanese automotive (and other) exports. Barring Fukushima’s impacts, one would assume a return to pre-2008 fiscal meltdown exports by now. But basically they’re static. That’s in the range of $200 billion in lost exports just for the automotive industry.
>
> It’s likely fair to attribute $20 to $50 billion of that to irrational fear of radiation.
Like, are you serious? This is the most bizarro accounting I've ever seen.
> ...that’s about $100 billion in extra fuel costs.
And now it's counting as part of the cost of Fukushima the fossil fuels needed to replace it. Even more wacky accounting.
> another $22 billion for unexpected health costs due to burning extra fossil fuels.
It continues to get even more wacky, if that was possible, by attributing this cost to the Fukushima disaster. These are costs that would be avoided with a strong nuclear electricity generation program! These are arguments in favour of nuclear! It's not cost-effective for Japan to cover their land mass and offshore areas with solar and wind arrays! They have regular earthquakes and typhoons which would knock these vast arrays offline and take massive amounts of time and money to get back online!
You said: 'Fukushima will likely take a century to clean up and cost upwards of $1 trillion if not more.' The sources you've given either don't contain that number or, if they do, they include bogus figures that actually make the case for nuclear.
Really just the grammatical correctness of the generated text (pronouns and possessives). Early versions produced awkward sentences when the model guessed incorrectly. Making it optional to input is a good point I hadn't thought about.
Thanks for pointing out a mitigation. I'm confused though. How does "htmx sends a request header HX-Request: true with every request." happen without JavaScript? And does this imply you need a backend server that understands whatever this header is for the graceful fallback? I.e., it wouldn't work with just Nginx...
> How does "htmx sends a request header HX-Request: true with every request." happen without javascript?
It doesn't. If JavaScript is disabled, this header is not sent.
> you need a backend server that understands whatever this header is for the graceful fallback
Yes, as I mentioned in my blog post linked earlier: 'the backend server can use a fairly simple heuristic to figure out that it should respond with a fragment:...The request has a header HX-Request...The request does not have a header HX-History-Restore-Request...If these two conditions are fulfilled, it can respond with a fragment. Otherwise, it can respond with a full page ie <!DOCTYPE html>... and so on.'
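To sketch what those two response shapes look like (the wrapper id here is made up for illustration):

    <!-- fragment response: HX-Request present, no HX-History-Restore-Request -->
    <div id="content">
      <p>Updated content goes here.</p>
    </div>

    <!-- full-page response: everything else -->
    <!DOCTYPE html>
    <html lang="en">
    <title>Some page</title>
    <body>
      <div id="content">
        <p>Updated content goes here.</p>
      </div>
    </body>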
But...htmx is not really meant to work with just Nginx or other static web servers, it is meant to work with a BFF (backend-for-frontend) that specifically knows how to serve and handle the app in question.
From the criteria you have mentioned so far:
- Works without JavaScript
- Works with a static web server like Nginx
I can only conclude that you are talking about serving static sites with no dynamic interactivity. That's not really what htmx is about. Htmx is more like a simplified way to do SPA-like things.
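For example, a single attribute-driven interaction (the endpoint and target id are hypothetical):

    <button hx-get="/notifications"
            hx-target="#inbox"
            hx-swap="innerHTML">
      Check notifications
    </button>
    <div id="inbox"></div>

Clicking the button makes htmx issue an AJAX GET and swap the returned HTML into the div — exactly the SPA-like behaviour that needs JavaScript to run.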
> Htmx is more like a simplified way to do SPA-like things.
Okay. Then we agree. "Htmx is power tools for using Javascript to alter HTML".
With the implicit premise of starting from JavaScript, you see that as "Power Tools for HTML". Without this premise, I see it as "Power Tools for JavaScript".
I think this distinction is important not only because I question the premise, but because many people who've talked to me about Htmx are confused about what Htmx is and believe that it works without JavaScript. This is not Htmx's fault, of course, but it could be made clearer by avoiding easily misinterpreted headlines like this.
>htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
I hope you see how the full context can make it sound like it's HTML based and not javascript based, even if, yes, AJAX and WebSockets are JS things.
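For comparison, making such a page degrade gracefully means keeping the plain-HTML path alongside the hx- attributes (a sketch with a hypothetical endpoint):

    <form action="/search" method="get"
          hx-get="/search" hx-target="#results">
      <input type="search" name="q">
      <button>Search</button>
    </form>
    <div id="results"></div>

With JavaScript enabled, htmx intercepts the submit and swaps the response into the div; without it, the browser falls back to the ordinary action/method submission and a full page load.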
Ecosystems have their downsides too. Just a small example: no htmx users were impacted by the React Flight Protocol vulnerabilities. Many htmx users have no-build setups: no npm, no package.json, nothing. We don't have to worry about the security-vulnerability treadmill, or about packages and tools arbitrarily breaking and no longer building as time passes. We just drive the entire webapp from the backend, and it just works.
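A no-build page can be as small as one script tag (the CDN URL and pinned version here are just an example):

    <!DOCTYPE html>
    <html lang="en">
    <title>My app</title>
    <script src="https://unpkg.com/htmx.org@1.9.12"></script>
    <body>
      <button hx-get="/hello" hx-swap="outerHTML">Say hello</button>
    </body>

No npm, no bundler; the backend just returns HTML.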
No, you need less than that! :-)