Nah. It's a transform for XML, and it's super useful when you need it. People such as yourself who claim it's not a great programming language are probably not trying to use it for the right thing.
I was using it to transform DocBook XML into LaTeX, based on an existing tool that used xslt code to do just that (https://sourceforge.net/projects/dblatex/). We had to modify the existing xslt code to allow for enhancements to DocBook that accommodated some linguistic structures (like interlinear text).
So yes, we were using it for exactly the right thing: transforming XML.
I stand by my comment; apparently you don't recognize hyperbole when you see it.
At any rate, there's absolutely no reason for a command in any programming language to require opening and closing XML tags: it's a waste of space, and just gets in the way of understanding the program. If you need to close a command, there are things like {}, semicolons, parens (LISP), dedenting (Python), or even--yes--the 80th column in a punch card.
I wrote a blog post that generated a lot of traffic on HackerNews last year when it was briefly #1 here. My blog was (and still is) hosted on a 9-year-old Dell Latitude E7250 with an Intel Core i5-6300U processor. The server held up fine with ~350 concurrent readers at its peak. It was actually my fiber router that had trouble keeping up. But even though things got a bit slow, everything kept working, without Cloudflare or anything fancy.
Computers are stupid good at serving files over http.
I’ve served (much) greater-than-HN traffic from a machine probably weaker than that mini. A good bit of it dynamic. You just gotta let actual web servers (apache2 in that case) serve real files as much as possible, and use memory cache to keep db load under control.
I’m not even that good. Sites fall over largely because nobody even tried to make them efficient.
I’m reminded of a site I was called in to help rescue during the pandemic. It was a Rails app on Heroku getting much higher traffic (maybe 2-3x) than they were used to. These guys were forced to upgrade to the highest Postgres tier that Heroku offered - which was either $5k or $10k a month, I forget - for not that many concurrent users. Turns out that just hitting a random content page (a GET) triggered so many writes that it overwhelmed the DB once they got that much traffic. They were smart developers too; just nobody ever told them that a very cacheable GET on a resource shouldn’t have blocking activities other than what’s needed, or trigger any high-priority DB writes.
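For illustration, a minimal Django-flavored sketch of that principle (the site in question was Rails; the view, model, cache key, and record_view task here are all hypothetical): serve the GET from cache, and hand any bookkeeping writes to a background worker instead of blocking the request.

    # Hypothetical Django view: a cacheable GET that never blocks on writes.
    from django.core.cache import cache
    from django.http import HttpResponse

    from myapp.models import Article        # hypothetical model
    from myapp.tasks import record_view     # hypothetical background task (e.g. Celery)

    def article_page(request, article_id):
        key = f"article-html:{article_id}"
        html = cache.get(key)
        if html is None:
            article = Article.objects.get(pk=article_id)
            html = article.render_html()       # whatever produces the page body
            cache.set(key, html, timeout=300)  # a short TTL is plenty under load
        # The view-count bump goes to a worker, not the request path.
        record_view.delay(article_id)
        return HttpResponse(html)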
SF is mostly served by AT&T, who abandoned any pretense of upgrading their decrepit copper 20 years ago, and Comcast, whose motto is “whatcha gonna do, go get DSL?”
AT&T has put fiber out in little patches, but only in deals with a guaranteed immediate ROI, so it would mean brand new buildings, where they know everyone will sign up, or deals like my old apartment, where they got their service included in the HOA fee, so 100% adoption rate guaranteed! AT&T loves not competing for business.
Sure, others have been able to painstakingly roll out fiber in some places, but it costs millions of dollars to string fiber on each street and to get it to buildings.
Lived in an older neighborhood in Georgia a couple years back. A new neighborhood across the street had it (AT&T), but we didn't.
Caught an AT&T tech in the field one day, and he claimed that if 8 (or 10—memory's a little fuzzy) people in the neighborhood requested it, they'd bring it in.
I never did test it, but thought it interesting that they'd do it for that low a number. Of course, it may have been because it was already in the area.
Still, may be worth the ask for those who don't already have it.
In the US, it’s not about money or demand. The more entrenched cities (especially in California, for historical and legislative reasons) tend to have a much more difficult time getting fiber installed. It all comes down to bureaucracy and NIMBYism.
Sure, but it took much longer for it to roll out in LA than it should have, and even then (as you pointed out) the furthest they could get was the pole in most cases. FTTH is mostly reserved for the more suburban areas (the Valleys) and the independent cities.
We have fiber in half of SF via Sonic, where there are overhead wires. The other half of SF has its utilities underground, making the economics more difficult.
The same could be said of mail-in paper ballots too, which have seen widespread adoption in the United States starting in 2020, so I don’t think this should be a knock against this system.
You haven't heard people "knocking" the widespread adoption of mail-in paper ballots? They simply offer no protection against vote coercion, which makes them a poor choice in any election of importance. Pretty sure at least one of the two parties has ending mail-in voting as a long-held position.
At the least, this will often result in heads of household voting for their entire families. At the most, it can result in people voting under the supervision of a local gang/militia member.
If anyone is looking for the right terminology to find papers, it's "no-receipt" voting. The holy grail is no-receipt, yet verifiable voting, but it might be mathematically impossible.
How would you prove that you voted how you said you did?
If you took a picture of your ballot, or even if you filmed yourself putting it in the envelope and putting it in the mailbox, there's nothing stopping you from taking it out later, tearing it up, and going to vote differently in person.
Just do it in person. The voter fills out the ballot in front of the buyer, seals and signs the envelope, and hands it to the buyer in exchange for cash. The buyer then puts it in the mail on the voter’s behalf.
The voter could go to a polling place afterwards and attempt to cast a provisional ballot, but my understanding is that this is difficult, varies significantly state to state, and in many cases is not possible, given that mail-in ballots are detached from the voter's identity ahead of Election Day in many states.
First, this is too much trouble and many won't do it; second, you can lie to people and claim you have the means to verify their vote; third, you could require a person to write a code word on the ballot so that you can verify they actually cast that ballot.
The main benefit is the flexibility to seamlessly move this logic from server to client and vice versa without rewriting all of your code.
Purely server-rendered apps tend to have much slower interactions and exhibit weird behaviors, so it is valuable to be able to do some stuff on the client.
This is a straw man. There's no such thing as a "purely server rendered app", and never has been (except maybe in the pre-JS days of the internet). At the end of the day, all webpages are produced by a server, and rendered by a client.
The only reason that server-rendered apps are "slower" is because people don't think about what they're doing, and leap right to 100% client-side rendering. Very few things actually need the latency guarantees of client-side rendering.
Right, exactly. I almost wrote something about that, but stopped myself. It's a very common pattern for people to move everything to an SPA for "latency", but then treat the data channel to the browser as if it's an app-server database connection, which is the same thing, but worse.
Calling this argument a strawman is pretty dismissive and is not a technical argument. There are lots of client-side interactions that do not need to be blocked on data from the server: popup menus, modals, collapsible lists, rich text editing, etc.
Also, think about what actually has to happen when you interact with a server-rendered app vs. a client-rendered app. Once the client app is booted, you don't have to pay network latency, HTML parsing, FOUC, etc. on every interaction, as you would with a server-rendered app.
> Calling this argument a strawman is pretty dismissive and is not a technical argument.
It's quite literally a technical argument, and not "dismissive" in the slightest. My first sentence tells you why it's a strawman: there is no such thing as a "server-rendered" app. The framing of the original comment was black-and-white, when reality is (and has always been) shades of gray. Webapps have, since basically the earliest days of the internet, involved a question of where you draw the line between server-side and client-side rendering. Leaping to SPAs is just as lazy and thoughtless as trying to do everything on the server.
> There are lots of client side interactions that do not need to be blocked on data from the server. Popup menus, modals, collapsible lists, rich text editing etc.
Yes, of course. And if you implement your popups, modals, accordions and whatnot via asynchronous server calls, you are doing it in a silly way that guarantees bad performance.
There's a middle ground. For example: you can trivially render top-level dynamic elements (like menu content) into an SSR page. It doesn't require an SPA, yields imperceptible UX latency, and gives you the ability to do things like SEO without crazy infrastructure.
(Oh, and you don't need to "boot the app" in the browser. That's latency too.)
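A minimal Django-flavored sketch of that middle ground (the view, model, and template names are hypothetical): fetch the dynamic menu content on the server and render it straight into the page, leaving only trivial show/hide behavior to the client.

    # Hypothetical Django view: top-level dynamic content rendered server-side.
    from django.shortcuts import render

    from myapp.models import MenuItem   # hypothetical model backing the menu

    def home(request):
        # The menu is dynamic, but it doesn't need a client-side fetch:
        # render it into the SSR page and let a few lines of plain JS
        # toggle it open and closed.
        menu_items = MenuItem.objects.filter(visible=True).order_by("position")
        return render(request, "home.html", {"menu_items": menu_items})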
There's clearly something else going on here because I am having trouble following this argument.
You pretty clearly laid out what you meant by server rendered in your original comment:
> Were I forced to write a website today, it’d still end up looking similar to a traditional Django server-side rendering system with pages, and deep links, and all that.
Nowhere did I advocate leaping to SPAs for no reason. I simply made the case - and stand by it - that building with the same set of technologies client and server side leads to more flexibility and less code.
Re: the comment about booting the app, I think that is one of the strongest arguments in favor of server rendered apps, which is why I qualified my comment to "interactions", which happen after the page loads.
I lived through the server-rendering-only days of the original Hotmail client, and the php-plus-jquery spaghetti era. They were terrible for developers and terrible for users. Going backwards would be a huge mistake.
> Were I forced to write a website today, it’d still end up looking similar to a traditional Django server-side rendering system with pages, and deep links, and all that.
I didn't write this. You're confusing me with someone else.
Dagster is the data orchestration platform built for productivity. Data engineers all around the world, in every industry, use Dagster to build scalable and maintainable data applications like ETL pipelines, ML training pipelines, data integrations and similar systems.
Data applications are the true core of AI and ML training systems, and our goal is to make Dagster the de facto standard for structuring these systems.
We’re an early stage, well-funded startup team with a proven track record of shipping open source software with global adoption. We put a premium on respectful, clear, and complete communication, and we expect each other to be creative, curious, effective, and empathetic.
Their founder and CEO created The Algorithm at Instagram. The rest of the team is super strong too. Makes a lot of sense for OpenAI to acquire a team at the intersection of AI and consumer products.
Seems pretty obvious they decided to be a product maker. This is them doubling down.
They obviously started out doing API stuff. ChatGPT was some sort of proof-of-concept or whatever, but once it went viral, the obvious pivot was to become a product maker. The margins on ChatGPT+ are way better than on an API, as every ChatGPT clone will tell you. A viral product is really hard to make, never mind make and throw away to focus on lower-margin corporate customers.
They seem to have a great vision now - make a good product (WIP, but rapidly iterating), and sell the underlying models to Microsoft to offer as an API for the residual value they can’t capture directly. This should make it clear that if you’re selling a chatbot built on their APIs, they’re planning to compete with you.
100% Yep. I like OpenAI but I think their peril will be due to lack of vision more than anything else. Other players are catching up. IMHO they should stick to providing AI infrastructure.
That being said, considering Microsoft's investment, the most likely outcome is that OpenAI will move into the consumer business while being the research arm for Azure.
don’t let those darn things trick you into thinking they are people. legally it gets murky, but they aren’t people. it is all just a ploy for emotional connection for the sake of taking advantage of you.
> I think they have to decide if they want to be an API provider or a product-maker.
> Businesses don't tend to do well when they directly compete with their clients.
Yes, it's a great idea, and I have a version that is basically a convolution over the transcript. It works much better than the current version - it can automatically create cohesive chapters and summaries of those chapters - however, it consumes an order of magnitude more ChatGPT API calls, making it uneconomical (for now!)
Thanks for the kind words. I built it on a few cross-country plane rides and now I mostly just leave it alone. The infrastructure and tooling we have these days is so incredible.
Sure. The old one just splits the transcript into 5-minute chunks and summarizes those. The reason this sucks is that each 5-minute chunk could contain multiple topics, or the same topic could be repeated across multiple chunks.
This dumb technique is actually pretty useful for a lot of people though, and has the advantages of being super easy to parallelize and requiring only one pass through the data.
The more advanced technique does a pass through large chunks of the transcript to create a list of chapters in each chunk. Then it combines them into a single canonical chapter list with timestamps (it usually takes a few tries for the model to get it right). Then it does a second pass through the transcript, summarizing the content for each chapter.
The end result is a lot more useful, but is way slower and more expensive.
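Roughly, a sketch of that two-pass idea in Python (the llm() helper is a stand-in for whatever chat-completion API you call; the prompts and chunk size are made up):

    # Sketch of the two-pass chaptering approach described above.
    from typing import Callable, List, Tuple

    def two_pass_chapters(transcript: str, llm: Callable[[str], str],
                          chunk_chars: int = 20_000) -> Tuple[str, List[str]]:
        chunks = [transcript[i:i + chunk_chars]
                  for i in range(0, len(transcript), chunk_chars)]

        # Pass 1: list candidate chapters (title + rough timestamp) per chunk.
        candidates = [
            llm("List the chapters (title and rough timestamp) in this "
                "transcript chunk:\n" + c)
            for c in chunks
        ]

        # Merge into one canonical chapter list. In practice this step may
        # take a few retries before the model gets it right.
        chapters = llm("Merge these chapter lists into one canonical, "
                       "non-overlapping list with timestamps:\n\n"
                       + "\n\n".join(candidates))

        # Pass 2: go back through the transcript and summarize the content
        # that falls under each chapter.
        summaries = [
            llm("Chapter list:\n" + chapters +
                "\n\nSummarize the content of each chapter covered in this "
                "chunk:\n" + c)
            for c in chunks
        ]
        return chapters, summaries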
Dagster is OSS and most folks do local development on their laptop which is quite fast. The speedups in this post are for production deployments (i.e. landing a commit in your master branch) and for branch deployments (the deployments that run on every pull request).
I hope it's straight-line development on HEAD without branches. Continually rebasing what will eventually land is cheaper now than figuring out mega merges later.
- Variable scoping https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
- Function definition and application https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
- Flow control https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
- Loops https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
- Module system https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
Yep, it's a programming language. And not a great one!