
> The actionable tip here is to start with the data. Try to reduce code complexity through stricter types on your interfaces or databases. Spend extra time thinking through the data structures ahead of time.

This is why I love TS over JS. At first it felt like more work up front, more hurdles to jump through. But over time it changed how I approach code: define the data (& their types) first, then write the logic. Type Driven Development!

Coming into TS from JS, it might feel like an unnecessary burden. But years into the codebase, it's so nice to have clear structures being passed around, instead of mystery objects mutated with random props through long processing chains.

Once the mindset shifts to seeing data definition as the new first step, the pain of getting-started friction is replaced by the joy of easy future additions and refactors.
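
To make that concrete, here's a tiny sketch of the workflow (the Invoice/LineItem names are invented for the example):

  // Step 1: pin the data down first.
  type LineItem = { description: string; quantity: number; unitPrice: number };
  type Invoice = { id: string; items: LineItem[]; paid: boolean };

  // Step 2: the logic then follows almost mechanically from the types.
  function totalDue(invoice: Invoice): number {
    if (invoice.paid) return 0;
    return invoice.items.reduce((sum, li) => sum + li.quantity * li.unitPrice, 0);
  }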


(tangential) In theory I like TS. But in practice, unless I'm the one writing it and can KISS, it can quickly turn into an unmaintainable nightmare that nobody understands. TS astronauts, left unchecked, can make extremely complex types using generics, conditionals, and all the esoteric features of TS, resulting in extremely obtuse code.

For example, I doubt anyone could explain this "type" without studying it for several hours:

https://github.com/openapi-ts/openapi-typescript/blob/main/p...

In this case, the "type" is really an entire program.


I must be a part of the problem because reading that type isn't too difficult.

I also think types like this aren't innately problematic when they live in libraries. They should be highly focused and constrained, and they should also be tried, tested, and verified to not get in the way, but they can absolutely be a huge benefit to what we do.

Maybe it's mostly problematic when type astronauts litter the application layer with types which are awful abstractions of business logic, because types are far less intuitive as programs than regular JavaScript or common data structures can be. Just type those in the dumbest way possible rather than wrap the definition of them up into monolithic, unnavigable, nested types and interfaces.

If a library allows me to define valid states related to events which drive a data store or something narrow like this, that's awesome (assuming it's intuitive and out of the way). I like this kind of type-fu. If it's trying to force coworkers to adhere to business logic in unintuitive ways, in a domain that's not unlikely to shift under our feet, that's a huge problem.


> I must be a part of the problem because reading that type isn't too difficult. I also think types like this aren't innately problematic when they live in libraries.

Despite the star count on the repo (which, if you aren't paying attention to the 0.X versioning, might lead you to believe it's a well tested "library" type), that particular type I linked to has a ton of bugs with it that are currently documented in at least half a dozen open issues, some of which are super esoteric to solve:

https://github.com/openapi-ts/openapi-typescript/issues/1778...

In this case ^ the problem was due to "behavioral differences based on the strictNullChecks ... compiler option. When that option is false (the default), undefined is considered to be a subtype of every other type except never"

Maybe I'm old school, but as long as we are using metaprogramming to solve a problem, I'd rather codegen a bunch of dumb types vs. implement a single ultra complex type that achieves the same thing. Dumb types are easy to debug and you won't run into strange language corner cases like when `undefined extends` has different behavior when strict mode is on or off.
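
To illustrate the kind of corner case being described (a generic sketch, not the library's actual type), the same conditional type can resolve differently depending on the strictNullChecks compiler option:

  // With strictNullChecks: false, undefined is assignable to string,
  // so Check resolves to "optional"; with strictNullChecks: true it
  // resolves to "required". Same code, different types.
  type IsOptional<T> = undefined extends T ? "optional" : "required";

  type Check = IsOptional<string>;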

I guess my point is, maybe you find it easy to read, but apparently it's a nightmare to maintain/test otherwise there wouldn't be so many bugs with it:

- https://github.com/openapi-ts/openapi-typescript/issues/1769

- https://github.com/openapi-ts/openapi-typescript/issues/1525

I'm pretty sure I could fairly easily implement `openapi-fetch` by code generating dumb types and it would avoid all of these bugs, and maybe I should as a reference implementation just for comparison purposes in the future for discussions like this.


I'm not trying to say all types in libraries are okay. There are tons of awful ones there, too. One of my favourite libraries actually has some of the worst typing issues I've encountered, and like you're saying, code generation is the perfect solution for the problems they're facing. They actually had a code generator for a previous version of the library, but significant API changes in the latest version caused the code generator to break.

It's imperative that the crazy astro types actually are good; otherwise they really are just going to get in the way. I think my point about libraries though is that if they're hyper-focused on solving a single problem, there's a better chance that the typing will stay relevant, stable, and improve over time. In an application this seems to be less true, leading to all kinds of clever and/or verbose type definitions trying to solve this and one million other problems at once. It's brutal.

After looking closer at that type you linked to, there's this one embedded type called `MaybeOptionalInit`, haha. MaybeOptional. I guess it's optional, sure, and maybe it won't be provided at all (hence the `never` condition), but... Why is that MaybeOptional and not just Optional? That is a bit weird. I see what's happening but I'm not crazy about how it's implemented.


> I'm pretty sure I could fairly easily implement `openapi-fetch` by code generating dumb types and it would avoid all of these bugs, and maybe I should as a reference implementation just for comparison purposes in the future for discussions like this.

FFR: I ended up doing just that: https://github.com/RPGillespie6/typed-fetch


> TS astronauts, left unchecked, can make extremely complex types using generics, conditionals, and all the esoteric features of TS, resulting in extremely obtuse code.

Disclaimer: I guess I'm a fellow TS astronaut.

Most of the time TS astronauts will stick to your methodology of keeping things simple. Everyone likes simple, I think.

However, the type-astronautics is necessary once you need to type some JS with really strange calling conventions/contracts (think complex config object inputs, or heterogeneous outputs that end up with _not quite the same_ call signatures, using large object trees for control flow, etc.; basically really shit code that arises from JS being a very dynamic language) without modifying the interfaces themselves. Sure, you can be a bit lenient, but that makes the code unsound and creates a false sense of security until the inevitable runtime bug happens.
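
As a sketch of what that looks like in practice (invented names, not anyone's real codebase), overloads can describe a legacy function whose return shape depends on its config object, without touching the runtime code:

  interface LoadOptions { raw?: boolean }

  // The legacy contract: raw: true gives you bytes, anything else gives you a record.
  function load(id: string, opts: LoadOptions & { raw: true }): Uint8Array;
  function load(id: string, opts?: LoadOptions): { id: string; body: string };
  function load(id: string, opts?: LoadOptions): Uint8Array | { id: string; body: string } {
    // ...the existing dynamic implementation would live here unchanged
    return opts?.raw ? new Uint8Array() : { id, body: "" };
  }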

The correct solution would be to refactor the code, but that's not always possible. Especially if your manager was the author of said magnum anus—apologies, I meant magnum opus—and sabotages any attempts at refactoring.

I guess the moral hiding in this anecdote is that I should be looking for a new job.


I will agree that some TS libraries have insanely complicated types, and compared to other programming languages I have used (e.g. Clojure), it takes a longer time to understand library code.

But the example provided here doesn't seem too bad. Here is my attempt after skimming it twice.

  Paths extends Record<string, Record<HttpMethod, {}>>
I assume the

  Record<HttpMethod, {}>
is a hacky (but valid) way to have a map where the keys must be HttpMethod and the values contain arbitrary map-like data. e.g. maybe it describes path parameters or some other specific data for a route.

Moving on.

  Method extends HttpMethod
  Media extends MediaType
These seem self-explanatory. Moving on.

  <Path extends PathsWithMethod<Paths, Method>, Init extends MaybeOptionalInit<Paths[Path], Method>>(
    url: Path,
    ...init: InitParam<Init>
  ) => Promise<FetchResponse<Paths[Path][Method], Init, Media>>
Looks like we have two generic parameters: Path should be a type satisfying PathsWithMethod<Paths, Method>. That's probably just requiring a choice of path and associated HTTP method. As for Init, that looks like it's to extract certain route-specific data, probably for passing options or some payload to fetch.

Lastly,

  Promise<FetchResponse<Paths[Path][Method], Init, Media>>
Taking everything I have just guessed, this represents an async HTTP response after fetching a known valid path -- with a known valid method for that path -- together with Init parameters passed to fetch and possibly Media uploaded as multi-part data.

I probably got some details wrong, but this is what I surmised in about 15 seconds of reading the type definition.


Not defending that code - and I agree with you that wild TS code gets nightmarish (I usually call it a “type explosion”) but

Waaaay back in my C++ days, when I was starting to get into template metaprogramming, the “aha!” moment that made it all much easier was realizing that a type definition could be thought of as a function, with types as input parameters and types as output parameters.
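
In TS terms, a tiny sketch of that mental model (names invented): a conditional type behaves like a function that takes types in and returns a type out.

  // "Input" type goes in, "output" type comes out.
  type UnwrapPromise<T> = T extends Promise<infer U> ? U : T;

  type A = UnwrapPromise<Promise<number>>; // number
  type B = UnwrapPromise<string>;          // string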

Recentlyish, this same perspective really helped with some TS typing problems I ran into (around something like middleware wrapping Axios calls).

It’s definitely a “sharp knife”: if you overuse it, you screw yourself, but when you use it carefully and in the right places it’s a superpower.


I'd be interested in reading that Axios-wrapper if it's openly available.


It isn’t :/ I’d be down to recreate it if you can point me at an open-source project to do it in! :)

Basically - we had some custom framework-ish code to do things like general error handling, reference/relationship/ORM stuff, and turning things into a React hook.

I rewrote what we had to use function passing, so that you could define your API endpoint as the simple Axios call (making it much easier to pass in options, like caching config, on a per-endpoint basis).

So you’d define your resource nice and simple, then under the hood it’d wrap in the middleware, and you’d get back a function to make the request (or a React hooks doohickey, if you wanted).

But TypeScript doesn’t really play nice with function currying, so it took some doing to wrap my head around enough of the type system to allow the template type to itself be a template-typed function. That nut cracked when I remembered that experience with C++ typing; in the end it actually came out pretty clean, although I definitely got Clever(TM) in some of the guts.
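
Not the project's actual code, but a rough sketch of the shape that tends to trip people up: a generic function whose return value is itself a generic function (all names illustrative, and it assumes a fetch-capable environment).

  const makeClient =
    <Config extends { baseUrl: string }>(config: Config) =>
    <Res>(path: string): Promise<Res> =>
      fetch(config.baseUrl + path).then((r) => r.json() as Promise<Res>);

  // const getJson = makeClient({ baseUrl: "https://api.example.test" });
  // const user = await getJson<{ id: string }>("/users/1");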


Back when I used to work in plain JS I saw very complicated structures, but with no type annotations. It’s worse. The horrible part is when the type changes in different cases, so you have to trace everything to know whether what you are changing is safe.


You don't necessarily have to understand it to use it. I'm sure there's plenty of library-level code that people don't understand. But that's the point: TypeScript will tell you if you screw up. A lot of this has to do with generics, and if you're using it and TypeScript can infer the generic types, it'll be a lot simpler and you'll know exactly what's breaking.

And for libraries like this, you'll unfortunately be limited to TypeScript ninjas for maintenance, but there's no real alternative. I guess you could use JavaScript without types, which doesn't remove the dependencies or complexity, it just hides them away, and then who knows what happens at run time.


> And for libraries like this, you'll unfortunately be limited to Typescript ninjas to maintain, but there's no alternative really.

The alternative (in this case, at least) is to generate a dumb fetch interface from the OpenAPI spec. You have to generate the OpenAPI spec types anyway; just take it a step further and generate a dumb fetch interface as well. Then you don't need complex generics, you just call dumb typed functions in the generated fetch interface.
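
Roughly what that generated output could look like, sketched with an invented endpoint (not what any particular generator actually emits):

  // One boring, fully concrete function per spec path.
  export interface GetPetByIdResponse { id: number; name: string; tag?: string }

  export async function getPetById(baseUrl: string, petId: number): Promise<GetPetByIdResponse> {
    const res = await fetch(`${baseUrl}/pets/${petId}`);
    if (!res.ok) throw new Error(`GET /pets/${petId} failed with ${res.status}`);
    return res.json() as Promise<GetPetByIdResponse>;
  }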


Oh this is so funny. That exact type was my introduction to TypeScript! I came over from Python a few months ago for a solo web project, and I struggled with that type mightily!

In the end it took me a few tries to land on something idiomatic and I actually just ended up using inferred types (which I think would be the recommended way to use it?) after several iterations where I tried to manually wrap/type the client class that type supports. Before I found a functional pattern for inferring the type correctly, I was really questioning the wisdom of using TypeScript when probably several whole days were spent just trying to understand how to properly type this. But in doing so I learned essentially everything I currently know about TS (which admittedly is still very limited) so I don’t think it was wasted time.


Yes, I find myself in type hell with some regularity. TBH it happens with my own codebase too when libraries I want to use are authored by these type astronauts.


I worked in fully typed Java for 8 years before jumping to fully untyped Ruby.

Having leaned hard on the Java type system for many years, I was terrified of the type anarchy.

But it turned out not to be a problem at all. For me at least, being ambitious about writing tests made me not miss types at all. In practice, a good test suite catches pretty much any problem typing handles, and then some!

This is only my experience. I'm not saying everyone should or could work that way, or that I'm better than you etc.


My own experience is that working in typed languages, then going to untyped ones... your sense of types and the problems they address is already fairly high. Your 'untyped' code likely still avoids a lot of the problems you might otherwise encounter, just because of whatever habits you may have picked up in the typed system. Going the other way - untyped to typed - tends to present a lot of ... tough moments along the way, because you're having to put a lot more thought about things that you didn't have to before.


Nitpick: Ruby isn't untyped, it's dynamically typed.

Forth and assembly are untyped, as these languages truly lack distinctions between different kinds of data.


>For example, I doubt anyone could explain this "type" without studying it for several hours

From skimming it for about a minute it seems like it's just a strongly typed way to generate HTTP requests? It really doesn't look too complicated


Well it is. You won't discover how deep the rabbit hole goes with that type until you start trying to debug it. For example, try fixing this issue which is the result of a problem with that type:

https://github.com/openapi-ts/openapi-typescript/issues/1769


Interesting, I wonder how much of that is due to poor implementation by the authors vs. issues with TS vs. issues inherent to building a typed language on top of the mess that is JavaScript?

Most languages with strong type systems (I'm thinking at least as strong as Java or C#, maybe stronger) wouldn't have those same sort of footguns. In C# I've run into other kinds of fun nightmares with types, like trying to use interfaces with Entity Framework Core. But I think that's more EF Core's fault than C#'s.


For your linked example…

It has documentation

> This type helper makes the 2nd function param required if params/requestBody are required; otherwise, optional

The type here is the implementation, not the documentation. I guess we are so used to types being the documentation, which they are for value/function-level programming, but not in type-level programming.

I think maybe you are disappointed at the tooling? I do think the docs here should be attached to the type so that it appears in the IDE as well.


Honestly that type you linked looks like a dream to use (I've never used OpenAPI). I love APIs that enforce any necessary restrictions on the type level.


It would be... if it weren't so bug-ridden at the moment


Side note: I don’t have much experience with TS, but the overuse of extends is also common in “enterprise” Java/C# apps.


Eh, not anymore. Arbitrary inheritance chains are frowned upon in C# and people get mad when you do so. You also occasionally run into everything sealed by default because of overzealous analyzer settings or belief that not doing so is a performance trap (it never is). Enterprise does like (ab)using interfaces a lot however.


Type-driven development has been a big win for me as well, specifically when writing web front ends. Whether it's client-side rendering (sometimes a necessary evil) or rendering on the server with a tool like Astro, I try really hard to start by defining types and UI components totally separately.

I'll actually build out the full data flow and UI components in complete isolation, leaving the glue code for the final step. It's kind of a weird pattern from what I've seen (I have gotten some interesting code reviews over the years), but it really is nice focusing on one concern at a time. At the end it's also fun watching a bit of glue code wire up the entire app.
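
A toy sketch of that separation (all names invented): the data layer and the UI layer only share the type, and the glue at the end is the one place they meet.

  // 1. Data: types + fetching, no UI knowledge.
  type Post = { id: string; title: string };
  async function fetchPosts(): Promise<Post[]> {
    const res = await fetch("/api/posts");
    return res.json() as Promise<Post[]>;
  }

  // 2. UI: rendering typed against the same data, no fetching.
  function renderPostList(posts: Post[]): string {
    return `<ul>${posts.map((p) => `<li>${p.title}</li>`).join("")}</ul>`;
  }

  // 3. Glue, written last.
  async function main() {
    document.body.innerHTML = renderPostList(await fetchPosts());
  }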


I think this is definitely the best way. It feels like you're violating DRY, but you're not.


Agree overall (yay data structures, and that types prompt thinking about them) but I don’t think you need types to make good data structures, and just because you have types doesn’t mean you end up with good data structures.

But yes, definitely - working in a typed language encourages that mindset, and it’s the application of that mindset that yields the benefits (imo).


To a decent first approximation, and especially given TypeScript's erasure and generally very opt-in design approach, types are just tests the compiler doesn't allow you to not run.

That's not a good way to think about them forever. But it might be a good way to start thinking about them, for those as yet unfamiliar or who've only had bad experiences.

(I've had bad early experiences with a lot of good tools, too, when learning to use them fluently required broadening my perspective and unlearning some leaky prior intuitions. TypeScript was one such tool. I don't say that's the only reason someone would bounce, but if that's the reason you-the-reader did so, you should consider giving it a more leisurely and open-minded try. You may find it rewards your interest more generously than you expected it might.)
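
A toy example of that framing (invented names): the "test" below runs on every compile, whether or not anyone remembers to run the suite.

  type User = { id: string; name: string };

  function greet(user: User): string {
    return `Hello, ${user.name}`;
  }

  greet({ id: "1", name: "Ada" }); // passes the "test"
  // greet({ id: "1" });           // fails it at compile time: 'name' is missing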


When you've really got your data structures and the rules for manipulating them pinned down, and you've built a good interface on top of it, the result is usually something that's so simple and easy to understand that it kind of doesn't matter anymore whether you're working in a static or dynamically typed language.

IOW, I think that the value in static typing (speaking only about this specific issue!) isn't that it makes you do things well; it's that it puts a limit on how poorly you can do them. But I also sometimes worry that it puts a limit on how well people can do, too. I've met way too many people who tacitly believe that all statically typed domain modeling is automatically good domain modeling.


Strict types are a great way to paint yourself into a corner. Good design should only impose strict types within a single module, with very loose coupling outside the module (meaning loose types).

Having a well-defined data model is important, but you often can't really know what that data model should be until you've banged on a prototype. So the faster (in the long run), "better" way is to first prototype with very loose types and find what works, and then lock it down, within the scope of the above paragraph.
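
One possible reading of that advice, sketched with invented names: keep the boundary type loose (unknown), validate once, and use a strict type everywhere inside the module.

  type Order = { id: string; total: number };

  // Boundary: accepts anything, returns the module's strict type or throws.
  function parseOrder(input: unknown): Order {
    if (typeof input !== "object" || input === null) throw new Error("invalid order payload");
    const o = input as { id?: unknown; total?: unknown };
    if (typeof o.id !== "string" || typeof o.total !== "number") {
      throw new Error("invalid order payload");
    }
    return { id: o.id, total: o.total };
  }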


> Strict types are a great way to paint yourself into a corner.

I've never really understood this stance. It's all code. It's not like you can't change it later.

> So the faster (in the long run), "better" way is to first prototype with very loose types and find what works, and then lock it down, within the scope of the above paragraph

I think this depends on the programmer, and what they're experienced with, how they like to think, etc. For example, as counterintuitive as it might seem, I find prototyping in Rust to be much quicker than in Python.


> I've never really understood this stance. It's all code. It's not like you can't change it later.

Actually, you often can't :) Ask Microsoft how easy it is for them to change some code once it's been shipped.

The "new thinking" is that you should teach your users to upgrade constantly, so you can introduce breaking changes and ditch old code, sacrificing backwards compatibility. But this often makes the user's life worse, and anyone/anything else integrating with your component. In the case of a platform it makes life hell. For a library, it often means somebody forks or sticks with the old release. For apps it means many features the users depend on may stop working or work differently. It basically causes problems for everyone except the developer, whose life is now easier because they can "just change the code".

In many cases you literally can't go this route, due to technical issues, downstream/upstream requirements, contractual obligations, or because your customers will revolt. This affects almost all codebases: as they grow and are used more often, changes become more problematic.


> Actually, you often can't :) Ask Microsoft how easy it is for them to change some code once it's been shipped.

My understanding is that the OP was talking about prototyping. Once code is in a public interface in the wild, it's hard to change either way. I don't see how dynamic typing will save you there. In fact, stronger typing can at least help you restrict the ways in which an interface can be called.


> Once code is in a public interface in the wild, it's hard to change either way.

Yes! Definitely for the final version (when the prototype becomes production, which is the moment a customer first uses it) everything should be locked down.

> I don't see how dynamic typing will save you there. In fact, stronger typing can at least help you restrict the ways in which an interface can be called.

Strong typing isn't inherently bad here, but it's often associated with strong coupling between components. Often people strongly type because they're making assumptions about (or have direct knowledge of) some other component. That's death. You want high cohesion and loose coupling, and one way to get that is to just not depend on strong types at the interface/boundary.

To recap:

  1. When prototyping, loose types everywhere, to help me make shitty code faster to see it work
  2. When production, loose types at the component boundaries, and strict types within components
https://en.wikipedia.org/wiki/Cohesion_(computer_science) https://en.wikipedia.org/wiki/Loose_coupling


> Often people strongly type because they're making assumptions (or direct knowledge) about some other component.

I'm not really sure why strong typing would have that effect. It seems like an orthogonal concern to me.

In fact, strong static types can potentially help make it easier to see where things are loosely or strongly coupled. Often with dynamic typing it's difficult to tell where implicit assumptions are inadvertently causing strong coupling.


>> Strict types are a great way to paint yourself into a corner.

> I've never really understood this stance. It's all code. It's not like you can't change it later.

Try maintaining a poorly designed relational database. For example, I am dealing with a legacy database where someone tacked on an innocent "boolean" column to classify two different types of records. Then years later they decided that it wasn't actually boolean and now that column is 9-valued. And that's how you get nonsense like "is_red=6 means is_green". Good luck tearing the entire system apart to refactor that (foreseeable) typing/modeling error. The economical path is usually to add to the existing mess, "is_red=10 means is_pink".


And if this was an untyped blob of JSON it would somehow be better?

I'd argue that dynamic typing makes it easier to paint yourself into a corner.


The stuff you are complaining about is caused exactly by lack of type strictness.

Nobody fixes it because nobody has any idea on what code depends on that column. With strict types you just ask the computer what code depends on it, and get a list a few minutes later.


> So the faster (in the long run), "better" way is to first prototype with very loose types and find what works, and then lock it down, within the scope of the above paragraph

Disagree. You can still prototype and refactor with strict types. I don't find working with loose types to be faster at all. Once a program reaches non-trivial complexity, loose types make iterative development significantly more difficult and error-prone.


Why loosen the types on the API, when the type system is fully capable of encoding higher-level restrictions? That then locks you into that API as a contract. If you instead lock the API down to the strict set, you are free to expand it over time.


I started out learning Python and Java during high school, but Python really stuck.

Now as I work on my degree I've started trying to learn C for reverse engineering and low-level development. While I do understand some things, it's a big leap in terms of skill coming from Python. I love how flexible it is and how fast it is. Recently I started a new challenge based on shodfycast's most recent video ( https://www.youtube.com/watch?v=M8C8dHQE2Ro ) and currently am just focusing on single-thread performance, and using array structures. Then I realized my random numbers aren't truly random, and I'm debating whether my PRNG is sufficient. I also debated generating all the numbers at once into an array, throwing it into CPU cache, then doing the logic for rolls using the faster memory so I'm not waiting for numbers to generate. Single-core laptop time is like 56 minutes using WSL.

I'm tempted to try this on my dual-socket system using Tiny Core Linux, so I can shave off some time from useless overhead and use some debug tools to find the slow spots.

Unsure how much time I should sink into this though.


>Unsure how much time I should sink into this though.

You should stop immediately once you start learning less and are fixating on hyper specific problems.

>I also debated generating all the numbers at once into an array, throwing it into CPU cache

You don't control the cache. I recommend that you treat the CPU as a black box, strictly until you can no longer do so. If you are learning C and aren't even writing multi-threaded code, you should not fixate on the specifics of how the CPU handles memory.

Please pretend that is true. Manipulating the cache is difficult, and you should worry about other things. This will become important when you are doing multi-threading.


The funny thing is that...this is why I like dynamic languages. Because I do all of this with the database and when I'm using something like Rails, ActiveRecord handles all of the types automatically based on what I've already defined in the database.

For web apps at least, about 90% of the data coming into and out of the application is just being stored in or retrieved from the database: converted from string to a datatype in the DB and then converted back to string as it's being returned. Enforcing types strictly in this largely pass-through layer can create so much additional boilerplate with minimal benefit, since it's already handled where it matters.

For a NoSQL DB, sure, I get it. You NEED the application to define your types because your database isn't.

And then there are people who feel very strongly about having it everywhere, all the time and can't imagine working without it.

I like that we work in a field where we can have options.


I totally agree, though this process isn't without its faults. One thing I've had to learn to be mindful of is "type-crastination" -- if I'm not careful, I can really overdo it and spend way too much time and energy on defining types.


You can think about data structures in a fully dynamic language like JS or Python. But you have to write software in a way that utilizes these types and acknowledges that they exist.

Thinking about what your data structures are is important in any language. Strict typing helps you in pinning them down and communicating them, but the approach is not exclusive to strict typing.

Once your software is about passing "anything" objects around, you have already lost, and proper thinking about data structures becomes impossible. I agree that stricter typing helps you to avoid that trap.


That’s why I really like to write some C++ in my free time after a week of JavaScript at work.


From JS to C++?! Come on... For sure there is at least one thing JS and C++ have in common: a crappy standard library.


I feel similarly about FP. It can be more work up front, but what a gift it is to your future self when f(x) == f(x).


+1, arrow keys for turning pls


They already do that.


I've noticed this same save issue in Bruno

The fix for me: cmd+s only works if the left pane (Query, Body, etc) has focus

Save does not work if the right pane (Response, Headers, etc) has focus


Gold star to dfbrown, who pointed this out in the release announcement comments yesterday: https://news.ycombinator.com/item?id=38547713


> the cold calculus here is the very nature of the banality of evil


It is also pretty bad calculus. Monkeys have almost the same level of intelligence we have (and probably more in some departments) so killing even one monkey is a debt that cannot be equalised with even one saved human.


You're just running away from the issue though


I hate the Apple/Google text message incompatibility — but I can't argue with the results... I switched from Android to iPhone after years of my friends complaining about me degrading our group chat experience.

I was stubborn at first, but some messages would get sent out of order or dropped altogether, and finally I succumbed to the pressure.


I am an iPhone user. Where’s the

> Apple/Google text message incompatibility

?

SMS works. I think you meant iMessage, didn’t you? That’s not a messaging incompatibility. That’s just a messaging platform that is only supposed to work on Apple devices and nowhere else.

I assume you’re from the USA (because nowhere else have I known people to universally adopt a messaging platform that works on only one kind of hardware). You know there are, and were, messaging options that don’t even discriminate based on what OS you’re on.


> I switched from Android to iPhone after years of my friends complaining about me degrading our group chat experience.

Really? Is this a common thing?


I have heard that in the US people will exclude you from group chats if you are on Android because they can only manage iMessage.

I have known Americans who would mock Android users as being "poor", because they don't want to use an iPhone.

Americans can be complete dicks to each other over the silliest things.


Scary, very scary if this is true!


I've used Comlink in a production React app (for client-side text search) and enjoyed it.

The lib author wrote a nice blog post about "why webworkers" and "why comlink" a few years ago: https://surma.dev/things/when-workers/


In another comment here, they're saying they just deleted that tag to avoid this access issue — https://news.ycombinator.com/item?id=36810393


Good call out - please, as an internet mob, let us not ascribe to malice what can be attributed to the sheer unintentional impacts of complex software.


Agreed, I want markup in my logic — not logic in my markup


CSS and HTML in JS was always a bad idea. HTML can live on its own. JS is a dependent technology, on the same level as CSS. HTML was always the foundational technology.

Centering JS has always been like putting the cart before the horse.


This seems intuitively right but doesn't really result in good outcomes in practice (I'm curious if you have any concrete reasons to justify saying it "was always a bad idea").

In practice what I've found is that HTML is fine/optimal for event-driven interactions layered onto entirely static content, but for inherently interactive UIs, what ends up happening is you get one of two systems:

1. If you start from an HTML-forward standpoint, you can go a certain distance separating your concerns entirely, but at scale the drawback is maintaining disparate references, negatively impacting code colocality: having to edit a reference in three separate files in a repo is incredibly bad for DX and readability. To overcome this, a common approach is to make the JS side generic by embedding a DSL with dynamic functionality into your HTML: the worst of both worlds, creating both a Frankenstein HTML+weird-extras language and still needing to bundle JS with it.

2. If you're starting a greenfield project with the early intent of having a fully interactive UI, instead of growing into it, you can avoid the above mess and architect your project from the outset with good code colocality by embedding templates for highly dynamic/transient UI components into the imperative part of your application logic (i.e. the JS).


Svelte is neither a Frankenstein nor incredibly bad. In implementation, the compiler is JavaScript and there is JS bundled in the output. However, the developer experience is comparatively indistinguishable from plain old HTML, CSS, and JS in a substantially similar form to a static HTML page to the point where vanilla JS libraries can bind to HTML elements without modification. This is in fact how I develop for Svelte: I start with a static page, put in some elements and CSS to look like my design target, drop them into Svelte largely unmodified, and add if/each/await control flow.

I have yet to find a greenfield React project with more than a few developers over a few years that didn't turn into a big ball of mud that only the original developers could understand (at best). State management alone on React is an utter, total, and indisputable shitshow. Codebases with a mixture of classes and functional Hooks are far from ideal for developer experience.

Svelte code by comparison is a breath of fresh air both to develop as well as maintain. Whether Svelte wins or something inspired by Svelte does not matter as much to me as something that takes away the inverted JSX model, the virtual DOM, and allows the use of vanilla libraries without React wrappers interfacing with other React wrappers.


> Svelte is neither a Frankenstein nor incredibly bad.

"bad" is subjective I guess but "Frankenstein" can equally be applied to ".svelte" & ".jsx" - both are non-standard amalgamations of imperatives with markup. In my mind the qualitative differences here end up being in parser support (jsx as a js language extension, svelte as... I'm not sure... not quite a HTML extension but closer to it I guess) and output consistency: this is either dynamic output by the node server in Svelte's case, or perhaps a multifile static bundle with Sapper/SvelteKit; that bundle taking various forms depending on your bundler chunking plugin configs, etc. JSX in contrast is a line-by-line one-to-one simply defined translating to basic native JS render calls: it's eminently auditable and will work with an infinity of preexisting testing/parsing tools that support basic JS.

> I have yet to find a greenfield React project with more than a few developers over a few years that didn't turn into a big ball of mud that only the original developers could understand (at best).

I've seen this with Svelte and React in relatively similar measure (I've seen hundreds of large React projects, some elegantly structured but most a mess. I've seen <20 large Svelte projects, all much much younger, all but 1 a complete mess). All in all this is a problem with the development/developer landscape and not with either individual project dependency. Contributor scale is a hard problem.


Very interesting, I wonder what your opinion is on tools like htmx, which work on the exact opposite basis.


this!


I forget, how does this come up in Anathem?

I remember similar notions in his novel Fall — where the internet is so full of fake news and sponsored content that people need additional services to filter out useful information


Oh cool, I haven't read Fall but it's similar in Anathem

The reticulum (aka the internet) is filled with software-generated crap. Initially most of it is blatant and easy to filter out, like an over-the-top "buy cheap v14gra" spam email.

Eventually the companies that make the crap-filtration software get into an arms race with each other: they realize that if they can generate their own crap that their competitors don't detect, it'll give them a competitive advantage. The reticulum becomes filled with "high quality" crap, e.g. text that's ALMOST correct but wrong in subtle ways. Imagine a wiki article about pi where everything is correct except the 9th digit, or an op-ed with slightly flawed logic.

Eventually crap goes beyond text, and the reticulum starts to see deepfaked images and videos that parallel news articles. One of the jobs of the ITA is to try and find the signal in all the noise.


(Syndev = computer (syntactic device), reticulum = internet)

> Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,’ Sammann said.

> ‘Crap, you once called it,’ I reminded him.

> ‘Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.’

> ‘What is good crap?’ Arsibalt asked in a politely incredulous tone.

> ‘Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.’

> ‘As a tactic for planting misinformation in the enemy’s reticules, you mean,’ Osa said. ‘This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium…’”


This sounds more inspired by trap streets than AI data pollution, though I suspect the people trying to fight generative AI will eventually resort to trap streets anyway, and wind up producing the same outcome.


Which fairly closely mirrors the conspiracy theory about antivirus vendors...

