Nice work! Real-time visualization isn't just about speed; it fundamentally transforms how we interact with data. This creates three distinct workflows: traditional batch processing (matplotlib), API-based interactivity (plotly), and real-time "data conversation" (fastplotlib).
When visualization requires seconds or minutes, we carefully plan each plot. When it's instantaneous, we can explore intuitively, following curiosity rather than pre-planned queries. Faster plotting doesn't just save time - it changes which insights we discover. Patterns that take multiple quick iterations to notice suddenly become obvious.
My question: how does fastplotlib differ from proprietary GPU-accelerated visualization tools such as Kinetica or Heavy.ai (formerly MapD)?
YC had successfully filled the gap between the low cost of starting software companies and the high barriers VCs placed on funding them.
However, three key shifts have since fundamentally changed the startup landscape: the commoditization of starting up (AWS, no-code tools, standardized playbooks); a massive capital influx making early funding widely available; and market saturation in many software categories.
What do you think YC's model should evolve into for the next 20 years?
We're getting complaints from users that your comments are LLM-generated. Of course it's hard to say for sure, but if they are (in whole or part), please stop. Generated comments aren't allowed here; HN is for human conversation.
What you have identified is that nurturing tech development has become easier. More complex tech development should now be the focal point of incubators, and that means founders with deeper specialization in the work they do.
There is still a gap that needs filling, and YC is one player that can fill it. Late-stage startup development is a business-structuring problem and the biggest blank slate for new founders: from term sheets to understanding convertible debt to navigating the VC maze.
What's fascinating about the Factorio Learning Environment is how it exposes the gulf between what we think LLMs are capable of (general reasoning) versus what they actually excel at (coding within known patterns). The models' struggles with spatial reasoning and error recovery aren't just benchmark failures - they're revealing fundamental limitations in how these systems build and maintain internal world models.
I've been experimenting with a hybrid approach that might address this: coupling an LLM's planning abilities with a specialized spatial reasoner that maintains a symbolic representation of the factory state. The LLM handles high-level strategy and code generation, while the spatial module manages entity placement, rotation, and connection planning. Initial results suggest a 3-4x improvement in factory complexity without increasing token usage.
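To make that concrete, here is a minimal sketch of what the module boundary could look like. This is purely illustrative (written in TypeScript for readability), and every name in it is hypothetical rather than taken from FLE or from my actual implementation:

    // Hypothetical boundary between an LLM planner and a symbolic spatial module.
    interface Placement { entity: string; x: number; y: number; rotation: 0 | 90 | 180 | 270 }

    interface SpatialReasoner {
      canPlace(p: Placement): boolean;              // collision / footprint check
      place(p: Placement): void;                    // update the symbolic factory state
      route(fromId: string, toId: string): boolean; // plan a belt path, or fail explicitly
    }

    async function step(
      llm: (prompt: string) => Promise<string>,
      spatial: SpatialReasoner,
      goal: string,
    ): Promise<void> {
      // The LLM proposes high-level actions; the spatial module validates and
      // grounds them before anything touches the game state.
      const proposal: Placement = JSON.parse(
        await llm(`Goal: ${goal}. Reply with one placement as JSON.`),
      );
      if (!spatial.canPlace(proposal)) {
        // Reject and re-prompt instead of letting the model guess at geometry.
        return step(llm, spatial, `${goal} (previous placement collided; try another spot)`);
      }
      spatial.place(proposal);
    }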
This pattern of "specialized cognitive modules + LLM orchestration" seems increasingly necessary as we push these systems toward more complex real-world tasks. Has anyone else been exploring similar hybrid architectures for domain-specific reasoning problems? I'd be particularly interested in implementations that maintain clear interfaces between the symbolic and neural components.
Fascinating. I was thinking how the factory should be communicated to the model, and represented "internally". Images aren't the right solution (very high bandwidth for no real benefit). An ASCII grid of the game's tiles (more likely, a small chunk of it) is orders of magnitude better, but you still don't need to simulate every tile in a conveyor. It's just a line, right? So the whole thing is actually a graph!
That compresses nicely into text, I imagine.
I'd like to hear more details about your symbolic approach!
>An ASCII grid of the game's tiles (more likely, a small chunk of it) is orders of magnitude better, but you still don't need to simulate every tile in a conveyor. It's just a line, right? So the whole thing is actually a graph!
Until you accidentally feed a different material into your belt and need to clean it up.
Probably the memory model of the game itself is the best representation. The devs have already spent a significant amount of development cycles optimizing this down to a minimal compressed form - belt runs, for example, are one entity regardless of how long they are. The LLM is then effectively modeling the degrees of freedom of the game simulation and picking code paths within them.
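That intuition translates directly into a compact textual state. A sketch of what such a graph could look like as types (hypothetical; this is not Factorio's or FLE's actual data model):

    // Belts collapse to runs (edges); machines are nodes.
    type EntityId = string;

    interface Machine {
      id: EntityId;
      kind: "assembler" | "furnace" | "miner";
      recipe?: string;
    }

    interface BeltRun {
      from: EntityId;      // producing entity
      to: EntityId;        // consuming entity
      lengthTiles: number; // the whole run's geometry folded into one scalar
      contents: string[];  // item types currently on the run
    }

    interface FactoryGraph {
      machines: Machine[];
      belts: BeltRun[];
    }

A graph like this serializes to a few hundred tokens where a full tile grid would take tens of thousands.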
This is really interesting; do you have a repo or anything describing the approach? I would be particularly interested in trying your approach in FLE to see how it affects layout design. How are you performing the spatial reasoning?
The way I think of it is this. Yes, the LLM is a "general reasoner." However, it's locked in a box, where the only way in and out is through the tokenizer.
So there's this huge breadth of concepts and meanings that cannot be fully described by words (spatial reasoning, smells, visual relationships, cause-and-effect physical relationships, etc.). The list of things that can't be described by words is long. The model would be capable of generalizing over those; it would optimize to capture them. But it can't, because the only thing that fits through the front door is tokens.
It's a huge and fundamental limitation. I think Yann LeCun has been talking about this for years now and I'm inclined to agree with him. This limitation is somewhat obscured by the fact that we humans can relate to all of these untokenizable things -- using tokens! So I can describe what the smell of coffee is in words and you can immediately reconstruct that based on my description, even though the actual smell of coffee is not encoded in the tokens of what I'm saying at all.
This native TypeScript compiler isn't just about performance – it's a profound shift in how we think about developer tools. When a language team abandons self-hosting (TS in TS) for raw performance (Go), it signals we've hit fundamental limits in JS/TS for systems programming.
What fascinates me most is the implicit admission about JavaScript's performance ceiling. Despite V8's heroic optimizations, there comes a point where GC pauses, JIT warm-up, and memory usage simply can't compete with native code for compiler workloads.
The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day. Even Anders Hejlsberg – father of C# – chose Go for pragmatic reasons!
I wonder: is this the beginning of a larger trend where JS/TS tooling migrates to native implementations? Will we see more hybrid ecosystems where the app logic stays in TS but the infrastructure migrates to compiled languages?
Anyone building developer tools should be taking notes. The 10x difference isn't incremental – it's transformative.
Javascript is not slow because of GC or JIT (the JVM is about twice as fast in benchmarks; Go has a GC) but because JS as a language is not designed for performance. Despite all the work that V8 does it cannot perform enough analysis to recover desirable performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for this so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
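You can see the contortions this forces in the escape hatches JS does offer. A small sketch (the engine behavior described in the comments is simplified):

    // All JS numbers are doubles by spec; engines like V8 try to back small
    // values with tagged 31/32-bit integers internally, but nothing in the
    // language guarantees it.
    function sumInt32(xs: Int32Array): number {
      let acc = 0;
      for (let i = 0; i < xs.length; i++) {
        // The classic asm.js-style hint: "| 0" truncates to int32, which lets
        // the JIT keep the value unboxed. A convention, not a contract.
        acc = (acc + xs[i]) | 0;
      }
      return acc;
    }

    // Typed arrays are the one place JS guarantees machine-integer storage.
    console.log(sumInt32(new Int32Array([1, 2, 3]))); // 6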
As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.
I think JS can really zoom if you let it. Hamsters.js, GPU.js, taichi.js, ndarray, arquero, and S.js are all solid foundations for doing things really efficiently. Sure, not 'native' performance or on the compile side, but having their computational models in mind can really let you work around the language's limitations.
JS can be pretty fast if you let it, but the problem is the fastest path is extremely unergonomic. If you always take the fastest possible path you end up more or less writing asm.js by hand, or a worse version of C that doesn't even have proper structs.
I find these userland libraries particularly effective because you never leave JS land: they conveniently abstract over Workers, WebGL/WebGPU, and WASM.
JS, interestingly, has a notion of integers, but only in the form of integer arrays, like Int16Array.
I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
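You can get partway there in today's TS with a branded type; the catch is exactly as described: the guarantee is static only, and a plain JS runtime just sees numbers. A sketch (all names hypothetical):

    // A "branded" integer type: erases to plain number in the emitted JS,
    // but a native TS compiler could lower it to a machine int.
    type Int32 = number & { readonly __int32: "Int32" };

    function int32(n: number): Int32 {
      return (n | 0) as Int32; // runtime truncation keeps the static claim honest
    }

    function addInt32(a: Int32, b: Int32): Int32 {
      return int32(a + b); // re-truncate in case of overflow
    }

    const c = addInt32(int32(5), int32(7)); // ok
    // addInt32(int32(5), 3.5);             // compile error: number is not Int32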
AssemblyScript (for WASM) and Huawei's ArkTS (for mobile apps) already exist in this landscape. However, they are too specific in their use cases and have never gained public attention.
> reveals something deeper: Microsoft prioritized shipping a working solution over language politics
It's not that "deep". I don't see the politics either way; there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement, and decide "ah, it's just politics".
This is not accusatory, but do you write your comments with AI? I checked your profile and someone else had the same question a few days ago. It's the persistent structure of "it isn't X – it's Y" with the em dash (– not -) that makes me wonder this. Nothing to add to your comment otherwise, sorry.
Sorry for being pedantic, but they are using an en dash (–), not an em dash (—), which is a little strange because the latter is usually the one meant for setting off additional information—like commas and parentheses. In addition, in most styles, you're not supposed to add spaces around it.
So, I don't think the comment is AI-generated for this reason.
"The en-dash is also increasingly used to replace the long dash ('—', also called an em dash or em rule). When using it to replace a long dash, spaces are needed either side of it – like so." https://en.wikipedia.org/wiki/En_(typography)
You're right, oops. I agree with your reasoning (comment still gives off slop vibes but that's unprovable). But the parent has been flagged, so I'm not sure if that means admins/dang has agreed with me or if it was flagged for another reason.
em-dash is shift-option-hyphen on macOS, so it's not a good heuristic—I use it myself.
They're using en-dash which is even easier: option-hyphen.
This is the wrong way to do AI detection. For one, an LLM would have used the right dash. If you must call something out, at least find someone wasting our time with belabored or overwrought text that doesn't engage with anything.
The em dash thing is not very conclusive. I have been writing with the em dash for many years, because it looks better and is very accessible on Mac OS (long press on dash key), while carrying a different tone than the simple dash. That, and I read some Tristram Shandy.
> The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day.
I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language. The existing codebase assumes a garbage collector so Rust is not really a realistic option here. I would bet they picked GCed languages only.
I can imagine C# being annoying to integrate into some CIs, for instance. Go fits a sweet spot, with its fast compiler and usually limited number of external dependencies.
> Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
> We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.
> When a language team abandons self-hosting (TS in TS) for raw performance (Go), it signals we've hit fundamental limits in JS/TS for systems programming.
I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.
I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.
But really, anyone who knows the difference between a compiled language and a VM-based one knows the obvious, fundamental performance limitations of developer tools written in VM-based languages like JS or TS, and would avoid them; they were not designed for this use case.
Yeah, the term has changed meaning several times. Early on, "systems programmer" meant basically what we call a "developer" now (as opposed to a programmer or a researcher).
If my memory serves, the "programmer" was essentially a mathematician, working on a single algorithm, while a "system developer" was building an entire system around it.
>The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day. Even Anders Hejlsberg – father of C# – chose Go for pragmatic reasons!
I don't follow. If they had picked Rust over Go, why couldn't you also argue that they were prioritising shipping a working solution over language politics? It seems like a meaningless statement.
Go with parametric types is already a reasonably expressive language. Much more expressive than C, in which a number of compilers have been written, at least initially; not everyone had the luxury of using OCaml or Haskell.
There is already a growing number of native-code tools of the JS/TS ecosystem, like esbuild or swc.
Maybe we should expect attempts at native AOT compilation for TS itself, to run on the server side, much like C# has an AOT native-code compiler.
I wish there were a language like Rust, minus the borrow checker and lifetimes, that was also popular and lived in the same space as Go. I think Go is actually the best language in this category, but only because there is nothing else. All in all, Go is not an elegant language.
OCaml is similar, now that it has multicore. Scala is also similar, though the native-code side (https://scala-native.org/en/stable/) is not nearly as well developed as the JVM side.
Rust loses a lot of its nice properties without borrow checking and lifetimes, though. For example, resources no longer get cleaned up automatically, and the compiler no longer protects you against data races. Which in turn makes the entire language memory unsafe.
OCaml and Haskell already have that nice type system (and then some). If OCaml's syntax bothers you, there is Reason [1], which is a different frontend to the same compiler suite.
Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.
I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.
> it signals we've hit fundamental limits in JS/TS for systems programming
Really, is this a surprise to anyone? I don't think anyone considers JS suitable for 'systems programming'.
JavaScript is the language we have for the browser - there's no value in debating its merits when it's the only option. JavaScript on the server has only ever accrued benefits from being the same language as the browser.
I can appreciate the pain points you guys are addressing.
The "diagonal scaling" approach seems particularly clever - dynamically choosing between horizontal and vertical scaling based on the query characteristics rather than forcing users into a one-size-fits-all model. Most real-world data workloads have mixed requirements, so this flexibility could be a major advantage.
I'm curious how the new streaming engine with out-of-core processing will compare to Dask, which has been in this space for a while but hasn't quite achieved the adoption of pandas/PySpark despite its strengths.
The unified API approach also tackles a real issue. The cognitive overhead of switching between pandas for local work and PySpark for distributed work is higher than most people acknowledge. Having a consistent mental model regardless of scale would be a productivity boost.
Anyway, I would love to apply for early access and try it out. I'd be particularly interested in seeing benchmark comparisons against Ray, Dask, and Spark for different workload profiles. Also curious about the pricing model and the cold-start problem that plagues many distributed systems.
Disclosure: I am the author of Polars and of this post. The difference with Ibis is that Polars Cloud will also manage hardware. It is similar to Modal in that sense. You don't need a running cluster to fire a remote query.
The other difference is that we focus only on Polars and honor Polars' semantics and data model. Switching backends via Ibis doesn't honor this, as the various backends have different semantics regarding NaNs, missing data, ordering, decimal arithmetic behavior, regex engines, type upcasting, overflow, etc.
And lastly, we will ensure it works seamlessly with the Polars landscape, that means that Polars Plugins and IO plugins will also be first class citizens.
It’s funny you mention Modal. I use modal to do fan-out processing of large-ish datasets. Right now I store the transient data in duckdb on modal, using polars (and sometimes ibis) as my api of choice.
I did this, rather than use snowflake, because our custom python “user defined functions” that process the data are not deployable on snowflake out of the gate, and the ergonomics of shipping custom code to modal are great, so I’m willing to pay a bit more complexity to ship data to modal in exchange for these great dev ergonomics.
All of that is to say: what does it look like to have custom python code running on my polars cloud in a distributed fashion? Is that a solved problem?
I've played around a bit with ibis for some internal analytics stuff, and honestly it's pretty nice to have one unified api for duckdb, postgres, etc. saves you from a ton of headaches switching context between different query languages and syntax quirks. but like you said, performance totally depends on the underlying backend, and sometimes that's a mixed bag—duckdb flies, but certain others can get sluggish with more complex joins and aggregations.
polars cloud might have an advantage here since they're optimizing directly around polars' own rust-based engine. i've done a fair bit of work lately using polars locally (huge fan of the lazy api), and if they can translate that speed and ergonomics smoothly into the cloud, it could be a real winner. the downside is obviously potential lock-in, but if it makes my day-to-day data wrangling faster, it might be worth the tradeoff.
curious to see benchmarks soon against dask, ray, and spark for some heavy analytics workloads.
My experience with it is that it's decent, but a "lowest-common denominator" solution. So you can write a few things agnostically, but once you need to write anything moderately complex, it gets a little annoying to work with. Also a lot of the backends aren't very performant (perhaps due to the translation/transpilation).
> Is this just shifting complexity from JS to HTML?
Very well said. This is the problem.
There's an old adage that every "scripting" language starts out small but ultimately needs all the features of a full programming language. If we start putting programming features into HTML, we'll eventually turn it into a full Turing-complete programming language, with loops, conditionals, variables, and function calls. We'll have recreated JavaScript, outside of JavaScript.
>> Is this just shifting complexity from JS to HTML?
> Very well said. This is the problem.
It is a problem. Counterintuitively, it is also a solution.
Lifting some complexity out of JS and into HTML solves some problems. Lifting all complexity out of JS and into HTML creates new problems.
For example, I have a custom web component `<remote-fragment remote-src='...'>`. This "shifts" the complexity of client-side includes from "needing a whole build step with a JS framework" to directly writing a plain HTML file in Notepad that can have client-side includes for headers, footers, etc.
This results in less overall complexity in the page as a whole.
Shifting the "for" loop and "if" conditionals from JS to HTML, OTOH, results in more overall complexity in the page as a whole.
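For concreteness, a minimal version of the remote-fragment component above could look like this; it's my own reconstruction of the idea, not the actual code:

    // <remote-fragment remote-src="/partials/header.html"></remote-fragment>
    // Fetches a fragment and inlines it: client-side includes, no build step.
    class RemoteFragment extends HTMLElement {
      async connectedCallback() {
        const src = this.getAttribute("remote-src");
        if (!src) return;
        try {
          const res = await fetch(src);
          if (res.ok) this.innerHTML = await res.text();
        } catch {
          // On failure, leave any fallback content in place.
        }
      }
    }
    customElements.define("remote-fragment", RemoteFragment);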
Horrible. We're going to end up with three separate languages: CSS, HTML, and JavaScript, each a Turing-complete programming language with completely overlapping feature sets, and there will be no clear reason to use one over the other.
Browsers will have to implement support for three PL runtimes. Everything will be complicated and confused.
There are tradeoffs. Further increasing the barrier to entry for new web browsers benefits the entrenched players and hurts end users by yielding fewer alternatives.
That's what people say of C++ too. Too many features makes it harder to learn a language and ramp up on codebases; they'll have different standards on what they use.
Developers can use whatever features they want, but users can only watch as their computer uses up more and more energy because it suddenly has to perform yet another build step that previously was done on the server.
It's not just shifting complexity. It improves locality of behavior (LoB). You can look at a <button> element and immediately know what it does (within the limited domain of "command"s). This is a dramatic improvement to readability.
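For reference, the shape of that pattern, per the Invoker Commands proposal (attribute names and event details may still change, browser support varies, and the selectors here are made up):

    // Declaratively, the behavior reads right off the element:
    //   <button commandfor="prefs" command="show-modal">Settings</button>
    //   <dialog id="prefs">...</dialog>
    // Custom ("--"-prefixed) commands fire an event on the target instead:
    const prefs = document.querySelector<HTMLDialogElement>("#prefs")!;
    prefs.addEventListener("command", (e) => {
      // CommandEvent typings may not be in lib.dom yet, hence the cast.
      const command = (e as Event & { command?: string; source?: Element }).command;
      if (command === "--reset-form") {
        console.log("custom command invoked");
      }
    });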
My long-shot hope is that the page can come to embody most of the wiring on the page, that how things interact can be encoded there. Behavior of the page can be made visible! There's so much allure to me to hypermedia that's able to declare itself well.
This could radically enhance user agency, if users/extensions can rewire the page on the fly, without having to delve into the (bundled, minified) JS layers.
There's also a chance the just-merged (!) moveBefore() capability means that frameworks will recreate HTML elements less, which is a modern regression that has severely hampered extensions/user agency. https://github.com/whatwg/dom/pull/1307
> My long-shot hope is that the page can come to embody most of the wiring on the page, that how things interact can be encoded there.
I would love this. As a Tailwind user the last few years, it's really been refreshing to have my styles both readable and inline on the elements instead of filed away in SCSS I'll never see again. Even with scoped styles, some components get large enough that it feels unwieldy.
Nah, that's good. JS is way too powerful, and 99% of pages don't need bloat like WebRTC, the Navigator API, or the thousands of other features that are almost never used for good but for evil.
HTML should be powerful enough on its own to provide basic web-page functionality for the majority of use cases, and only then should the user give explicit permission for the server to run unrestricted code.
That's why I'd rather keep HTML limited than embrace something like Svelte (which I hadn't heard of before). Looking at its inline syntax in the example, it is yet another thing to learn, yet another thing with peculiarities to keep track of, solving one kind of problem but introducing more complexity by spreading logic out (taking logic from one place and putting it in another creates an additional relationship with that from which the logic was taken: Svelte and JS have to coexist and may, for example, overlap).
My favorite experience of shifting logic was writing a template engine in PHP. PHP is a template engine. I soon discovered I was replicating more and more PHP features, like control flow, and realized PHP did a far better job of being a template engine. (This does not disqualify all use of such things, just that not all seemingly good use cases are well motivated, and they may not turn out as expected.)
> Abstraction vs. magic: Is this just shifting complexity from JS to HTML? Frameworks already abstract state—how does this coexist?
The same way React or other frameworks can hook into things like CSS animations. If CSS animations didn't exist, the JS frameworks would have to write all that code themselves. With them existing, they can just set the properties and have it work.
Even if you're writing a basic menu popup directly in React, having these properties to use directly in your JSX means less code and less room for error.
Then if you need to do something special, you may need to do it by hand and not use these properties, just like if you needed a special animation you might have to do it without using the CSS animation properties.
I'm not a web dev, so I apologize if my questions are naive. Does this mean it's a Chrome-only thing, or does it become a web standard? I ask because I would like to imagine a future that isn't tied to Google's whims, graveyard of initiatives, and requirements.
That would be really nice, but that's been the way of it for the last few features too… it might not get adopted, but if enough people start using it…
My experience with anything declarative is that features are gradually bolted on until it eventually just becomes imperative (and ugly). For example HCL.
I believe declarative should stay purely declarative to describe data/documents in a static state (e.g. HTML, JSON) and an imperative “layer” can be used to generate or regenerate it, if needed.
When we talk about making AI safer, we often slide into paternalistic frames where we dictate outcomes rather than enabling capabilities with appropriate guardrails. The distinction she makes between providing capabilities and forcing functions seems critical.
I'm curious if anyone has explored applying Nussbaum's theory directly to AI development frameworks. What would her capabilities list look like for artificial intelligence? Could this be a more productive framework than current alignment approaches?
Based on your recent post history where you consider some issue X, its negation, and then ask a question with “I’m curious if…”, these seem to be LLM generated. In which case: please don’t post that here.
This is such a strange comment. The parent has used that phrase in a _few_ of their comments, but not all. People sometimes re-use phrases in their speech. Please don't post unsubstantiated accusations here.
Between LLVM's optimization passes, static analysis, and modern LLM-powered tools, couldn't we build systems that not only identify but automatically fix these performance issues? GitHub Copilot already suggests code - why not have "Copilot Performance" that refactors inefficient patterns?
I'm curious if anyone is working on "self-healing" systems where the optimization feedback loop is closed automatically rather than requiring human engineers to parse complex profiling data.
500 years ago, betting on the Pope was punishable by excommunication. Today, crypto-powered prediction markets are placing odds on the next conclave. Have we come full circle, or has technology fundamentally changed the ethics of speculation? Should there be limits to what we can bet on, or is "information price discovery" an absolute good?
Are decentralized prediction markets a net positive for transparency, or are they just incentivizing bad behavior?
I've been thinking about prediction market designs that could preserve information discovery benefits while minimizing harm - maybe through delayed settlement periods, anti-manipulation mechanisms, or separating financial stakes from informational ones.
As web3 and DeFi make these markets more accessible and resistant to regulation, should we be building more guardrails into the protocols themselves? Or is this an unsolvable tension in market design?