Hacker News | tosh's comments

Which LLM is currently best at driving DuckDB?

DuckDB speaks a Postgres-flavored SQL dialect, and most coding LLMs have been trained on plenty of that.

Of the small models I tested, Qwen 3.5 is the clear winner. Going to larger LLMs, Sonnet and Opus lead the charts.


It used to be possible to type immediately while the page is loading and have all key presses end up in the input field.

Why run this check before the user can type?

Why not run it later, e.g. right before the message gets sent to the server?


I would argue it is an anti-pattern that irritates the core audience they want to reach with Claude Code.

Let's say I have a bunch of objects (e.g. parquet) in R2, can the agent mount them? Or how do I best give the agent access to the objects? HTTP w/ signed urls? Injecting the credentials?


Dynamic Workers don't have a built-in filesystem, but you can give them access to one.

What you would do is give the Worker a TypeScript RPC interface that lets it read the files -- which you implement in your own Worker. To give it fast access, you might consider using a Durable Object. Download the data into the Durable Object's local SQLite database, then create an RPC interface to that, and pass it off to the Dynamic Worker running on the same machine.
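A rough sketch of what that RPC shape could look like. The names here (`FileStore`, `InMemoryFileStore`, `summarize`) are illustrative, not a real Cloudflare API; the in-memory map stands in for the Durable Object's local SQLite cache of objects downloaded from R2, and the Dynamic Worker's code would only ever see the interface:

```typescript
// Interface the Dynamic Worker would receive over RPC (hypothetical shape).
interface FileStore {
  list(): Promise<string[]>;
  read(path: string): Promise<Uint8Array | null>;
}

// In-memory stand-in for the Durable Object's local SQLite cache
// of objects pulled down from R2.
class InMemoryFileStore implements FileStore {
  private files = new Map<string, Uint8Array>();

  put(path: string, data: Uint8Array): void {
    this.files.set(path, data);
  }

  async list(): Promise<string[]> {
    return [...this.files.keys()];
  }

  async read(path: string): Promise<Uint8Array | null> {
    return this.files.get(path) ?? null;
  }
}

// Code running in the Dynamic Worker only depends on the interface,
// not on where the bytes actually live.
async function summarize(store: FileStore): Promise<string> {
  const names = await store.list();
  return `${names.length} file(s): ${names.join(", ")}`;
}
```

The point of the indirection is that the host Worker can later swap the in-memory store for a Durable-Object-backed one without the Dynamic Worker's code changing.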

See also this experimental package from Sunil that's exploring what the Dynamic Worker equivalent of a shell and a filesystem might be:

https://www.npmjs.com/package/@cloudflare/shell


You don't have to throw a chef's knife away when it becomes dull, you just sharpen it.


At first I was trying to figure out why the parent comment was getting downvoted, then I read the last line. Yeesh, ya, you don't need to "learn" to sharpen, just get one of those pull-throughs. There is a minuscule learning curve with it. It doesn't do the best sharpening job, but as a particular YouTuber once said: "The best sharpener is the one you will use."


People don't want to do it and they don't want to learn to do it. It's easier for them to buy a new knife. They're not expensive. Maybe keep the old one for garage stuff and gardening.


I have one of these for travel: https://store.177milkstreet.com/products/suehiro-for-milk-st...

All you have to do is run the knife through it a few times for a decent sharpen. No power, no effort, no skill required.


A new knife might not be expensive, but it's a new thing that has to be produced, and packaged, and shipped, and stored, and so on. Just keep your old stuff in shape, people.


For the TPC-DS results, it would also have been nice to see how the MacBook Neo compares to the AWS instances.

Or am I missing something?


Indeed, it would have been interesting, but I really wanted to get the blog post out on the launch day of the MacBook Neo and did not have the bandwidth to run additional cloud experiments.

I ran TPC-DS SF300 now on the c6a.4xlarge. It turns out it's still quite limited by the EBS disk's IO: while 32 GB of memory is much more than 8 GB, DuckDB still needs to spill to disk a lot, and this shows in the runtimes. Running all 99 queries took 37 minutes, so about half of the MacBook's 79 minutes.

> Command being timed: "duckdb tpcds-sf300.db -f bench.sql"

> Percent of CPU this job got: 250%

> Elapsed (wall clock) time (h:mm:ss or m:ss): 37:00.96

> Maximum resident set size (kbytes): 25559652


ty for the follow up!


a bit less capable but ~comparable to qwen 3.5 122b

~ 2x faster inference than qwen 3.5 122b

~ 7x faster inference than gpt-oss 120b

probably most important: training datasets and training recipe available (!)

in other words this is an open source llm release (not just open weights!)


Oh. That's nice. Thanks for sharing this in the comments.


aged very well


Tony Hoare on how he came up with Quicksort:

he read the Algol 60 report (Naur, McCarthy, Perlis, …)

and that described "recursion"

=> aaah!

https://www.youtube.com/watch?v=pJgKYn0lcno
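For anyone who hasn't seen it: the whole trick is that once a language lets a procedure call itself, the algorithm falls out in a few lines. A minimal recursive sketch (illustrative; not Hoare's original in-place partition scheme):

```typescript
// Recursive quicksort: pick a pivot, split the rest into smaller and
// larger elements, sort each half by the same procedure.
function quicksort(xs: number[]): number[] {
  if (xs.length <= 1) return xs; // base case: nothing to sort
  const [pivot, ...rest] = xs;
  const smaller = rest.filter((x) => x < pivot);
  const larger = rest.filter((x) => x >= pivot);
  // Recur on both partitions, then stitch them around the pivot.
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}
```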


The SSD is 2x faster at reads and writes.

