Hacker News | benesch's comments

The exact CPU depends on the region/cloud provider, but this Granite Rapids CPU is representative: https://www.intel.com/content/www/us/en/products/sku/240777/...


Thanks!


Often not a dealbreaker, actually! We can spin up new tpuf regions and procure dedicated interconnects to minimize latency to the on-prem network on request (and we have done this).

When you're operating at the 100B scale, you're pushing beyond the capacity that most on-prem setups can handle. Most orgs have no choice but to put a 100B workload into the nearest public cloud. (For smaller workloads, considerations are different, for sure.)


For local dev + testing, we recommend just hitting the production turbopuffer service directly, but with a separate test org/API key: https://turbopuffer.com/docs/testing

Works well for the vast majority of our customers (although we get the very occasional complaint about wanting a dev environment that works offline). The dataset sizes for local dev are usually so small that the cost rounds to free.


> although we get the very occasional complaint about wanting a dev environment that works offline

It's only occasional because the people who care about dev environments that work offline are most likely to just skip you and move on.

For actual developer experience, as well as a number of use cases like customers with security and privacy concerns, being able to host locally is essential.

Fair enough if you don't care about those segments of the market, but don't confuse a small number of people asking about it with a small number of people wanting it.


As someone who works for a competitor, they are probably right to hold off on that segment for a while. Supporting both cloud and local deployments is somewhere between 20% harder and 300% harder depending on the day.

I'm watching them with excitement. We all learn from each other. There's so much to do.


Can confirm. With a setup that works offline, one can

- start small on a laptop. Going through procurement at companies is a pain

- test things in CI reliably. Outages don’t break builds

- transition from laptop scale to web scale easily with the same API with just a different backend

Otherwise it’s really hard to justify not using S3 vectors here

The current dev experience is to start with faiss for PoCs, move to pgvector, and then to something heavy-duty like one of the Lucene wrappers.


Yep, we're well aware of the selection bias effects in product feedback. As we grow we're thinking about how to make our product more accessible to small orgs / hobby projects. Introducing a local dev environment may be part of that.

Note that we already have an in-your-own-VPC offering for large orgs with strict security/privacy/regulatory controls.


That’s not local though


having a local simulator (DynamoDB, Spanner, others) helps me a lot for offline/local development and CI. when a vendor doesn't offer this I often end up mocking it out (one way or another) and have to wait for integration or e2e tests for feedback that could have been shifted further left.

in many CI environments unit tests don't have network access, it's not purely a price consideration.

(not a turbopuffer customer but I have been looking at it)


> in many CI environments unit tests don't have network access, it's not purely a price consideration.

I've never seen a hard block on network access (how do you install packages/pull images?) but I am sympathetic to wanting to enforce that unit tests run quickly by minimizing/eliminating RTT to networked services.

We've considered the possibility of a local simulator before. Let me know if it winds up being a blocker for your use case.


> how do you install packages/pull images

You pre-build the images with packages installed beforehand, then use those images offline.


My point is it's enough of a hassle to set up that I've yet to see that level of restriction in practice (across hundreds of CI systems).


Look into Bazel, a very standard build system used at many large tech companies. It splits fetches from build/test actions and allows blocking network for build/test actions with a single CLI flag. No hassle at all.

The fact that you haven't come across this kind of setup suggests that your hundreds of CI systems are not representative of the industry as a whole.


I agree our sample may not be representative but we try to stay focused on the current and next crop of tpuf customers rather than the software industry as a whole. So far "CI prohibits network access during tests" just hasn't come up as a pain point for any of them, but as I mentioned in another comment [0], we're definitely keeping an open mind about introducing an offline dev experience.

(I am familiar with Bazel, but I'll have to save the war stories for another thread. It's not a build tool we see our particular customers using.)

[0]: https://news.ycombinator.com/item?id=46758156


you pull packages from a trusted package repository, not from the internet. this is not rare in my experience (financial services, security) and will become increasingly common due to software supply chain issues.


I should have clarified, by local dev and testing I did in fact mean offline usage.

Without that it’s unfortunately a non starter


So I can note this down on our roadmap, what's the root of your requirement here? Supporting local dev without internet (airplanes, coffee shops, etc.)? Unit test speed? Something else?


I listed some reasons in another comment: https://news.ycombinator.com/item?id=46757853

I appreciate your responsiveness and open mind


Thanks, appreciate this! Jotted down some notes on our roadmap.


I wish you the best


It’s hard to overstate the amount of service Ian provided to the Go community, and the programming community at large. In addition to gccgo, Ian wrote the gold linker, has blogged prolifically about compiler toolchains, and maintains huge swaths of the gcc codebase [0]. And probably much, much more that I’m not aware of.

I’ve had the pleasure of trading emails with Ian several times over the years. He’s been a real inspiration to me. Amidst whatever his responsibilities and priorities were at Google he always found time to respond to my emails and review my patches, and always with insightful feedback.

I have complicated feelings about the language that is Go, but I feel confident in saying the language will be worse off without Ian involved. The original Go team had strong Bell Labs vibes—a few folks who understood computers inside and out who did it all: an assembler, a linker, two compilers, a language spec, a documentation generator, a build system, and a vast standard library. It has blander, corporate vibes now, as the language has become increasingly important to Google, and standard practices for scaling software projects have kicked in. Such is the natural course of things, I suppose. I suspect this cultural shift is what Ian alluded to in his message, though I am curious about the specific tipping point that led to his decision to leave.

Ian, I hope you take a well-deserved break, and I look forward to following whatever projects you pursue next.

[0]: https://github.com/gcc-mirror/gcc/blob/master/MAINTAINERS


It's very important for both of Go's compiler toolchains to continue working well, for redundancy and feature-design validation purposes. However, I'm genuinely curious -- do people/organizations use gcc-go for some use cases?


GCC Go does not support generics, so it's currently not very useful.


I assume it will follow in gcj's footsteps if no one steps up to maintain it.

GCC has a high bar for having frontends added to the standard distribution, and if there isn't a viable reason to keep them around, they eventually get removed.

What kept gcj around for so many years, after being almost left for dead, was that it was the only frontend project that had unit tests for specific compilation scenarios.

Eventually someone made the effort to migrate those tests and remove gcj.


It has its niche uses, such as compiling Go for lesser-used architectures. It's a bit awkward to not have full language capabilities, but it still feels nicer than writing C/C++.


> GCC Go does not support generics, so it's currently not very useful.

I don't think a single one of the Go programs I use (or have written) use generics. If generics is the only sticking point, then that doesn't seem to be much of a problem at all.


You’re also at the mercy of the libraries you use, no? Which likely makes this an increasingly niche case?


> You’re also at the mercy of the libraries you use, no?

To a certain extent. No one says you must use the (presumably newer) version of a library that uses generics, or even use libraries at all. Although for any non-trivial program, this is probably not how things are going to shake out for you.

> Which likely makes this an increasingly niche case?

This assumes that dependencies in general will on average converge on using generics. If your assertion is that this is the case, I'm going to have to object on the basis that there are a great many libraries out there today that were feature-complete before generics existed and therefore are effectively only receiving bug fix updates, no retrofit of generics in sight. And there is no rule that dictates all new libraries being written _must_ use generics.


I just used them today to sort a list of browser releases by their publication date. They're not universal hammers but sometimes you do encounter something nail shaped that they're great at.


Yes, the three major open table formats are all quite similar.

When AWS launched S3 Tables last month I wrote a blog post with my first impressions: https://meltware.com/2024/12/04/s3-tables

There may be more in-depth comparisons available by now, but it's at least a good starting point for understanding how S3 Tables integrates with Iceberg.


Cool, thank you. It feels like Athena + S3 Tables has the potential to be a very attractive serverless data lakehouse combo.


> I'm sure they'd quickly argue it's wire compatibility, but even then it's a slippery slope and wire compatible is left open to however the person wants to interpret it.

I actually think that they'd argue they intend to close the feature gap for full Postgres semantics over time. Indeed their marketing was a bit wishful, but on Bluesky, Marc Brooker (one of the developers on the project) said they reused the parser, planner, and optimizer from Postgres: https://bsky.app/profile/marcbrooker.bsky.social/post/3lcghj...

That means they actually have a very good shot at approaching reasonably full Postgres compatibility (at a SQL semantics level, not just at the wire protocol level) over time.


> I liked the author's write-up, but as an old programmer take umbrage at the idea that changing your parser in the middle of a program is "crazy", we used to do this... well maybe not all the time... but with a greater frequency than we do today.

I think Justin addresses that point, though! He writes:

> The development of programming languages over the past few decades has been, at least in part, a debate on how best to allow users to express ways of building new functionality out of the semantics that the language provides: functions, generics, modules.

And indeed by modern PL standards patching the parser at runtime is very unusual.

The "modern" language that I've worked in that comes closest is Ruby, since the combination of monkey patching and the lack of symbols in the function call syntax is well suited to constructing DSLs. But most teams I've worked with that use Ruby eventually developed a strict "no monkey patching" rule, based on lived experience. At scale allowing developers to invent DSLs on the fly via monkey patching made the programs as a whole too complicated to reason about—too hard to move between modules in the codebase if every module essentially had its own syntax that needed to be learned.

I suppose describing this as "dark, demonic pathways" is a bit overstated for comedic effect but indeed "change the language syntax at runtime" does seem to be generally accepted these days as a bad software engineering practice. Works fine at a small scale, but doesn't age well as a team and codebase grows.


Yes! I’m actively working on it, in fact. We’re waiting on the next release of the Rust `object_store` crate, which will bring support for S3’s native conditional puts.

If you want to follow along: https://github.com/slatedb/slatedb/issues/164


> Anecdotally I have had to do this in js a few times. I have never had to do this in Rust. Probably because Rust projects are likely to ship with fewer bugs.

Still anecdotal, but I have worked on a large Rust codebase (Materialize) for six years, worked professionally in JavaScript before that, and I definitely wouldn’t say that Rust projects have fewer bugs than JavaScript projects. Rust projects have plenty of bugs. Just not memory safety bugs—but then you don’t have those in JavaScript either. And with the advent of TypeScript, many JS projects now have all the correctness benefits of using a language with a powerful type system.

We’ve forked dozens of Rust libraries over the years to fix bugs and add missing features. And I’m know individual Materialize developers have had to patch log lines into our dependencies while debugging locally many a time—no record of that makes it into the commit log, though.


It could be that I just haven't written enough Rust to encounter this issue. Thanks for the insight!


> It would be so much better if this were a Postgres extension instead.

I've thought about this counterfactual a lot. (I'm a big part of the reason that Materialize was not built as a PostgreSQL extension.) There are two major technical reasons that we decided to build Materialize as a standalone product:

1. Determinism. For IVM to be correct, computations must be strictly deterministic. PostgreSQL is full of nondeterministic functions: things like random(), gen_random_uuid(), pg_cancel_backend(), etc. You can see the whole list with `SELECT * FROM pg_proc WHERE provolatile <> 'i'`. And that's just scratching the surface. Query execution makes a number of arbitrary decisions (e.g., ordering or not) that can cause nondeterminism in results. Building an IVM extension within PostgreSQL would require hunting down every one of these nondeterministic moments and forcing determinism on them—a very long game of whack-a-mole.

2. Scale. PostgreSQL is fundamentally a single node system. But much of the reason you need to reach for Materialize is because your computation is exceeding the limit of what a single machine can handle. If Materialize were a PostgreSQL extension, IVM would be competing for resources (CPU, memory, disk, network) with the main OLTP engine. But since Materialize is a standalone system, you get to offload all that expensive IVM work to a dedicated cluster of machines, leaving your main PostgreSQL server free to spend all of its cycles on what it's uniquely good at: transaction concurrency control.

So while the decision to build Materialize as a separate system means there's a bit more friction to getting started, it also means that you don't need to have a plan for what happens when you exceed the limits of a single machine. You just scale up your Materialize cluster to distribute your workload across multiple machines.

One cool thing we're investigating is exposing Materialize via a PostgreSQL foreign data wrapper [0]. Your ops/data teams would still be managing two separate systems, but downstream consumers could be entirely oblivious to the existence of Materialize—they'd just query tables/views in PostgreSQL like normal, and some of those would be transparently served by Materialize under the hood.

[0]: https://www.postgresql.org/docs/current/postgres-fdw.html

