No, but it's pretty common IME to create an Atlas cluster with internet-wide access (0.0.0.0/0) while testing and forget to turn it off. According to https://jira.mongodb.org/browse/SERVER-115508, this affects unauthenticated ops. Based on the repro code itself, it looks like this happens well before authentication is checked for the corresponding op, at the OP_MSG decoding level.

So if you're using Atlas, check that your cluster has already auto-upgraded. If you're using 0.0.0.0/0, stop doing that and prefer a limited IP address range, or better yet, use VPC Peering or other security/network boundary features.
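If you want to sanity-check what your cluster is actually running, here's a rough PyMongo sketch (the connection string is a placeholder; compare the version against whatever the advisory lists as fixed):

    from pymongo import MongoClient

    # Placeholder URI; substitute your own Atlas connection string.
    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net/")
    print(client.server_info()["version"])  # check this against the patched releases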


We received communication that all Atlas clusters were upgraded with the fix before the vulnerability was announced.


This is a good example of a benefit of MongoDB's certificate-based authentication option, because a client has to at least present a valid certificate before it can transmit any data.
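Roughly what the client side looks like with PyMongo (hostname and file paths are placeholders; the server still has to be configured for TLS, with an $external user matching the cert subject):

    from pymongo import MongoClient

    # Placeholders: swap in your own host, CA bundle, and client cert/key PEM.
    client = MongoClient(
        "mongodb://db.example.com:27017/",
        tls=True,
        tlsCAFile="/etc/ssl/ca.pem",                  # CA used to verify the server (and sign client certs)
        tlsCertificateKeyFile="/etc/ssl/client.pem",  # client certificate + private key
        authMechanism="MONGODB-X509",
        authSource="$external",
    )
    print(client.admin.command("ping"))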


> No, but it's pretty common IME to create an Atlas cluster that has internet-wide access (0.0.0.0/0) when testing and forgetting to turn this off.

That is a ridiculous default.


Time to get off for good. We're moving to https://forgejo.org/. Between the downtime and this, screw them.


I think fixtures generally work fine. If a change to one breaks many tests, introduce a new fixture and start using that instead. I also think it's okay to tweak a fixture by hand inside a test; that's distinct from wanting factories, and needing factories only in test code feels like a waste.
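Roughly what I mean, as a pytest sketch (cancel() and the order shape are made up for illustration):

    import pytest

    def cancel(order):
        # Stand-in for the real code under test.
        if order["status"] != "pending":
            return False
        order["status"] = "cancelled"
        return True

    @pytest.fixture
    def order():
        # Shared baseline fixture; most tests use it untouched.
        return {"id": 1, "status": "pending", "items": ["widget"]}

    def test_cancel_pending_order(order):
        assert cancel(order) is True

    def test_cannot_cancel_shipped_order(order):
        # Tweak only the field this test cares about instead of
        # reaching for a factory.
        order["status"] = "shipped"
        assert cancel(order) is False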

100% agree with "Test only what you want to test".


This sounds promising. Keep us posted! If there's anywhere we can track progress, please link :)


I was getting frustrated trying to figure out how to get a wheel and a pex built in my work's uv monorepo/workspace; I'm used to how easy it is to build a Go binary for any platform. Anyway, once I figured it out, I figured I should publish my cleaned-up findings. This is great for deploying self-contained Python executables and not needing to worry about any packaging nonsense at runtime.


The end of the post disturbs me. I find it hard to believe this writing is AI. I don't want my future to be reading articles written by AI that I mistake for being human-written.


I'm not the target audience, but the GitHub page and the website's getting-started page feel so poorly explained. What the hell is a schedule?


This MIT article covers it a bit more (with a slightly too generic title): "High-performance computing, with much less code" https://news.mit.edu/2025/high-performance-computing-with-mu... (discussed at https://news.ycombinator.com/item?id=43357091)


The word "schedule" is already taken for thread scheduling in the kernel, so reusing it is confusing. This is a code generator that operates on nested loops: it lets you reorder and split them, replace instructions, etc., all to maximize performance.
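A toy illustration of the idea in plain Python (not Exo's actual API, just the concept): the algorithm stays the same matrix multiply, and the schedule is which loop nest you run it as.

    def matmul_naive(A, B, C, n):
        # Algorithm: C += A @ B, written as the obvious triple loop.
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]

    def matmul_scheduled(A, B, C, n, tile=4):
        # One possible schedule: split i and j into tiles and reorder the
        # loops for better locality. The arithmetic is unchanged.
        for ii in range(0, n, tile):
            for jj in range(0, n, tile):
                for k in range(n):
                    for i in range(ii, min(ii + tile, n)):
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]

A scheduling tool's job is to let you derive the second form from the first mechanically, with some check that both compute the same thing.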


A schedule is the order in which machine instructions get executed.

So, I've done this professionally (written assembler code, and then scheduled it manually to improve performance). Normally you don't need to do that these days, as even mobile CPUs use out-of-order cores which dynamically schedule at runtime.

It's only going to be useful if you're writing code for some machine that doesn't do that (they give TPUs etc. as examples).


> out-of-order cores which dynamically schedule at runtime.

OOO architectures don't reschedule dynamically (that's impossible); they just have multiple instruction buffers that can issue instructions. So scheduling is still important for OOO; it's just at the level of the data-dependence graph (DDG) rather than the literal linear order in the binary.

Edit: just want to emphasize

> It's only going to be useful if you're writing code for some machine that doesn't do that

There is no architecture for which instruction scheduling isn't crucial.


> There is no architecture for which instruction scheduling isn't crucial.

In my experience doing back-end compiler work, it's definitely last on the list of major concerns. Obviously you can't ignore it, but it's not where any significant gains are coming from, and anything 'fancy' you do there is likely to be picked up by future generations of hardware.


> but it's not where any significant gains are coming from

I have no clue what you're saying here. If your scheduler fucks up your pipelining you're gonna have a bad time (conversely, if your scheduler recognizes pipelining opportunities you're gonna have a good time). As usual, anyone who says something like "in my experience it's not a big deal" simply does not have enough experience.


Yes, clearly we should have spent more time trying to coax our CPUs into fucking up pipelining.


If you're talking about modifying the DDG, I would not call that scheduling, because then you need to do serious work to prove that your code is actually doing the same thing. But I haven't spent a lot of time in the compiler world, so maybe they do call it that. Perhaps you could give your definition?


Needing to do serious work would correlate with Exo having multiple publications about its design. It's a broader sense of scheduling than "reorder trivially-independent instructions", but if you remove the "trivially-" part and replace "instructions" with "operations", it's the same concept of transformations that only move around when certain things happen.



Agreed. It looks like, if you need to optimize, it would be much easier to just modify the code directly. The result will also be more readable and therefore easier to support in the future.


As a candidate, I would drop out of the interview process if I knew this was being used. This level of mistrust doesn't bode well for my future employer. As an employer, I don't care if someone cheats. In the context of programming and software engineering, I want to know that they can explain the solution to the problem well and thoughtfully answer my questions. At the end of the day, we all cheat on the job!


I completely understand your perspective. The goal of Lyra isn’t to create an atmosphere of mistrust but to ensure a fair evaluation process. We’re not targeting code-related cheating, as our AI focuses only on verbal responses rather than programming tasks.

The challenge we’re addressing is the rise of AI-powered tools that feed candidates real-time answers during interviews, making it difficult to assess their actual understanding and communication skills. This goes beyond simple preparation—it allows candidates to present knowledge they don’t actually have, leading to mismatches between hiring expectations and on-the-job performance.


I'd suggest starting by reducing the amount of information you take in daily. Where possible, try to subscribe to newsletters with weekly digests on the topics you care about. With politics, you can check the news just once a week and not miss very much day to day.


Thanks for this. Yeah, I'm trying to do weekly digests (or daily if it's straight to the point). I feel like visiting some platforms creates the stress right away, with the bombardment of info. But I was also referring to this from a solopreneurship perspective. As in, if you're FOMOing about building a cool thing, or not using the latest tech, etc. Cheers


Upgrading from Qt 4 to 5 broke appending QStrings to QByteArrays such that only half the data from a QString got stored (some wonkiness with UTF-8 vs. UTF-16, IIRC). It took a rewrite of the RTMP/AMF layer in the codebase to figure out.

