Hacker News | mttpgn's comments

There are many computing platforms that Rust devs simply cannot be bothered to support (for just one example, Cygwin). The Python devs, by contrast, have actually put in a lot of work over the years to support a plethora of operating systems and architectures.

Requiring Rust as a CPython dependency would be to abandon all their users stuck in obscure environments.


> The Python devs, by contrast, have actually put in a lot of work over the years to support a plethora of operating systems

This pre-PEP is proposed by core Python devs. The discussion includes a comparison of the supported targets of Python and Rust. The people left worse off are devs external to both groups who've made Python work on unsupported targets (e.g. Gentoo, who are represented in the thread).


I wasn't happy with yet another RIR, but apparently plenty of folks on the core team, including Guido, seem to be up for it.

The proposal is a long way off from a complete rewrite. It would be many years before end users were even compelled to use a Rust-required CPython to have a supported version.

I would rather see more PyPy love, but here we are.

Your site has a search bar for typing a full prompt to an LLM about my current mood, and I just find it interesting that one's mood is the important thing for your users to supply as input to your service. For me, unless a major event has taken place, I usually don't take time to think much about my mood beyond one or two words. If I've been on a journaling kick, I'll usually write about the concrete experiences of the day as a proxy for describing my mood, without actually getting to what this means for my energy levels, affect, etc. The mood descriptors I do recognize in myself (e.g. kinda sad!) generally factor little into my content consumption decisions, at least consciously.

More important to me are questions like "What are folks talking about? (driving discourse online or at the office)", "Which movies have been recommended to me (by friends/family or by advertising)?", and "What's accessible? (on a service I already subscribe to, without needing an additional purchase)".

Your point is excellent and cuts to the core of what we're trying to explore. You're right, "mood" can be a fuzzy, high-friction starting point.

The hypothesis behind the prompt isn't that everyone consciously identifies a mood. It's more that "mood" is a useful shorthand for a complex set of preferences at a given moment. When you think, "I want something mindless and funny after that long meeting," that's a mood proxy. The goal of the open-ended prompt is to capture that full sentence, not just the one-word label.

You've identified the three major discovery engines that dominate today:

- Social Proof ("What are folks talking about?")
- Direct Recommendation ("What was recommended to me?")
- Access & Friction ("What's on my services?")

These are powerful because they require zero cognitive effort from the user. You're reacting to signals. Our experiment is asking: what if you reversed the flow? What if you started with your own internal state, even if vaguely defined as "kinda sad" or "need distraction", and used a model to map that to a title? It's inherently more work, which is its biggest hurdle.

The interesting technical challenge is whether an LLM can act as a translator between your messy, human input ("just finished a complex project, brain fried, want visual spectacle not dialogue") and the structured metadata of a database (genres, pacing, tone, plot keywords). It's not about mood detection; it's about intent parsing. A future iteration might not ask for a mood at all, but simply: "Tell me about your day." The model's job would then be to infer the desired escapism, catharsis, or reinforcement from the narrative. Would that feel more natural, or just more invasive?
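To make the input/output shape concrete: here is a toy sketch of that "intent parsing" step, with a hypothetical keyword table standing in for the LLM. Everything here (the `INTENT_KEYWORDS` table, the tag names, the `parse_intent` function) is invented for illustration, not our actual pipeline.

```python
# Toy sketch of "intent parsing": map messy free-text input to structured
# metadata tags. A real system would use an LLM for the translation; this
# keyword table is a stand-in just to show the input/output contract.
INTENT_KEYWORDS = {
    "brain fried": "low_cognitive_load",
    "mindless": "low_cognitive_load",
    "funny": "comedy",
    "visual spectacle": "spectacle",
    "kinda sad": "comfort",
    "not dialogue": "light_on_dialogue",
}

def parse_intent(prompt: str) -> list[str]:
    """Return sorted, deduplicated tags inferred from a free-text prompt."""
    lowered = prompt.lower()
    return sorted({tag for phrase, tag in INTENT_KEYWORDS.items()
                   if phrase in lowered})

tags = parse_intent("just finished a complex project, brain fried, "
                    "want visual spectacle not dialogue")
print(tags)  # ['light_on_dialogue', 'low_cognitive_load', 'spectacle']
```

The downstream recommender would then match those tags against catalog metadata (genres, pacing, tone); the hard part the LLM would have to solve is exactly the fuzziness this lookup table can't.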

We're early, and you've nailed the key tension. Does discovery work best when it's passive (social/algorithmic feeds) or active (intent-driven search)? The former is easy; the latter might be more satisfying if we can reduce the friction enough. Thanks for giving me a much better way to frame this.


> Pandas code is untestable

The thousand-plus data integrity tests I've written in pandas tell a different story...


The BBC published this article. I agree with "all of literature" being hyperbolic though.


Phillip


I too have a `mkcd` in my .zshrc, but I implemented it slightly differently:

  function mkcd {
    local newdir=$1
    # Quote expansions so paths with spaces work; only cd if mkdir succeeded.
    mkdir -p -- "$newdir" && cd -- "$newdir"
  }


> So I prompted my brain

Is there anyone for whom this phrasing has a clarity advantage over "So I asked myself..."/"So I thought to myself..." ?


The people who reply to every post on the internet with "I asked ChatGPT and it said..."


Ditto


This submission links to the actual paper. The other submission is the NY Times' summary of it.


We still consider it a dupe if it's about the same underlying topic, which it certainly is in this case.

Sometimes we'll update the URL to the canonical source (i.e., the study paper) if the submitted article is just a summary that adds no substance (aka blog spam). But if it's a report from a reputable publication that adds substance, we'll leave that as the URL and link to the study paper from the top text, which I've done now.


This graph would make a great reply the next time we see VCs on X complaining, "Everyone I talk to has so, so many engineering positions they just can't fill." By these measurements, the job market looks worse now than it did at the troughs of lockdown.


Mid-pandemic was great for hiring, as I remember it, but I guess that was probably after the lockdowns.


Ha. Not that anyone lies to VCs or anything.

