Agreed! The one that I really don't like is that social platforms promote / prefer screenshots of text. Search engines promote sites that link to themselves. All the good parts of URLs are missing. So often I see something interesting, only to realize it's a screenshot and I have to go dig around myself to figure out where it came from.

While some people (my friends included) are out there paying $200 a month to OpenAI and Anthropic, I’d just like to share that if you need to save some money, now is the time to cash in on the high valuations and free tiers that all the major LLM providers offer.

I think most people on the $200 tiers could get 90% of what they want from a cheaper tier.

When I’ve talked to friends about this, they’re ‘sure’ they’re maxing it out or using it to its fullest, but I have a sneaking suspicion that if they were to try a cheaper / free tier setup they would probably be mostly fine.

So, if you have the money and enjoy it, continue on, but if you’ve been looking for a way to save $180-$200 a month, try the free tiers; they’re really just as good.


Not really. Paid access gets you more tokens per month on better models. Some models are actually better at coding complex tasks than others. And without tokens, you can't submit anything at all.

This is why folks making software products that sell or are expected to sell are willing to pay the monthly fee. It's peanuts versus the value of the code we're getting.

The modern tools have a pay-to-play interface and you can see your actual cost in dollars and cents for each prompt. If you don't have enough budget in your plan (or pay extra) you simply can't submit such a prompt.

Disclosure: I spent over $100 on Cursor this month and will likely go to the $200 plan next month.


I've seen an interesting politically motivated one. It didn't appear to be a bot, just a user from China:

https://github.com/umami-software/umami/pull/3678

The goal is "Taiwan" -> "Taiwan, Province of China", but via the premise of updating to the UN-derived ISO 3166 standard, which of course does not allow "Taiwan".

The comment after was interesting with how reasonable it sounds: "This is the technical specification of the ISO 3166-1 international standard, just like we follow other ISO standards. As an open-source project, it follows international technical standards to ensure data interoperability and professionalism."

The political intent of the PR was masked. Luckily, it was still a bit ham-fisted: the PR incorrectly changed many things, and the user stated their political intention in the original PR (the quote above is from a later comment).


Doubly interesting (and relevant to this discussion) is that it was an AI code review tool that detected the issues with the PR.

One of these days I need to make a bot that scans FOSS repos for this kind of little pink nonsense behavior.
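A minimal sketch of what that bot could look like, using GitHub's public search API (the endpoint is real; the query string is my own illustrative guess, and unauthenticated requests are heavily rate-limited):

    import requests

    # Illustrative sketch: find public PRs pushing the
    # "Taiwan, Province of China" rename. The query is a rough
    # starting point, not a tuned filter.
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": '"Taiwan, Province of China" type:pr', "per_page": 20},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["items"]:
        print(item["html_url"], "-", item["title"])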

The insecurity of wanting to call a place "country name, province of different country name" should alone be mocked. Imagine, "Ukraine, province of Russia," or "India, colony of The United Kingdom." Absurd on its face.


It's just information warfare, sign of the times.

Every little thing counts, even if it's just changing names in an open source app like that.


The problem with this is that, for some folks, it's not absurd or nonsense, because to them that is not a "country name"; it's a province name. So the inverse (calling a province a country) is what's considered absurd/nonsense.

Yeah, I agree. I consider them like public keys or IPs.

This has been slowly growing as a topic in the back of my mind lately. The past couple of days I've had some awkward interactions with ChatGPT that made me realize it can remember conversations from last year, and certainly everything is logged.

I hope that local models will become efficient enough that more private LLMs become available without heavy GPUs, but that seems a long way off still.


Honestly, I barely care which model I am using and switch between them all, usually in a 'this is terrible' to 'this is amazing' and back cycle.

What I definitely do care about is speed and efficiency. I recently canceled Copilot to go back to Cursor; it's just so much faster for inline code completion.

When I do have something difficult, I open four browser tabs and copy-paste a big long prompt into the free versions of the top models so I can take my time reasoning through their answers.

I use agents when I have a basic task whose output I can easily judge in code review.


Mind-blowing that they couldn't get this to work. It's struck me lately that the models don't seem to matter anymore; they're all equally good.

The UX and integration with regular phone features is what makes the tool shine, and by now there should be plenty of open-source models and know-how for Apple to create its own.

What is Google offering that Apple can't figure out on their own?

Maybe people don't use personal assistant AI enough to justify the investment? My phone has probably 6 or 7 AI tools with talking features that I never explore.


The LLM business is not a one-shot "figure it out, then collect some easy money" affair; it's constant work and expense just for the LLM functionality. So if Apple analyzed this and decided they would rather rent such capability, it seems quite logical. Google also already has ties to Apple; they may even strike a deal where search on iOS is bartered (maybe partially) for Gemini service. Win-win. And Google is not going out of business any time soon, so it's more reliable than any pure-LLM corporation.

Another, less likely possibility is that Apple may be reluctant to steal enough data to train its own LLM to a competitive level and then continue doing so in perpetuity. They have this notion that they are the privacy-oriented FAANG company and may want to keep up that image.

Maybe it is the sum total of a lot of factors, which in the end tilted the decision toward a rental model.


I don't know; Gemini 2.5 has been the only model that doesn't consistently make fundamental mistakes with my project as I've been working with it over the last year. Claude 3.7, 4.0, and 4.5 are not nearly as good. I gave up on ChatGPT a couple years ago, so I have no idea how they perform now. They were bad when I quit using it.

Do you find that Gemini results are slightly different when you ask the same question multiple times? I found it to have the least consistently reproducible results compared to others I was trying to use.

Sometimes it will alternate between different design patterns for implementing the same feature across different generations.

If it gets the answer wrong and I notice it, often just regenerating will get past it rather than having to reformulate my prompt.

So, I'd say yeah... it's consistent in the general direction or understanding, but not so much in the details. Adjusting the temperature does help with that, but I often just leave it at the default regardless.
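For what it's worth, here's a minimal sketch of pinning the temperature down with the google-generativeai Python SDK (the API key, model id, and prompt are placeholders; temperature 0 is more repeatable but still not guaranteed bit-identical across runs):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id

    # temperature=0.0 pushes decoding toward greedy sampling, which
    # reduces run-to-run variation in details like design patterns.
    resp = model.generate_content(
        "Implement the same feature as before, using the same pattern.",
        generation_config=genai.GenerationConfig(temperature=0.0),
    )
    print(resp.text)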


I use all of them about equally, and I don't really want to argue the point, as I've had this conversation with friends, and it really feels like it is becoming more about brand affiliation and preference. At the end of the day, they're random text generators and asking the same question with different seeds gives different results, and they're all mostly good.

Google still gets to maintain its monopoly control if "the downloads originate from linkouts from apps installed/updated by Google Play".

MinIO has moved away from having a free community fork, and I think its base cost is close to $100k a year. I've been using Garage and have been happy, but I'm a single dev operating orders of magnitude smaller than the OP, so there are certainly edge cases I'm missing when comparing the two.


I'm a fellow new Garage user. I have had a great time so far - but I also don't need much. My use case is sharing data analysis results with a small team. I wanted something simple to manage that can provide an S3-like interface to work with off-the-shelf data analysis tools.
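For anyone curious, Garage speaks the S3 API, so standard clients work once pointed at your own endpoint. A minimal boto3 sketch (the endpoint, credentials, and bucket names here are placeholders; "garage" is Garage's default region name):

    import boto3

    # Point a standard S3 client at a self-hosted Garage endpoint.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://garage.example.com",  # placeholder endpoint
        aws_access_key_id="GK_PLACEHOLDER",
        aws_secret_access_key="SECRET_PLACEHOLDER",
        region_name="garage",  # Garage's default region
    )

    # Share a result file with the team, then list the bucket contents.
    s3.upload_file("results.parquet", "analysis", "2024/results.parquet")
    for obj in s3.list_objects_v2(Bucket="analysis").get("Contents", []):
        print(obj["Key"], obj["Size"])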


I'd agree. Certainly mentioning that information came from an LLM is important so people know to discount or manage it. It's possibly incorrect but still useful as an averaged answer from some parts of the internet.

Certainly citing GPT is better than just assuming it's right and asserting it without a citation.

