Hacker News | dgrabla's comments

In Germany, PayPal supports NFC payments. It works on GrapheneOS.

Taskwarrior is perfect. Give it a try.

  > t stats | grep -E 'Oldest|Total'
  Total                      7675
  Oldest task                2014-07-24


Back in the '90s, we joked about putting “the internet” on a floppy disk. It’s kind of possible now.


Yeah, those guys managed to steal the internet.


If MCP gets used this way, I see big trouble when people hardcode stuff and the provider later updates its endpoints. MCP does not have versions; the tool list is a living document, and you are supposed to fetch and read the current version each time. An AI would be totally fine with that, because it can reason about the change and adapt, but the hardcoded app is going to break badly.
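A toy sketch of the failure mode (the tool names and dict shapes here are made up for illustration; the only real MCP detail assumed is that clients discover tools dynamically via a `tools/list` call rather than a versioned contract):

```python
# Simulate a provider renaming a tool between releases.

def server_v1():
    # Hypothetical tool list as returned by an MCP-style tools/list call.
    return {"tools": [{"name": "get_weather", "description": "Current weather"}]}

def server_v2():
    # Provider update: the tool was renamed. No version bump, no warning.
    return {"tools": [{"name": "fetch_weather", "description": "Current weather"}]}

def hardcoded_client(server):
    # Breaks the moment the provider renames the tool.
    names = [t["name"] for t in server()["tools"]]
    return "get_weather" in names

def adaptive_client(server, wanted="weather"):
    # Re-reads the live tool list and matches on capability rather than
    # exact name -- roughly what an LLM does when it reasons over the
    # current descriptions instead of a pinned contract.
    for t in server()["tools"]:
        if wanted in t["name"] or wanted in t["description"].lower():
            return t["name"]
    return None

print(hardcoded_client(server_v1))  # True
print(hardcoded_client(server_v2))  # False -- the hardcoded app just broke
print(adaptive_client(server_v2))   # 'fetch_weather' -- adapts
```

The hardcoded path fails silently on the provider update; the adaptive path keeps working because it treats the tool list as the source of truth.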


Exactly my thoughts after reading the article. I am surprised that so few have pointed this out, because it entirely invalidates the article's conclusion for any serious usage. To stay with the USB-C analogy: it's like plugging a toaster into a monitor, except the toaster changes its communication protocol every time it gets reconnected.


Indeed, I have a Ploopy 1 in a drawer because it gives me strong RSI in the thumb.



Great breakdown! The "own your own AI" at home is a terrific hobby if you like to tinker, but you are going to spend a ton of time and money on hardware that will be underutilized most of the time. If you want to go nuts, check out Mitko Vasilev's dream machine. It makes no sense unless you have a very clear use case that only requires small models or tolerates really slow token generation speeds.

If the goal, however, is not to tinker but to actually build and learn AI, it is going to be financially better to rent those GPUs/TPUs as the need arises.


Any M-series Mac is "good enough" for home LLMs. Just grab LM Studio and a model that fits in memory.

Yes, it will not rival OpenAI, but it's 100% local with no monthly fees and, depending on the model, no censoring or limits on what you can do with it.
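The "fits in memory" part can be estimated with back-of-envelope arithmetic. A rough sketch (the 1.2x overhead factor for KV cache and runtime is my assumption, not an official figure):

```python
def model_footprint_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Approximate RAM needed to run a quantized model:
    parameters * bytes-per-weight * runtime overhead."""
    return params_billion * (bits_per_weight / 8) * overhead

# e.g. an 8B-parameter model at 4-bit quantization:
print(round(model_footprint_gb(8), 1))  # ~4.8 GB -> comfortably fits a 16 GB Mac

# the same model at 16-bit (unquantized) weights:
print(round(model_footprint_gb(8, bits_per_weight=16), 1))  # ~19.2 GB -> does not
```

This is why quantized 7B-8B models are the usual starting point on consumer Apple Silicon.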


For what purpose? I'm asking this as someone who threw in one of the cheap $500 Nvidia cards with 16 GB of VRAM, and I'm already overwhelmed by what I can do with Ollama, Krita+ComfyUI, etc.


> spend a ton of time and money

Not necessarily. For non-professional purposes, I've spent zero dollars (no additional memory or GPU), and I'm running a local language model that's good enough to help with many kinds of tasks, including writing, coding, and translation.

It's a personal, private, budget AI that requires no network connection or third-party servers.


On what hardware (and how much did you spend on it)?


This is correct. The cost makes no sense outside of hobby and interest; you're far better off renting. I think there is some merit to having a local inference server if you're doing development: you can manage models and have a little more control over your infra, which are the main benefits.


Terrific hobby? Sign me up!


It all depends on your location and the filament. I had PLA from 2018 that printed flawlessly last year, and I had new, sealed PLA and PETG spools that were brittle or printed a stringy mess unless they spent a night in the dryer. The problem is that there is so much variety and no way for the consumer to know the composition of each filament. It does not seem to have much to do with price. This inconsistency across manufacturers, filament types, colors, and batches is why people have favorite brands they stick with to minimize the risk.


I have Bambu, Qidi, and Creality printers. Qidi is a good compromise between openness and print quality out of the box. My Q1 Pro is easy to hack, but I have not done anything to it because it prints pretty much as well as the Bambu.


It seems to be an elusive fruit in America. It is, however, a very common fruit in Spain (chirimoya), and you will find it in all supermarkets; it is not particularly expensive. Orchards are in southern Spain and the Canary Islands.


Chirimoya also originated in the Americas but is a very different fruit:
https://en.wikipedia.org/wiki/Cherimoya
https://en.wikipedia.org/wiki/Asimina_triloba

Compare, for example, the skin and cross section in photos.

Also compare ranges: chirimoya is tropical, pawpaw is temperate.


They are both in the same genus.


Same family, different genus (Annona and Asimina). They can't be grafted together: grafts are compatible at first, but they can't survive long term.

Cherimoya (and atemoya, etc.) are often placed among the royalty of fruits. If they are ripe, they are really good. Overripe, not so much: they develop an acrid aftertaste that is a clear warning.

