
I humbly request, if you are going to do this, please, please...use the 418 response. It deserves wider adoption :-)


Bit of a pet peeve: 418 is clearly defined as "I am a teapot", not "whatever I want it to mean".

Please do not use it for anything other than its specified purpose, even if it is a joke.


If an LLM can hallucinate an endpoint, then the server is allowed to hallucinate being a teapot :)



Is one being a little precious about one being a teapot!?


Are you a teapot? If you were, maybe you'd be precious about people falsely claiming to be one too!


It's really more about how when I say "I am a teapot", I want people to think "Oh, he's a teapot!" and not "He might be a teapot, or he might be chiding me for misusing llms or he might be signaling that the monkey is out of bananas or [...]"


I agree, though that means I really should stop returning my 404 - Server Unavailable response, but you'll never have my 500 - OK.


What would be an appropriate response code for "He might be a teapot, or he might be chiding me for misusing llms or he might be signaling that the monkey is out of bananas or [...]"?


Each of those should have a clear, unique response code. There should be no "maybe it's this, maybe it's that". A real-world example is login forms that tell you something like "Invalid e-mail or password".

Are you joking around with me or is my point just not as obvious as I believed it to be?

Edit: Not sure if that last bit sounds confrontational, please know that it's a genuine question.


So we've gone down a bit of a path here, and that's cool :-)

Thank you for taking the time to respond and ask. My original 418 message was very much intended as a light-hearted joke, in the spirit of "if we wanted to return cheeky responses to previously nonsense APIs that AI invented". I actually like this idea of subverting AI in inventive ways.

Now, to the point we've got to here: yes, I 100% agree that in real-world, production applications you should return response codes which accurately represent the actual response. But there's also a place for fun, even in production, and 418 represents that for me.

Thanks for caring about quality :-)


I think it's a good representation of a hallucination.


(on that note, I'm putting the kettle on :)


These starter kits are great. I'm a complete electronics newbie who was always interested, but I found the sheer choice of equipment on offer, and the fear of buying a bunch of kit that wasn't compatible, to be a barrier to getting started.

I came across a kit for the Micro:bit which I purchased as a Christmas gift for my young daughter. It's really captured that delight in working with technology for me again. Even starting with the LED "Hello World" examples, as described in the post here, led (haha, what a pun) me down a rabbit hole when I noticed blue lights were flickering, while red ones were fine. I thought it was a defective LED, but it turns out power requirements vary depending on wavelength of light being generated.

I never would have considered that in a million years, but then of course you get deeper into the physics of all this, and it's just fascinating. All thanks to a kids electronics starter kit.

I've purchased a few other bits and bobs now, and discovered simulators so you can build out your breadboard circuits without fear of frying components (luckily the kits include a few LEDs, as I learnt the hard way!). I'm now trying to build a magic wand for my daughter to control the house smart lights with gestures, as she's just got into Harry Potter. I love how there's a whole hobby community around this stuff too, and the basic websites with datasheets and descriptions of the various gizmos and archaic "warnings". It reminds me of learning 3D graphics development back in the day, when OpenGL was the go-to, and building things up from the math concepts without layer upon layer of abstractions and opinions getting in the way.


> but it turns out power requirements vary depending on wavelength of light being generated.

[pedantic]

It's actually because of the different forward voltages of the blue LED vs red, not the overall power

[/pedantic]

To be clear, I knew what you were getting at, but I made that comment because it's useful to understand that LEDs are primarily current-controlled devices, not voltage-controlled. Had you driven both LEDs with, e.g., a 10mA constant-current driver, they would both be solidly visible.

For a regular indicator LED, this isn't really an issue (other than that a too-low voltage will cause the flickering you observed), but for high-power illumination LEDs, and especially laser diodes, current management can be the difference between reliable operation and letting the smoke out.
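
If it helps make that concrete, here's a rough back-of-the-envelope sketch. These are my numbers, typical datasheet values rather than anything measured from that kit:

    # Sizing a series resistor with Ohm's law: R = (Vsupply - Vf) / I
    # Typical placeholder values only -- check the datasheet for the actual LED.
    V_SUPPLY = 3.3    # e.g. a 3.3V GPIO pin
    VF_RED = 2.0      # typical red LED forward voltage
    VF_BLUE = 3.0     # typical blue LED forward voltage (shorter wavelength -> higher Vf)
    I_TARGET = 0.010  # aim for 10 mA through the LED

    def series_resistor(v_supply, v_forward, i_target):
        headroom = v_supply - v_forward   # voltage left across the resistor
        if headroom <= 0:
            return None                   # can't reach the target current at all
        return headroom / i_target

    print(series_resistor(V_SUPPLY, VF_RED, I_TARGET))   # ~130 ohms, with ~1.3V of headroom
    print(series_resistor(V_SUPPLY, VF_BLUE, I_TARGET))  # ~30 ohms, with only ~0.3V of headroom

With only ~0.3V of headroom on the blue LED, small dips in the supply swing the current (and brightness) a lot, which is one plausible way to get the flicker described above; a constant-current driver removes that dependency.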


Thank you. I appreciate the correction. I knew I was in murky waters with that bit as it was a while ago and I don't have the best memory :-)


Are there any particular simulators you would recommend?


Wokwi seems to be the most popular Arduino-based one.

I'm not a huge fan of simulation for cases like this. The Arduino forums are full of "it worked in simulation, but not on my breadboard..." complaints.

Also, simulation won't teach you not to connect a 10ohm resistor across a 12V supply and then touch it. Or that capacitors may explode when connected in reverse polarity, or just how to be careful in general. There's a lot of stuff that should become second nature, but you'll only learn by connecting physical circuits.
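
For context, the back-of-the-envelope numbers behind that warning, using nominal values:

    # P = V^2 / R for a resistor connected straight across the supply (nominal values only)
    v_supply = 12.0   # volts
    r = 10.0          # ohms
    print(v_supply ** 2 / r)   # 14.4 W -- far beyond the ~0.25 W rating of a typical through-hole resistor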


I have no idea if this is considered good, but I've started with https://www.tinkercad.com/

There may be better ones out there, I just found this quite accessible to get started with.


For those who want a "down under" flavour there's the great https://boganipsum.com.au/

G'day Boyter :-)


Thanks for introducing me to this. I hadn't previously considered there was a range of tools for kids to work with cardboard. It makes perfect sense.

I'm excited to try some of these with my kid. Paired with the Microbit and bag of various motors, LEDs and sensors, she can really start expanding her projects and imagination. I love it.


I really wish that Nintendo had teamed w/ these folks for Nintendo Labo.


This backfires on me, almost every time.

I reply in kind with "hello".

There can then be a gap of many hours, sometimes days.

Either they then reply AGAIN with "hello" (arghhh), or even worse, there is no reply, and I cave and ask what they want, and _maybe_ get a reply of "never mind, got it sorted", so I NEVER KNOW.


It's a clear positive for SpaceX. How much humanity stands to gain has yet to be seen.


As with anything else, it depends on your point of view. Does Hubble benefit humanity or should we have spent the money helping the homeless instead?

There is no correct answer; only preferences. I happen to like SpaceX’s goals.


Ah, yes, of course. I wasn't trying to make quite such an esoteric point. More specifically: currently, if Starship succeeds, it will be good business for SpaceX and enable them to launch many more satellites around Earth at lower cost and greater scale. I don't think this will be much of a positive for humans in general (beyond the current state).

Now if the lofty goals of enabling Mars and Moon habitation come to fruition, I would take a different view. For now I consider achieving that goal to be science fiction, but hopefully that changes in my lifetime.


Agreed! That makes sense.


While undoubtedly technically impressive, this left me a little confused. Let me explain.

What I think I'm seeing is like one of those social media posts where someone has physically printed out a tweet, taken a photo of themselves holding the printout, and then posted another social media post of the photo.

Is the video showing me a different camera perspective than what was originally captured, or is this taking a video feed, doing technical magic to convert to gaussian splats, and then converting it back into a (lower quality) video of the same view?

Again, congratulations, this is amazing from a technical perspective, I'm just trying to understand some of the potential applications it might have.


Yes, this converts a video stream (plus depth) into Gaussian splats on the fly. While the system is running you can move the camera around to view the splats from different angles.

I took a screen recording of this system as it was running and cut it into clips to make the demo video.

I hope that makes sense?


Are you able to share? I would love to see real-world success stories of LLM use cases and integrations, beyond the common ones you see often (code gen, story gen, automated summaries, etc.)


Of course.

Most of the AI cases (the ones that turn out to be an actual success) revolve around a few repeatable patterns and a limited use of "AI". Here are a few interesting ones:

(1) Data extraction. E.g. extracting specs of electronic components from datasheets (applied to address a US market worth around 300M per year). Or parsing Purchase Order specs back out of PDFs in the fragmented and under-digitized EU construction market. Just a modern VLM and a couple of prompts under the hood (a rough sketch of this pattern is at the end of this comment).

(2) A French company saved up to 10k EUR per month on translators for their niche content (they do a lot of organic content, translated into 5 major languages). They switched from human translators to an LLM-driven translation process (like DeepL, but understanding the nuances of their business thanks to the domain vocabulary they throw into the context). Just one prompt under the hood.

(3) Lead generation for manufacturing equipment - scanning a stream of newly registered companies in the EU and automatically identifying ones that would actually be interested in hearing more about specific types of equipment. Just a pipeline with ~3-4 prompts and a web search under the hood.

(4) Finding compliance gaps in the internal documents for the EU fintech (DORA/Safeguarding/Outsourcing etc). This one is a bit tricky, but still boils down to careful document parsing with subsequent graph traversal and reasoning.

NB: There are also tons of chatbots, customer support automations and generic enterprise RAG systems. But I don't work much with those kinds of projects, since they have higher risks and lower ROI.
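
For (1), here is a minimal sketch of the "modern VLM and a couple of prompts" pattern. This assumes the OpenAI Python client; the model name, prompt and output fields are illustrative placeholders, not what the actual project used:

    # Minimal sketch: extract component specs from a datasheet page image.
    # Model, prompt and field names are illustrative placeholders.
    import base64, json
    from openai import OpenAI

    client = OpenAI()

    def extract_specs(image_path: str) -> dict:
        with open(image_path, "rb") as f:
            img_b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Extract component specs as JSON with keys: "
                             "part_number, supply_voltage, max_current. "
                             "Return JSON only."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
                ],
            }],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    print(extract_specs("datasheet_page_1.png"))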


That last point (compliance gaps in fintech) sounds fascinating. Is there a place that I could read more about this?


Compliance gaps / legal analysis is a pretty common theme in my community (meaning it was mentioned 3-4 times by different teams). Here is what the approach usually looks like (a rough code skeleton follows the steps):

0. (the most painful step) Carefully parse all relevant documents into a structural representation that could be walked like a graph.

1. Extract relevant regulatory requirements using ontology-based classification and hybrid searches.

2. Break regulatory requirements into actionable analytical steps (turning a requirement into a checklist/mini-pipeline).

3. Dynamically fetch and filter relevant company documents for each analytical step.

4. Analyze documents to generate intermediate compliance conclusions.

5. Iteratively validate and adjust analysis approach as needed.

6. Summarize findings clearly, embedding key references and preserving detailed reasoning separately.

7. Perform gap analysis, prioritizing compliance issues by urgency.
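
To make the shape of it concrete, here is a very rough Python skeleton of how those steps hang together. This is my own illustrative structure, not code from any of those teams; every function body is a placeholder for the real parsing/retrieval/LLM work:

    from dataclasses import dataclass, field

    @dataclass
    class Finding:
        requirement: str
        conclusion: str
        references: list = field(default_factory=list)

    def parse_documents(paths):             # step 0: structural, graph-like representation
        return {p: {"sections": [], "links": []} for p in paths}

    def extract_requirements(regulation):   # step 1: ontology-based classification + hybrid search
        return [f"{regulation}: requirement stub"]

    def to_checklist(requirement):          # step 2: requirement -> actionable analytical steps
        return [f"check: {requirement}"]

    def relevant_docs(step, doc_graph):     # step 3: fetch/filter company documents per step
        return list(doc_graph)[:3]

    def analyse(step, docs):                # steps 4-5: intermediate conclusions, iterate as needed
        return Finding(requirement=step, conclusion="partially covered", references=docs)

    def run_gap_analysis(regulation, doc_paths):
        doc_graph = parse_documents(doc_paths)
        findings = [analyse(step, relevant_docs(step, doc_graph))
                    for req in extract_requirements(regulation)
                    for step in to_checklist(req)]
        # steps 6-7: summarise findings and surface gaps first
        return sorted(findings, key=lambda f: f.conclusion == "covered")

    print(run_gap_analysis("DORA", ["policy_a.pdf", "outsourcing_register.pdf"]))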


Great. Thank you for taking the time to do that.


So it seems like the future is people writing in a command-prompt style so LLMs can better parse and repeat back our information. God, I hope that isn't the future of the internet.

How about an emoji-like library designed exclusively for LLMs, so we can quickly condense context and mood without having to write a bunch of paragraphs, or the next iteration of "txt" speak for LLMs? What does the next step of users optimising for LLMs look like?

I miss the 80's/90's :-(


I think this is a bit overblown. Brave and Safari were both private when I just tested. Chrome not so much, but that's expected.

