
As the head of the Engineering department told the class on day one of Intro to Computer Engineering at RIT in the fall of 1999:

Engineering = Physics + Economics

The physical system here is the LLM and the computing environment that interfaces with it. Prompt engineering is the knowledge of how to use the LLM programmatically. There are many cost-related trade-offs: do more tokens yield a better response? Can you limit token usage while keeping response quality essentially the same?
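For instance, here is a minimal sketch of measuring that trade-off, assuming the OpenAI Python SDK (v1.x); the model name and the final quality metric are placeholders you'd swap for your own:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer(prompt: str, budget: int) -> tuple[str, int]:
        # Ask the same question under different token budgets so the
        # cost/quality trade-off can be measured rather than guessed.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            max_tokens=budget,
        )
        return resp.choices[0].message.content, resp.usage.total_tokens

    for budget in (64, 256, 1024):
        text, used = answer("Summarize the CAP theorem.", budget)
        # len(text) is a stand-in; plug in a real quality metric here
        print(budget, used, len(text))

Run that over a fixed set of prompts and you have a cost curve, not a vibe.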

Is it not obviously engineering, and not just throwing stuff at the wall, once you begin measuring response quality against token usage and finding which approaches work and which don't? Or handling security issues like prompt injection? Or using a vector database with latent-space embeddings for retrieval? A sketch of that last idea is below.
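A toy sketch of embedding-based retrieval (a real system would use an actual embedding model and a vector database such as FAISS or pgvector; the embed() here is a hashing stand-in, purely illustrative):

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        # Stand-in for a real embedding model: hash tokens into a
        # fixed-size vector, then L2-normalize. Illustrative only.
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    docs = [
        "Prompt injection: untrusted input that hijacks instructions.",
        "Token budgets: trimming context to control API cost.",
        "Embeddings: mapping text into a latent vector space.",
    ]
    index = np.stack([embed(d) for d in docs])

    query = "how do I defend against prompt injection?"
    scores = index @ embed(query)        # cosine similarity (unit vectors)
    print(docs[int(np.argmax(scores))])  # nearest document by similarity

The retrieved documents then get spliced into the prompt, which is exactly the kind of pipeline a 10-year-old is not building.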

Do 10-year-olds do that when they use Google?

I think the confusion here is that people assume it refers to just using the ChatGPT interface, not wiring the API up to a Python interpreter.


