This looks great as well! :) Love the addition of a creative element. I also recently launched a new one: https://spaceword.org (90% of the code is written by cursor)
I think it's because the library calls the LLM directly AND saves any debug/trace info. Other tools let you use the standard SDK (e.g. OpenAI's) and use an HTTPS proxy to intercept the requests.
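To illustrate the difference, here's a minimal sketch. The `llm_call` and `traced_call` names are hypothetical stand-ins, not any real library's API; the wrapper pattern is what "the library calls the LLM itself" looks like, versus just re-pointing a standard SDK at a proxy:

```python
import time

def llm_call(payload):
    # Placeholder for the real SDK/HTTP request (hypothetical).
    return {"choices": [{"text": "ok"}]}

def traced_call(payload, trace_log):
    """Library-style tracing: the wrapper makes the request itself and
    records request/response, so you must route calls through it."""
    start = time.time()
    response = llm_call(payload)
    trace_log.append({
        "request": payload,
        "response": response,
        "latency_s": round(time.time() - start, 3),
    })
    return response

# The proxy approach instead leaves your code on the standard SDK and
# intercepts the traffic, e.g. the OpenAI Python client accepts a base_url:
#   client = OpenAI(base_url="https://my-trace-proxy.example/v1")  # hypothetical proxy URL

trace = []
traced_call({"model": "gpt-4o", "prompt": "hi"}, trace)
print(len(trace))
```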
On the contrary. Unless some AI armageddon happens, humans will be nurtured by AIs like in the movie Wall-E. Totally dependent on it, but a true utopia.
oh god, of all the amazing problems to solve with this wonderful technology, you surely did pick the most useless. What's this obsession with interviews on HN?
Hired a bunch of people in my career, and one call was enough, with a success ratio of 98%.
People obsess about interviews because they need to do well to get a job to make money so they won't live on the streets. The whole interview process is a total mess these days. It's already filled with ML stuff to reject your application before you even get a chance at an in-person interview.
People already use AI to punch up their resumes to make themselves look more attractive.
Doing great in a FAANG interview is life changing money for people from the lower and lower-middle classes. It can bring up your entire family. The stakes are high, which means people will use every tool to have an advantage.
Congratulations on your success ratio. If you don't mind me asking, however, what constitutes a "success" in your eyes? I also interviewed and hired / rejected many applicants through my career, but I don't know if we ever discussed our successes as a single quantitative metric. I'm interested to find out what you measured.
I "picked out" this problem to discuss here because I find the process of "approximating someone else's skills" an interesting endeavor without a clear solution. Do you think the current remote interviewing techniques are effective? Regardless of your answer, it looks like it's going to change dramatically. I find that to be interesting I guess :)
Nice demo, but it lacks one feature to make it practical: splitting the input documents into chunks.
Without it the embeddings will be too broad, and when retrieved the docs will consume a lot of input tokens, making each request slower and more expensive.
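For concreteness, here's a minimal sketch of the kind of chunking I mean, assuming plain whitespace splitting; `chunk_text` and its parameters are hypothetical, and real pipelines usually split on sentence/paragraph boundaries or token counts instead:

```python
def chunk_text(text, max_words=200, overlap=20):
    """Naive fixed-size chunking with overlap: each chunk repeats the
    last `overlap` words of the previous one so retrieval doesn't lose
    context at chunk boundaries."""
    words = text.split()
    chunks = []
    step = max_words - overlap
    for i in range(0, len(words), step):
        chunks.append(" ".join(words[i:i + max_words]))
        if i + max_words >= len(words):
            break
    return chunks

# Each chunk is then embedded individually instead of the whole document.
doc = " ".join(f"w{n}" for n in range(500))
print(len(chunk_text(doc)))  # a 500-word doc becomes 3 overlapping chunks
```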
Sure, because this implementation won't work in practice. He's embedding whole documents at once, without splitting, summarization, Q&A extraction, etc.