lightbulbish's comments | Hacker News

My understanding is that coaching is about helping the individual solve the problem himself/herself.

Mentoring is about adding information or skills such that the individual becomes more capable and thus is able to solve a problem.


that was great, thanks for the laugh.


_all_ models I’ve tried have continuously had, and still have, problems with ignoring rules. I’m actually quite shocked that someone with experience in the area would write this, as it so clearly contrasts with my own experience.

Despite explicit instructions in all sorts of rules and .md’s, the models still make changes where they should not. When caught they innocently say ”you’re right I shouldn’t have done that as it directly goes against your rule of <x>”.

Just to be clear, are you suggesting that currently, with your existing setup, the AIs always follow the instructions in your rules and prompts? If so, I want your rules, please. If not, I don’t understand why you would diss a solution that aims to hardcode away some of the LLM prompt-interpretation problems that exist.


I feel I could argue the counterpoint. Hijacking the pathways of the human brain that lead to addictive behaviour has the potential to utterly ruin people’s lives. And so talking about it, if you have good intentions, seems like a thing anyone with their heart in the right place would do.

Take VEO3 and YouTube integration as an example:

Google made VEO3, YouTube has Shorts, and Google is aware of the data that shows addictive behaviour (e.g. a person sitting down at 11pm, staying up doing Shorts for 3 hours, then getting 5 hours of sleep before doing Shorts on the bus on the way to work). I am sure there are other negative patterns, but this is one I can confirm from a friend.

If you have data that shows your other distribution platforms are being used to an excessive degree, and you create a powerful new AI content generator, is that good for the users?


The fact is that not all people exhibit the described behavior. So the actions of corporations cannot be considered unambiguously bad. For example, it will help to cleanse the human gene pool of genes responsible for addictive behavior.


I never suggested they were unambiguously bad, I meant to propose that it is a valid concern to talk about.

In addition, with your argument, should you not legalize all drugs in the quest for maximising profits to a select few shareholders?

AFAIK, the workings of addiction are not fully known, i.e. it’s not only those with dopaminergic dispositions who get ”caught”. Upbringing, socioeconomic factors and mental health are also variables. Reducing it down to genes, I fear, is reductionist.


> it’s not only those with dopaminergic dispositions who get ”caught”. Upbringing, socioeconomic factors and mental health are also variables.

So we would not only be improving our gene pool, but also conducting a selection of effective cultural practices.


A quick glance at your other comments shows that your account seems to be purpose-built to come up with the most inflammatory response every single time; you might very well just be a ChatGPT prompt.


Counterpoint: eugenics is bad.

You are saying suffering is allowable/good because eventually different people won't be able to suffer that way. That is an unethical position to hold.


Thanks for the read. I think it's a highly relevant article, especially around the moral issues of making addictive products. As a normal person in Swedish society, I feel social media, Shorts and Reels in particular, have an addictive grip on many in my vicinity.

And as a developer I can see similar patterns with AI prompts: prompt, wait, win/lose, re-prompt. It is alluring and it certainly feels.. rewarding when you get it right.

1) I have been curious as to why so few people in Silicon Valley seem to be concerned with, or even talking about, the good of the products and the good of the company they join. Could someone in the industry enlighten me: what are the conversations in SV around this issue? Do people care if they make an addictive product that seems to impact people's lives negatively? Do the VCs?

2) I appreciate the author's efforts in creating conversation around this. What are ways one could try to help the effort? While I have no online following, I feel rather doomy and gloomy about AI pushing more addictive usage patterns out into the world, and would like to help if there is something suitable I could do.


IIRC they talk about it here

https://www.youtube.com/watch?v=m2VqaNKstGc&ab_channel=Laten...

TL;DR If you want to engage with the MCP protocol you can do so here. https://github.com/modelcontextprotocol


The expert in question has written books on the topic and consulted for US government agencies like the CIA. I think he has some credibility.


Credibility and veracity are not the same thing. You can be a world class authority in something that is completely bogus.


That would be the definition of a paradox: how can you be a world-class authority in something bogus?


There were experts on phrenology.


Have you used it? I hadn't heard about it, but tbh I can see how it could eventually outperform Cursor and/or Windsurf. As LLMs get better, and more background tasks etc. come along, I don't see a sustainable moat around IDEs generally (except switching cost, but if it is mostly VS Code... )

Saw you did below. What is your experience so far? Fast requests are great. Anything big lacking?

I was using Roo Code for a bit and it was cool to see how fast it was compared to Windsurf.


>What is your experience so far?

I cancelled my Cursor subscription and haven't used it since. I experimented with Aider for a bit, it's also pretty great. Their architect mode seems to be the way of the future. It allows you to use a pricier model for its reasoning abilities, and it directs a cheaper model to make the code changes.

That said, I keep going back to Void because Aider's system prompts have a tendency to introduce problems. If Void had Aider's architect mode, it would be perfect.
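The architect pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Aider's actual implementation: the two model functions are stubs standing in for real LLM API calls, and the plan format is invented for the example.

```python
# Sketch of the two-model "architect" pattern: a pricier reasoning
# model produces a plan, and a cheaper model mechanically applies it
# as code edits. Both model functions here are stubs for LLM calls.

def reasoning_model(task: str) -> str:
    """Stub for the expensive model: think about the task, emit a plan."""
    return f"PLAN: {task}"

def editor_model(plan: str, source: str) -> str:
    """Stub for the cheap model: apply the plan to the source text."""
    if "rename old_name to new_name" in plan:
        return source.replace("old_name", "new_name")
    return source

def architect_edit(task: str, source: str) -> str:
    plan = reasoning_model(task)       # one expensive call for the thinking
    return editor_model(plan, source)  # cheap call emits the bulk of the tokens

code = "def old_name():\n    return 1\n"
print(architect_edit("rename old_name to new_name", code))
```

The appeal of the split is cost: the strong model is billed only for a short plan, while the cheap model generates the (usually much longer) edited code.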


As an engineer founder I can only agree this is likely solving a real problem.

1) I don't understand your product based on your demo. It looks like a simulated thing, but it doesn't tell me about _my_ user experience. How much eye contact would I break using your app? Where is the video? How does it look if the bot is with me instead of calling through your app? Those types of questions are left unanswered by this quite lackluster 15-second demo.

1.5) The demo also feels very scripted. A call is way more messy.

2) Your "investors" page shows 3 types of products: transcription, in sales call, and sales training. I'm curious, do you really need all three right now?

3) Your logo should be remade. When sized down (to the size on your homepage) it's not clear what it is. I had to zoom in to understand it's an AI-generated oil painting of a sunset (I presume). I believe a logo should be clearly identifiable in all presented formats, otherwise it doesn't fulfill the purpose of a logo. (Look at app icon designs: Apple has a lot of guidelines on how to re-render your logo for lower-pixel environments, just for this purpose. A simple initial fix is just to remove complexity from it.)

Cool job though! I can definitely understand the potential.


I thought this was fantastic! Surprised more people aren't commenting on this. Is there a reason I'm not aware of?

To the author: what happens to my voice after I upload it? What is your plan moving forward? I am too far out in left field to understand how to build a business around and monetize an open-source product like this, even though I found it fun to play around with.


Thanks! There is a model that turns the voice sample into an embedding that is used to identify the voice. Unlike the STT and TTS models, we won't be releasing the weights of this voice cloning model, but we will provide it over an API so that we can do verification and prevent abuse.

edit: Ah yes, and we do not store the voice sample on our server. The voice embedding is cached for 24 hours.
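The 24-hour embedding cache described above could look something like this minimal sketch. Everything here is an assumption for illustration: an in-memory dict store, a user-id key, and an embedding represented as a plain list of floats; the project's real service will differ.

```python
import time

TTL_SECONDS = 24 * 3600  # the 24-hour expiry mentioned above

class EmbeddingCache:
    """Toy in-memory TTL cache for voice embeddings (illustrative only)."""

    def __init__(self):
        self._store = {}  # key -> (insert_timestamp, embedding)

    def put(self, key: str, embedding: list) -> None:
        self._store[key] = (time.time(), embedding)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, emb = entry
        if time.time() - ts > TTL_SECONDS:
            del self._store[key]  # expired: evict and report a miss
            return None
        return emb

cache = EmbeddingCache()
cache.put("alice", [0.1, 0.2, 0.3])
print(cache.get("alice"))  # → [0.1, 0.2, 0.3]
```

Keeping only the embedding (and expiring it) rather than the raw audio is what makes the "we do not store the voice sample" claim workable: the sample is discarded after the embedding is computed.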

