Hacker News | new | past | comments | ask | show | jobs | submit | brianstrimp's comments | login

> I imagine that there are various triggers of early mathematical derailment

I have come to believe that the main trigger by far is the attitude of society: of parents, family, friends, TV stars, heck even many (non-math) teachers. "I wasn't good at math haha" is such a standard phrase to hear, and parents tell their kids not to worry if they "don't get it", as if it's some mystical topic that only a gifted few can unlock. Add to that the uncool stigma attached to "math nerds". For folks who simply keep an open mind and try to "get it", it turns out that it isn't actually that hard. At least when talking high school math or some basic college classes.


Not for scalar multiplication, but certainly for vector multiplication, which a large part of the audience certainly needs a lot.


Yeah, the submission heading should indicate that there is a high risk for a sales pitch in there.


Have you noticed any difference in picking up the language(s) yourself? As in, do you think you'd be more fluent in it by now without all the help? Or perhaps less? Genuine question.


I do tons of TypeScript in my side projects and in real life, and I usually feel heavy frustration when I stray away from it.

When I do stray out of it (e.g. I started doing a lot of IoT, ML, and robotics projects, where I can't always use TypeScript), one key thing LLMs have helped me with is that I can ask why something is X without having to worry about sounding stupid or annoying.

So I think they have given me at least a way to get out of the TypeScript zone more worry-free, without losing productivity. And I do think I learn a lot, although I'm relating a lot of it to my JS/TS-heavy experience.

To me, the ability to ask stupid questions without fear of judgment or accidentally offending someone is just amazing.

I used to overthink a lot before LLMs, but they have helped me a lot with that aspect, I think.

I sometimes think that no one except LLMs would have the patience for me if I didn't always filter my thoughts.


Well said. ChatGPT is almost the opposite of Stack Overflow -- you can ask a stupid question, or ask why a language is designed in such a way, and get a nice, patient, nuanced answer without judgment or starting a war.


And how much can you trust those replies?


At least 80% of the time.

I have brains and can verify if it's correct or not.


about as much as I trust StackOverflow answers


For me it just speeds up learning the language, so I think I'd become fluent faster.

I do thoroughly review the LLM answers, and hardly ever directly copy-paste an answer, so I feel this way I still learn the language.


"as a staff engineer"

Such an unnecessary flex.


It's entirely relevant here. The opinions of a staff engineer on this stuff should be interpreted very differently from the opinions of a developer with much less experience.


Not really because "Staff software engineer" has become the new "Senior Software Engineer" due to title inflation. It's become an essentially meaningless distinction at many companies.

Case in point, this person has around 7 years of professional experience at just two companies, Zendesk and GitHub. I don't mean this as a personal dig in any way (truly), but this simply isn't what we used to mean by a "Staff"-level software engineer.

This person is early-to-mid career, which we used to just call "Software Engineer", then "Senior Software Engineer", and now (often enough) "Staff Software Engineer".


Exactly. I've been in the business for 20+ years and I think my title is still "Senior Software Engineer".

Not because of a lack of skill; I just don't care. I could ask for a fancier one and most likely get it, but why?


They are at a place in their career where it still feels relevant to mention that title.



It's not relevant here, because this is a post from someone who worked on copilot. It's a shady sales pitch, disguised as an engineer's honest opinion.


Uhhh, I hate to break it to you but titles mean absolutely nothing in this industry.


To be honest I was expecting the article to focus primarily on internal docs, meetings, long Slack posts, etc. Staff engineers spend a relatively small percentage of their time writing code. A lot of what it takes to be successful is knowing how to communicate with different audiences which AI should be really useful for.


What is a Staff Engineer anyhow? Sometimes I feel like all these titles and roles pop up all of a sudden to replace existing ones that were apparently too boring.


A lot of companies use it for their IC (Individual Contributor) track, to solve the problem of engineers being forced to move into management because otherwise their career progression stops.

I like this definition (which comes with a whole book): https://staffeng.com/


Not really, once you get past senior the “shape” of staff+ engineers varies greatly. At that level the scope is typically larger which can limit the usefulness of LLMs - I’d agree that the greatest value I’ve gotten is from being able to quickly get up to speed on something I’ve been asked to have an opinion on and sanity checking my work if I’m using an unfamiliar language or framework.

It also helps if you realize staff+ is just a way to financially reward people who don’t want to be managers so you end up with these unholy engineer/architect/project manager/product manager hybrids that have to influence without authority.


Extra bonus for just putting it out there with a Github link.

Instead of a landing page, a login, "just $4/month or $20/year", a "Show HN", and everybody patting them on the back for a "successful launch".


Don't forget the growth-hacker classic: "sign up for the wait list to access the private beta" and "join the community on Discord".


> So, for example, I could view what they googled and when, if I wanted to anyway.

How old are your kids, and do they know you are doing this? There surely is a difference between a 5- and a 15-year-old. But if they are not at all aware that they are constantly being watched like that, man, that's a serious breach of trust. This full-on surveillance could damage your kids for life.

I'm so glad this kind of tech hardly existed when I was a kid 30 years ago.


This tech existed 30 years ago, it just wasn't packaged up for easy deployment. As late as 2012 you could MITM people on your network, even without being the person managing the router. With ARP poisoning and mitmproxy, or just some intelligent reverse proxy, you could pick up the cookies, URLs, and POST data for all the requests on the network.


Sure, a computer nerd dad could have somehow surveilled me dialling into some BBS with my 28.8 kbps modem, but the number of people in the world that actually did this to their kids can probably be counted on one hand, and they were all psychos.

MITM-ing https google searches with a custom root cert today, man, you don't want to leave your kids any privacy? Do you also have hidden cameras in their bedroom? That's roughly on the same level.


Yet people are fine with their employers doing it.


Because that's with awareness and consent? That's a significant difference.


This is 100% the difference.

That said I think the bar for telling people how to raise their kids is super super high.


The internet of 1995 is very different from the internet of today.


Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss. Also a bit of detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not a good one anyway. Keep in mind that in our industry, the interview goes both ways. If the candidate thinks your process is bad, they are less inclined to join your company, because they know that their coworkers will have been chosen by a subpar process.

That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?

Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or: you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?


That might be good for newbie developers, but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child or giving me info that may be right but which I don't need at the moment.

The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.


I'm pretty sure I've been in an interview with an 'interview assistant' and that it was another person.

This was 2-3 years ago in a remote interview. The candidate would hear the question, BS us a bit and then sometimes provide a good answer.

But then if we asked follow-up questions, they would blow those.

They also had odd 'AV issues' which were suspicious.


And the consequence is that people get banged on the head whether they use something existing (because they will be using it wrong), build something on their own (because that's obviously bad), or get fed up and don't use anything at all.

The issue with security researchers, as much as I admire them, is that their main focus is on breaking things and then berating people for having done it wrong. Great, but what should they have done instead? Decided which of the 10 existing solutions is the correct one, with 9 being obvious crap if you ask any security researcher? How should the user know? And typically none of the existing solutions matched the use case exactly. Now what?

It's so easy to criticize people left and right. Often justifiably so. But people need to get their shit done and then move on. Isn't that understandable as well?


> The issue with security researchers, as much as I admire them, is that their main focus is on breaking things and then berating people for having done it wrong.

This is plain incorrect in my experience.

Recommended reading (addresses the motivations and ethics of security research): https://soatok.blog/2025/01/21/too-many-people-dont-value-th...

> Great, but what should they have done instead? Decided which of the 10 existing solutions is the correct one, with 9 being obvious crap if you ask any security researcher?

There's 10 existing solutions? What is your exact problem space, then?

I've literally blogged about tool recommendations before: https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/

I'm also working in all of my spare time on designing a solution to one of the hard problems with cryptographic tooling, as I alluded to in the blog post.

https://soatok.blog/2024/06/06/towards-federated-key-transpa...

Is this not enough of an answer for you?

> How should the user know? And typically none of the existing solutions matched the use case exactly. Now what?

First, describe your use case in as much detail as possible. The closer you can get to the platonic ideal of a system architecture doc with a formal threat model, the better, but even a list of user stories helps.

Then, talk to a cryptography expert.

We don't keep the list of experts close to our chest: Any IACR-affiliated conference hosts several of them. We talk to each other! If we're not familiar with your specific technology, there's bound to be someone who is.

This isn't presently a problem you can just ask a search engine or generative AI model and get the correct and secure answer for your exact use case 100% of the time with no human involvement.

Finding a trusted expert in this field is pretty easy, and most cryptography experts are humble enough to admit when something is out of their depth.

And if you're out of better options, this sort of high-level guidance is something I do offer in a timeboxed setting (up to one hour) for a flat rate: https://soatok.com/critiques


> I've literally blogged about tool recommendations before

Do you happen to know of a similar resource applicable to common HN deployment scenarios, like regular client-server auth?

For example, in your Beyond Bcrypt blog post[0] you seem to propose hand-writing a wrapper around bcrypt as the best option for regular password hashing. Are there any vetted cross-language libraries which take care of this? If one isn't available, should I risk writing my own wrapper, or stick with your proposed scrypt/argon2 parameters[1] instead? Should I perhaps be using some kind of PAKE to authenticate users?

The internet is filled with terrible advice ("hash passwords, you can use md5"), outdated advice ("hash passwords, use SHA with a salt"), and incomplete advice ("just use bcrypt") - followed up by people telling you what not to do ("don't use bcrypt - it suffers from truncation and opens you up to DDOS"). But to me as an average programmer, that just leaves behind a huge void. Where are the well-vetted, batteries-included solutions I can just deploy without having to worry about it?

[0]: https://soatok.blog/2024/11/27/beyond-bcrypt/

[1]: https://soatok.blog/2022/12/29/what-we-do-in-the-etc-shadow-...
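To make the "risk writing my own wrapper" question concrete: here is a minimal sketch of such a wrapper using only Python's stdlib `hashlib.scrypt`. The parameters (N=2^14, r=8, p=1) are commonly cited interactive-login values, not necessarily the exact ones from the linked posts, and the storage format shown is made up for illustration.

```python
import base64
import hashlib
import hmac
import os

# Illustrative scrypt parameters for interactive logins (assumption,
# not taken from the linked blog posts). N=2**14 costs ~16 MiB of RAM.
SCRYPT_N, SCRYPT_R, SCRYPT_P = 2**14, 8, 1

def hash_password(password: str) -> str:
    """Hash a password with a fresh random salt; returns a storable string."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P,
                            maxmem=64 * 1024 * 1024, dklen=32)
    return "scrypt$%s$%s" % (base64.b64encode(salt).decode(),
                             base64.b64encode(digest).decode())

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    _, salt_b64, digest_b64 = stored.split("$")
    digest = hashlib.scrypt(password.encode(),
                            salt=base64.b64decode(salt_b64),
                            n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P,
                            maxmem=64 * 1024 * 1024, dklen=32)
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))
```

Even a wrapper this small shows why the void is frustrating: you still have to pick parameters, a salt length, a storage format, and remember the constant-time comparison yourself.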


Your tool recommendations are actively harmful and dangerous in places.

You whole-heartedly recommend sigstore, a trusted-third-party system which plainly trusts the auth flows of the likes of Google or Github. It is basically a signature by OpenID-Login. This is no better than just viewing everything from github.com/someuser as trusted. The danger of key theft is replaced by the far higher danger of account theft, password loss and the usual numerous auth-flow problems with OpenID.

Why should I take those recommendations seriously?


That's the unix philosophy of using one tool for one thing, that does it well. The advantage is that it really opens a marketplace which means you are not tied to one solution if that solution turns out to suck. An alternative will quickly pop up, you switch, everybody does that, and in no time the bad piece has been worked around and replaced. This works against the "batteries included" philosophy but avoids being stuck for a long time with sub-par components.

Over time, when things stabilize, that approach can change. But nvim is still very much a moving target.

Python tries a middle ground. An HTTP server is included, some crypto libs are as well, but if you need something specialized you can still install alternative modules. That model seems to work well.
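The batteries-included point is easy to demonstrate: the HTTP server that ships with Python can be stood up and exercised in a few lines, with no third-party modules. A quick sketch:

```python
import http.server
import threading
import urllib.request

# Serve the current directory on an OS-assigned port, stdlib only.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise it with the stdlib HTTP client, too.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status  # directory listing from the stdlib handler

server.shutdown()
```

The same stdlib also exposes the pieces (`http.server.BaseHTTPRequestHandler`, `socketserver`) if the included defaults don't fit, which is exactly the middle ground described above.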

