
Yes, I agree entirely: LLMs can produce very entertaining content.

I daresay that in this case, the content is interesting because it appears to be the actual thought process. However, if it is actually using EXIF data, a possibility you initially dismissed, then all of this is just fiction. Which, I think, makes it dramatically less entertaining.

Like true crime - it's much less fun if it's not true.
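
To settle the EXIF question empirically, one could confirm that a test photo carries no GPS metadata before handing it to the model. A minimal sketch in Python, assuming Pillow is installed; the filenames are placeholders, not anything from the original post:

    from PIL import Image
    from PIL.ExifTags import TAGS

    def has_gps_exif(path):
        # True if the image's EXIF block contains a GPSInfo tag.
        exif = Image.open(path).getexif()
        return any(TAGS.get(tag_id) == "GPSInfo" for tag_id in exif)

    def strip_metadata(src, dst):
        # Re-save only the pixel data, discarding EXIF (including GPS coordinates).
        img = Image.open(src)
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

    # Placeholder filenames; any local test image works.
    print(has_gps_exif("photo.jpg"))
    strip_metadata("photo.jpg", "photo_clean.jpg")

If the model still names the right location after strip_metadata, the guess came from the pixels, not the metadata.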



I have now proven to myself that the models really can guess locations from photographs to the point where I am willing to stake my credibility on their ability to do that.

(Or, if you like, "trust me, bro".)


At the risk of getting pedantic -

> trust me, bro

That's just it. I cannot trust you. It wouldn't be hard to verify your claim, and I don't suspect it of being false. BUT - you have repeatedly dismissed and disregarded data that didn't fit your narrative. I simply cannot trust you when you say you have verified it.

Sorry.


Well, that sucks. I thought I was being extremely transparent in my writing about this.

I've updated my post several times based on feedback here and elsewhere already, and I showed my working at every step.

Can't please everyone.


You ARE being extremely transparent. That's not what I complained about.

My complaint is that you're saying "trust me" and that isn't transparent in the least.

Am I wrong?


I said:

"I have now proven to myself that the models really can guess locations from photographs to the point where I am willing to stake my credibility on their ability to do that."

The "trust me bro" was a lighthearted joke.



