Hacker News | no-name-here's comments

Unfortunately that immediately terminates access for some subscriptions, such as Apple Music. :-(

Is Polymarket registered in Panama to make legal compliance easier, or for the opposite reason of dodging the law?

Registering in Panama isn’t even the sketchy thing! They listed a false agent for service of process, hah.

No, Polymarket is sketchy as fuck. I was disputing the allegation that there's something inherently wrong with having a registered agent in another locale.

What are the ‘personal opinions or subjective interpretations’ you feel it’s using? What would a rewrite be to be non-‘editorializing’ in your opinion?

I guess an elephant-sized exception to this is the popular code editors that support extensions? Or perhaps such editors’ extensions typically aren’t constrained at all anyway.

The last one. It would make sense to have a sandbox system, but they don’t.

Except, strangely, for setup - if you don’t load your laptop’s touchpad/trackpad driver during the early ‘select a partition’ screen, you seem to get stuck on later screens, like when you are required to connect to WiFi.

- No screenshot of said form designer?

- Page is not responsive and requires zooming out or scrolling horizontally to view full text width on mobile


Not to mention it being very obviously written by AI.


Oddly, there is a configuration panel on the web page that lets you turn off many effects and make it readable.


> make it readable

It does not


> Page is not responsive

Just like WinForms :-p


Content at the OP link http://copy.fail seems fairly different from any normal CVE I’ve seen.

That was fixed; it’s now marked high.

I don't know your intent, but I've seen others post that with the idea that we should not care about this type of thing, because the model is just acting like a human, since we trained it that way.

But I think this, and Anthropic's other testing about LLMs being willing to kill a data center tech by flooding a room with gas (or blackmail them with their Google Drive files) to avoid being shut off, is concerning. The important part isn't whether AIs are trained on human behaviors; it's whether a good or bad human actor will accidentally or intentionally allow an AI to control something that can hurt people, or a weapon, etc. Fiction like the Three Laws of Robotics at least assumed that we would try to put stronger 'laws' in place before allowing AIs to control such things.

I 100% agree this isn't sentience, but sentience isn't the concerning result for me. (And I think the Three Laws, Skynet, etc. were intended to be cautionary tales.)

AIs can do unexpected things. There was a news story in recent days about how a Cursor agent deleted a company's prod DBs:

> The agent was working on a routine task in our staging environment. It encountered a credential mismatch and decided — entirely on its own initiative — to "fix" the problem by deleting a Railway volume. To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on.


> I think the Three Laws, Skynet, etc. were intended to be cautionary tales.

Of course, that’s the reason there’s a story. “We did this and everything went dandy” isn’t that exciting; the purpose of science fiction tends to be to explore “we made this advancement and then shit hit the fan this way.” That, and loud explosions in the vacuum of space, of course.


My intent is to point out that these results don't in any way shape or form indicate AI sentience. All I see is a human that said "act poorly" and we're somehow surprised that the model acts poorly.

These models pattern match on content from the internet, and are fine-tuned to do whatever their human operator says. Occam's razor says these cases are merely playing out the "sentient AI sci-fi" script, at the specific request of the researchers.

As you mention, it's bad actors controlling sycophantic-but-powerful models. And yeah, we definitely need to worry about that! It's a human problem, not an AI sentience problem. Let's focus on the bad actors themselves, not invent sci-fi scenarios.


All good points - and that 1980s rental cost would be ~$11 inflation-adjusted, for a rental at VHS quality where you also had the cost of driving to pick up/return the tape.

https://www.usinflationcalculator.com/
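The adjustment above is just a cumulative-CPI multiplication. A minimal sketch, where both the 1980s rental price ($4) and the CPI multiplier (~2.8x to today's dollars) are illustrative assumptions rather than sourced figures:

```python
# Rough inflation adjustment via a cumulative CPI factor.
# Both numbers below are assumptions for illustration, not sourced data.
RENTAL_1980S = 4.00           # assumed nominal rental price back then
CPI_FACTOR_TO_TODAY = 2.8     # assumed cumulative CPI multiplier

adjusted = RENTAL_1980S * CPI_FACTOR_TO_TODAY
print(f"${adjusted:.2f}")     # in the ballpark of the ~$11 figure above
```

A calculator like the one linked just looks the multiplier up from CPI tables for the two years you pick.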

