Hacker News | rys's comments

He does explain it in his blog post. He changed it after the erratic communication and actions of RC leadership, then after realising what they were really doing, left them to complete their “security audit”, assuming they’d discover it themselves and take appropriate action as part of that. That never happened (which is wild), so he let them know.

They still don’t seem to be in complete control or understanding of the infrastructure they forcefully took control of.


From Arko’s post I get the sense he actually cares about security.

Seeing that he still has root access, which means others may too, changing the root credentials is the most benevolent thing he can do.

It immediately means he has the only unauthorized access instead of an unknown many, and that they'll now cycle keys like they should have in the first place.


Also seems pretty obvious that there was no clear chain of command for the operators. The board themselves certainly aren’t deeply involved given the statement by the one board member about how they couldn’t be bothered to communicate with the community about what was happening because they are so busy in their day jobs.

So who should Arko contact? The guy who’s his “boss” just suspended a bunch of access, twice, and emailed contradictory things. Given how sloppy the overall security situation clearly was and continues to be, I’m guessing no one really understands how AWS security works except for Andre anyway.


I appreciate these viewpoints. I still think Arko would have been better off communicating quickly and proactively to Haught any changes he made or security issues he discovered, despite however confused or contradictory Haught had been. As you say, RC is the "boss" in this relationship (they unambiguously own the AWS infrastructure and sign the consulting checks). So that is your duty as the professional in the room. And it would have at least protected his image when we now get to this point.

Of course hindsight is 20/20. The whole debacle is a shame.



What about the PlayStation OS, which is reportedly based on FreeBSD?


I don’t know what the PS OS uses under the hood.


At least you wrote something...


In practice, what gets labelled as the L1 cache in a GPU marketing diagram or third-party analysis might well not be the first level of a strict cache hierarchy. That makes it hard to do any kind of cross-vendor or cross-architecture comparison of what they are or how they work. They're highly implementation dependent.

In the GPUs I work on, there’s not really a blurred line between the actual L1 and the register file. There’s not even just one register file. Sometimes you also get an L3!

These kinds of implementation specific details are where GPUs find a lot of their PPA today, but they’re (arguably sadly) usually quite opaque to the programmer or enthusiastic architecture analyst.


One interesting thing is they seem to have also cloned the number of stars each repo has on GitHub. At least that’s true for a cursory check of a few that I’ve committed to. If you click on the star count you’ll then see two separate counts for GitHub and GitCode stars.

It’s one thing to mirror the repos, but it’s another to initially misrepresent the engagement the repo is getting, even if they’re clear about it when you dig in.

Also, how do you authenticate so you can keep committing and interact with the repo to manage PRs and issues if it’s yours?


> Developers can indeed request Gitcode to remove their projects, but this requires developers to authorize Gitcode using their Github accounts to verify their identity.

https://www.landiannews.com/archives/104677.html

Some users have reported that despite authorizing through GitHub, they are still unable to claim their namespace.


So was there ever a deal with OpenAI? Nothing in the keynote mentioned them or needs them. If there isn’t a deal, I’d love to know how everyone claiming it was signed on the dotted line was led so far down that garden path.


Sam is there, and the presentation isn't yet finished:

https://x.com/markgurman/status/1800198524031906258?ref_src=...


That's also my question. What exactly is Apple's custom LLM, and what is OpenAI tech?

I'm quite confident in OpenAI's ability to provide great, usable LLM tech, but much less so in Apple's. All the demos they showed at WWDC could fall flat if the tech really isn't working well enough in practice. I guess we'll just have to wait and see.


There's an integration with ChatGPT that requires user approval every time.

Sam: https://x.com/sama/status/1800237314360127905


Yes, they mentioned ChatGPT.

Siri reviews the request and decides if it can respond on its own or if it needs ChatGPT. It then pops up a dialog asking if it is OK to send the request to ChatGPT. It will not be the default LLM.


OpenAI is an option when making a query, but Apple made it sound like the first deal they're making, not the tight collaboration everybody was expecting.

They gave more space and reverence to Ubisoft.


Siri will have ChatGPT integration (for free, apparently)


For me, if you spend your life reading and writing prose or code, text rendering quality is surely paramount. I’m curious what you think the energy should be spent on instead.


Text legibility is paramount, not rendering quality. Monochrome bitmap fonts are extremely legible once you're used to them, and they don't need 4K displays. Vector fonts with high-quality hinting and antialiasing disabled are almost as good.


It costs your GPU almost nothing to render fonts in high quality with subpixel antialiasing, especially when you can cache the rendered glyphs in a texture for reuse.
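The caching idea is simple to sketch (all names here are hypothetical, not any specific renderer's API): rasterise each glyph once per font and size, keep the result in an atlas keyed by (font, size, codepoint), and reuse it on every subsequent draw.

```python
# Minimal sketch of a glyph cache, with render_glyph() as a
# hypothetical stand-in for the expensive rasterisation step
# (hinting, subpixel antialiasing, etc.).

def render_glyph(font, size, codepoint):
    # Placeholder for real rasterisation; returns a fake "bitmap".
    return f"bitmap:{font}:{size}:{codepoint}"

class GlyphCache:
    def __init__(self):
        self._atlas = {}   # (font, size, codepoint) -> bitmap
        self.misses = 0    # how many glyphs were actually rasterised

    def get(self, font, size, codepoint):
        key = (font, size, codepoint)
        if key not in self._atlas:
            self.misses += 1  # rasterise only on first use
            self._atlas[key] = render_glyph(font, size, codepoint)
        return self._atlas[key]

cache = GlyphCache()
for ch in "hello":             # 'l' repeats, so only 4 rasterisations
    cache.get("Mono", 12, ord(ch))
print(cache.misses)            # 4
```

In a real renderer the atlas would be a GPU texture and the lookup would return texture coordinates, but the cost structure is the same: after warm-up, drawing text is mostly cheap cache hits.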


It costs you sharpness.


Compiling? Staying cool and not turning on the fan? Avoiding unnecessary waste?

Really anything rather than a thing that has only drawbacks.

As far as readability goes... I can't see a difference with monospace text; it may even be worse, as fonts tend to be thinner and need adjustments. Maybe a little better with proportional text on websites and documents.


It’s way more fun to set a huge pile of VC money on fire.


Just in case anyone reads the parent and the takeaway is you can’t: you can still operate UniFi that way if you want to. The cloud connection and apps are optional.


Interesting. Did that change at some point in the past? I'd seen reports from folks suggesting that you were stuck either using the cloud management bits or using their app, and that a plain local web connection didn't work. Does Ubiquiti equipment work entirely with a local web connection now, with no app or cloud, for all their functionality (except, obviously, cloud management functionality)?


The answer depends on which piece of equipment, usually by product family. More and more is getting the cloud push over time though.


Ah. That kind of variability is the kind of thing that makes me want to avoid the entire brand.

Any recommendations for comparable equipment that doesn't have any kind of push towards cloud or app?


Agreed. As to the world lived in: most people are so enthralled with money that any risk to earning more in the future is unacceptable to them, sadly. Companies then bank on that behaviour to help sweep things under the rug.

