Electrolytics are usually nothing too fancy, though the exact formulations are proprietary. Water and electrolytes, hence the name. PCBs were in the big transformers and in what used to be called bathtub caps, which looked like this: https://i.ebayimg.com/images/g/VjwAAOSwfGJjYtHx/s-l400.jpg (think 1950s electronics stuff)
I don't see how enabling secure boot helps here, since UEFI is responsible for enforcing it and is itself compromised. I'm sure some would recommend more roots of trust, with signing and verification that start at the chipset, but I'd recommend an alternative with less attack surface and better user control: a jumper.
The article specifically says this is self-signed, so it won't work with Secure Boot enabled.
This is technically a bootloader, so it has to find a way to get loaded by the UEFI. The article doesn't say it's able to do that on its own; the user has to manually trust the signing certificate or disable Secure Boot.
It's heartwarming to see that the spirit behind Azureus is still alive. SWT might not be what the Duke himself wants in a Java GUI framework, but it's practical, and I remember the "chunks bar" in the Azureus GUI fondly. I'll enjoy firing up BiglyBT after all these years. Using a largely memory-safe language makes a lot of sense for P2P software.
Potentially worth pointing out that Go is memory-safe only when single-threaded (data races can corrupt memory), and this kind of application is very likely to use multiple threads.
But I do also generally expect it to be safer than C++. The race detector prevents a lot of badness quite effectively because it's so widely used.
Go is safe from the perspective of RCEs due to buffer overflow, which is what matters here. Happy to be enlightened otherwise, but "I broke your (poorly implemented, non-idiomatic, please use locks or channels ffs) state machine" is a lot better than "I am the return instruction pointer now"
Voting for new legislators, personally. I wish they'd do something about PG&E or housing instead of criminalizing software development of chatbots. Truly useless, and I wish we had more choice of non-insane candidates.
This was also my reaction almost immediately. Tattoos correlate heavily with social and lifestyle factors, which could easily make this correlation rather than causation.
I find it hard to believe that a Canadian company's model contained an undertrained token related to hockey (albeit in German). In all seriousness, this is pretty cool, and I'm excited to see our understanding of tokenization's impact on models improve. One notable finding is that a lot of the earlier open-source models have issues with carriage returns, which are not that uncommon in training data depending on where it comes from.
It wastes taxpayer funds on enforcing a moat for Sam Altman, it establishes a fixed computational bound in a legal regulation, it tries to police a free speech activity because of possible harms (but not the harms directly), and it is likely to have negative national security implications as other (less regulated) regions deal with fewer lawyers as they advance the state of the art.
Nice concise summary. The fundamental problem with all of these proposed "AI" "safety" regulations is that they adopt the corporate version of safety where LLMs refuse to talk about things that sound scary, mean, or even just controversial, while completely ignoring that these systems will be used to harm people at scale by turning gradually creeping corporate-individual power imbalances up to 11.
This exactly. I would be much happier if the regulation were "don't use GPT-4 to decide when to kick Grandma out of the hospital" or "don't use a Llama finetune to make policing decisions", which is where I see the clearest need for regulation in the near future.
I don't think this is a hot take at all; it matches my understanding. One of the reasons language itself is so difficult (miscommunication, etc.) is that we each have a mostly similar but not identical "compression table" of ideas that words map to, which is why we spend so much time aligning on terms, to ensure correct "decompression".
We need compression because internally cognition has a very "wide" set of inputs/outputs (basically meshed), but we only have a serial audio link with relatively narrow bandwidth, so the whole thing that allows us to even discuss this is basically a neat evolutionary hack.
Laws should be about the outcome, not about processes that may lead to an outcome. It is already illegal in California to produce your own nuclear weapon. Instead of outlawing books because they allow research into building giant gundam robots, just outlaw giant gundam robots.
> Laws should be about the outcome, not about processes that may lead to an outcome
They have to be about both because outcomes aren’t predictable, and whether something is an intermediate or ultimate outcome isn’t always clear. We have a law requiring indicator use on lane change, not just hitting someone while lane changing, for example.
But even this example is a ban on a specific action: changing lanes without using a legally defined indicator with a specific amount of display time.
The equivalent would be if the law simply said "don't change lanes unsafely" but didn't define that much further, and left it to law enforcement and judges to decide, so that anytime someone changed lanes "unsafely" they faced significant, unknowable legal risk.
Laws also should be possible (preferably easy) to implement. Why does the DMCA ban circumvention tools? Circumvention itself is already illegal, and it's piracy that should be outlawed, not the tools that enable it. The reason is that circumvention tools are considerably easier to regulate than piracy.
The DMCA ban on circumvention has been both stunningly useless at discouraging piracy and effective at hurting normal users, including such glorious stupidity as being used to block third-party ink cartridges.
> Laws should be about the outcome, not about processes that may lead to an outcome.
Some outcomes are pretty terrible; I think there are valid cases where we might also want to prevent precursor technology from being widely disseminated in order to prevent them.
There are certainly types of data that are already prohibited from export and dissemination. In this case, I would argue no new law is needed; the existing laws already cover the export and dissemination of dual-use technologies. If an LLM becomes dual-use/export-restricted/etc. because it was trained on export-restricted or sensitive data, it is already illegal to disseminate it. Enforce the existing law, rather than spend taxpayer money to ban and police private LLM training because this might happen.