They are implemented as pointers, but their role is to give temporary (often exclusive) access that is restricted to a statically known scope, which is pretty specific and fits only some uses of some pointers/C++ references. In C++, pointers typically mean avoiding copying, but Rust references mean avoiding storing/keeping the data. When these goals don't overlap, people get stuck in a dreadful "does not live long enough" whack-a-mole.
>their role is to give temporary (often exclusive) access that is restricted to a statically known scope, which is pretty specific and fits only some uses of some pointers/C++ references
You could have a vector of references to heap-allocated data, as long as the references were parametrized by the same lifetime. You might do this when implementing a tree iterator that uses a vector as a stack, for instance. That goes beyond a statically known scope. But implementing a mutable iterator the same way would require a stack of raw mutable pointers (and therefore unsafe code whenever you dereference them), since mutable references have to be unique. That does seem like a bad limitation.
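The shared-reference case can be sketched as a pre-order tree iterator; the names (`Node`, `TreeIter`) are illustrative, not from the original comment. The point is that all the references in the `Vec` share the lifetime `'a`, so storing them in the stack is fine with the borrow checker:

```rust
// Hypothetical tree type for illustration.
struct Node {
    value: i32,
    children: Vec<Node>,
}

// Iterator holding a Vec of shared references as an explicit stack.
// Every reference is parametrized by the same lifetime 'a, so the
// compiler accepts keeping them around beyond any single scope.
struct TreeIter<'a> {
    stack: Vec<&'a Node>,
}

impl<'a> Iterator for TreeIter<'a> {
    type Item = &'a i32;

    fn next(&mut self) -> Option<Self::Item> {
        let node = self.stack.pop()?;
        // Shared references may alias freely, so pushing the children
        // while returning a reference into the node is fine.
        self.stack.extend(node.children.iter());
        Some(&node.value)
    }
}

fn main() {
    let tree = Node {
        value: 1,
        children: vec![
            Node { value: 2, children: vec![] },
            Node { value: 3, children: vec![] },
        ],
    };
    let values: Vec<i32> = TreeIter { stack: vec![&tree] }
        .map(|v| *v)
        .collect();
    println!("{:?}", values); // children are popped LIFO: [1, 3, 2]
}
```

Swapping `&'a Node` for `&'a mut Node` is where uniqueness bites: the naive version can't hand out a reference into a node while also keeping a second mutable path to it on the stack.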
Apple only sees developers as a revenue stream to squeeze dry. Investing into Apple-only technologies is getting yourself into an abusive relationship. macOS is still a good platform, but staying away from Swift gives you an escape plan.
There's also no point having a native UI on macOS any more. Apple ruined it themselves by switching to flat design, and making their own UIs and apps an uncanny valley between macOS and iPadOS. There's no distinct native look-and-feel on macOS any more.
FSD in principle could be, but the overpromised misnomer we have right now isn't. Being better than a drunk driver isn't good enough when it's also worse than a sober driver. The stats of crashes per mile are skewed by FSD being mainly used in easy conditions, and not for all driving.
There are real safety improvements from ADAS. For safety you only need crash avoidance, not a full-time chauffeur.
I've always been intimidated by the number of little tools and configuration options involved in building Debian packages. In the end, it's not very hard, and the format is quite reasonable; it just feels more complicated than it is.
I'm maintaining cargo-deb that builds a .deb from a Rust/Cargo binary project with no configuration needed.
The main source of complexity isn't the .deb format, but the tooling and infrastructure around the format. It's mired in overcomplexity, and it's very much still in a '90s mindset of building locally with multiple layers of Perl-based tools. If it were rethought to be git-native, using docker images or equivalent, it could be of equivalent simplicity to other contemporary systems. When I look at what you can do with the FreeBSD ports and Poudriere, or with Homebrew and other systems, I see how much of the complexity has been added incidentally and incrementally, with good intentions, but a radical rethink of the basic workflows is necessary to consolidate and simplify them.
[I used to maintain sbuild and was the author of schroot back in the day]
The USA is just bad at governing. It tries not to tell corporations what to do, so it ends up with toothless, half-assed laws that do nothing except serve as a tool for regulatory capture.
That's just an overcomplicated way of doing pre-authorization.
Talk about decentralisation and anti-deplatforming makes no sense here. Concerts are a physical thing happening in the real world, organized by specific "centralized" entities. Venues can refuse to host an artist. Artists can "rug pull" by refusing to perform. Imaginary tokens can't do anything about that, and we already have laws, contracts, and currencies that have been dealing with it for as long as these things have existed.
- Card Network Rules: Payment processors and card networks have rules about the use of pre-authorizations. Excessive or inappropriate use can be flagged, potentially leading to penalties, holds on your account, or even termination of your merchant account.
- Customer Experience: Imagine a customer who participates in several auction bids and has a pre-authorization placed for each bid. This can lead to:
  - Blocked Funds: A large amount of their credit limit could be temporarily blocked, making it difficult for them to use the card for other transactions.
  - Confusion: Customers might be confused about multiple holds on their account, leading to inquiries and chargebacks.
  - Negative Experience: A poor customer experience can hurt your reputation.
- Risk of Expiration & Release: If pre-authorizations expire and the auction is not completed, you might have to re-authorize, which can be disruptive and annoying to the customer.
- False Availability of Funds: Since not all bidders will win, placing holds on all bidders' accounts gives a misleading view of how much funding you might actually have available to you.
- Chargebacks & Disputes: Confused customers with multiple pre-authorizations are more likely to dispute charges, which can hurt your merchant standing and reputation.
- Processor Scrutiny: A merchant running a high volume of pre-authorizations relative to actual sales can be perceived as risky. Processors will scrutinize businesses with higher dispute rates and high pre-authorization-to-capture ratios.
Can be … oh, we may negotiate with the middlemen to not deplatform us. How nice. Blockchain doesn't solve any problems, in the same way that giving people universal single-payer health insurance didn't solve any problems, since you can always find a good employer who will just treat you well.
I find they're useful for the first 100 lines of a program (toy problems, boilerplate).
As the project becomes non-trivial (>1000 lines), they get increasingly likely to get confused. They can still seem helpful, but they may be confidently incorrect, which makes checking their outputs harder. Eventually silly bugs slip through and cost me more time than all the time the LLMs saved previously.
You're making a mistake assuming that the push for HTTPS-only Web is about protecting the content of your site.
The problem is that the mere existence of HTTP is a vulnerability. A user following any insecure link to anywhere allows MITM attackers to inject any content and redirect to any URL.
These can be targeted attacks against vulnerabilities in the browser. These can be turning browsers into a botnet like the Great Cannon. These can be redirects, popunders, or other sneaky tab manipulation for opening phishing pages for other domains (unrelated to yours) that do have important content.
Your server probably won't even be contacted during such an attack. Insecure URLs to your site are the vulnerability. Don't spread URLs that disable network-level security.
The numbers are sent in this peculiar format because that's how they are stored in the certificates (DER encoding in X.509 uses big-endian binary), and that's the number format the OpenSSL API uses too.
It looks silly for a small value like 65537, but the protocol also needs to handle numbers that are thousands of bits long. It makes sense to consistently use the same format for all numbers instead of special-casing small values.
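As a quick illustration (not tied to any particular TLS implementation), here is what the common RSA public exponent 65537 looks like as big-endian bytes, the byte order DER uses for INTEGERs:

```rust
fn main() {
    let e: u32 = 65537; // 0x00010001, the usual RSA public exponent
    let bytes = e.to_be_bytes(); // big-endian byte order, as in DER
    assert_eq!(bytes, [0x00, 0x01, 0x00, 0x01]);
    // Trimmed to its minimal form it is just three bytes, 01 00 01,
    // the same representation a multi-thousand-bit modulus gets,
    // only longer.
    println!("{:02x?}", &bytes[1..]);
}
```

The same encoding scales uniformly from three bytes up to a 4096-bit modulus, which is exactly why special-casing small values isn't worth it.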