Check out Scaleway (France). They have by far the broadest range of managed services with full permissions/IAM integration etc. - it is the closest EU match to AWS. Yes, if your entire existing setup depends on one specialized AWS service (e.g. DynamoDB) you will still need to go with AWS, but when building from scratch it's a different story.
The AWS Sovereign Cloud is still owned 100% by Amazon Inc. in the US. Not saying that rules it out for all use cases, but something that should be mentioned. "Sovereignty" is a somewhat vague term.
Mostly the better documentation (last I checked FastAPI docs felt more like a series of blog posts than actual docs, but maybe that's improved). I also preferred the community-driven approach of Litestar as opposed to FastAPI's BDFL-type development structure.
I think there were also some technical details I liked better about Litestar where it was more explicit about things while FastAPI was more "opaque magic happening in the background", but to be honest I don't remember all of those.
We chose Litestar over FastAPI mostly because they seemed very similar, but Litestar had a more distributed governance; i.e. a larger bus factor.
We are jealous of some FastAPI features though, so it's possible we could migrate, as Litestar's mapping between domain models, database models and API models isn't as flexible as we'd like.
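To make the mapping complaint concrete, here is an illustrative sketch (plain dataclasses, not Litestar's actual DTO machinery, and the model names are made up) of the three parallel models a typical service ends up with, plus the hand-written conversion code that a more flexible DTO layer would generate for you:

```python
from dataclasses import dataclass

# Three parallel representations of the same entity. The friction is that
# every new field has to be threaded through all three plus the mappers.

@dataclass
class UserRow:              # database model (what the ORM hands back)
    id: int
    email: str
    password_hash: str
    created_at: str

@dataclass
class UserEntity:           # domain model used by business logic
    id: int
    email: str
    password_hash: str

@dataclass
class UserOut:              # API model: no secrets, no internal columns
    id: int
    email: str

def row_to_entity(row: UserRow) -> UserEntity:
    return UserEntity(id=row.id, email=row.email, password_hash=row.password_hash)

def entity_to_api(user: UserEntity) -> UserOut:
    # Deliberately drops password_hash so it can never leak into a response.
    return UserOut(id=user.id, email=user.email)
```

The point being: with three layers, every boundary crossing is either boilerplate you write by hand or something the framework's DTO system has to be flexible enough to express.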
When they talk about detection, they are most likely referring to protocol level detection by ISPs forced to block VPN traffic, hostile local networks, corporate firewalls and such.
The actual service you are connecting to (example: website, game server etc.) most likely uses an IP-based detection service such as https://focsec.com/ or similar. In such cases, the protocol will not make a difference.
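In simplified terms, IP-based detection boils down to matching the client's address against published VPN/datacenter ranges. A minimal sketch (the ranges below are documentation-only example networks, not real VPN ranges; real services maintain large, frequently updated databases):

```python
import ipaddress

# Made-up "known VPN" ranges for illustration; a real reputation service
# aggregates datacenter allocations, known VPN providers, proxy lists, etc.
KNOWN_VPN_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737 documentation range
    ipaddress.ip_network("203.0.113.0/24"),   # RFC 5737 documentation range
]

def looks_like_vpn(client_ip: str) -> bool:
    """Return True if the address falls inside any flagged range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)
```

This is why switching VPN protocols doesn't help against the destination service: it sees only your exit IP, not your tunnel's wire format.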
Well, under the Restatement (and case law in some states) a liquidated damages provision cannot represent a penalty. Instead, it has to represent an estimate of actual damages in breach, more or less.
Absent the actual language in the Agreement between T-Mobile and the intermediary providers like bandwidth.com, we have no idea what the penalty is called - bandwidth.com is calling it a “fine” but we don’t really know.
When we are talking about medium to large enterprise customers, there are significant costs to switching a SaaS provider.
Vendor assessment, legal concerns, data privacy concerns, talks about SLA guarantees, talks about 24/7 support plans and much more. There will likely be several departments involved. Technical folks, legal people, data privacy experts etc.
That new deal could easily pass through 50 people's desks before eventually getting signed. For what? A 15% saving that could be wiped out with the next round of price adjustments from the new vendor? Simply not worth it. That is why SaaS revenue tends to be so sticky.
> Because the fact that Copilot is free and ChatGPT is not should be a red flag...
I'd assume that running a model that only needs to deal with a single programming language (the Copilot plugin knows what kind of code base it is working on) is _a lot_ cheaper than running the "full" ChatGPT 4.
Sorry to be pedantic, but Microsoft renamed Bing Chat to Copilot yesterday, has already rolled it out to all users of Microsoft Edge, and is rolling out a permanent button on the Windows 11 taskbar to access it.
This is what shouldn't add up: Microsoft is literally adding GPT-4, for free, to the Windows 11 taskbar. Can you imagine how much that costs when you look at the GPT-4 API, or ChatGPT's subscription price? Either Microsoft is burning money, or OpenAI agreed to burn money with them. But why would they do that, when that would compromise $20/mo. subscription sales?
You got me excited that Github Copilot was free. Was going to post to tell you it is, in fact, not free.
I've been using Bing on Edge browser for a while now, it's super useful! Sad that they rebranded it to Copilot though, "I have been a good Bing :)" will be forever in my memory. [1] RIP Bing, you were a good chat mode.
I don't think there's necessarily anything there. Microsoft might be burning money because they've decided that browser adoption and usage is worth it to them. It doesn't have to involve OpenAI in any way.
> The fact that the CTO is now CEO makes me think it's probably not a lie about their tech.
Agreed
> This makes me think it's either financial or a scandal around Sam himself.
I can't imagine it being about fake financials. This isn't Microsoft's first time doing due diligence on an acquisition. That is both technical and financial due diligence.
And clearly they didn't buy the company because it was super profitable, but for the tech.
Microsoft didn't buy them did they? I thought it was just an investment. Either way though you're right that they probably did their DD.
My first comment wasn't really about them not being profitable, it was more of a question about how close to bankruptcy they are. Again though, you're right that MSFT probably did their DD, so that's unlikely.
The "white cis man" stuff isn't an incisive comment, it's an academic's way of trying to get into an insult war with other academics.
Constantly calling out "cis men" is in fact transphobic, which is how you can tell they don't care about it. If you think cis men and trans men behave differently or are always treated differently, this means you don't think they're both men.
Also sama is not white. Although he does appear to have gotten a series of jobs with not a lot of experience by convincing Paul Graham to figuratively adopt him.
I mostly agree with your points, but how is he not white? He acts like a textbook white person, and I should know because that's also how I and most of the people I associate with act. Every one of us would say he is white.
> What tasks are you doing that require 128GB RAM?
You’re thinking about it the wrong way.
The more RAM you have, the faster + smoother your experience using an application will be, and this will be much more noticeable if you usually run multiple applications at the same time, like virtually everyone in the world does.
This is especially true now that we have options to run local-only AI models. The next couple of years will be interesting.
> The more RAM you have, the faster + smoother your experience using an application will be,
This is the mentality that Gigabyte targets with the $1300 AORUS Z790 Xtreme X motherboard. More is clearly better, right? Nope. Not at all. In this case, there is an amount of RAM an app will consume, and once you have that much, the rest will not do much. Even buffering/caching can only consume so much. It's very hard to fill even 64GB in a laptop, and 128GB is near impossible.
While I do agree that 8GB is a little dated, even the 8GB MacBook Air from 2020 runs most applications smoother than a modern Windows machine with twice as much RAM. Apple has the smoothness part figured out, without the need for more memory (for the average consumer at least).
Your point about local AI is pretty interesting. It does seem highly likely that computers with limited memory could "age" faster than they have done in the past ten years, with the advent of more AI workloads.