Unfortunately, at their stated range of 160nm, you're looking at only getting as far as Big Sur before the entire craft needs a recharge - it's much more aimed at island and port hopping, I suspect, than long distance travel.
Still, I am excited to see ground-effect vehicles/ekranoplans back in vogue!
With today's tech you're spot on. As batteries advance though, we expect ranges closer to 500 miles by the end of the decade, which would indeed enable SF<>LA!
3x increase in battery capacity in the next 7 years seems rather optimistic to me. Are there any specific battery advancements in the development pipeline that you know of?
How are you managing battery degradation, especially given that you're planning for high charge rates with quick turnaround? Do you have an idea of how many pack replacements you'll need over the lifetime of the rest of the aircraft?
IIRC, a lot of the energy use in a plane flight is due to the initial acceleration, so doubling the capacity more than doubles the distance. You can also just use a bigger battery once the weight efficiency is better.
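A toy model makes the point concrete (all numbers below are made up for illustration, not the craft's actual specs):

```python
# Model: usable capacity pays a fixed takeoff/acceleration cost,
# then the rest is spent at a constant per-mile cruise rate.
TAKEOFF_OVERHEAD_KWH = 100.0  # assumed fixed cost of getting to cruise
CRUISE_KWH_PER_NM = 2.0       # assumed cruise consumption

def range_nm(capacity_kwh):
    """Range remaining after paying the fixed takeoff cost."""
    return (capacity_kwh - TAKEOFF_OVERHEAD_KWH) / CRUISE_KWH_PER_NM

print(range_nm(420.0))  # 160.0 nm with the base pack
print(range_nm(840.0))  # 370.0 nm with double the pack: more than 2x the range
```

Doubling capacity only pays the takeoff overhead once, so range more than doubles - ignoring the extra weight of the bigger pack, which works against you.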
Amprius is planning to offer a silicon nanowire anode sometime in the next decade it seems, which probably won't double the capacity, but it would be a significant improvement.
Author here - it's just to reduce support surface area. I know I'll need PostgreSQL's full text indexing and GIN indexes for hashtags/search eventually, and I probably also want to use some of the upsert and other specialised queries, and it's easier to just target one DB I know is very capable.
For reference, when I say "small to medium", in my head that means "up to about 1,000 people right now".
People were getting priced out of hosting an instance with "only" 10-20k users, and the instance-hosting services quote plans for <= 4k users, with the 4k end costing more than US$100/month - and even the "low end" 100-200 user instances come with 4 cores, 5 TB of monthly bandwidth, etc.
The general sense I have is that Mastodon - the default software, at least - is extremely resource-heavy for relatively low user counts. My assumption/hope is that the bulk of this is because the server software has never really been under sufficient pressure to improve, and Takahē seems to indicate that there's at least some room for improvement on the server side (i.e. the performance problems aren't entirely protocol/architecture problems).
Is there any advantage to using a traditional db as opposed to a graph db since json-ld is just a text representation of graph nodes?
I was thinking the easiest path would be to have the server deal with all the ActivityPub stuff and expose something like a GraphQL interface for a bring-your-own-client implementation. Of all the stuff GraphQL has been shoehorned into, this seems like a valid fit - like they were made for each other.
For better or worse, many servers are targeting Mastodon API compatibility to be able to leverage the existing clients. Adding GraphQL increases surface area without solving the bigger issue of creating the clients.
I didn’t get as far as looking into the mastodon API for clients but that makes perfect sense, I just assumed it was an overlay on the more general API.
Mostly I was thinking about how one could implement something in the most efficient way, and graph databases/GraphQL were literally designed for this stuff.
I'm (worriedly) curious how this will affect people trying to change jobs on an H-1B, as technically you need to file a new petition each time, and I can see them somehow denying those too.
Extremely risky, I’d say. There are people who have already changed jobs while their H-1B transfer petitions are still pending (since premium processing was suspended earlier), also called joining “on receipt”. Normal processing takes weeks to months, and if their transfer applications end up getting rejected now, they lose visa status.
The entry ban will be for visa issuance at consulates and entry into the US from abroad.
What you are talking about would require a rule-making process and/or statutory change. They are going to try that, but it will take longer and will be subject to judicial review.
AC21 lets you change employers without waiting for the petition to be approved. That's statutory - much harder to change, as it would require Congress.
Eventually, your visa label will expire. After that, if you need to travel overseas, you will need a new visa label, and under this proposed executive order, if the ban is still active you will not be able to return. So you will have to avoid overseas travel entirely until the ban is lifted.
I've replied to the post on the forum, but if this is the default way Jupyter runs then we're going to have to figure something out longer-term. Calling the Django ORM from an async thread just isn't safe...
Using Django in a notebook is definitely not a typical use case, so I wouldn't worry about it too much.
Ideally the ORM would be a standalone package that could be used outside of the web server context. (I know SQLAlchemy is an option, but then you lose all of the benefits of Django.)
I'd love to hear your suggestions for changes we could make while keeping it somewhat WSGI-compatible. It took a few years to refine it to where it is now, so it's not like we just threw something at the wall.
There's the problem. WSGI is fundamentally flawed too - it could also be using generators for a two-way communication channel instead of stringly typed callbacks.
In a world where Python has optional static type hints, it would be nice to have concrete objects passed too.
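For illustration, here's a minimal (hypothetical) WSGI app showing the shape being criticised - a string-keyed environ dict in, and status/headers pushed back out through a callback:

```python
def app(environ, start_response):
    # Request data arrives as plain strings in a dict; the response
    # status and headers leave through a callback rather than a
    # typed response object.
    path = environ.get("PATH_INFO", "/")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"hello from {path}".encode()]

# Driving it by hand with a fake environ - no server needed:
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app({"PATH_INFO": "/demo"}, start_response))
print(captured["status"], body)  # 200 OK b'hello from /demo'
```

Nothing here is type-checked: a typo in "PATH_INFO" or a malformed status string only fails at runtime, which is the complaint.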
Can you expand on the stringly typed callbacks? Where are they required? Are you referring to string keys in a callback dictionary, in which case stringly-typed seems a bit of an odd choice of words.
It's worth pointing out that the ASGI support in this release is very low level, and doesn't let you write async views or anything yet. We're still working on that.
> Note that as a side-effect of this change, Django is now aware of asynchronous event loops and will block you calling code marked as “async unsafe” - such as ORM operations - from an asynchronous context.
Am I correct to understand this as meaning async views can’t even read from the database yet? I guess the only use cases for ASGI views currently would be interacting with outside-Django backends that implement async support and such?
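The guard described in the quoted note can be sketched roughly like this - a toy stand-in rather than Django's actual implementation, though Django really does raise a SynchronousOnlyOperation error for this:

```python
import asyncio
from functools import wraps

class SynchronousOnlyOperation(Exception):
    pass

def async_unsafe(func):
    """Refuse to run blocking code while an event loop is active."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No running loop: plain synchronous context, safe to proceed.
            return func(*args, **kwargs)
        raise SynchronousOnlyOperation(
            f"You cannot call {func.__name__} from an async context."
        )
    return wrapper

@async_unsafe
def orm_query():
    return "rows"

print(orm_query())  # fine from synchronous code

async def view():
    try:
        return orm_query()
    except SynchronousOnlyOperation as exc:
        return str(exc)

print(asyncio.run(view()))  # the same call is blocked inside a coroutine
```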
If you’re in hacking mode - what do you think of taking the Django ORM and grafting it onto FastAPI? Sort of like a standalone SQLAlchemy, but with all the ease and power of Django’s querysets...
Django's ORM is not async, so using it with FastAPI would block the event loop. I guess you could wrap the calls in sync_to_async from asgiref, but it wouldn't be pretty.
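A stdlib sketch of that wrapping pattern, with asyncio.to_thread standing in for asgiref's sync_to_async (which does roughly this, plus extra care about which thread runs the call); the function names below are illustrative:

```python
import asyncio
import time

def blocking_orm_query():
    # Stand-in for a synchronous ORM call that would otherwise
    # block the event loop for its whole duration.
    time.sleep(0.05)
    return ["row1", "row2"]

async def endpoint():
    # Run the blocking call on a worker thread so the event loop
    # can keep serving other requests in the meantime.
    rows = await asyncio.to_thread(blocking_orm_query)
    return {"rows": rows}

print(asyncio.run(endpoint()))  # {'rows': ['row1', 'row2']}
```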
Another option is using something like Tom Christie's orm project (https://github.com/encode/orm), which is a wrapper on top of SQLAlchemy with a Django-like interface.
I would agree with some of the points - speed (which is stretched over two points), concurrency, and the ability to not do magic as easily - but several of the points (compile time, available developers, strong ecosystem) aren't "versus Python" points at all, but rather comparisons with other languages.
Personally, I see a language like Go and Python as solving different spaces. I wouldn't write a lot of website business logic in Go, and I wouldn't write a low-level TCP redirection daemon in Python.
Yeah, there are some use cases where Python is a clear winner. For us, Python seemed like a good fit initially: traffic was low, and the API didn't provide the more advanced features (ranking, aggregation) where Python's speed becomes an issue.
I can't say much on the subject, at least not without official approval, but understand that the original Lanyrd team have not forgotten about the site; asking us to fix it isn't really giving us any new information (sadly).
If you want to make requests of any kind you're better off reaching out to Eventbrite directly.
There is, actually - the US has a (slightly worse) government data repository, and there are several LIDAR sets of the Bay Area, mostly from coastal surveys.