Agreed on the small market, what would move the needle is if it re-charged much more quickly than current batteries.
EV owners really only want 500+ miles because charging the battery takes so long. Charging infrastructure is already changing and becoming more available, so charging speed will be the real question.
Hopefully I can phrase this in a non-confrontational way, but isn't that what "everyone" wants? We keep hearing about porn being exploitative and driving some human trafficking; this seems like a possible solution that makes "everyone" happy. Of course there are issues with starter images, and, as others have pointed out, a lot of porn's appeal is social interaction.
Colorado is currently solidly a blue state, and Democratic lawmakers don't actually need help. I am in Titone's district (the sponsor), and the right-to-repair bill was broadly supported.
"Across the world, thousands of underground coal fires are burning at any given moment. The problem is most acute in industrializing, coal-rich nations such as China.[7] Global coal fire emissions are estimated to cause 40 tons of mercury to enter the atmosphere annually, and to represent three percent of the world's annual CO2 emissions.[8]"
This was interesting but probably not the way the author intended. Lately I feel like I have been spending a significant part of my development time creating small scripts like this for the sole purpose of convincing sys-admins that the problem is actually theirs.
I absolutely believe that sys-admins are as stressed and stretched thin as the rest of us, and that systems in general are worse off because of it, but I have always been fascinated (and irritated) by the assumption that sys-admins are right until proven wrong.
As someone on the other side of this, I'm sympathetic and genuinely do try to debug problems, but, off the top of my head,
a) I don't actually have the same level of access to our cluster as our users do. There are datasets and even programs with contractual limitations on who can access them. So if you tell me "My job isn't working," I can't run it myself and see what's wrong; you need to send me the error message. Just like with software, if you can get me a minimal, self-contained example (especially one I can run myself), I can try to figure out why it's breaking, but I can't necessarily minimize your code.
b) Somewhat by definition (a system with "sysadmins" necessarily has enough users to justify paying us), there are a whole lot of other users who don't have whatever problem you have. (We notice very quickly if a problem is affecting everyone.) So chances are high that the answer is "You're holding it wrong" instead of "The tool is broken." Yes, a lot of the time that's bad documentation or bad error messages, which we can and should fix, but in practice the common answer to those questions is that a teammate shows you how to hold the tool. The point of a sysadmin is to take advantage of economies of scale; it doesn't scale for us to debug everyone's problems. (And there's a very real sense in which time spent helping an individual user is time not spent writing docs or improving error messages.)
I think these problems ought to be solvable, and I'm curious what we (culturally) can do to make this better.
At the somewhat deep technical level, I've been wondering about the nature of errors. Some errors - e.g., statting a file that doesn't exist - are fairly common in working software. Others - e.g., statting a file that you don't have permission to - ought to be pretty rare. Suppose we had a kernel that could distinguish those, somehow, and sample backtraces or error contexts in some fashion. Would that help us identify problems like this faster, and zero in more quickly on the fact that the system actually isn't working right?
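To make the idea concrete, here's a minimal user-space sketch (hypothetical, not a real kernel facility): wrap stat() and bucket failures by whether the errno is "expected" in healthy software (ENOENT from probing for a file) or "suspicious" (EACCES/EPERM, which often mean the system is misconfigured). A real implementation would sample backtraces at the suspicious sites.

```python
import errno
import os
from collections import Counter

# errnos that routinely occur in healthy software (e.g., probing for a file)
EXPECTED = {errno.ENOENT}
# errnos that usually signal a misconfigured system rather than normal flow
SUSPICIOUS = {errno.EACCES, errno.EPERM}

samples = Counter()

def stat_sampled(path):
    """stat() wrapper that classifies failures instead of just raising."""
    try:
        return os.stat(path)
    except OSError as e:
        if e.errno in EXPECTED:
            samples["expected"] += 1
        elif e.errno in SUSPICIOUS:
            # a real facility would capture a backtrace here,
            # e.g., traceback.extract_stack()
            samples["suspicious"] += 1
        else:
            samples["other"] += 1
        raise

for p in ["/no/such/file", "/definitely/missing/too"]:
    try:
        stat_sampled(p)
    except OSError:
        pass

print(dict(samples))
```

A sysadmin could then look at the "suspicious" bucket alone, which should be nearly empty on a healthy system, instead of wading through every failed syscall.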
All of those are great points and I agree, I just find myself, more often lately, exhaustively trying to prove my bug is real before something gets fixed.
I wish there were some sort of badges I could acquire, like "You have earned 5 bugs to be fixed, without being a dumbass" badge. And then my 6th one might get escalated earlier.
Like I said, I really appreciate both sides of the issue and am also not certain how to make it better.
I want to echo this. I am stunned anything productive gets done in Cairo. It's a coin flip whether the traffic nightmare lets you get to a meeting on time, or even to the office in less than 4 hours.
I, probably naively, took this as more of an efficiency move rather than political.
I work at one of the labs mentioned and get paid for running not only the climate models but mesoscale models as well, which are also written in Fortran.
The premise of the article is that Fortran, 70 years later, is still an appropriate tool for crunching numbers, which it absolutely is, but it neglects one major problem.
Like the COBOL issue that was all the rage 20 years ago, it is difficult to hire younger-generation programmers who want, and are excited, to develop in Fortran.
> ...it is difficult to hire younger-generation programmers who want, and are excited, to develop in Fortran.
How much are you paying? Most times I see this kind of reasoning, digging deeper shows that the salaries are not competitive. There's a large number of us who just want to work on interesting problems for adequate money and don't care what the toolset is. I'm fully on board with the idea of being paid to write Fortran.
Also, COBOL's problem isn't so much that younger generations aren't excited about it, but that the problems in the domain solved by COBOL all require highly specialized knowledge of an arcane set of systems said code runs on (with most of their documentation paywalled, at least until recently). The barriers to entry are much, much higher, and few companies are willing to train at the rates the language demands.
My understanding is that they're mostly Fortran programs linked together with Unix scripts and run on HPC systems - could the models run in a more distributed way, like a high-quality grid-computing setup? Lastly, what's the best way to find and learn more about the models?
Switching to any sort of commercial grid or cloud computing setup is complicated by the fact that climate models are critically dependent on the fast, low-latency interconnects (e.g., InfiniBand) of a proper HPC system to achieve good performance at scale. This is usually coordinated with hand-written message passing via MPI directly in the relevant top-level Fortran (or C/C++) program.
There are some other (i.e., "embarrassingly parallel") scientific computing problems where a higher-latency distributed setup would be fine, but in climate models, as in any finite-element model, each grid cell needs to be able to "talk to" its neighbors at each timestep, leading to quite a lot of inter-process communication.
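A toy sketch of why that communication can't be avoided (pure Python, simulating two MPI ranks in one process; in a real model the exchange would be MPI sends/receives whose latency is paid every timestep): 1-D diffusion split across two subdomains, where each step requires a "halo exchange" of boundary cells before either side can update.

```python
def step(u, left_ghost, right_ghost, alpha=0.1):
    """One explicit diffusion step on a subdomain, using ghost cells
    supplied by the neighboring subdomain (or the outer boundary)."""
    padded = [left_ghost] + u + [right_ghost]
    return [padded[i] + alpha * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

# a global field split across two "ranks"
rank0 = [0.0, 0.0, 1.0, 1.0]
rank1 = [1.0, 1.0, 0.0, 0.0]

for _ in range(10):
    # halo exchange: each rank needs its neighbor's edge cell every step;
    # this is the per-timestep communication that demands low latency
    ghost_for_rank0, ghost_for_rank1 = rank1[0], rank0[-1]
    rank0 = step(rank0, rank0[0], ghost_for_rank0)  # reflective outer edge
    rank1 = step(rank1, ghost_for_rank1, rank1[-1])  # reflective outer edge

full = rank0 + rank1
print(full)
```

The exchange happens inside the timestep loop, so total runtime scales with (steps × exchange latency); on a commodity network with millisecond latencies, millions of timesteps make that term dominate, which is why the interconnect matters so much.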
Yes, they run in the cloud; see e.g. https://cloudrun.co (disclaimer: my side business), but others have done it as well, for a few years now. On dedicated, shared-memory nodes it's no different from HPC performance-wise. It can even be better, because cloud instances tend to have later-generation CPUs, whereas large HPC systems are typically updated every ~5 years. But for distributed-memory parallel runs (multi-node), latency increases considerably on commodity clouds, which kills parallel scaling for models. Fortunately, major providers (AWS, GCP, Azure) have recently started offering low-latency interconnects for some of their VMs, so this problem will soon go away as well.
Indeed, basically, though you may lose from lack of direct access to the hardware. But it's typically expensive.
Do AWS and GCP actually have RDMA fabrics now? The AWS "low latency" one of a year or so ago had a similar latency to what I got with 1GbE at one time.
I was part of a project looking at the feasibility of migrating some of the EPA's air-pollutant exposure models from Fortran to R/Python. While Fortran was decisively faster, I think the project lead recommended migrating the model to R since not many people used Fortran anymore. It was also harder to share Fortran code for collaboration.
I wrote an add-on for the MBS application Simpack[1] in Fortran as part of my master's thesis, and I have to say that, except for the stupid line-length limit, I enjoyed using Fortran (it was my first contact with the language). My educational background is mechatronics, so my CS background is embedded systems rather than web/GUI applications.