
Which, amusingly, also serves as a stable API for Windows now too.


How so? I also find randomness profound, but I'm not sure what you mean by it not belonging in the materialized world. Particle decay/radiation is a pretty random process, I believe?


Possibly connecting random events to time, which is not material.


The transformation of a particle into two more basic particles is absolutely material.


Some confusion? I was saying "time is not material".

In my conception time is made out of events, and the events are I suppose all material, and all have probabilities. So maybe time follows inevitably from matter. But I think it exists in its own right as a phenomenon that isn't material. There are such things. Knowledge is another one.


I have this issue intermittently just when walking around, at sea level! I can even make them start screaming just by holding them. The AirPods 2 do not have the same issue.


AirPods Pro 1 had that issue


I did not realize xorshift is no longer as favored! Permuted Congruential Generators seem very cool. https://en.wikipedia.org/wiki/Permuted_congruential_generato...
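
If anyone wants to poke at PCG without implementing it from scratch: NumPy's default generator is already backed by PCG64. A tiny sketch, assuming NumPy is installed (the seed and sizes here are just for illustration):

    import numpy as np

    # np.random.default_rng() returns a Generator backed by the PCG64
    # bit generator, i.e. a member of the PCG family.
    rng = np.random.default_rng(42)
    print(rng.integers(0, 100, size=5))   # five ints in [0, 100)
    print(rng.random(3))                  # three floats in [0, 1)

    # The bit generator can also be constructed explicitly:
    rng2 = np.random.Generator(np.random.PCG64(seed=42))
    print(rng2.standard_normal(3))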


Xorshift variations (specifically xoshiro256++ and friends) are still really good.
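
To show how small they are, here's a rough pure-Python port of the xoshiro256++ step function from the public-domain reference (proper seeding, e.g. via splitmix64, is assumed but omitted; the seed values below are just placeholders):

    MASK64 = (1 << 64) - 1

    def rotl(x, k):
        # 64-bit left rotation
        return ((x << k) | (x >> (64 - k))) & MASK64

    def xoshiro256pp(state):
        # One step of xoshiro256++: state is a list of four 64-bit ints,
        # not all zero. Mutates state in place, returns the next output.
        s0, s1, s2, s3 = state
        result = (rotl((s0 + s3) & MASK64, 23) + s0) & MASK64
        t = (s1 << 17) & MASK64
        s2 ^= s0
        s3 ^= s1
        s1 ^= s2
        s0 ^= s3
        s2 ^= t
        s3 = rotl(s3, 45)
        state[:] = [s0, s1, s2, s3]
        return result

    state = [0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9,
             0x94D049BB133111EB, 0x2545F4914F6CDD1D]  # placeholder seed
    print([xoshiro256pp(state) for _ in range(3)])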


I have had a similar experience: with ZFS, zstd dropped IOPS and throughput by 2-4x compared to lz4! On a 64-core Milan server chip…


ZFS lz4 in my experience is faster in every metric than no compression.


Only if the data in question is at least somewhat compressible


Not really; it goes through the CPU so fast that the disk speed is at worst the same and the CPU overhead is tiny (in other words, it's not fast while saturating the CPU, it's fast while consuming a couple percent of the CPU).

Technically, sure, you're correct, but the actual overhead of lz4 was more or less at the noise floor of other things going on on the system, to the extent that I think enabling lz4 without thought or analysis is always the best advice.

Unless you have a really specialized use case the additional compression from other algorithms isn't at all worth the performance penalty in my opinion.
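
For a rough feel of the relative CPU cost (this only measures the raw compressors on one buffer, not ZFS itself, and it assumes the third-party lz4 and zstandard Python packages are installed):

    import os
    import time

    import lz4.frame   # pip install lz4
    import zstandard   # pip install zstandard

    def bench(name, compress, data, iters=20):
        # Time a compressor over several passes; report throughput and ratio.
        start = time.perf_counter()
        for _ in range(iters):
            out = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name:>8}: {len(data) * iters / elapsed / 1e6:8.1f} MB/s, "
              f"ratio {len(data) / len(out):.2f}x")

    compressible = b"some repetitive log line with a few fields 12345\n" * 20000
    incompressible = os.urandom(len(compressible))  # ratio collapses toward 1.0x

    zctx = zstandard.ZstdCompressor(level=3)
    for label, data in [("text-ish", compressible), ("random", incompressible)]:
        print(label)
        bench("lz4", lz4.frame.compress, data)
        bench("zstd-3", zctx.compress, data)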


Makes sense, given that Windows NT is allegedly very much a next-gen OpenVMS... https://www.itprotoday.com/server-virtualization/windows-nt-...


XSCF - Fujitsu/Oracle SPARC

(I wonder how they got that acronym)


Possibly eXtended System Control Facility.


It looks like they generated random four-letter combinations until they found one that wasn't used.


Ooo also, there is the RMC in some HP AlphaServers.


eXtended System Configuration Facility?


You can hear some modern GPU/CPUs (well really their power electronics) when they get heavily loaded!

With training runs it makes a little beat and you can tell when it checkpoints because there’s a little skip. Or a GPU drops off the bus…


I live in an old house. When weather permits, I work in the detached garage.

When doing some AI stuff on my garage PC (4060 Ti; nothing crazy) the overhead lights in the garage slightly but noticeably dim. This doesn't occur when gaming.

It's most easily noticeable with one of nVidia's demo apps -- "AI Paintbrush" or something like that, I forget. It's a GUI app where you can "paint" with the mouse cursor. When you depress the mouse button, the GPU engages... and the garage lights dim. Release the mouse button, and the lights return to normal.


> You can hear some modern GPU/CPUs (well really their power electronics) when they get heavily loaded!

I'd hope you hear their fans too...


Sure, but that has a resolution of seconds at best. The coil whine in the power electronics is milliseconds-accurate.


Yeah I get a nice reminder to limit frame rates when I hear the atrocious coil whine from my 4090 as it renders 1500fps of a static loading screen.

First world problems.


It makes sense when applied across multiple instances of a test: if one cohort does terribly, curve them up; if one does really well, curve them down, relative to the overall distribution of scores.

But yeah within a single assignment it makes no sense to force a specific distribution. (People do this maybe because they don’t understand?)
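
A toy sketch of the kind of cross-cohort curve I mean, mapping each cohort's z-scores onto the pooled distribution (the cohort names and scores are made up, and real curving schemes vary):

    import statistics

    def curve_to_overall(cohort_scores):
        # Rescale each cohort's raw scores so its mean/stdev match the
        # distribution of all scores pooled together (z-score based curve).
        # cohort_scores: dict mapping cohort name -> list of raw scores.
        pooled = [s for scores in cohort_scores.values() for s in scores]
        target_mean = statistics.mean(pooled)
        target_sd = statistics.pstdev(pooled)

        curved = {}
        for cohort, scores in cohort_scores.items():
            m = statistics.mean(scores)
            sd = statistics.pstdev(scores) or 1.0  # avoid /0 for degenerate cohorts
            curved[cohort] = [round(target_mean + target_sd * (s - m) / sd, 1)
                              for s in scores]
        return curved

    # The cohort with the harsher exam gets shifted up, the easier one down.
    print(curve_to_overall({
        "section_A": [55, 60, 65, 70, 58],   # harder exam / stricter grading
        "section_B": [82, 88, 91, 79, 85],   # easier exam / lenient grading
    }))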


Even in that case it doesn't make sense. Why should the underperforming cohort be rewarded for doing poorly?


Depends on the rigor. The typical grade school curriculum expects you to keep up and get 80-90% of the content on a first go. Colleges can experiment with a variety of other kinds of methods. It's college, so there's no sense of "standardized" content at this point.

For some, there's the idea of pushing a student to their limit and breaking their boundaries. A student getting 50% on a hard course may learn more and overall perform better in their career than if they were an A student in an easy course. Should they be punished because they didn't game the course and try to get the easy one?

And of course, someone getting 80% in such a course is probably truly the cream of the crop which would go unnoticed in an easy course.


I think the prior probability, in the Bayesian sense, is that the two entering cohorts are equally skilled (assuming students were randomly split into two sections, as opposed to different sections being composed of different student bodies). If this were the case, the implication is that performance differences in standardized tests between cohorts are due to the professor (maybe one of the profs didn't cover the right material), so then normalization could be justified.

However if that prior is untrue for any reason whatsoever, the normalization would penalize higher performing cohorts (if it were a math course, maybe an engineering student dominated section vs an arts dominated cohort).

So I guess.. it depends


Right, and if it depends, maybe we just don't do it then?

Intuitively and in my experience, course content and exams are generally stable over many years, with only minor modifications as it evolves. Even different professors can sometimes have nearly identical exams for a given course, precisely so as to allow for better comparison.


Did the cohort do poorly, or were the tests given to that cohort harder than in previous years? Or was the teacher a more difficult grader than others? You're jumping to the conclusion that the cohort was underperforming just because the grades were lower, when other things out of their control could have been involved.


Tests are generally almost identical YoY, whereas humans are all very different! I think I'm making the simpler argument here.


The university I went to had student-run test banks of previous exams that the administration sanctioned. If the following year you get the same question as the previous year, then you're going to do better than the year that got the first version of that question.

You’re also ignoring the human element of grading particularly in subjective parts of an exam.


The idea is to identify if there is a particularly easy/hard exam and the average score of the cohort is significantly different to how they perform in other classes. "Doing poorly" is quite hard to define when none of the tests, perhaps outside of the core 1st and 2nd year modules, are standard.


Tests can be consistent over time without being a true standard. Student competency can vary much more greatly than test content.


Not really since then all students can learn the exam as a template after 2-3 exams leak.

The curving I knew at uni targeted exmatriculating 45% of students by the 3rd semester, and another 40% of those by the end, so the grades were adjusted so that X% would fail each exam. Then your target wasn't understanding the material but being better than half of the students taking it. The problems were complicated and time was severely limited, so it wasn't like you could really have a perfect score. Literally 1-2 people would get a perfect score in an exam taken by 1000 people, with many exams not having a perfect score at all.

I was one of the exmatriculated, and moving to more standard tests made things much easier, since you can learn templates with no real understanding. For example, an exam with 5 tasks would have a pool of 10 possible tasks, each with 3-4 variations, and after a while the possibilities for variation would become clear, so you could make a good guess at what this semester's slight difference would likely be.


Oh this is really soothing - and my cat is really intrigued by all the bird and other noises!

Edit/warning - Wow the thunder is loud!

