Even if Career Technical Education (CTE) classes are offered, there is large variation in their quality. For me, the question is whether a graduate from a CTE program is more likely to be hired, and to receive higher initial wages, than a non-CTE completer. My 2-minute Google Scholar search hasn't found anything on the topic.
At the end of the day, a 3-course sequence in a CTE pathway (California's requirement for a high school CTE certificate) doesn't prepare you for a career any more than a high school journalism class prepares you to be a journalist or theater prepares you to be an actor. Students will most likely need some form of post-secondary training (through a community college or on the job) to become reasonably competent in their field.
The most likely explanation for this phenomenon is that the population average of variable X hasn't changed; rather, the decrease in college students' average X is due to an increase in college-going rates.
Looking at the statistics[1], the US went from a 23.2% college completion rate in 1990 to 39.2% in 2022, roughly a 69% relative increase in college degree completions. If you assume the distribution of X in the population is constant over time, you mechanically need to enroll and graduate students from lower percentiles of X in order to increase the overall completion rate.
This process might be particularly acute at "lower tier" institutions that cannot compete with "top tier" institutions for top students.
I don't think the increase is big enough. A ~69% increase in the completion rate means the "new" students are about 41% of current graduates. But these reports are coming from all over the place and describe the majority of their classes.
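A quick sanity check of those shares:

```python
# Back-of-the-envelope check of the completion-rate arithmetic above.
old_rate, new_rate = 0.232, 0.392   # 1990 vs 2022 completion rates

relative_increase = (new_rate - old_rate) / old_rate   # growth of the rate itself
new_grad_share = (new_rate - old_rate) / new_rate      # "new" students as share of today's grads

print(round(relative_increase, 2), round(new_grad_share, 2))  # 0.69 0.41
```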
You can also see it in the whole pipeline. Everything he described is true (age-adjusted) for K-12 as well.
This particular professor has been teaching for 30 years. I'm not sure I find your explanation all that convincing in light of that, especially since this isn't an isolated opinion.
I'm much more interested in how long the average student has had a phone to distract them. For the incoming 2025 class of 18-year-olds, the iPhone came out the year they were born, so potentially their entire lives. I expect that, plus the availability of LLMs, is a deadly combo for an engaged student body.
Based on the intro of the article, the university where this professor works is likely below the median. Each year, the typical student at his/her university is worse because the best students go to better schools.
That most likely explains the slow creep of grade inflation, remedial courses, etc. which has been going on for decades. This article touches on that but mostly describes an entirely different phenomenon.
Apache Iceberg builds an additional layer on top of Parquet files that lets you do ACID transactions, rollbacks, and schema evolution.
A Parquet file is a static file that holds all the data for a table. You can't insert, update, delete, etc.; what's there is there. That works OK for small tables, but it becomes unwieldy if you need to do whole-table rewrites every time your data changes.
Apache Iceberg fixes this problem by adding a metadata layer on top of smaller Parquet files (that's the 300,000 ft overview).
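To make that concrete, here's a toy sketch (purely illustrative, nothing like real Iceberg internals, and the file names are invented): the table is a log of snapshots, each listing the immutable data files that make it up.

```python
# Toy model of a table-format metadata layer (NOT real Iceberg):
# each snapshot lists the immutable data files that form the table
# at that point in time.
snapshots = [{"id": 1, "files": ["part-000.parquet"]}]

def append(new_files):
    """'Insert' = write new files, then commit a snapshot referencing
    the old files plus the new ones. Old files are never touched."""
    latest = snapshots[-1]
    snapshots.append({"id": latest["id"] + 1,
                      "files": latest["files"] + new_files})

append(["part-001.parquet"])

current = snapshots[-1]["files"]   # readers follow the latest snapshot
rollback = snapshots[0]["files"]   # rollback/time travel = read an older one
print(current)   # ['part-000.parquet', 'part-001.parquet']
print(rollback)  # ['part-000.parquet']
```

The real thing adds manifests, atomic commits, and schema tracking on top, but the "immutable files + a pointer to the current snapshot" idea is the core of it.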
I know you’re not OP, and while this explanation is good, it doesn’t make sense to frame all this as a “problem” for Parquet. It’s just a file format; it isn’t intended to have this sort of scope.
The problem is that "Parquet is beautiful" gets extended all the time to pointless things: pq doesn't support appending updates, so let's merge thousands of files together to simulate a real table - totally good and fine.
Well… when Parquet came out, it was the first necessary evolutionary step to solve the missing-metadata problem in CSV extracts.
So it is CSV++, so to speak: CSV + metadata + compact data storage in a single file, but not a database table gone astray to wander the world on its own.
What's interesting to me is that these budget cuts are coming at a time when the VA is also trying to implement a new Electronic Health Record (EHR) system (Oracle CERNER) which is having substantial issues with rollouts (see the Google News page for Oracle CERNER or read the transcripts from the congressional hearings about it).
This could become a two-pronged problem: fewer people to provide care, while the remaining staff lose productivity to learning a new EHR system.
It seems to me that going from "F1 and F2 generations of mice respond differently to the smell of acetophenone if their parents were exposed to it" to "well, human trauma is inherited and there isn't anything we can do about violent behavior" is somewhat far-fetched and smells like neo-eugenics.
If the bait wasn't too traumatizing, could I interest you in a dessert? No acetophenone flavor, I promise.
> In summary, we have begun to explore an under-appreciated influence on adult behavior—ancestral experience before conception. From a translational perspective, our results allow us to appreciate how the experiences of a parent, before even conceiving offspring, markedly influence both structure and function in the nervous system of subsequent generations. Such a phenomenon may contribute to the etiology and potential intergenerational transmission of risk for neuropsychiatric disorders, such as phobias, anxiety, and post-traumatic stress disorder. To conclude, we interpret these results as highlighting how generations can inherit information about the salience of specific stimuli in ancestral environments so that their behavior and neuroanatomy are altered to allow for appropriate stimulus-specific responses.
I’ll take the bait: junk DNA, arguments against epigenetic expression, the irrelevance of the gut microbiome. Wrong and wrong and wrong, NYT biosciences editor!
Mainstream consensus on this as reported in the popular press is nothing like the actual credence of the guys in lab coats. I know serious biotech people at serious schools who won’t fuck with mRNA vaccination personally. As long as they’re not quoted on it.
When you get your bioscience from The Atlantic? Be ready to be wrong soon.
Real scientists don’t mouth off like this. They choose an emphasis when writing a grant application like a cover letter.
You could assign Elo at the group level. That increases the number of possible rated entities to C(N, g), where N is the number of players and g is the group size. It could work if the groups are stable enough.
If you have individual Elo ratings, you could pre-seed the group rating using averages and then let the algorithm take over.
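A minimal sketch of that seeding idea, assuming a standard Elo update (K = 32 and the 400-point scale are the usual chess defaults, not anything specific to your game):

```python
K = 32  # assumed chess-style K-factor; tune per game

def seed_group_rating(member_ratings):
    # Pre-seed a group's Elo from the average of its members.
    return sum(member_ratings) / len(member_ratings)

def expected(r_a, r_b):
    # Standard Elo expected score for A against B.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a):
    # score_a: 1.0 = team A wins, 0.5 = draw, 0.0 = loss.
    e_a = expected(r_a, r_b)
    return (r_a + K * (score_a - e_a),
            r_b + K * ((1 - score_a) - (1 - e_a)))

team_a = seed_group_rating([1500, 1700])   # 1600.0
team_b = seed_group_rating([1600, 1600])   # 1600.0
team_a, team_b = update(team_a, team_b, 1.0)
print(team_a, team_b)  # 1616.0 1584.0
```

Once a group has played a few rated matches, the seed washes out and the group's own results dominate.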
My guess is that they had prior experience asking students not to use cellphones, observed students starting to use wearables to skirt the ban, and decided to just ban all of them.
An interesting consequence of the wearable ban is that it also covers medical electronic devices (e.g., heart monitors or blood sugar monitors); parents are required to meet with administration to approve their use [1].
We have videos of kids attacking teachers for taking away devices. These things, whether we like it or not, are ingrained in society. Taking away a kid's phone can feel to them like you're cutting out a part of their brain.
Kids are still going to bring them, and kids are still going to feel it's their right to have them. On top of that, only a few people in the SV bubble would agree with taking phones away.
The rich SV bubble is actually the most restrictive, because they build the danger and understand it. The C-suite at FAANG and friends don't let their kids have nearly as much screen time as Alabama trailer park families do.
> We have videos of kids attacking teachers for taking away devices
I’ve seen kids screaming for iPads on flights, too. I don’t have high expectations for them in life.
Also, when did we normalise violent kids? If a kid attacks a teacher for any reason, they should be automatically suspended at a minimum.
> only a few people in the sv bubble would agree with taking phones away
I live in a rich town in the Rockies. Most parents don’t let their kids take phones to school. (None of the private ones permit them.)
There is an emerging class divide on phones in schools. When you meet them, you can clearly tell which kids need their phones to function and which do not. (Eye contact during conversation.)
Local mapping is surprisingly difficult. I believe the commercial products (e.g., Google Maps) are viable only because people (business owners, property owners) have strong incentives to submit edits, since these maps are the main way customers find them. Without that, you get into a limbo where you have data, but not the most up-to-date data.
By the way, not even government agencies have good geo data, even when they should. I needed up-to-date address information for work, so I bought a map from my local county assessor's office. In my mind, the assessor should have the most recent data on properties, since their main mission is to collect taxes annually. I was wrong. Their data is about 4 to 5 years wrong, with whole "new" subdivisions missing from their inventory. Google Maps kind of has them on the map; I believe their geolocation data comes from real estate platforms when new houses hit the market. OSM is about 10 years behind in my area. I am submitting edits as I find them.
If someone has a better idea on where to find address data, please let me know.
The assessor's mission is all about parcels and tax lots though. For that purpose, it's not 4 to 5 years wrong, it is current, but they don't care what the "address" is. Not all parcels have an address, or are on a street. The only addresses they care about are where to send the bill.
OpenStreetMap barely has any users in many areas. It seems likely enough that a modest amount of traction would lead to people noticing out of date information much more quickly.
If you want to take this a step further, quantitative methods are about efficient data reduction. As part of this data reduction, the model’s assumptions and mathematical form take center stage in describing how you got to your “number”.
This is different from qualitative analysis, where the data reduction is done “by hand” by the researcher.
The difference between the “automatic”, model-based data reduction in quantitative research and the “subjective” reduction in qualitative research is then amplified when people claim quant is more objective than qual. The discussion should instead be about the quality of the work and whether the final conclusions are warranted by the methods, not about the method itself.
Yep. There’s unfortunately a large contingent of people, usually people who haven’t conducted quantitative research themselves but have maybe read some, who are just impressed by numbers. It’s like the next level up from people who say “science says that …”.
It's far worse than that. People using numbers as supporting arguments often generated those numbers themselves, from scratch (meaning they also collected the data). I'm currently redoing a study from an old prof. The old study had a more trusted design and found a massively positive effect. The new study (including the old data) finds an effect firmly centered on zero. These people aren't even always dishonest. They're just incompetent.
It looks like the authors didn't properly handle missing values in Stata, leading to people with missing health information being marked as "severely ill" instead of being excluded from the analysis.
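For anyone who hasn't hit this gotcha: Stata's missing value (.) compares greater than any number, so something like `gen ill = health > 3` flags missing respondents as ill. A Python analogue of that failure mode (using inf to stand in for Stata's missing, purely illustrative data):

```python
# Python analogue of the Stata missing-value pitfall: Stata's missing
# (.) sorts above every number, modeled here as float('inf').
MISSING = float("inf")
health = [1, 2, MISSING, 4]        # hypothetical severity scores

# Buggy: the missing respondent is counted as "severely ill".
buggy = [s > 3 for s in health]

# Correct: exclude missing first (Stata: ... if !missing(health)).
fixed = [s > 3 for s in health if s != MISSING]

print(buggy)  # [False, False, True, True]
print(fixed)  # [False, False, True]
```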
Not OP, but to me it sounds like p-hacking, aka bad science, as well: if you slice a dataset into enough subsamples, you will very likely find random correlations. That’s the nature of these kinds of analyses, and we should be sceptical of conclusions based on such analyses.
I think you and p51-remorse are discussing different parts of the article. They're saying the updated analysis is suspect because of the risk of false discoveries. I believe that's probably true in the usual way--if we study 20 subgroups with no actual effect, then we expect one to show an effect with p < 0.05. There's no mention of preregistration or anything like a Bonferroni correction to manage that risk.
You're saying the original analysis was wrong due to a coding error. I believe that's also true, but that's not what they were discussing. The variable names are inscrutable, but the article text also seems to imply that line (mis)codes divorce, not severe illness:
> People who left the study were actually miscoded as getting divorced.
So they actually found a correlation between severe illness and leaving the study. That's perhaps intuitive, if those people were too busy managing their illness to respond.
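The 20-subgroup point is easy to make concrete:

```python
# Chance of at least one p < 0.05 "discovery" across 20 independent
# subgroup tests when there is no real effect, plus the Bonferroni fix.
alpha, m = 0.05, 20

fwer = 1 - (1 - alpha) ** m   # family-wise error rate
bonferroni = alpha / m        # corrected per-test threshold: 0.0025

print(round(fwer, 2))  # 0.64
```

So with no correction and no real effect, a false "discovery" is more likely than not; Bonferroni shrinks the per-test threshold to compensate (at the cost of power, which is why preregistration of the subgroups matters too).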