First, see below for Toll et al. (2020), and I used autocorrect for grammar. Sorry, but you were dismissive before looking it up; that is more a reflection of your bias.
Second, I noted all the caveats of having an LLM do the counting. I actually presumed I had undercounted, but it has been noted that a simple Ctrl-F found 3.8 per page rather than 9.8 per page (counting only single em-dashes, not doubles). The exact number doesn't matter much, since even the low bound is an absurd difference from the baseline bills I checked from earlier this year and from 2024, where em-dashes do not appear outside of the table of contents.
4.x em-dashes per page (the low bound) is absurd, and the implication of that is the point you (respectfully) missed.
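For what it's worth, a deterministic count is easy to script; here's a minimal sketch using pypdf (the filename is hypothetical, and skipping doubled em-dashes mirrors the Ctrl-F methodology above):

    # Count single em-dashes per page, ignoring runs of two or more,
    # mirroring the Ctrl-F count described above. Filename is hypothetical.
    import re
    from pypdf import PdfReader

    reader = PdfReader("bill.pdf")
    counts = []
    for page in reader.pages:
        text = page.extract_text() or ""
        # A lone em-dash: not preceded or followed by another em-dash.
        counts.append(len(re.findall(r"(?<!\u2014)\u2014(?!\u2014)", text)))

    print(f"pages: {len(counts)}")
    print(f"mean em-dashes per page: {sum(counts) / len(counts):.1f}")

Running the same script over a handful of bills from earlier years would give the baseline comparison directly.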
And how do you know it wasn't just edited by someone who loves em-dashes?
Comparing it to the average doesn't matter too much. Better evidence would be showing that no bill has ever come anywhere close to the number of em-dashes used in this one.
Yeah, I get your point. The post isn't necessarily designed to prove AI use (that's already highly probable, and not necessarily bad by itself in theory); the implications of it are more interesting than deterministic evidence of it. That said, since the post does rest on showing AI use is likely, I've updated it to reflect a better baseline.
Wow. I normally don't like to pile on, but check out this user profile:
>Chester Hunt is the growth manager at Legitt AI, where he oversees all organic channels. With several years of experience in tech startups and scaleups focusing on B2B, Chester brings a wealth of knowledge and expertise to her role. Outside of work, you can find her socializing, traveling, engaging in extreme sports, and savoring local desserts.
That almost seems too bad, like it's a false flag or something.
This shows just how completely detached from reality this whole "takeoff" narrative is. It's utterly baffling that someone would consider it "controversial" that understanding the world requires *observing the world*.
The hallmark example of this is life extension. There's a not insignificant fraction of very powerful, very wealthy people who think that their machine god is going to read all of reddit and somehow cogitate its way to a cure for ageing. But how would we know if it works? Seriously, how else do we know if our AGI's life extension therapy is working besides just fucking waiting and seeing if people still die? Each iteration will take years (if not decades) just to test.
Last year I went for a walk with a fairly well-known AI researcher, and I was somewhat shocked that they didn't understand the difference between thoughts, feelings, and emotions. This is what I find interesting about all these top names in AI.
I presume the teams at the frontier labs are interdisciplinary (philosophy, psychology, biology, technology), though that may be a poor assumption.
What do you think is the difference, and why are you certain it must apply to AI? Why do you think human thought/emotion is an appropriate model for AI?
If it's all just information in the end, we don't know how much of all this is implementation detail and ultimately irrelevant for a system's ability to reason.
Because I am pretty sure AI researchers are first and foremost trying to make AI that can reason effectively, not AI that can have feelings.
Let's walk before we run. We are nowhere near understanding what qualia even are, let alone being in a position to think we can create them.
It's been very, very thoroughly researched; in fact, my father was a (non-famous, Michigan U, '60s-era) researcher on this. Recommended reading: Damasio, A. R. (1994); Lazarus, R. S. (1991); LeDoux, J. E. (1996).
Why do I think it's appropriate? Not to be rude, but I'm surprised that isn't self-evident. As we seek to create understanding machines and systems capable of what we ourselves can do, understanding how that interplay works in the context of artificial intelligence will help build a wider picture, and that additional view may influence how we put together things like more empathetic robots, or anything driven by synthetic understanding.
AI researchers are indeed aiming to build effective reasoners first and foremost, but effective reasoning itself is deeply intertwined with emotional and affective processes, as decades of neuroscience research have demonstrated. Reasoning doesn't occur in isolation: human intelligence isn't some purely abstract, disembodied logic engine. The research I cited shows it's influenced by affective states and emotional frameworks. Understanding these interactions should open new paths toward richer, more flexible artificial understanding engines. Obviously this doesn't mean immediately chasing qualia or feelings for their own sake; it's just important to recognize that human reasoning emerges from integrated cognitive and emotional subsystems.
Surely ignoring decades of evidence on how emotional context shapes human reasoning limits our vision, narrowing the scope of what AI could ultimately achieve?
I think it’s still difficult to conceive of this branch of computer science as a natural science, where one observes the behaviour of non-understood things in certain conditions. Most people still think of computer science as successively building on top of first principles and theoretical axioms.
I keep hearing that “the killing would end if Hamas would just release the hostages.” But the Israelis keep offering ceasefire terms that include the full release of hostages but no permanent cessation of hostilities, only 60 days, and not even a temporary full withdrawal from Gaza.
Why do you think the Israelis want to keep their tanks in Gaza even after all the hostages come back? Why won't they offer a full and permanent ceasefire? I think this hostage justification is just the Israelis buying time so they can keep doing what they always actually wanted: the full ethnic cleansing of Gaza.
Nonsense. Anything less than a full military occupation of Gaza, for the medium term at least, would be unacceptable. This is so blindingly obvious that it doesn't even need to be explained. Anyway, it would benefit the Gazans far more than a war every 5 years.
If they wanted to ethnically cleanse Gaza, they would have done so long before October 7. You don't seem to understand the reality of war and the consequences of being on the losing side, nor the constraints Israel would be forced to work under if it had total control of Gaza.
You didn’t answer the question. Why won’t Israel commit to a permanent ceasefire if “the war ends when the hostages are released”? Why do they insist on being able to start the war again in 60 days if the hostages are all they want?
The hostages aren't all they want. From Wikipedia:
>Israel's campaign has four stated goals: to destroy Hamas, to free the hostages, to ensure Gaza no longer poses a threat to Israel, and to return displaced residents of Northern Israel.
Pretty clear, and I never suggested otherwise. I'm not sure where you got that idea from.
Please, I really don’t think you’re discussing this in good faith.
“And the fact is that the people of Gaza could end the conflict whenever they want. All they need to do is surrender and hand over the hostages”
So no, Israel decides how and when the killing ends, and apparently that's when “Gaza no longer poses a threat”. Who knows what that means, but apparently it involves mass starvation, firing tank rounds into crowds, and destroying every hospital.
I can't comment on specific actions, but it would definitely mean the destruction of Hamas and Islamic Jihad, as well as systematically removing all weapons from the Gaza Strip and destroying all tunnels and terror infrastructure. If they surrender, the process can happen without loss of life (even the death of all militants can be avoided with a negotiated surrender).
Gaza was a pretty enormous threat, so neutralising it takes an enormous amount of effort. If you cared about the death and destruction in Gaza, you would be calling for the end of Hamas. It's not like Israel wants to be stuck in an endless conflict in Gaza; I think it has shown many times in the recent past that it prefers peace to war.
The word “surrender” is carrying quite a lot of meaning there, but it's still good faith on my part.
Molecular dynamics describes very short, very small dynamics, on the scale of nanoseconds and angstroms (0.1 nm).
What you’re describing is more like whole cell simulation. Whole cells are thousands of times larger than a protein and cellular processes can take days to finish. Cells contain millions of individual proteins.
So that means we just can't simulate all the individual proteins; it's far too costly, and it might permanently remain that way.
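To put rough numbers on "too costly", here's a back-of-envelope sketch; every constant is an order-of-magnitude assumption, not a measurement:

    # Why brute-force MD of a whole cell is out of reach (order-of-magnitude guesses).
    md_timestep_s = 2e-15             # typical MD integration step, ~2 fs
    cell_process_s = 86_400           # a cellular process lasting ~1 day

    atoms_per_protein_run = 1e5       # a solvated protein, a routine MD job today
    atoms_per_cell = 1e11             # ~1 pg wet mass at roughly 10 Da per atom

    steps_needed = cell_process_s / md_timestep_s         # ~4e19 timesteps
    size_factor = atoms_per_cell / atoms_per_protein_run  # ~1e6x more atoms

    print(f"timesteps for one day of cell time: {steps_needed:.0e}")
    print(f"system-size factor vs. a routine protein run: {size_factor:.0e}")
    # A long protein simulation today is ~1e9 steps (microseconds), so this is
    # roughly ten orders of magnitude longer and six orders bigger than we can run.

Under those assumptions, the gap isn't a constant-factor engineering problem; it's many orders of magnitude in both system size and simulated time.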
The problem is that biology is insanely tightly coupled across scales. Cancer is the prototypical example: a single mutated letter of DNA in a single cell can cause a tumor that kills a blue whale. And it works the other way too; big changes like changing your diet get funneled down to epigenetic molecular changes to your DNA.
Basically, we have to at least consider molecular detail when simulating things as large as a whole cell. With machine learning tools and enough data we can learn some common patterns, but I think both physical and machine learned models are always going to smooth over interesting emergent behavior.
Also you’re absolutely correct about not being able to “see” inside cells. But, the models can only really see as far as the data lets them. So better microscopes and sequencing methods are going to drive better models as much as (or more than) better algorithms or more GPUs.
Scales can also decouple from each other. Complex-trait genetic variation at the whole-genome level acts predominantly in an additive fashion, even though individual genes and variants have clearly non-linear epistatic interactions.
Don Pinkel is not well known, but he was a pioneer at St. Jude in Memphis in the '60s, developing the first combination treatments that pushed the childhood acute lymphoblastic leukemia cure rate from effectively zero to about 50%.
The problem with this machine-learned “predictive biology” framework is that it doesn’t have any prescription for what to do when your predictions fail. Just collect more data! What kind of data? As the author notes, the configuration space of biology is effectively infinite so it matters a great deal what you measure and how you measure it. If you don’t think about this (or your model can’t help you think about it) you’re unlikely to observe the conditions where your predictions are incorrect. That’s why other modeling approaches care about tedious things like physics and causality. They let you constrain the model to conditions you’ve observed and hypothesize what missing, unobserved factors might be influencing your system.
It's also a bit arrogant in presuming that no other approaches to modeling cells cared about “prediction”. Of course systems and mathematical biologists care about making accurate predictions; they just also care about other things, like understanding molecular interactions, *because that lets you make better predictions*.
Not to be cynical, but this seems like an attempt to export benchmark culture from ML into bio. I think that blindly maximizing test-set accuracy is likely to lead down a lot of dead-end paths. I say this as someone actively doing ML-for-bio research.
Also, predictions in biology take months or years to validate, so they lack the fast feedback loop of the vision and NLP world, where feedback is almost instant.
Combine this with the fact that in vivo data in biology is extremely limited, and we see that copying the NLP and vision playbook into biology is challenging.
This. Many of the predictions we're talking about are potentially years in the making, involve expensive data collection to validate, suffer from a lot of stochastic noise, etc.
Honestly, even if a prediction comes with an experiment, and they know exactly how the experiment was done, it takes months to years to follow up and verify.
Generative AI is basically going to flood the field with more predictions, but with little explanation of how they were reached, while doing nothing to alleviate the downstream verification process.
And when its prediction is off, without an explanation of how it was made you have no chance to revise it; it's all the way back to square one.
You are part of the universe. You can create meaning. So, the universe has meaning if you create it. Meaning is an emergent property of letting hydrogen sit around in a gravity well for a long time.
I guess I just don't see the point of aggrandizing my personal goals and desires as "meaning," at least in the sense that people usually mean it. What I want is just what I want and the world would be a better place if people could just accept that about themselves as well.
Apparently, the article for David Woodard, an American composer and conductor, has been translated into 333 languages, including Seediq, a language spoken in northern Taiwan by about 20 thousand people.
I am absolutely baffled as to why this is the case. I have to imagine some kind of "astroturfed" effort by Woodard or a fan to spread his name?
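If anyone wants to verify the 333 figure, the public MediaWiki API exposes an article's interlanguage links; here's a minimal sketch (the count itself is the thread's claim, not something I've confirmed):

    # Count how many other language editions the English article links to.
    # lllimit=max returns up to 500 links per request, enough to cover ~333.
    import requests

    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "format": "json",
            "titles": "David Woodard",
            "prop": "langlinks",
            "lllimit": "max",
        },
        headers={"User-Agent": "langlink-count-sketch/0.1"},
    )
    for page in resp.json()["query"]["pages"].values():
        print(page["title"], len(page.get("langlinks", [])))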
I don't know what "mods" are, but perhaps you mean "modifiers", as in "editors".
One main aspect in play here is that we're dealing with over 300 sets of Wikipedia editors in different projects. Each Wikipedia language-based project is siloed, with its own complement of editors, admins, policies and guidelines. Sure, you can edit more than one Wikipedia from a single account, but there is typically a true community that coalesces in each one, and they set the culture and the rules of behavior.
I have found that many are less deletionist, less vigilant, and more welcoming of new content in general. The majority of these articles may be under the radar for them. They may not detect anything wrong with the articles. They may not care. They may have too few patrolling editors in general to clean up minor issues like this.
Another thing about the small communities that have formed: they understandably do not always enjoy it when an editor comes cross-wiki to combat some perceived abuse or vandalism. What some user did on another wiki doesn't matter to them; if a local user is not being disruptive per se, then they should not be subject to any disciplinary action.
So if anyone were to pursue this seemingly minor issue of single-article spam, they'd need to pursue it more than 300 times in 300 different ways, subject to 300 separate policies and guidelines interpreted by that many communities of editors and admins. That's sort of a radioactive task for anyone there.
Wikipedia doesn't have anyone called "moderators", at least not in English, nor in any sense of user rights. "Mods and admins" is usually what an ignorant non-community-member appeals to when something is wrong there.
The truth is that Wikipedia content is not governed by a hierarchical administration, and all ordinary editors collaborate there to achieve consensus.
Administrators on Wikipedia are responsible for administrative tasks, privileged actions, and disciplinary measures. Not content, not choosing which sorts of articles to delete, not cleaning up articles.
See, I tried to give the benefit of the doubt and a favorable interpretation, and someone who isn't the GP chimes in to perpetuate the myth. I'm curious about the myth: where does it come from, and how do so many people sincerely just believe this is how Wikipedia works? Is "mods and admins" the default go-to group that cleans up other websites? Is it a specific meme from Reddit or some other forum-type place? "Moderators" are usually the ones who execute discipline on forum discussions and users. That's not even a relevant role in terms of Wikipedia. But even on the noticeboards and talk pages, newbies come in all the time to appeal to "mods and admins": please address our problem. It's interesting how uniform the myth becomes!
The really amazing thing is that there are more language versions of the article about this guy than of the article about Wikipedia itself. You'd think the first thing the editors of any Wikipedia would do is write an article about Wikipedia.
I checked the Malayalam page for David Woodard, as I have native proficiency, and when it comes to translation into Malayalam, even the finer engines are patchy at best. Firstly, there is an alert at the top saying that the article seems to have been translated automatically and needs improvement, and frankly, this is quite self-evident too. Which makes me wonder whether someone tried to script/automate the translation (of this article) into a large number of languages?
That's what it looks like. Same for Spanish, weird automatic translation.
I've also seen that they've uploaded "name pronunciations" to Wikimedia that were generated by TTS engines that are not, precisely, the latest generation. [0] Looks like some sort of automation exercise. Edited in a bunch of languages, but mostly in English. [1]
A lot of them seem to be stubs with only one line of content. Not very hard to translate "David James Woodard (born April 6, 1964) is an American conductor and writer" passably into 300-some languages.
Though, I'm not sure if the Good Article assessment is used in many languages. Maybe someone could slap some LLM on it to do a quick assessment of which are likely to be GA.
“Overall, a more nuanced view of AI in government is necessary to create realistic expectations and mitigate risks (Toll et al., 2020)”
What a unique and human thought for a personal blog post. Also, who the fuck is Toll et al? There's no bibliography.
Second, the authors used Gemini to count em-dashes. I know parsing PDFs is not trivial, but this is absurd.