
Hi, I'm not who you asked, but I feel like I've done enough research to have some warnings. UV-C light itself is antimicrobial, but only for surfaces the light actually touches, and in the case of cloth it needs to penetrate a bit.

There are at least two types of UV-C light bulbs, as well as literal ozone generators that use a ceramic platen and a fan. The type of UV-C bulb most common on Amazon and Ali is ~254 nanometers, and _does not_ produce ozone. It does leave a smell, but it's more like an old-school hospital antiseptic smell. Probably the smell of the dead germs, yay.

Now 185nm is actually the correct wavelength to turn the O2 around the bulb into O3 (and other oxygen species too, I once read, I think, kind of like cracking hydrocarbons to make longer chains or something).

The UV-C bulbs (not the base, which is a standard Edison base) that can sterilize a room in 5-15 minutes are about 15-20 cm tall, with four quartz tubes connected together, standing up from the base. Image here [0]

You must run a fan over them if you want your money's worth. They get hot, the bases get hot, and in non-carpeted rooms it makes the most sense to aim the quartz down and the base up, which is real rough on them. That took me 2 bulbs to figure out.

If you can find a reputable place to get the box with ceramic and a fan that lasts more than 5 minutes, let me know, because that's closer to what i want for bedrooms and stuff.

The UV-C 185nm bulbs work great to make a car stop stinking, too! They completely remove cigarette smells, if the car hasn't been smoked in for a while. Run the A/C full blast and run the bulb for 15 minutes, open the windows for 5 minutes, roll 'em up, sniff. Still smell? Another 10 minutes, in the back seat, full A/C blasting. Vent, sniff. Faint smell? Replace the cabin air filter. Charge customer(?)

And I'm going to respond to your followup question to the GP as well: Covid. Obviously. They were telling us it would live on groceries and deliveries and so on, so I put all deliveries in my laundry room and dosed them with UV-C for a minute. CDC or whatever studies said that 10-60 seconds was more than enough to kill SARS-CoV-2.

I only use it for freshening cars, rooms, bathrooms, etc now.

WARNING: Do not be in the room with any UV-C light for more than a few seconds. Do not look at the bulb for literally any longer than necessary to ensure it is on and working. They make safety goggles that wrap your entire eye sockets to protect from UV, too. If you get a 185nm bulb, either completely ventilate the room with fresh air afterward, or leave it sealed for 60 minutes and then open it up for a few minutes; the ozone reacts and goes away or something.

UV-C hurts your skin, yes, but it will make your eyeballs literally itch. So don't, don't, don't look at it. These are not blacklights.

[0] https://m.media-amazon.com/images/I/71LgjON7J+L._AC_.jpg


Right? It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along, because it turns out humans function better too when given the proper context for their work. The only silver lining is that this is a colossal karmic retribution for the orgs that never gave a shit about this stuff until LLMs.

>You seem to think it's not 'just' tensor arithmetic.

If I asked you to explain how a car works and you responded with a lecture on metallic bonding in steel, you wouldn’t be saying anything false, but you also wouldn’t be explaining how a car works. You’d be describing an implementation substrate, not a mechanism at the level the question lives at.

Likewise, “it’s tensor arithmetic” is a statement about what the computer physically does, not what computation the model has learned (or how that computation is organized) that makes it behave as it does. It sheds essentially zero light on why the system answers addition correctly, fails on antonyms, hallucinates, generalizes, or forms internal abstractions.

So no: “tensor arithmetic” is not an explanation of LLM behavior in any useful sense. It’s the equivalent of saying “cars move because atoms.”

>It's [complex] pattern matching as the parent said

“Pattern matching”, whether you add [complex] to it or not, is not an explanation. It gestures vaguely at “something statistical” without specifying what is matched to what, where, and by what mechanism. If you wrote “it’s complex pattern matching” in the Methods section of a paper, you’d be laughed out of review. It’s a god-of-the-gaps phrase: whenever we don’t know or understand the mechanism, we say “pattern matching” and move on. But make no mistake, it's utterly meaningless; you've managed to say absolutely nothing at all.

And note what this conveniently ignores: modern interpretability work has repeatedly shown that next-token prediction can produce structured internal state that is not well-described as “pattern matching strings”.

- Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task (https://openreview.net/forum?id=DeG07_TcZvT) and Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models (https://openreview.net/forum?id=PPTrmvEnpW&referrer=%5Bthe%2...

Transformers trained on Othello or Chess games (same next token prediction) were demonstrated to have developed internal representations of the rules of the game. When a model predicted the next move in Othello, it wasn't just "pattern matching strings", it had constructed an internal map of the board state you could alter and probe. For Chess, it had even found a way to estimate a player's skill to better predict the next move.
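A minimal sketch of that probing methodology (entirely my own toy, not code from either paper): synthetic "activations" linearly encode a one-bit board feature, standing in for a real transformer's hidden states, and a one-pass perceptron serves as the linear probe. The recipe is the real technique: fit a linear readout, then check whether the feature is decodable from the internal state.

```python
import random

# Toy linear probe in the spirit of the Othello-GPT work. All data
# here is synthetic; only the probing recipe itself is the point.
rng = random.Random(0)
DIM = 16

w_true = [rng.gauss(0, 1) for _ in range(DIM)]  # unknown encoding direction

def activation(label):
    # hidden state = board feature * encoding direction + small noise
    return [label * wi + rng.gauss(0, 0.1) for wi in w_true]

labels = [rng.choice([-1, 1]) for _ in range(200)]
data = [(activation(l), l) for l in labels]

# one-pass perceptron as the linear probe
w = [0.0] * DIM
for x, label in data:
    if label * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
        w = [wi + label * xi for wi, xi in zip(w, x)]

# probe accuracy: can a linear map read the feature off the state?
acc = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (label > 0)
    for x, label in data
) / len(data)
```

High probe accuracy is evidence the representation is linearly present in the state; the papers go further and show that editing the probed board state changes the model's predictions.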

There are other interpretability papers even more interesting than those. Read them, and perhaps you'll understand how little we know.

On the Biology of a Large Language Model - https://transformer-circuits.pub/2025/attribution-graphs/bio...

Emergent Introspective Awareness in Large Language Models - https://transformer-circuits.pub/2025/introspection/index.ht...

>That said, you claim the parent is wrong. How would you describe LLM models, or generative "AI" models in the confines of a forum post, that demonstrates their error? Happy for you to make reference to academic papers that can aid understanding your position.

Nobody understands LLMs anywhere near enough to propose a complete theory that explains all their behaviors and failure modes. The people who think they do are the ones who understand them the least.

What we can say:

- LLMs are trained via next-token prediction and, in doing so, are incentivized to discover algorithms, heuristics, and internal world models that compress training data efficiently.

- These learned algorithms are not hand-coded; they are discovered during training in high-dimensional weight space and because of this, they are largely unknown to us.

- Interpretability research shows these models learn task-specific circuits and representations, some interpretable, many not.

- We do not have a unified theory of what algorithms a given model has learned for most tasks, nor do we fully understand how these algorithms compose or interfere.
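For readers unfamiliar with the training setup in the first bullet, here is a toy illustration (my own, with a counts-based bigram model standing in for a neural network): "next-token prediction" just means minimizing the average negative log-likelihood of each token given its context.

```python
import math

# A "model" is a conditional distribution p(next | context); training
# lowers the average negative log-likelihood over the corpus. That
# scalar is the entire training signal an LLM ever receives.
corpus = "the cat sat on the mat the cat ran".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def p(nxt, prev):
    row = counts[prev]
    return row.get(nxt, 0) / sum(row.values())

# average next-token loss in nats
loss = -sum(math.log(p(nxt, prev))
            for prev, nxt in zip(corpus, corpus[1:])) / (len(corpus) - 1)
```

Everything else — circuits, heuristics, world models — is whatever gradient descent discovers inside the weights as a means of pushing that one number down.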


NotJustBikes just put out a video about this issue - https://www.youtube.com/watch?v=--832LV9a3I

A couple years ago he also made a video about these trucks more broadly - https://www.youtube.com/watch?v=jN7mSXMruEo

What's truly maddening is how many of these vehicles, which _do not_ meet European safety standards, are _already_ in Europe. Walk around Hilversum in the Netherlands and you will see plenty of Dodge Rams (mostly 1500s, but there's even a 2500 dually usually parked on the sidewalk ("pavement" for Brits) where my kids used to go to school). They're imported under "Individual Vehicle Approval" rules, exempting them from type-approval safety requirements, and on top of that they are almost always registered as "business vehicles" (you can tell from the V plate), which means they pay an absolute pittance in tax.

I moved here to get away from American kindercrushers (among other reasons) and I am profoundly concerned that Europe is being invaded by these machines.

(Edit) Worth noting is that a lot of Dutch street design is based on the idea that people _can_ share space with cars in dense, low speed environments, but that assumption flies out the window when the vehicles are so large you can't even see a kid walking or biking to school.

Further edit - source - https://www.motorfinanceonline.com/news/dodge-ram-registrati... 5,000 Dodge Rams were imported into Europe in 2023 alone.


This is gonna be game-changing for the next 2-4 weeks before they nerf the model.

Then for the next 2-3 months people complaining about the degradation will be labeled “skill issue”.

Then a sacrificial Anthropic engineer will “discover” a couple of obscure bugs that “in some cases” might have led to less than optimal performance. Still largely a user skill issue though.

Then a couple months later they’ll release Opus 4.7 and go through the cycle again.

My allegiance to these companies is now measured in nerf cycles.

I’m a nerf cycle customer.


No, I am using my own workflows and software for this. I made nano-banana accept my bounding boxes. Everything is possible with some good prompting: https://edwin.genego.io/blog/lpa-studio < there are some videos of an earlier version there while I am editing a story. Either send the coords and describe the location well, or draw a box on the image and tell it to return the image without the drawn box, and only the requested changes.

It also works well if you draw a bb on the original image, then ask Claude for a meta-prompt to deconstruct the changes into a much more detailed prompt, and then send the original image without the bbs for changes. It really depends on the changes you need, and how long you're willing to wait.

- normal image editing response: 12-14s

- image editing response with Claude meta-prompting: 20-25s

- image editing response with Claude meta-prompting as well as image deconstructing and re-constructing the prompt: 40-60s

(I use Replicate though, so the actual API may be much faster).

This way you can also move into new views of a scene by zooming the image in and out on the same aspect-ratio canvas, and asking it to generatively fill the white borders. So you can go from a tight inside shot to viewing the same scene from outside a house window, or from inside the car to outside the car.
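The "send the coords and describe the location well" approach can be sketched with a hypothetical helper. The function name and prompt phrasing below are my own inventions, not any model's API; the idea is just to pair exact pixel coordinates with a coarse verbal location so the model has both to anchor on.

```python
# Hypothetical: turn a pixel bounding box into a prompt fragment for
# an image-editing model. Nothing here comes from a real API.
def bbox_to_prompt(box, size, instruction):
    x0, y0, x1, y1 = box
    w, h = size
    cx, cy = (x0 + x1) / 2 / w, (y0 + y1) / 2 / h   # normalised centre
    col = ["left", "center", "right"][min(int(cx * 3), 2)]
    row = ["top", "middle", "bottom"][min(int(cy * 3), 2)]
    region = "center" if (row, col) == ("middle", "center") else f"{row}-{col}"
    return (f"Within the region at pixels ({x0},{y0})-({x1},{y1}), "
            f"i.e. the {region} of the image: {instruction}. "
            f"Leave everything outside that region unchanged.")
```

The redundancy is deliberate: models that ignore raw coordinates often still respect the verbal region, and vice versa.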


Everyone is sleeping on Gemini 2.5 Flash Image / Nano Banana. As shown in the OP, it's substantially more powerful than most other models while at the same price-per-image, and due to its text encoder it can handle significantly larger and more nuanced prompts to get exactly what you want. I open-sourced a Python package for generating from it with examples (https://github.com/minimaxir/gemimg) and am currently working on a blog post with even more representative examples. Google also allows generations for free with aspect ratio control in AI Studio: https://aistudio.google.com/prompts/new_chat

That said, I am surprised Seedream 4.0 beat it in these tests.


This is based on Paul Merrell's Model Synthesis work [0]. Boris The Brave had a good writeup of the core of the algorithm [1].

Maxim Gumin focused on just the constraint solver, added a "minimum entropy heuristic", popularized the work, and coined the term "wave function collapse", as the way the solver worked was evocative of (a naive view of) how quantum mechanics solves systems [2]. Gumin's repo also has many other resources, implementations, and descriptions [3].

I've published a paper on an extension that adds in a type of backtracking to both the "WFC" portion of the solver and the modify in blocks portion of the solver, which can be found in [4], for those interested.
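For readers who want to see the shape of the solver being discussed, here is a deliberately tiny sketch of the constraint-solving core with the minimum-entropy heuristic (my own simplification on a 1-D strip with hand-written adjacency rules; real implementations run on 2-D/3-D grids with constraints extracted from an example):

```python
import random

TILES = {"sea", "coast", "land"}
ALLOWED = {                       # which tiles may be adjacent
    "sea": {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land": {"coast", "land"},
}

def solve(n, seed=0):
    rng = random.Random(seed)
    cells = [set(TILES) for _ in range(n)]   # candidate sets ("superpositions")

    def propagate(i):
        # arc consistency: narrow neighbours until nothing changes
        stack = [i]
        while stack:
            j = stack.pop()
            for k in (j - 1, j + 1):
                if 0 <= k < n:
                    allowed = set().union(*(ALLOWED[t] for t in cells[j]))
                    narrowed = cells[k] & allowed
                    if not narrowed:
                        raise ValueError("contradiction; would backtrack here")
                    if narrowed != cells[k]:
                        cells[k] = narrowed
                        stack.append(k)

    while any(len(c) > 1 for c in cells):
        # minimum-entropy heuristic: collapse the most constrained cell
        i = min((i for i in range(n) if len(cells[i]) > 1),
                key=lambda i: len(cells[i]))
        cells[i] = {rng.choice(sorted(cells[i]))}
        propagate(i)
    return [next(iter(c)) for c in cells]
```

Merrell's modify-in-blocks variant and the backtracking extension in [4] are about replacing that bare `ValueError` with an actual recovery strategy instead of failing outright.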

[0] https://paulmerrell.org/model-synthesis/

[1] https://www.boristhebrave.com/2021/10/26/model-synthesis-and...

[2] https://github.com/mxgmn/WaveFunctionCollapse

[3] https://github.com/mxgmn/WaveFunctionCollapse?tab=readme-ov-...

[4] https://zzyzek.github.io/PunchOutModelSynthesisPaper/


I am actually much more pessimistic about Profiles than Simone.

Regardless of the technology the big thing Rust has that C++ does not is safety culture, and that's dominant here. You could also see at the 2024 "Fireside chat" at CppCon that this isn't likely to change any time soon.

The profiles technology isn't very good. But that's insignificant next to the culture problem, once you decided to make the fifteen minute bagpipe dirge your lead single it doesn't really matter whether you use the colored vinyl.


I work at Google on these systems every day (caveat: these are my own words, not my employer's). So I can simultaneously tell you that it's smart people really thinking about every facet of the problem, and that I can't tell you much more than that.

However I can share this written by my colleagues! You'll find great explanations about accelerator architectures and the considerations made to make things fast.

https://jax-ml.github.io/scaling-book/

In particular your questions are around inference which is the focus of this chapter https://jax-ml.github.io/scaling-book/inference/

Edit: Another great resource to look at is the unsloth guides. These folks are incredibly good at getting deep into various models and finding optimizations, and they're very good at writing it up. Here's the Gemma 3n guide, and you'll find others as well.

https://docs.unsloth.ai/basics/gemma-3n-how-to-run-and-fine-...


If you are a senior developer who is comfortable giving a junior tips, and then guiding them to fixing things (or just stepping in for a brief moment and writing where they missed something), this is for you. I'm hearing from senior devs all over, though, that junior developers are just garbage at it. They produce slow, insecure, or just outright awful code with it, and then they PR code they don't even understand.

For me the sweet spot is boilerplate (give me a blueprint of a class based on a description), or translating a JSON for me into a class, or into some other format. Also "What's wrong with this code? How would a staff-level engineer write it?" Those questions are also useful. I've found bugs before hitting debug by asking what's wrong with the code I just pounded out on my keyboard by hand.


That the Lancet would publish this, citing hard numbers to underscore just how far their argument is from any sort of purely ideological hand-waving, is telling of how seriously the Gaza campaign has turned into a crime against humanity. I've spent years arguing in favor of Israel's right to defend itself as a country and also pointing out just how hypocritically nasty the very same political organizations that lay claim to representing the Palestinian people can be, but the current situation has become completely indefensible by any normal moral standard.

What I don't understand is how Israel manages to maintain a base of such staunch political defenders in the world's major governments. Can sheer stupidity and indifference be so deeply rooted through ideological posturing? The answer is that obviously, yes they can. It wouldn't be the first time a monstrous political game has been blindly justified by those too biased to see better. However, if you were a conspiracy type, you'd be very tempted to think the Israeli government has some sort of enormous hidden leverage over a great number of powerful people for them to turn such a blind eye.


Not sure if the article mentioned it (the writing style was getting on my nerves about halfway through), but endometriosis is also highly hereditary. My wife's mother has 2 sisters. One sister had endometriosis, and the other two had daughters (including my wife) who had it.

It has also been known to degrade egg quality, resulting in total infertility in some (including my wife and her aunt—jury is still out on the cousin).

I don’t think most reproductive surgeons think of endometriosis as untreatable. 1% rate of having some kind of complication from the surgery doesn’t sound crazy high to me. The recurrence rate I think depends pretty heavily on how pervasive the case was and how soon it was caught.

My wife was in her late 20s when she had surgery to have it cut out via laparoscopy, and it wasn’t really a big deal. She had a hysterectomy this year for other reasons, over 15 years later and had no signs of recurrence.


The article sort of glosses over a major distinction regarding the surgical approach to endometriosis -- 90+% of OB/GYNs are trained to ablate (burn-to-destroy) the affected tissue, whereas more recently, a crop of surgeons have begun to specialize in excising a wide area of tissue surrounding the affected tissue. Many times, the tissue to destroy is not on the surface - it is deeply infiltrating what it has adhered to. Burning it is just like cutting grass, it'll come right back. Success rates with excisional surgery are markedly better, but not a silver bullet.

As I've learned from years of reading Hacker News, people who program computers are experts in _everything_!

> It costs billions of dollars to bring a new drug to market

Have you ever wondered why this is the case?

In the "Golden Age" of pharmaceutical development -- from roughly 1930 through 1962 -- it cost orders of magnitude less, adjusted for inflation, was much faster, and outcomes were no worse.

I'd argue that the new (post-1962) Phases II and III -- drug efficacy studies -- are wholly unnecessary and are indeed unethical.

First, because you can't even presume to gauge a drug's "efficacy" without huge and exorbitantly expensive trials -- and, even then, trials can be inconclusive.

Second, because the FDA selectively ignores efficacy data. See, e.g., flibanserin and certain Alzheimer's drugs that have been approved recently. This was scandalous, but not nearly scandalous enough.

Third, the statistically sound way to find both rare harms and true-to-life benefits is to watch how a drug behaves in millions of real patients, not in a few thousand volunteers. Modern electronic-health-record feeds, claims databases, and wearable data streams let regulators run near-real-time Bayesian safety signals across populations that dwarf any Phase III cohort; the surveillance network that caught rofecoxib’s cardiovascular signal in 2004 had orders-of-magnitude more patient exposure than the approving trials, and today’s infrastructure is far denser.

It would literally be 10x faster, 100x cheaper, and 100x easier to grant conditional approval after Phase I and then run postmarketing surveillance. You'd get better signal-to-noise on both efficacy and harm at a small fraction of the cost and time. It's downright perverse to outlaw early access to promising investigational drugs, forcing patients to wait 8-12 years, and then foot them with a multibillion-dollar bill for ritualized trials that are unnecessary, when the same ends could have been attained in far better ways.


> Celgene said it spent $800 million to develop Revlimid and spent several hundred million more on additional trials to study the use of the drug in other cancers. Those combined figures represent about 2% to 3% of Revlimid sales through 2018.

These guys are making 300% ROI in annual revenue from their one-time investment into R&D. In order for that to be fair (in line with your 20% average ROI target), they would need to be testing 15 new drugs per year, and if all of the others had failed, then it would be fair to markup the one that succeeded by this amount. But according to the Wikipedia page, the company has investigated less than 10 drugs in their entire existence, much less 15 drugs per year.
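A quick back-of-envelope check of that 300% figure, under assumptions of mine that the quote leaves open: total R&D taken as ~$1.2B ($800M development plus ~$400M for the "several hundred million" in extra trials), R&D equal to 2.5% of cumulative sales (the midpoint of "2% to 3%"), and roughly 13 years on the market through 2018.

```python
# Back-of-envelope only; the two inputs from the quote are the ~$1.2B
# R&D spend and the 2-3% share of cumulative sales. Years-on-market
# is my assumption.
rd_spend = 0.8e9 + 0.4e9            # one-time R&D investment, dollars
sales = rd_spend / 0.025            # implied cumulative Revlimid sales
years = 13
annual_roi = (sales / years) / rd_spend   # annual revenue per R&D dollar
```

That implies roughly $48B in cumulative sales and about 3x the R&D spend in revenue every year, which is where the ~300% annual figure comes from.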


Explicit allocators do work with Rust, as evidenced by them already working for libstd's types, as I said. The mistake was to not have them from day one which has caused most code to assume GlobalAlloc.

As long as the type is generic on the allocator, the lifetimes of the allocator don't appear in the type. So eg if your allocator is using a stack array in main then your allocator happens to be backed by `&'a [MaybeUninit<u8>]`, but things like Vec<T, A> instantiated with A = YourAllocator<'a> don't need to be concerned with 'a themselves.

Eg: https://play.rust-lang.org/?version=nightly&mode=debug&editi... do_something_with doesn't need to have any lifetimes from the allocator.

If by Zig-style allocators you specifically mean type-erased allocators, as a way to not have to parameterize everything on A:Allocator, then yes the equivalent in Rust would be a &'a dyn Allocator that has an infectious 'a lifetime parameter instead. Given the choice between an infectious type parameter and infectious lifetime parameter I'd take the former.


It is not. The precedent is around ninety years old, but firmly within modern history. As Franklin Roosevelt attempted to enact his New Deal agenda, the so-called "Four Horsemen", often along with Justice Roberts and/or Chief Justice Hughes, repeatedly ruled against his alphabet agencies and sweeping social bills.

So, Roosevelt worked to find the necessary votes to pack the court. His "Judicial Procedures Reform Bill" was not so different from any of the other "judicial reforms" we've condemned when other strongmen around the world used them to strengthen their power. Politicians from Roosevelt's bloc spewed vitriol at the justices who were simply trying to do their jobs. In Iowa, effigies of the six justices who had opposed any of Roosevelt's actions were found hanged by nooses. Once Roosevelt secured political support and signaled his willingness to push for packing the court, the justices backed down and began ruling in his favor, repeatedly. Where months before it had struck down a New York minimum wage law, a nearly-identical law in Washington was deemed perfectly constitutional. The National Labor Relations Act was fine and dandy. Coal mining was suddenly interstate commerce after all. In fact, the Commerce Clause now covered everything; one clothing factory in Virginia was enough commerce to qualify. Later, the court would rule that a man growing wheat for his own consumption was sufficient "commerce" to warrant near-limitless federal rule-making. After all, it meant he bought less wheat from someone else, so clearly it was within the feds' purview. Wickard v. Filburn stands as precedent today. The justices on that court kept their seats but gave up their power.

Now, another populist has shown up. One who has a different vision for the nation than that laid out by the neoliberal technocrats who have dominated American politics since Clinton. Trump has explicitly called out FDR's New Deal coalition, the coalition emplaced by vaguely authoritarian means quite similar to those he is using, that was the underlying basis for politics for almost the past century. I don't care for Trump's vision much more than I care for Roosevelt's or Clinton's. But claiming that this is "unprecedented" only serves to point people away from the prior time in history when this happened, when we utterly failed to stop Roosevelt and remove his power. Perhaps learning from history is the better choice so this time we can do a better job of it.


Please also mention how easy those exercises are:

Once per day, when peeing, do it differently. 1. Release the stream during the in-breath. 2. Stop and hold the stream on the outbreath. 3. If not yet bored or tired go back to 1. Else - finish peeing normally. That's it.

And note that for most people, a week to a few weeks of the exercise gives stronger orgasms and the ability to delay ejaculation.


Anyone who wants to demystify ML should read: The StatQuest Illustrated Guide to Machine Learning [0] By Josh Starmer. To this day I haven't found a teacher who could express complex ideas as clearly and concisely as Starmer does. It's written in an almost children's book like format that is very easy to read and understand. He also just published a book on NN that is just as good. Highly recommend even if you are already an expert as it will give you great ways to teach and communicate complex ideas in ML.

[0]: https://www.goodreads.com/book/show/75622146-the-statquest-i...


This. Go to chrome://flags and disable “Enable OCR For Local Image Search” and I bet the problem goes away.

It’s a stupid feature for Google to enable by default on systems that are generally very low spec and badly made, but it’s not some evil data slurp. One of the most obnoxious things about enshittification is the corrosive effect it seems to have had on technical users’ curiosity: instead of researching and fixing problems, people now seem very prone to jump to “the software is evil and bad” and give up at doing any kind of actual investigation.


The issue is that programming a discrete GPU feels like programming a printer over a COM port, just with higher bandwidths. It's an entirely moronic programming model to be using in 2025.

- You need to compile shader source/bytecode at runtime; you can't just "run" a program.

- On NUMA/discrete, the GPU cannot just manipulate the data structures the CPU already has; gotta copy the whole thing over. And you better design an algorithm that does not require immediate synchronization between the two.

- You need to synchronize data access between CPU-GPU and GPU workloads.

- You need to deal with bad and confusing APIs because there is no standardization of the underlying hardware.

- You need to deal with a combinatorial turd explosion of configurations. HW vendors want to protect their turd, so drivers and specs are behind fairly tight gates. OS vendors also want to protect their turd and refuse even the software API standard altogether. And then the tooling also sucks.

What I would like is a CPU with a highly parallel array of "worker cores" all addressing the same memory and speaking the same goddamn language that the CPU does. But maybe that is an inherently crappy architecture for reasons that are beyond my basic hardware knowledge.


"The years that pass eat up your margin for error until there is no margin left. The mistakes you make are no longer flaws of inexperience, they are flaws of character. To be young is to be constantly on the precipice of perfection – just a little further and you’ll get there – but you never get there, and suddenly you’re old, and find yourself in a permanent state of imperfection, which you must reckon with."

What a powerful observation.


The actual real world impact of wokism is that the left-leaning part of the elite is distracted into performative games outdoing one another in verbal righteousness, instead of actually doing something for the people, which should be the defining part of being left.

Woke is all rituals, no substance. If anyone profits off it, it is highly educated individuals that belong to the visible minorities = precisely the people that don't need so much support.

Woke is deeply uninterested in actual problems of the poor non-academic population. High cost of living? Food deserts? Meh. That doesn't register on the high-brow radars.



That is a massive over-simplification, and it invites patently false characterizations like it was a "stupid mistake" that would have been avoided if they were not stupid (i.e., had adopted an average development process). That is absolutely not the case. They were really capable, but aerospace problems are really, really hard, and their safety capability regressed from being really, really capable.

They modified the flight characteristics of the system. They tuned the control scheme to provide the "same" outputs as the old system. However, the tuning relied on a sensor that was not previously safety-critical. As the sensor was not previously safety-critical, it was not subject to safety-critical requirements like having at least two redundant copies as would normally be required. They failed to identify that the sensor became safety critical and should thus be subject to such requirements. They sold configurations with redundant copies, which were purchased by most high-end airlines, but they failed to make it mandatory due to their oversight and purchasers decided to cheap out on sensors since they were characterized as non-safety-critical even if they were useful and valuable. The manual, which pilots actually read, has instructions on how to disable the automatic tuning and enable redundant control systems and such procedures were correctly deployed at least once if not multiple times to avert crashes in premier airlines. Only a combination of all of those failures simultaneously caused fatalities to occur at a rate nearly comparable to driving the same distance, how horrifying!

An error in UX tuning, dependent on a sensor that was not made properly redundant, was the "cause". That is not a "stupid mistake". That is a really hard mistake, and downplaying it as a stupid mistake underestimates the challenges involved in designing these systems. That does not excuse their mistake, as they used to do better, much better, like 1,000x better, and we know how to do better and the better way is empirically economical. But it does the entire debacle a disservice to claim it was just "being stupid". It was not; it was only qualifying for the Olympics when they needed to get the gold medal.


Wow, stealing my own comment from last week’s Grokking at the edge of linear separability because it applies here even more so: this paper is so simple, dumb, and absolutely breathtakingly interesting. Thanks for sharing! Never would I have thought that “mycelium doesn’t explore the center of a circle” would hold such profound insight…

For those interested, here's the paper itself: https://www.sciencedirect.com/science/article/pii/S175450482... Two interesting things to me:

1. Based on my silly American reading of citation names, it seems Japanese researchers have been leading the charge on basal cognition - a great cultural diversity win! Obviously American and European cognitive scientists are involved, but I get the impression most would dismiss this as misguided.

2. The intro has some of the best philosophy I’ve ever seen in an empirical paper. No citations to philosophers of course because they’d be laughed at, but it’s spot on:

  This evidence led to a formal framework called “basal cognition” for reframing the definition of cognition as “fundamental processes, such as memory, learning, decision-making, and anticipation, and mechanisms that enabled organisms to track some environmental states and act appropriately to ensure survival and reproduction” which existed long before nervous systems evolved. On the contrary, recent studies considering neuroscience hypothesize that the cognition of humans, as a brained animal, emerges from the patterns of interconnections and information transfer across numerous neurons… 
  In this context, the brain exhibits at least two levels of cognition. One is the basal cognition at the cellular level of each neuron, and the other is the classical means of cognition, which emerges from the activities and interconnections of the neural networks. This classical cognition is crucial for brained organisms to “recognize” the external world.
Preach! I’ll do them the favor of providing IMO the clearest exploration of this idea from premodern cogsci (aka philosophy), Schopenhauer’s “fourfold” theory of life:

  Thus causality, this director of each and every change, now appears in nature in three different forms, namely *as cause* in the narrowest sense, *as stimulus*, and *as motive*. It is precisely on this difference that the true and essential distinction is based between inorganic bodies, plants, and animals, and not on external anatomical, or even chemical characteristics.
  The cause in the narrowest sense is that according to which alone changes ensue in the inorganic kingdom… Newton's third fundamental law: "Action and reaction are equal to each other." applies exclusively to cause…
  The second form of causality is the stimulus; it governs organic life as such and hence the life of plant, and the vegetative and thus unconscious part of animal life, which is in fact just a plant life. This second form is characterized by the absence of the distinctive signs of the first. Thus here action and reaction are not equal to each other, and the intensity of the effect through all its degrees by no means corresponds to the intensity of the cause: on the contrary, by intensifying the cause the effect may even be turned into its opposite.
  The third form of causality is the motive. In this form causality controls animal life proper and hence conduct, that is, the external, consciously performed actions of all animals. The medium of motives is knowledge. 
I think this is a direct rephrasing of the above, putting fungus/“basal cognition” in the “vegetative” category.

As cladistics slowly erodes all of our taxonomic distinctions, I think we could all stand to incorporate more such functional divisions into our intuitive paradigms/standpoints/worldviews. Schopenhauer doesn’t mention “fungus” or “mushrooms” once (much less “slime molds”!), but I think he would happily call them “vegetative” nonetheless, and be thrilled to see this paper!

TL;DR: cognition is graduated, which means it’s neither uniquely homogenous nor uniformly gradual.


It also has one of the most spectacular multi-week hikes in the world, if not #1: the Annapurna circuit. I was lucky to take it in 2008 with almost nobody on the trail (just after the monsoons, in the second half of September); it took 16 days, around 220km IIRC. No roads had been built back then, although we saw a few CAT bulldozers dropped in impossible places in the gorges, I guess as part of construction.

From absolute tropical jungle, through all the other climates, to frozen high-altitude icy desert and all the way back; the top point is 5,400m high. At one point the Hinduism of the lowlands also switches to Buddhism. Wild marijuana grows everywhere. Muktinath, just after (before) the highest pass, is an important sacred and pilgrimage place for 3 different religions.

There is one point in the Kali Gandaki gorge where you look left and there is absolute Tibetan-style desert with 0 plants, just rocks and dirt, going up to the Manang region. You look right and it's typical tropical jungle. And in between, in a span of maybe 7-8km, the whole gradient happens continuously on Annapurna's western slopes. Another nice spot is IIRC Marpha, where you are looking at an almost 5,000m, pretty much vertical drop from roughly the top of Dhaulagiri, its sister 8,000m+ peak. Even after 2 weeks of constant exposure to Himalayan giants I just stood there in awe.

A life changing experience for me due to various factors and also people met. It has a special place in my heart.


I have extracted this (and other) nerves from cadavers, and this circuitous cranial nerve (XII?) is as beautiful and complex as the multibranch plexus-es (e.g. brachial). So delicate.

Human anatomy, at first glance, often seems wrongly-engineered. After you've worked inside dozens of people, you begin to realize that everybody is unique — and nobody is "textbook" anatomy. Who knows what all this goop even does?!

If you ever get the chance, I highly recommend this humbling human experience. My hope is that my own cadaver is ripped apart by somebody as crazy/appreciative as me =D

