If you run across a great HN comment or subthread, please tell us at hn@ycombinator.com so we can add it here!

Weird! Sorry to hear that commenting (including on HN) didn't make this person any friends. It has made me a bunch of friends, including some very close in-person ones. I don't think I'm an oddball in that regard!

Of particular note: comment culture is how I managed to engage with local politics here in Chicagoland, through which I met a ton of my neighbors and got actively involved with campaigns and the local government itself. Those are all in-person relationships that were (and remain) heavily mediated by comments.


I'm going to miss his waffle-free 'Hello' intro. It is/was like stepping onto an airport travelator that's moving 30% faster than you were expecting.

Back in the 90s, I implemented precompiled headers for my C++ compiler (Symantec C++). They were very much like modules. There were two modes of operation:

1. all the .h files were compiled, and emitted as a binary that could be rolled in all at once

2. each .h file created its own precompiled header. Sounds like modules, right?

Anyhow, I learned a lot, mostly that without semantic improvements to C++, precompiled headers were too sensitive to breakage, even though they made compilation much faster.

This experience was rolled into the design of D modules, which work like a champ. They were everything I wanted modules to be. In particular,

The semantic meaning of the module is completely independent of wherever it is imported from.

Anyhow, C++ is welcome to adopt the D design of modules. C++ would get modules that have 25 years of use, and are very satisfying.

Yes, I do understand that the C preprocessor macros are a problem. My recommendation is, find language solutions to replace the preprocessor. C++ is most of the way there, just finish the job and relegate the preprocessor to the dustbin.


I may be the only person who ever understood every detail of C++, starting with the preprocessor. I can make that claim because I'm the only person who ever implemented all of it. (You cannot really know a language until you've implemented it.) I gave up on that in the 2000's. Modern C++ is simply terrifying in its complexity.

(I'm not including the C++ Standard Library, as I didn't implement it.)


"What am I doing wrong?"

Your first prompt is testing Claude as an encyclopedia: has it somehow baked into its model weights the exactly correct skeleton for a "Zephyr project skeleton, for Pi Pico with st7789 spi display drivers configured"?

Frequent LLM users will not be surprised to see it fail that.

The way to solve this particular problem is to make a correct example available to it. Don't expect it to just know extremely specific facts like that - instead, treat it as a tool that can act on facts presented to it.

For your second example: treat interactions with LLMs as an ongoing conversation; don't expect them to give you exactly what you want the first time. Here the thing to do next is a follow-up prompt where you say "number eight looked like zero, fix that".


This isn't a failure of PowerPoint. I work for NASA and we still use it all the time, and I can assure anyone that communication errors are rife regardless of what medium we're working in. The issue is differences in the way that in-the-weeds engineers and managers interpret technical information, which is alluded to in the article, but the author still focuses on the bullets and the PowerPoint, as if rewriting the same facts in a technical paper would change everything.

My own colleagues fall victim to this all the time (luckily I do not work in any capacity where someone's life is directly on the line as a result of my work.) Recently, a colleague won an award for helping managers make a decision about a mission parameter, but he was confused because they chose a parameter value he didn't like. His problem is that, like many engineers, he thought that providing the technical context he discovered that led him to his conclusion was as effective as presenting his conclusion. It never is; if you want to be heard by managers, and really understood even by your colleagues, you have to say things up front that come across as overly simple, controversial, and poorly-founded, and then you can reveal your analyses as people question you.

I've seen this over and over again, and I'm starting to think it's a personality trait. Engineers are gossiping among themselves, saying "X will never work". They get to the meeting with the managers and present "30 different analyses showing X is marginally less effective than Y and Z" instead of just throwing up a slide that says "X IS STUPID AND WE SHOULDN'T DO IT." Luckily for me, I'm not a very good engineer, so when I'm along for the ride I generally translate well into Managerese.


My dad headed up the redesign effort on the Lockheed Martin side to remove the foam PAL ramps (where the chunk of foam that broke off and hit the orbiter came from) from the external tank, as part of return-to-flight after the Columbia disaster. At the time he was the last one left at the company from when they had previously investigated removing those ramps from the design. He told me how he went from basically working on this project off in a corner on his own, to suddenly having millions of dollars in funding and flying all over for wind tunnel tests when it became clear to NASA that return-to-flight couldn't happen without removing the ramps.

I don't think his name has ever come up in all the histories of this—some Lockheed policy about not letting their employees be publicly credited in papers—but he's got an array of internal awards from this time around his desk at home (he's now retired). I've always been proud of him for this.


Funny to see this pop up again (I'm the author). The year is now 2025 and I still use Chase as a personal bank and I'm now discovering new funny banking behaviors. I'll use this as a chance to share. :)

My company had an exit, I did well financially. This is not a secret. I'm extremely privileged and thankful for it. But as a result of this, I've used a private bank (or mix) for a number of years to store the vast majority of my financial assets (over 99.99% of all assets, I just did the math). An unfortunate property of private banks is they make it hard to do retail-like banking behaviors: depositing a quick check, pulling cash from an ATM, but ironically most importantly Zelle.

As such, I've kept my Chase personal accounts and use them as my retail bank: there are Chase branches everywhere, it's easy to get to an ATM, and they give me easy access to Zelle! I didn't choose Chase specifically; I've just always used Chase for personal banking since I was in high school, so I kept using them for this.

Anyways, I tend to use my Chase account to pay a bunch of bills, just because it's more convenient (Zelle!). I have 3 active home construction projects, plus my CC, plus pretty much all other typical expenses (utilities, car payments, insurance, etc.). But I float the money in/out of the account as necessary to cover these. We do accounting of all these expenses on the private bank side, so it's all tracked, but it settles through Chase in the final 24-48 hours.

Otherwise, I keep my Chase balance no more than a few thousand dollars.

This really wigs out automated systems at Chase. I get phone calls all the time (like, literally multiple times per week) saying "we noticed a large transfer into your account, we can help!" And I cheekily respond "refresh, it's back to zero!" And they're just confused. To be fair, I've explained the situation in detail to multiple people multiple times but it isn't clicking, so they keep calling me.

I now ignore the phone calls. Hope I don't regret that later lol.


Presenting information theory as a series of independent equations like this does a disservice to the learning process. Cross-entropy and KL-divergence are directly derived from information entropy, where InformationEntropy(P) represents the baseline number of bits needed to encode events from the true distribution P, CrossEntropy(P, Q) represents the (average) number of bits needed for encoding P with a suboptimal distribution Q, and KL-Divergence (better referred to as relative entropy) is the difference between these two values (how many more bits are needed to encode P with Q, i.e. quantifying the inefficiency):

relative_entropy(p, q) = cross_entropy(p, q) - entropy(p)

Information theory is some of the most accessible and approachable math for ML practitioners, and it shows up everywhere. In my experience, it's worthwhile to dig into the foundations as opposed to just memorizing the formulas.

(bit counts assume base 2 here)
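
To make the relationship concrete, here is a minimal Python sketch; the distributions p and q below are made up purely for illustration:

    import math

    def entropy(p):
        # Baseline bits needed to encode events from the true distribution p.
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def cross_entropy(p, q):
        # Average bits used when events from p are encoded with a code optimal for q.
        return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

    def relative_entropy(p, q):
        # KL divergence: the extra bits paid for using q instead of p.
        return cross_entropy(p, q) - entropy(p)

    p = [0.5, 0.25, 0.25]  # "true" distribution
    q = [0.4, 0.4, 0.2]    # approximating distribution

    print(entropy(p))              # 1.5 bits
    print(cross_entropy(p, q))     # ~1.57 bits
    print(relative_entropy(p, q))  # ~0.07 bits of inefficiency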


The Kodak Research Labs (like Bell Labs) let their researchers play. In the 1960s my father (who later devised the Bayer filter for digital cameras) coded this algorithm for "Jotto", the 5-letter-word version of Mastermind.

Computers were so slow that one couldn't consider every word in the dictionary as a potential guess. He decided empirically on a sample size that played well enough.
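
The comment doesn't spell out the algorithm, but an entropy-greedy Jotto solver with the sampling shortcut described above might look roughly like this (a hypothetical reconstruction in Python; the word list and helper names are my own, not from the original program):

    import math
    import random
    from collections import Counter

    def common_letters(guess, secret):
        # Jotto feedback: how many letters the two 5-letter words share.
        g, s = Counter(guess), Counter(secret)
        return sum((g & s).values())

    def expected_information(guess, candidates):
        # Entropy (in bits) of the feedback this guess would induce
        # over the remaining candidate secret words.
        counts = Counter(common_letters(guess, c) for c in candidates)
        total = len(candidates)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def pick_guess(candidates, sample_size=50):
        # Too slow to score every dictionary word, so score only a
        # random sample and keep the most informative guess.
        sample = random.sample(candidates, min(sample_size, len(candidates)))
        return max(sample, key=lambda g: expected_information(g, candidates))

    words = ["house", "mouse", "crane", "slate", "gourd"]  # stand-in word list
    print(pick_guess(words))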

I became a mathematician. From this childhood exposure, entropy was the first mathematical "concept" beyond arithmetic that I understood.


Oh hey, I wrote this. Been a long time. I had the lucky break of working in machine translation / parsing when the most important invention of the century happened in my niche field.

I'm pretty interested in the intersection of code / ML. If that's your thing, here is some other writing you might be interested in.

* Thinking about cuda: http://github.com/srush/gpu-puzzles

* Tensors considered harmful: https://nlp.seas.harvard.edu/NamedTensor

* Differentiating SVG: https://srush.github.io/DiffRast/

* Annotated S4: https://srush.github.io/annotated-s4/

Recently moved back to industry, so haven't had a chance to write in a while.


I just realized that my most famous comment on HN is the same age as I was when I won the Putnam.

I believe it absolutely should be, and it can even be applied to rare disease diagnosis.

My child was just saved by AI. He suffered from persistent seizures, and after visiting three hospitals, none were able to provide an accurate diagnosis. Only when I uploaded all of his medical records to an AI system did it immediately suggest a high suspicion of MOGAD-FLAMES — a condition with an epidemiology of roughly one in ten million.

Subsequent testing confirmed the diagnosis, and with the right treatment, my child recovered rapidly.

For rare diseases, it is impossible to expect every physician to master all the details. But AI excels at this. I believe this may even be the first domain where both doctors and AI can jointly agree that deployment is ready to begin.


> "When you consider that classical engineers are responsible for the correctness of their work"

Woah hang on, I think this betrays a severe misunderstanding of what engineers do.

FWIW I was trained as a classical engineer (mechanical), but pretty much just write code these days. But I did have a past life as a not-SWE.

Most classical engineering fields deal with probabilistic system components all of the time. In fact I'd go as far as to say that inability to deal with probabilistic components is disqualifying from many engineering endeavors.

Process engineers for example have to account for human error rates. On a given production line with humans in a loop, the operators will sometimes screw up. Designing systems to detect these errors (which are highly probabilistic!), mitigate them, and reduce the occurrence rates of such errors is a huge part of the job.

Likewise even for regular mechanical engineers, there are probabilistic variances in manufacturing tolerances. Your specs are always given with confidence intervals (this metal sheet is 1mm thick +- 0.05mm) because of this. All of the designs you work on specifically account for this (hence safety margins!). The ways in which these probabilities combine and interact is a serious field of study.
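
As a concrete illustration (a toy Python sketch with invented part dimensions, not any particular real design), here is the kind of tolerance stack-up calculation this leads to, contrasting a worst-case sum with a root-sum-square estimate that assumes the variations are independent:

    import math

    # Invented dimensions: nominal value in mm, +/- tolerance in mm.
    parts = [
        (1.00, 0.05),   # sheet thickness
        (12.50, 0.10),  # spacer
        (3.20, 0.08),   # bracket flange
    ]

    nominal = sum(n for n, _ in parts)
    worst_case = sum(t for _, t in parts)           # every part off in the same direction
    rss = math.sqrt(sum(t ** 2 for _, t in parts))  # statistical stack-up, assuming independence

    print(f"nominal stack:  {nominal:.2f} mm")
    print(f"worst case:    +/- {worst_case:.2f} mm")
    print(f"RSS estimate:  +/- {rss:.2f} mm")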

Software engineering is unlike traditional engineering disciplines in that, for most of its lifetime, it has had the luxury of purely deterministic expectations. Nearly every other type of engineering has no such luxury.

If anything the advent of ML has introduced this element to software, and the ability to actually work with probabilistic outcomes is what separates those who are serious about this stuff vs. demoware hot air blowers.


A few years ago, on my birthday, I quickly checked the visitor stats for a little side project I had started (r-exercises.com). Instead of the usual 2 or 3 live visitors, there were hundreds. It looked odd to me—more like a glitch—so I quickly returned to the party, serving food and drinks to my guests.

Later, while cleaning up after the party, I remembered the unusual spike in visitors and decided to check again. To my surprise, there were still hundreds of live visitors. The total visitor count for the day was around 10,000. After tracking down the source, I discovered that a really kind person had shared the directory/landing page I had created just a few days earlier—right here on Hacker News. It had made it to the front page, with around 200 upvotes and 40 comments: https://news.ycombinator.com/item?id=12153811

For me, the value of hitting the HN front page was twofold. First, it felt like validation for my little side project, and it encouraged me to take it more seriously (despite having a busy daily schedule as a freelance data scientist). But perhaps more importantly, it broadened my horizons and introduced me to a whole new world of information, ideas, and discussions here on HN.

Thank you HN for this wonderful birthday gift!


I met Woz briefly at a Coffee Shop in NYC, many years ago. I was pitching investors for funding and had no idea what I was doing.

He gave me some quick feedback that made a world of difference in SF (years later, when I tried raising angel money): get closer to people first. You don’t have to be their friends - just show up to where they show up (bars, meetups, library - anything), then tell them what you’re doing, and they’ll write you checks without you asking.

This is still the key reason why raising money in the Bay Area is so much easier than anywhere else.

I didn’t get any check from him that night, but he came across as such a human person - like someone you’ve known for years - even to a complete stranger trying to go straight for his wallet.

Tech would be amazing if we had 1000x more Wozes around.


You reminded me of an awesome Google engineer I met at BSidesSF last year who tirelessly answered my questions, and when I clicked on the video, it was you! That was a really inspiring moment for me, thank you.

Key quote:

> My peers in tech who are reluctant to have children often express fear that it will interrupt the arc of the careers they've worked so hard to build.

> That, I think, is the primary tension: not between the family and the state, as Boyle argues, but between individual and collective ambitions. Both the state and the family ask us to make sacrifices for something bigger than ourselves — and this, perhaps, is why they have historically fought each other for mindshare. What tech offers is the opposite: a chance to realize a vision that is entirely one's own. Tech worships individual talent, and it's a unique thrill to live and work among peers who don't shy away from greatness. But it also means that tech has to work harder than other industries to demonstrate that starting a family doesn't require giving up these ambitions.

I'm the breadwinner in my family, and my husband is a SAHD. I have a 2yo and I'm 6 months pregnant with our second. Stereotypically, having a kid made me care less about professional ambitions — but I don't care zero. And as the breadwinner, earning money, ideally more money every couple years or so, is a high priority. God, the pressure to keep up. It's hard to balance with being a present mom.

I live in the SF Bay Area and being able to attend events and network in person has been a huge boon to my career. Being "in the scene" pays off. I can't really do that anymore, not without losing time with my kid, and I'm just not willing to make the sacrifice. Traveling to conferences, etc., is even more off the table. Don't even talk to me about commuting. But I know these lifestyle changes will have repercussions next time I need to find a job.

To secure jobs with the kind of flexibility I require as a mom, I need to be a high performer, an impressive candidate with plenty of connections. Being a mom makes it harder — more expensive, let's say — to be that kind of exceptional worker bee. Oy vey!


How can you tell that any Windows or Mac clone UI is a re-implementation? Easy: try to move your mouse diagonally into the Send To menu after letting it pop up. If the send-to menu closes as you mouse over the item into the submenu, it's a clone. If the menu stays up even if you brush over another menu item, it's either real or a Good Clone. :)

For the fun history, @DonHopkins had a thread a few years back:

https://news.ycombinator.com/item?id=17404345


Hey, I made this a few years ago. I'm surprised to see it posted here today.

It was never finished and I was meaning to add a polyfill for the missing cancelAndHoldAtTime function for Firefox.

Edit: I've just hacked in a quick polyfill


Awesome project! I built a somewhat similar 30-pixel display: https://www.chrisfenton.com/the-pixelweaver/

Mine was entirely mechanical (driven by punch cards and a hand-crank), and changed all of the pixels in parallel, but a lot of the mechanism development looked extremely familiar to me.


Let me share a personal story. Back in 2014, when I was working at Cloudflare on DDoS mitigation, I collaborated a lot with a colleague - James (Jog). I asked him loads of questions, from "how to log in to a server", via "what is anycast", to "tell me how you mitigated this one, give me the precise instructions you ran".

I quickly realised that these conversations had value outside the two of us - pretty much everyone else onboarded had similar questions. Some subjects were about pure onboarding friction, some were about workflows most folks didn't know existed, some were about theoretical concepts.

So I moved the questions to a public (within the company) channel, and called it "Marek's Bitching" - because that's what it was: pretty much me complaining and moaning and asking annoying questions. I invited more London folks (Zygis), and before I knew it, half the company had joined.

It had tremendous value. It captured all the things that didn't have a real home anywhere else in the company, from technical novelties to discussions that escaped the usual structure - we suspected Intel firmware bugs, for example, but that was outside of any specific team at the time.

Then the channel was renamed to something more palatable - "Marek's technical corner" and it had a clear place in the technical company culture for more than a decade.

So yes, it's important to have a place to ramble, and it's important to have "your own channel" where folks have less friction and stigma to ask stupid questions and complain. Personal channels might be overkill, but a per-team or per-location "rambling/bitching" channel is a good idea.


I'm reminded of the famous story of (I think) the central beam in a building at Oxford. The story goes something like:

The central beam was beginning to fail and the Oxford administration knew they needed to replace it. When they went around for quotes, no one could replace the beam because it was 100 ft in length and sourced from an old-growth tree. Such logs were simply unavailable to buy. To solve the issue, the staff began to look at major renovations to the building's architecture.

Until the Oxford groundskeeper heard about the problem. "We have a replacement beam," he said.

The groundskeeper took the curious admins to the edge of the grounds. There stood two old growth trees, over 150 feet tall.

"But these must be over 200 years old! When were they planted?" the admins asked.

"The day they replaced the previous beam."


I own a farm in Illinois and farm it myself. I own the land through an LLC, because farming is dangerous and I don't want to go bankrupt if somebody sues me. Farms are expensive and hard to subdivide, so people will put them into a legal entity and pass them down to the next generation via a trust. All of my neighbors are doing the same, so we're all counted as "not farmers" here.

Farming is a terrible business. My few hundred acres (maybe worth $5M) will only churn out a few hundred grand in profit -- not even better than holding t-bills. The margins get better as you get bigger but still not great.

Many of the buyers keep growing their farms because it's a status symbol. Everybody in your area will instantly know you're a bigwig if you're one of the X family who has 2,000 acres, all without the ick that comes with running other businesses. You can't buy that kind of status in my community with anything other than land.


The diamond industry got into this mess by insisting that the best diamonds were "flawless". This put them into competition with the semiconductor materials industry, which routinely manufactures crystals with lattice defect levels well below anything seen in natural diamonds. The best synthetic diamonds now have below 1 part per billion atoms in the wrong place.[1] Those are for radiation detectors, quantum electronics, and such. Nobody needs a jewel that flawless.

De Beers tried to squelch the first US startup to turn out gemstones in production by intimidating the founder. The founder was a retired US Army brigadier general (2 silver stars earned in combat) and wasn't intimidated. That was back in 2011, and since then it's been all downhill for natural diamonds.

De Beers later tried building synthetic diamond detectors. There are simple detectors for detecting cubic zirconia and such, but separating synthetic and natural diamonds is tough. The current approach is to hit the diamond with a burst of UV, turn off the UV and then capture an image. The spectrum of the afterglow indicates impurities in the diamond. The latest De Beers testing machine [2] is looking for nitrogen atoms embedded in the diamond, which is seen more in natural diamonds than synthetics. The synthetics are better than the naturals. Presumably synthetic manufacturers could add some nitrogen if they wanted to bother. This is the latest De Beers machine in their losing battle against synthetics. They've had DiamondScan, DiamondView, DiamondSure, SynthDetect, and now DiamondProof. Even the most elaborate devices have a false alarm rate of about 5%.[3]

[1] https://e6-prd-cdn-01.azureedge.net/mediacontainer/medialibr...

[2] https://verification.debeersgroup.com/instrument/diamondproo...

[3] https://www.naturaldiamonds.com/wp-content/uploads/2025/06/A...


This story has been reposted many times, and I think GJS's remarks (as recorded by Andy Wingo) are super-interesting as always, but this is really not a great account of "why MIT switched from Scheme to Python."

Source: I worked with GJS (I also know Alexey and have met Andy Wingo), I took 6.001, my current research still has us referring to SICP on a regular basis, and in 2006 Kaijen Hsiao and I were the TAs for what was basically the first offering of the class that quasi-replaced it (6.01), taught by Leslie Kaelbling, Hal Abelson, and Jacob White.

I would defer to lots of people who know the story better than me, but here's my understanding of the history. When the MIT EECS intro curriculum was redesigned in the 1980s, there was a theory that an EECS education should start with four "deep dives" into the four "languages of engineering." There were four 15-unit courses, each about one of these "languages":

- 6.001: Structure and Interpretation of Computer Programs (the "procedural" language, led by Abelson and Sussman)

- 6.002: Circuits and Electronics ("structural" language)

- 6.003: Signals and Systems ("functional" language)

- 6.004: Computation Structures ("architectural" language)

These were intellectually deep classes, although there was pain in them, and they weren't universally beloved. 6.001 wasn't really about Scheme; I think a lot of the point of using Scheme (as I understood it) is that the language is so minimalist and so beautiful that even this first intro course can be about fundamental concepts of computer science without getting distracted by the language. This intro sequence lasted until the mid-2000s, when enrollment in EECS ("Course 6") declined after the dot-com crash, and (as would be expected, and I think particularly worrisome) the enrollment drop was greater among demographic groups that EECS was eager to retain. My understanding circa 2005 is that there was a view that EECS had broadened in its applications, and that beginning the curriculum with four "deep dives" was offputting to students who might not be as sure that they wanted to pursue EECS and might not be aware of all the cool places they could go with that education (e.g. to robotics, graphics, biomedical applications, genomics, computer vision, NLP, systems, databases, visualization, networking, HCI, ...).

I wasn't in the room where these decisions were made, and I bet there were multiple motivations for these changes, but I understood that was part of the thinking. As a result, the EECS curriculum was redesigned circa 2005-7 to de-emphasize the four 15-unit "deep dives" and replace them with two 12-unit survey courses, each one a survey of a bunch of cool places that EECS could go. The "6.01" course (led by Kaelbling, Abelson, and White) was about robots, control, sensing, statistics, probabilistic inference, etc., and students did projects where the robot drove around a maze (starting from an unknown position) and sensed the walls with little sonar sensors and did Bayesian inference to figure out its structure and where it was. The "6.02" course was about communication, information, compression, networking, etc., and eventually the students were supposed to each get a software radio and build a Wi-Fi-like system (the software radios proved difficult and, much later, I helped make this an acoustic modem project).

The goal of these classes (as I understood) was to expose students to a broad range of all the cool stuff that EECS could do and to let them get there sooner (e.g. two classes instead of four) -- keep in mind this was in the wake of the dot-com crash when a lot of people were telling students that if they majored in computer science, they were going to end up programming for an insurance company at a cubicle farm before their job was inevitably outsourced to a low-cost-of-living country.

6.01 used Python, but in a very different way than 6.001 "used" Scheme -- my recollection is that the programming work in 6.01 (at least circa 2006) was minimal and was only to, e.g., implement short programs that drove the robot and averaged readings from its sonar sensors and made steering decisions or inferred the robot location. It was nothing like the big programming projects in 6.001 (the OOP virtual world, the metacircular evaluator, etc.).

So I don't think it really captures it to say that MIT "switched from Scheme to Python" -- I think the MIT EECS intro sequence switched from four deep-dive classes to two survey ones, and while the first "deep dive" course (6.001) had included a lot of programming, the first of the new survey courses only had students write pretty small programs (e.g. "drive the robot and maintain equal distance between the two walls") where the simplest thing was to use a scripting language where the small amount of necessary information can be taught by example. But it's not like the students learned Python in that class.

My (less present) understanding is that, more than a decade after this 2006-era curricular change, the department has largely deprecated the idea of an EECS core curriculum, and MIT CS undergrads now go through something closer to a conventional CS0/CS1 sequence, similar to other CS departments around the country (https://www.eecs.mit.edu/changes-to-6-100a-b-l/). But all of that is long after the change that Sussman and Wingo are talking about here.


Hulk Hogan was my business partner in an ill-fated web hosting business called Hostamania. While he ultimately had a lot of troubled, old-fashioned thinking that I don't agree with, he was a genuinely friendly person who was nice to everyone despite crowds following him around constantly.

He was an odd character but was truly a character - he was Hulk Hogan as you know him (the bandana, the mustache, the yellow muscle shirt) from the moment he got up to the moment he went to bed. Unlike some stars who had a life outside of their character, his character became his life, which was really something interesting to behold up close.

I've been getting a lot of calls and talking to friends today; and again - while Hogan was not exactly a "good person" in all regards - he was a friend and brought a lot of joy to a lot of people and he will be missed.


This is great.

I set it up and was conspicuously swiping in bed. My wife is all hey, what are you doing? I’m all nothing.. put the phone down on the dresser.

No, let me see your phone etc. I relent, she opens the app with sulphur smoldering in her nostrils lol, then she starts poking around, and we have been having a really great night since.


Some thoughts on this as someone working on circulating-tumor DNA for the last decade or so:

- Sure, cancer can develop years before diagnosis. Pre-cancerous clones harboring somatic mutations can exist for decades before transformation into malignant disease.

- The eternal challenge in ctDNA is achieving a "useful" sensitivity and specificity. For example, imagine you take some of your blood, extract the DNA floating in the plasma, hybrid-capture enrich for DNA in cancer driver genes, sequence super deep, call variants, do some filtering to remove noise and whatnot, and then you find some low allelic fraction mutations in TP53. What can you do about this? I don't know. Many of us have background somatic mutations speckled throughout our body as we age. Over age ~50, most of us are liable to have some kind of pre-cancerous clones in the esophagus, prostate, or blood (due to CHIP). Many of the popular MCED tests (e.g. Grail's Galleri) use signals other than mutations (e.g. methylation status) to improve this sensitivity / specificity profile, but I'm not convinced it's actually good enough to be useful at the population level.

- The cost-effectiveness of most follow-on screening is not viable given the sensitivity-specificity profile of MCED assays (Grail would disagree). To make this viable, we would need things like downstream screening to be drastically cheaper, or possibly a tiered non-invasive screening strategy with increasing specificity (e.g. Harbinger Health).


Yes, ants evolved from wasps, and it's really not that surprising if you take a close look at a typical ant and a typical wasp: pretty much the only differences are wings and coloring. There also exist wingless wasps, and some of them are black and really quite indistinguishable from ants by a non-entomologist. And that's after over 100 million years since the ants diverged from the wasps! Talk about a successful evolutionary design. Your closest relative from 100 million years ago was a vaguely rat-like little thing. (Edit to answer your specific question: the ancestor of ants and wasps obviously was winged and flying, since both families still have at least some winged members.)

As a sibling comment has already pointed out, ants do fly during the "nuptial flight" and then discard their wings... wings would only be a hindrance for their largely underground lifestyle. Also, ants have retained the stinger, which also functions as an ovipositor (egg layer), and some species still use it for defense and pack a wallop of a poison, right up there with some of the worst wasps. Google "bullet ant" for some good stuff. Other ants just bite, and the burning you feel is from their saliva, which consists mostly of an acid named after ants: formic acid (ant is "formica" in Latin).

Edit to add one more random factoid that will surprise a lot of people: termites are not related to ants at all, and they evolved from... (drumroll)... cockroaches! It's rather harder to see the resemblance, except for their diet... both are capable of digesting (with help from endosymbiotic microbes) pure cellulose. And while termites don't really resemble ants either, parallel evolution has chosen the same strategy of retaining the wings for the fertile individuals who go on a nuptial flight and then discard their wings and try to found new colonies.

