Hacker News | jpcompartir's comments

I can't remember which paper it's from, but isn't the variance in performance explained by # of tokens generated? i.e. more tokens generated tends towards better performance.

Which isn't particularly amazing, as # of tokens generated is basically a synonym in this case for computation.

We spend more computation, we tend towards better answers.


Most comments seem to be taking the code seriously, when it's clearly satirical?


Polars is great, absolute best of luck with the launch


Interesting, in the LLM case these compression artefacts then get fed into the generating process of the next token, hence the errors compound.


Not really. The whole "inference errors will always compound" idea was popular in GPT-3.5 days, and it seems like a lot of people just never updated their knowledge since.

It was quickly discovered that LLMs are capable of re-checking their own solutions if prompted - and, with the right prompts, are capable of spotting and correcting their own errors at a significantly-greater-than-chance rate. They just don't do it unprompted.

Eventually, it was found that reasoning RLVR consistently gets LLMs to check themselves and backtrack. It was also confirmed that this latent "error detection and correction" capability is present even at base model level, but is almost never exposed - not in base models and not in non-reasoning instruct-tuned LLMs.

The hypothesis I subscribe to is that any LLM has a strong "character self-consistency drive". This makes it reluctant to say "wait, no, maybe I was wrong just now", even if latent awareness of "past reasoning look sketchy as fuck" is already present within the LLM. Reasoning RLVR encourages going against that drive and utilizing those latent error-correction capabilities.


You seem to be responding to a strawman, and assuming I think something I don't think.

As of today, 'bad' generations early in the sequence still do tend towards responses that are distant from the ideal response. This is testable/verifiable by pre-filling responses, which I'd advise you to experiment with for yourself.
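
For anyone who wants to try the pre-filling experiment concretely, here's a minimal sketch of what I mean (assumptions: Hugging Face transformers is installed, and the model name is just an illustrative small chat model - swap in whatever you run locally). Seed the assistant turn with a deliberately 'bad' prefix and compare the continuation against an unseeded run:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative choice; any local chat model works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    messages = [{"role": "user", "content": "What is 17 * 24?"}]
    # Render the chat prompt, then hand-write a deliberately bad start to the
    # assistant's answer; the model has to continue from this prefix.
    prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    prompt += "17 * 24 is roughly 500, because"

    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))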

'Bad' generations early in the output sequence are somewhat mitigatable by injecting self-reflection tokens like 'wait', or with more sophisticated test-time compute techniques. However, those remedies can simultaneously turn 'good' generations into bad ones; they are post-hoc heuristics which treat symptoms, not causes.

In general, as the models become larger they are able to compress more of their training data. So yes, using the terminology of the commenter I was responding to, larger models should tend to have fewer 'compression artefacts' than smaller models.


With better reasoning training, the models mitigate more and more of that entirely by themselves. They "diverge into a ditch" less, and "converge towards the right answer" more. They are able to use more and more test-time compute effectively. They bring their own supply of "wait".

OpenAI's in-house reasoning training is probably best in class, but even lesser naive implementations go a long way.


Assuming you've read OpenAI's paper released this week?

https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4a...

They attribute these 'compression artefacts' to pre-training, and they also reference the original snowballing paper, How Language Model Hallucinations Can Snowball: https://arxiv.org/pdf/2305.13534

They further state that reasoning is no panacea. Whilst you did say: "the models mitigate more and more"

You were replying to my comment which said:

"'Bad' generations early in the output sequence are somewhat mitigatable by injecting self-reflection tokens like 'wait', or with more sophisticated test-time compute techniques."

So our statements there are logically compatible, i.e. you didn't make a statement that contradicts what I said.

"Our error analysis is general yet has specific implications for hallucination. It applies broadly, including to reasoning and search-and-retrieval language models, and the analysis does not rely on properties of next-word prediction or Transformer-based neural networks."

"Search (and reasoning) are not panaceas. A number of studies have shown how language models augmented with search or Retrieval-Augmented Generation (RAG) reduce hallucinations (Lewis et al., 2020; Shuster et al., 2021; Nakano et al., 2021; Zhang and Zhang, 2025). However, Observation 1 holds for arbitrary language models, including those with RAG. In particular, the binary grading system itself still rewards guessing whenever search fails to yield a confident answer. Moreover, search may not help with miscalculations such as in the letter-counting example, or other intrinsic hallucinations"


The problem is that language doesn't produce itself. Re-checking, correcting error is not relevant. Error minimization is not the fount of survival, remaining variable for tasks is. The lossy encyclopedia is neither here nor there, it's a mistaken path:

"Language, Halliday argues, "cannot be equated with 'the set of all grammatical sentences', whether that set is conceived of as finite or infinite". He rejects the use of formal logic in linguistic theories as "irrelevant to the understanding of language" and the use of such approaches as "disastrous for linguistics"."


Sorry, what? This is borderline incoherent.


The units themselves are meaningless without context. The point of existence, action, tasks is to solve the arbitrariness in language. Tasks refute language, not the other way around. This may be incoherent as the explanation is scientific, based in the latest conceptualization of linguistics.

CS never solved the incoherence of language, the conduit metaphor paradox. It's stuck behind language's bottleneck, and it does so willingly, blind-eyed.


What? This is even less coherent.

You weren't talking to GPT-4o about philosophy recently, were you?


You'd need to know cutting-edge linguistics and signaling theory well beyond Shannon to parse this, not NLP or engineering reduction. What I've stated is extremely coherent to Systemic Functional Linguists.

Beyond this point engineers actually have to know what signaling is, rather than 'information.'

https://www.sciencedirect.com/science/article/abs/pii/S00033...

Ultimately, engineering chose the wrong approach to automating language, and it sinks the field. It's irreversible.


If not language, what training substrate do you suggest? Also, note that strong ideas are expressible coherently. You have an ironic pattern in your comments of getting lost in the very language morass you propose to deprecate. If we don't train models on language, what do we train them on? I have some ideas of my own, but I am interested in whether you can clearly express yours.


Neural/spatial syntax. Analoga of differentials. The code to operate this gets built before the component.

If language doesn't really mean anything, then automating it in geometry is worse than problematic.

The solution is starting over at 1947: measurement not counting.


The semantic meaning of your words here is non-existent. It is unclear to me how else you can communicate in a text-based forum if not by using words. Since you can't, despite your best effort, I am left to conclude you are psychotic and should probably be banned and seek medical help.


Engineers are so close-minded, you can't see the freight train bearing down on the industry. All to science's advantage replacing engineers. Interestingly, if you dissect that last entry, I've just made the case measurement (analog computation) is superior to counting (binary computation) and laid out the strategy how. All it takes is brains, or an LLM to decipher what it states.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3005627/

"First, cell assemblies are best understood in light of their output product, as detected by ‘reader-actuator’ mechanisms. Second, I suggest that the hierarchical organization of cell assemblies may be regarded as a neural syntax. Third, constituents of the neural syntax are linked together by dynamically changing constellations of synaptic weights (‘synapsembles’). Existing support for this tripartite framework is reviewed and strategies for experimental testing of its predictions are discussed."


I 100% agree analog computing would be better at simulating intelligence than binary. Why don't you state that rather than burying it under a mountain of psychobabble?


Listing the conditions, dichotomizing the frameworks counting/measurement is the farthest from psycho-babble. Anyone with knowledge of analog knows these terms. And enough to know analog doesn't simulate anything. And intelligence isn't what's being targeted.


One of the main takeaways from The Bitter Lesson was that you should fire your linguists. GPT-2 knows more about human language than any linguist could ever hope to be able to convey.

If you're hitching your wagon to human linguists, you'll always find yourself in a ditch in the end.


Sorry, 2 billion years of neurobiology beats 60 years of NLP/LLMs, which know next to nothing about language, since "arbitrary points can never be refined or defined to specifics". Check your corners and know your inputs.

The bill is due on NLP.


Incoherent drivel.


I would echo some caution if using this as a reference, as in another blog post the writer states:

"Backpropagation, often referred to as “backward propagation of errors,” is the cornerstone of training deep neural networks. It is a supervised learning algorithm that optimizes the weights and biases of a neural network to minimize the error between predicted and actual outputs.."

https://chizkidd.github.io/2025/05/30/backpropagation/

backpropagation is a supervised machine learning algorithm, pardon?


I actually see this a lot: confusing backpropagation with gradient descent (or any optimizer). Backprop is just a way to compute the gradient of the cost function with respect to the weights, not an algorithm to minimize the cost function wrt. the weights.

I guess giving the (mathematically) simple principle of computing a gradient with the chain rule the fancy name "backpropagation" comes from the early days of AI, when computers were much less powerful and this seemed less obvious?
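
To make the distinction concrete, here's a tiny PyTorch sketch (my own illustration, nothing to do with the linked blog): loss.backward() is the backpropagation part, i.e. it only fills in the gradients via the chain rule, while the separate optimizer step is what actually minimizes the cost.

    import torch

    # Tiny model and data, purely illustrative.
    model = torch.nn.Linear(3, 1)
    x, y = torch.randn(8, 3), torch.randn(8, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()   # backpropagation: compute gradients of the loss w.r.t. the weights
    opt.step()        # gradient descent: use those gradients to update the weights
    opt.zero_grad()   # otherwise gradients accumulate across iterations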


The German Wikipedia article makes the same mistake and it is quite infuriating.


What does this comment have to do with the previous comment, which talked about supervised learning?


Reread the comment

"Backprop is just a way to compute the gradients of the weights with respect to the cost function, not an algorithm to minimize the cost function wrt. the weights."

What does the word supervised mean? It's when you define a cost function to be the difference between the training data and the model output.

Aka something like (f(x)-y)^2 which is simply the quadratic difference between the result of the model given an input x from the training data and the corresponding label y.

A learning algorithm is an algorithm that produces a model given a cost function and in the case of supervised learning, the cost function is parameterized with the training data.

The most common way to learn a model is to use an optimization algorithm. There are many optimization algorithms that can be used for this. One of the simplest algorithms for the optimization of unconstrained non-linear functions is stochastic gradient descent.

It's popular because it is a first-order method. First-order methods only use the first partial derivatives, known collectively as the gradient, whose size is equal to the number of parameters. Second-order methods converge faster, but they need the Hessian, whose size scales with the square of the number of parameters being optimized.

How do you calculate the gradient? Either you calculate each partial derivative individually, or you use the chain rule and work backwards to calculate the complete gradient.

I hope this made it clear that your question is exactly backwards. The referenced blog is about backpropagation and unnecessarily mentions supervised learning when it shouldn't have, and you're the one now sticking with supervised learning even though the comment you're responding to told you exactly why it is inappropriate to call backpropagation a supervised learning algorithm.
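
To make this concrete with the smallest possible example (my own toy numbers, not from the referenced blog): take a one-parameter model f(x) = w*x, the supervised quadratic cost (f(x) - y)^2 for a single training pair, get the gradient via the chain rule, and hand it to plain gradient descent.

    # f(x) = w * x, cost C(w) = (f(x) - y)^2 for one training pair (x, y).
    x, y = 2.0, 10.0   # training example; the label y is what makes this 'supervised'
    w = 1.0            # initial parameter
    lr = 0.05          # learning rate

    for _ in range(20):
        pred = w * x                 # forward pass
        # Chain rule (backpropagation in the trivial one-parameter case):
        # dC/dw = dC/dpred * dpred/dw = 2 * (pred - y) * x
        grad = 2.0 * (pred - y) * x
        w -= lr * grad               # the separate optimization step (gradient descent)

    print(w)   # converges towards y / x = 5.0

Computing the gradient and applying the update are two different responsibilities; swap gradient descent for any other optimizer and the chain-rule part doesn't change.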


regarding "supervised", it is a bit of a small nuance.

Traditional "supervised" training, required the dataset to be annotated with labels (good/bad, such-and-such a bounding box in an image, ...) which cost a lot of human labor to produce.

When people speak of "unsupervised" training, I actually consider it a misnomer: it's historically grown, and the term will not go away quickly, but a more apt name would have been "label-free" training.

For example consider a corpus of human written text (books, blogs, ...) without additional labels (verb annotations, subject annotations, ...).

Now consider someone proposing to use next-token prediction; clearly it doesn't require additional labeling. Is it supervised? Nobody calls it supervised under the current convention, but actually one may view next-token prediction on a bare text corpus as a trick to turn an unlabeled dataset into trillions of supervised prediction tasks. Given this N-gram of preceding tokens, what does the model predict as the next token? And what does the corpus actually say as the next token? Let's use this actual next token as if it were a "supervised" (labeled) exercise.
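
A toy sketch of that reframing (my own example): slide a window over a bare, unlabeled corpus and every position yields a (context, next-token) pair, i.e. an input and a "label" that came for free.

    corpus = "the cat sat on the mat".split()   # stand-in for an unlabeled text corpus
    N = 3                                       # context length, the "N-gram" above

    pairs = []
    for i in range(N, len(corpus)):
        context = tuple(corpus[i - N:i])        # the model's input
        next_token = corpus[i]                  # the "label", supplied by the corpus itself
        pairs.append((context, next_token))

    print(pairs)
    # [(('the', 'cat', 'sat'), 'on'), (('cat', 'sat', 'on'), 'the'), (('sat', 'on', 'the'), 'mat')]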


That's also why LeCun promoted the term "self-supervised" a while ago, with some success.


The previous comment highlights an example where backprop is confused with "a supervised learning algorithm".

My comment was about "confusing backpropagation with gradient descent (or any optimizer)."

For me the connection is pretty clear? The core issue is confusing backprop with minimization. The cited article mentioning supervised learning specifically doesn't take away from that.


^

And if we increase N enough we will be able to find these 'good measurements' and 'statistically significant differences' everywhere.

Worse still if we did not agree in advance what hypotheses we were testing, and go looking back through historical data to find 'statistically significant' correlations.


Which means that statistical significance is really a measure of whether N is big enough


This has been known ever since the beginning of frequentist hypothesis testing. Fisher warned us not to place too much emphasis on the p-value he asked us to calculate, specifically because it is mainly a measure of sample size, not clinical significance.
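
A quick simulation makes the point (my own sketch, assuming numpy and scipy): hold a tiny real effect fixed and the p-value collapses purely because N grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect = 0.02   # a fixed, clinically tiny difference in means

    for n in [100, 10_000, 1_000_000]:
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        print(n, p)
    # The effect never changes, but by n = 1,000,000 it is 'highly significant';
    # at n = 100 it is nowhere near significance.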


Yes the whole thing has been a bit of a tragedy IMO. A minor tragedy all things considered, but still one nonetheless.

One interesting thing to keep in mind is that Ronald Fisher did most of his work before the publication of Kolmogorov's probability axioms (1933). There's a real sense in which the statistics used in social sciences diverged from mathematics before the rise of modern statistics.

So there's a lot of tradition going back to the 19th century that's misguided, wrong, or maybe just not best practice.


It's not, that would be quite the misunderstanding of statistical power.

N being big means that small real effects can plausibly be detected as being statistically significant.

It doesn't mean that a larger proportion of measurements are falsely identified as being statistically significant. That will still occur at a 5% frequency or whatever your alpha value is, unless your null is misspecified.
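
The complementary simulation (again my own sketch, same numpy/scipy assumption): when the null really is true, increasing N does not inflate the rate of 'significant' results; it stays pinned near alpha.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, trials = 0.05, 2000

    for n in [50, 5000]:
        false_pos = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, n)   # both groups drawn from the same distribution,
            b = rng.normal(0.0, 1.0, n)   # so the null hypothesis really is true
            if stats.ttest_ind(a, b).pvalue < alpha:
                false_pos += 1
        print(n, false_pos / trials)      # ~0.05 at both sample sizes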


It's standard to set the null hypothesis to be a measure zero set (e.g. mu = 0 or mu1 = mu2). So the probability of the null hypothesis is 0 and the only question remaining is whether your measurement is good enough to detect that.

But even though you know the measurement can't be exactly 0.000 (with infinitely many decimal places) a priori, you don't know if your measurement is any good a priori or whether you're measuring the right thing.


The probability is only zero a.s., it's not zero. That's a very big difference. And hypothesis tests aren't estimating the probability of the null being true, they're estimating the probability of rejecting the null if the null was true.


It's less of a big difference than it might seem, because it takes infinitely long to specify a real number to infinite precision. If you think about something like trying to tell if you hit the exact center of the bullseye, you eventually get down to the quantum mechanical scale and you find that the idea of an atom being in the exact center isn't even that well defined.

In a finite or countable number of trials you won't see a measure zero event.

> they're estimating the probability of rejecting the null if the null was true.

Right, but the null hypothesis is usually false and so it's a weird thing to measure. It's a proxy for the real thing you want, which is the probability of your hypothesis being true given the data. These are just some of the reasons why many statisticians consider the tradition of null hypothesis testing to be a mistake.


Edit: OP confirms there's no AI-generated code, so do ignore me.

The code style - and in particular the *comments - indicate most of the code was written by AI. My apologies if you are not trying to hide this fact, but it seems like common decency to label that you're heavily using AI?

*Comments like this: "# Anonymous function"


https://gptzero.me/ says that large portions of it are 100% human.


Interesting comment. Why is it common decency to call out how much AI was used for generating an artifact?

Is there a threshold? I assume spell checkers, linters and formatters are fair game. The other extreme is full-on AI slop. Where should we as a society start to feel the need to police this (better)?


The threshold should be exactly the same as when using another human's original text (or code) in your article. AI cannot have copyright, but for full disclosure one should act as if they did. Anything that's merely something that a human editor (or code reviewer) would do is fair game IMO.


Maybe OP just used an AI editor to add their silly comments, so that would be fair game I guess? Or some humans just add silly comments. The article didn't stand out to me as embarrassingly AI-written. Not an em dash in sight :)

Edit: just found this disclaimer in the article:

> I’ll show the generating R code, with a liberal sprinking of comments so it’s hopefully not too inscrutable.

Doesn't come out of the gate and say who wrote the comments, but ostensibly OP is a new grad / junior, and the commenting style is on-brand.


OP here: no AI-generated code. I'm wondering what gives the impression that there is?

I use Rmarkdown, so the code that's presented is also the same code that 'generates' the data/tables/graphs (source: https://github.com/gregfoletta/articles.foletta.org/blob/pro...).


If you say there's no AI-generated code then I retract the original comment, nice work.


That is not a disclaimer for generated code, it's referring to the code that generated the simulations/plots.

I had read that line before I commented, it was partly what sparked me to comment as it was a clear place for a disclaimer.


Agree here - in a nutshell it strikes me as intellectually dishonest to intentionally pass off some other entity's work as one's own.


I personally have no problem with people including AI gen'd code without attribution so long as they stand by it and own the consequences of what they submit. After all, we all know by now how much cajoling and insisting it takes to get any AI gen'd code to do what it's actually requested and intended to do.

The only exception being contexts that explicitly prohibit it.


I believe their rationale is that a private tutor costs more than this per lesson, and they're targeting the people who will pay for a tutor once/twice a week for themselves or their children.

I tend to agree with you, it seems like they could be wayyy more competitive on price but I also understand where they're coming from.


They’re not a private tutor, though. They don’t explain very much and there certainly isn’t a way to ask questions. As I said elsewhere, to me they’re about twice as expensive as they should be.


> They don’t explain very much

That's not really the case. Each separate step of each lesson is explained and practiced many times. Repeated failures across multiple students are noticed and explanations reworked. If it's not enough, you can report your issues. And there are MA communities to check with if you really get stuck for some random reason.


The explanations are very limited compared to actual maths lessons, though: in my experience they were very often something like "it turns out that the formula for this is...".


IMO it's scaffolded and explained a bit more than an average mathematics lesson, though teachers vary a lot.

There's a whole lot of "here's the formula" and not so much "here's the derivation" in most classrooms.

The math classes that I taught: I tried to do a lot more of the why, either rigorously or using proof by gesticulation. But there were still absolutely times that I just handed something over and was like "do this, for now."


I’m currently doing the Calculus I course and while there are explanations interspersed throughout the problems, these mostly seem to be the bare minimum you need to work the problems. When I compare it to the calculus textbook I keep alongside it (Stewart’s “Calculus Early Transcendentals”) it barely seems enough.


Private tutors are much more expensive and not uniformly effective. Math Academy is an extremely low-risk bet for parents of math students (you'll know before the first usage period whether it's working out). I like the business model here a lot --- I also just think it's like something concocted in a mad scientist's lab to annoy HN people, who always have a really hard time intuiting market/pricing segmentation.


Yes, they are not a private tutor, and they do not claim to be. That is just the market they are going after.

They believe they can help people reach better outcomes for less. Whether they're correct or not is another question.


This is an extremely welcome move in a good direction from OpenAI. I can only thank them for all of the extra work around the models - Harmony structure, metal/torch/triton implementations, inference guides, cookbooks & fine-tuning/reinforcement learning scripts, datasets etc.

There is an insane amount of helpful information buried in this release


The most reasoned take is directly from the paper itself:

"We strongly emphasize that this paper is largely a pedagogical exercise, with interesting discoveries and strange serendipities, worthy of a record in the scientific literature. By far the most likely outcome will be that 3I/ATLAS is a completely natural interstellar object, probably a comet, and the authors await the astronomical data to support this likely origin."


It makes me sad that so many people are seemingly so aggressive against Loeb and his takes on this stuff. Whether things might be aliens or not, people get so upset whenever it's even mentioned as a thought experiment. We should be able to have a bit of fun here and there.


I think it's totally fair to be aggressive in pushing back against abstracts like "and hypothesize that this object could be technological, and possibly hostile as would be expected from the ’Dark Forest’ resolution to the ’Fermi Paradox’".

There is zero testing of either the hypothesis that it is technological or that it is hostile. At best, the methodology he employs in the paper could be argued to test the hypothesis that its path through our solar system is synthetic and intentional; but that's it, and that's also not remotely close to what he said.


Intentionality of the path is a reasonable prerequisite for the object being technological, and its hostility is a possibility if the Dark Forest resolution is true (which we can neither prove nor disprove). The sentence sounds a bit sensationalist, but it seems scientifically valid to me, considering this is an area where we have little more than a bunch of unprovable hypotheses.


My favorite aspect of Dark Forest is that simply coming up with the concept also provides a resolution to the Fermi Paradox.


It isn't a good resolution, because it assumes all intelligent species in the universe must think and act according to the same rationale. But the one example of an intelligent species we're aware of (humanity) doesn't think and act this way - we've been blindly sending signals and probes out for decades now, and anyone observing our planet would probably notice obvious tech signatures.


The ones who behave that way don’t last long enough to be witnessed by new civilizations like humanity, hence the darkness


His argument regarding the trajectory into our Solar System is pretty flaky. It completely disregards Hopkin's computation of a "steep entry angle" and supposition that it comes from the "thick disk", instead assuming the incoming trajectories of interstellar objects are uniformly distributed across the celestial sphere.

Mass distribution in our galaxy is decidedly anisotropic - most mass lies in the galactic plane.

Loeb's estimate of the comet size is strange, when two observatories concur that the maximum size is around 10km.

Look at https://news.ycombinator.com/item?id=44713579 for links to real science.


"There is zero testing of the hypothesis" - He, as well as multiple unrelated others, also wrote papers detailing available options to intercept the object by re-purposing existing satellites from Mars or Jupiter, which would allow for data collection which would directly test the hypothesis.


Yes he did so: Poorly. His idea [0] to use Juno is a pretty bad one, given that it doesn't have the fuel to do what he suggests, and even if it did, one of its engines was damaged during a recent maneuver. And, at least according to Jason Wright, Loeb should have known all this but ignored it [1], because headlines.

The ESA has a possibly more promising plan to divert a probe that's on its way to Jupiter right now [2].

So, again: If you're going to write "The feasibility of intercepting 3I/ATLAS depends on the current amount of fuel available from the propulsion system of Juno" one thing a real scientist would do is, idk, try to find out how much fuel it has left, talk to team members, etc. Instead, Loeb just does presumptive math, which ends up being wrong, but that didn't stop a Florida state rep from taking this "idea from a harvard scientist" and turning it into an official request of NASA, which now more real scientists will have to waste their time with [3].

[0] https://arxiv.org/pdf/2507.21402

[1] https://x.com/Astro_Wright/status/1951530225533329789

[2] https://www.newscientist.com/article/2490618-can-we-send-a-s...

[3] https://x.com/RepLuna/status/1951379349128815062


I think if it was framed more as fiction it would get a better read. The title and the abstract suggest they take this possibility seriously, which is ridiculous.


The fact you think aliens are ridiculous in an infinite universe is more ridiculous.


Aliens existing is not ridiculous, the hubristic idea that aliens are visiting the solar system is what’s ridiculous, plus all the sensationalism around aliens from someone who should know better.


Seems equally ridiculous to expect we’d ever actually see aliens in a spatially and temporally infinite universe.


It's not that I find aliens ridiculous, I find it ridiculous to attribute 3I/ATLAS to aliens and I find it especially ridiculous that it's coming from Harvard. They have billions of dollars in endowment and this is what they waste their time on? Maybe the administration was right to pick a fight with them.


> if it was framed more as fiction

At some point, however fact-based, every speculation is a form of fiction, so the line is blurry ...

> The title and the abstract suggest they take this possibility seriously, which is ridiculous.

... but I'd say the idea is to take some serious and very realistic bits that have a vanishingly low probability ...

> We show that 3I/ATLAS approaches surprisingly close to Venus, Mars and Jupiter, with a probability of ≲ 0.005%

... and then walk from there as rigorously as possible.

As they say, "largely a pedagogical exercise".

There's still a line between the hardest hard sci-fi story about a Boltzmann brain and a fact-based thought experiment computing probabilities for a giant marshmallow to spontaneously appear in the vacuum of space.


Rama by Arthur C Clarke is a work of fiction, there's no blurry line there.

> We show that 3I/ATLAS approaches surprisingly close to Venus, Mars and Jupiter, with a probability of ≲ 0.005%

a) What does this even mean? If you throw a dart on a dartboard, anywhere it lands will have some probability. 1/200 doesn't seem that low.

b) It's the height of intellectual laziness and chicanery to go from not-that-low-of-probability to 'aliens'

They're free to make these claims. I'm also free to laugh at how ridiculous it is.

Now, if this thing had some precise shape, or rotational speed, or we saw it adding or subtracting delta-V, or if it did gravity assists from multiple planets (not just 'flew kinda close to a couple of them'), now that would be interesting.


If you read the paper, you'll find there are many improbable occurrences, rather than just this one.

> > We show that 3I/ATLAS approaches surprisingly close to Venus, Mars and Jupiter, with a probability of ≲ 0.005%

> a) What does this even mean? If you throw a dart on a dartboard, anywhere it lands will have some probability. 1/200 doesn't seem that low.

Not 1 in 200 here: 0.005% is 0.00005, i.e. 1 in 20,000.


I'm actually unsure what you mean - what is that line? Why aren't both just exercises in probabilistic reasoning?


I agree, I mean something like this only has to happen once in our lifetime for everything we know to change overnight. I’m not saying believe anything and everything at face value, but at least question whether immediate knee jerk dismissal of any idea you think you’ve seen before is actually considering the nuance of the specific thing in question or just a learned response.


Because that supposition is sensationalist - in modern online parlance, clickbait.

It’s as ridiculous as proposing that it could naturally be made up of M&Ms or that monkeys built the ancient Egyptian pyramids.


it could naturally be made up of M&Ms

That's silly. It's made up of Milky Ways.


He is the boy who cried wolf at this point. Every interstellar object is (oops, I mean could be) an alien artefact.

Also he raised a bunch of funds to dig one up under the ocean and got nothing.


When and if alien life is discovered, there’s a high chance the discoverer will be someone who’s spent their career searching for it, rather than someone just stumbling across ironclad proof one fine day.

I’m inclined to let those searchers speculate in public. If society’s rule is that you can’t even speculate about X until you have proof, it will hold back science significantly. History has many such examples of forbidden speculation leading to long delays.


Any idea why he gets so much pushback, when string theorists get a pass? Is it because "alien tech" is more easy to understand as a concept than Calabi-Yau manifolds?


I think it comes from a place of insecurity. people get sensitive about it because all astronomy is pointless and arbitrary, so someone having outlandish fun while doing it runs the risk of highlighting this fact.


Because of UFO conspiracy theorists. When someone says 'alien' in a serious context, most people immediately associate it with UFO nutjobs.

String theory has not really made its debut in the conspiracy crowd afaik. I think "Quantum-___" has done so, especially with the "collapse of the wavefunction through the observer", which has so many esoteric people raving.

String theory is so meaningless to the normal person.


There have been very few interstellar objects he's claimed as alien, in fact only one - and for 'Oumuamua (or however it was spelt) he also said the most likely outcome would be a natural object. His expedition to recover metal spheroids from the ocean floor was a fascinating one which garnered a lot of support, and I believe it still had value in devising methods to recover impact materials from underwater.

So really it's the same thing, he gets a lot of aggressive pushback online for mentioning 'aliens', but generally speaking nothing he says or does is actually that baseless.


Being repeatedly sensational about something like aliens will make people annoyed after a while, see the boy who cried wolf, etc.


Though we should also bear in mind that the wolf in that story eventually showed up.

