eig's comments

What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)?


I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
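For anyone curious, here's a minimal sketch of that idea in plain Python (the integrand and sample count are my own illustrative picks, not anything from the article):

    import math
    import random

    def mc_integrate(f, a, b, n=100_000):
        # Monte Carlo estimate of the integral of f over [a, b],
        # plus the standard error of that estimate.
        samples = [f(random.uniform(a, b)) for _ in range(n)]
        mean = sum(samples) / n
        # Sample variance of f(X); the estimator's variance shrinks like 1/n.
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        return (b - a) * mean, (b - a) * math.sqrt(var / n)

    est, err = mc_integrate(math.exp, 0.0, 1.0)
    print(f"{est:.4f} +/- {err:.4f} (exact: {math.e - 1:.4f})")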


Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way.
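To make "rigorous" concrete: for a monotone integrand you can get guaranteed two-sided bounds even without full interval machinery, since the lower and upper Riemann sums bracket the true value. A toy sketch (real interval arithmetic would also bound floating-point rounding, which this skips):

    import math

    def riemann_enclosure(f, a, b, n=10_000):
        # For an increasing f, left endpoints give a lower sum and right
        # endpoints an upper sum, so the pair encloses the true integral.
        h = (b - a) / n
        lower = sum(f(a + i * h) for i in range(n)) * h
        upper = sum(f(a + (i + 1) * h) for i in range(n)) * h
        return lower, upper

    lo, hi = riemann_enclosure(math.exp, 0.0, 1.0)
    print(f"integral in [{lo:.5f}, {hi:.5f}] (exact: {math.e - 1:.5f})")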


With many quadrature rules (e.g. trapezoidal rule, Simpson's rule) you have a very cheap error estimator obtained by comparing the results over n and 2n subdivision points.
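A sketch of that estimator with the composite trapezoidal rule (my own toy example):

    import math

    def trapezoid(f, a, b, n):
        # Composite trapezoidal rule with n subintervals.
        h = (b - a) / n
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

    def trapezoid_with_error(f, a, b, n=1000):
        # The trapezoid error falls ~4x when n doubles, so the n vs 2n
        # gap overestimates the fine result's error by about a factor of 3.
        coarse, fine = trapezoid(f, a, b, n), trapezoid(f, a, b, 2 * n)
        return fine, abs(fine - coarse) / 3

    est, err = trapezoid_with_error(math.exp, 0.0, 1.0)
    print(f"{est:.8f}, estimated error {err:.2e}, "
          f"true error {abs(est - (math.e - 1)):.2e}")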


Numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points in higher dimensions. Monte Carlo integration methods have an error that is independent of dimension, so they scale much better.

See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf


Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.
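For example, this hit-or-miss sketch estimates the volume of the unit d-ball with the exact same code at every dimension (illustrative only; at very high d the hit rate collapses and you'd want importance sampling):

    import math
    import random

    def mc_ball_volume(d, n=200_000):
        # Sample the enclosing cube [-1, 1]^d, count points inside the ball.
        hits = sum(
            1 for _ in range(n)
            if sum(random.uniform(-1, 1) ** 2 for _ in range(d)) <= 1.0
        )
        return (2.0 ** d) * hits / n

    for d in (2, 3, 6):
        exact = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
        print(f"d={d}: estimate {mc_ball_volume(d):.4f}, exact {exact:.4f}")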


The writer would have been well served to discuss why he chose Monte Carlo rather than summing up all the small trapezoids.


At least if you can sample the relevant space reasonably accurately, otherwise it becomes really slow.


As I understand it: numerical methods -> analytically inspired and computationally efficient, smoothing out noise from sampling/floating point error/etc., whereas Monte Carlo -> computationally expensive brute-force random sampling, where you can improve accuracy by throwing more compute at the problem.


Is there a visualization of the glider in the thread? Would love to see how it evolves with one dimension being time.


My understanding (which could be wildly wrong, I only skimmed the thread) is that it's running in a standard 2-dimensional Game of Life grid, it just happens to start out as a 1x3.7B cell line.


Pretty sure the GOL grid has more than one of those lines...


The best way to run it is in the software Golly. It has the HashLife algorithm needed to make it run fast enough to see it finish.


After the first step it isn't 1D any more, so I don't think that visualization is possible


It's possible. It'd just be a 3D visualization and more importantly, stupendously huge. If each cell was a cubic millimeter, the shape would be 3700km wide, and stretch 1/3rd of the way to the moon.


And if each cell was a cubic micrometer (which is a side length 200-300 times smaller than a pixel on a typical screen and 50-100 times thinner than a human hair), it'd still stretch 3.7 kilometers, which is about the length of a commercial airport runway.
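The unit math checks out, for what it's worth:

    CELLS = 3.7e9                # cells along the long axis
    print(CELLS * 1e-3 / 1e3)    # 1 mm per cell -> 3700.0 km
    print(CELLS * 1e-6 / 1e3)    # 1 um per cell -> ~3.7 km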


I'm having trouble finding a common image file format that has >32-bit resolution fields to fit it in.


Rest assured, Ben's previous job was in the medical imaging industry. While he worked on MRI machines rather than ionizing radiation, I think he's very well aware of the dangers of X-rays and has many projects dealing with ionizing radiation. There are a lot of YouTubers with bad safety science; Ben isn't one of them :)

Funny thing: it’s actually rare to get radiation damage to human hands and feet since there’s not too much growing tissue there!


>there’s not too much growing tissue there!

On the contrary, I was told stories in school that old IR doctors used to lose the hair on their hands after using the fluoro for years. The fingernails are also radiosensitive.

The main reason that X-rays of the hands and feet are usually very low risk is because the beam intensity (dose) required to penetrate the small amount of tissue is very low. Because the video uses a high-sensitivity detector (photon counter) the dose may be even less than usual. However, it would still be a regulatory violation if you did it in a hospital.


I’m not a mathematician, so could someone explain the difference in usage between Lean and Coq? On a surface level my understanding is that both are computer augmented ways to formalize mathematics. Why use one over the other? Why was Lean developed when Coq already existed?


I think the difference is mostly cultural. The type theories of Lean and Rocq are fairly close, with the exception that Lean operates with definitional proof irrelevance as one of the default axioms. This causes Lean to lose subject reduction and decidability of definitional equality as properties of the language.

Many people in the Rocq community see this as a no-go, and some argue it will cause the system to be hard to use over the long run. In the Lean community, the interest in type theory is at a much lower level, and people see this as a practical tradeoff. They recognize that the theoretical issues show up in practice, but so infrequently that having this axiom is worth it. I consider this matter to be an open question.
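To make the tradeoff concrete: in Lean 4, any two proofs of the same proposition are definitionally equal, so the following checks by plain rfl with no axiom invoked explicitly (a minimal sketch; the analogous definitional step is exactly what Rocq does not accept):

    -- Definitional proof irrelevance in Lean 4.
    example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl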

If you look at what's being done in the communities, in Lean the focus is very much on and around mathlib. This means there's a fairly monolithic culture of mathematicians interested in formalizing, supplemented with some people interested in formal verification of software.

The Rocq community seems much more diverse, in the sense that formalization effort is split over many projects, with different axioms assumed and different philosophies. This also holds for tooling and language features; it seems like any problem has at least two solutions lying around. My personal take is that while this diversity is nice for exploring options, it also causes the Rocq community to move slower due to the technical debt of switching between solutions.


> The type theories of Lean and Rocq are fairly close, with the exception that Lean operates with definitional proof irrelevance as one of the default axioms. This causes Lean to lose subject reduction and decidability of definitional equality as properties of the language.

Couldn't you introduce proof relevance as an explicit axiom into a Lean program to solve that particular issue?


Lean has a good library of formalized mathematics, but lacks code extraction (you cannot generate a program from the proofs it constructs). So it is more suitable for, and heavily used by, mathematicians proving theorems.

Coq has always focused on proving program correctness, so it sees lots of use by computer scientists. It also does code extraction, so after you prove a program correct in Coq you can generate a fast version of that program without the proof overhead.


I think that (most) mathematicians were not that interested in formal proof until quite recently (as opposed to computer scientists), and most of the interest in Lean has been self-reinforcing: there is a (relatively speaking) huge library of formally verified mathematics, so now basically anyone who cares about formal verification as a tool for mathematics is working in Lean. There are of course numerous technical differences, which you can read about if you google Coq vs Lean.


Rocq is ancient and has some longstanding UX problems. It's pleasant to work in a fresh new code base.

This is kinda like asking: why write Clang when we already had GCC? Or why make Python if we already have Perl?

It's good to have some competition for these things. Rocq, I believe, felt the heat and has also been doing some good things in recent years.


Why do you keep saying Rocq when he asked about Coq? Are they the same thing?


https://en.wikipedia.org/wiki/Rocq

`The Rocq Prover (formerly named Coq) [...] `


Lean has much better UX to be frank. Rocq is fine, but if I were to start formalising today, I'd pick Lean.


Presumably they want to eventually put a human inside it, in which case having a humanoid robot to work off of wouldn't change the aero calculations and designs too much. The article talks about specific design considerations to avoid the exhaust gases.


Gravity Industries already has that sorted https://gravity.co/ . I was almost expecting this robot to just be a humanoid robot with one of those suits on


Looking at the 'hands' of this robot, it looks to be exactly that.


> Presumably they want to eventually put a human inside it [..]

I'm imagining a team just putting the organs of a human into the robot to save on space. Basically a brain plus whatever is absolutely necessary to run the brain.


Wasn't that Robocop 3?


Glad to see this! We need better treatments for OA short of a joint replacement.

I'm not too surprised that this treatment works. It's essentially like localized steroids to just the joint, killing off the immune cells causing inflammation.

The good features are that it's localized (so no systemic immunosuppression) and that the cancer risk is low: you rarely get radiation-induced cancer in joints, because there aren't enough dividing cells. Unfortunately, getting to radiotherapy is a logistical challenge, but there are enough people suffering from OA who would happily do this to get relief.


> It's essentially like localized steroids to just the joint, killing off the immune cells causing inflammation.

Are you confusing osteoarthritis with rheumatoid arthritis? I didn't think the pain of osteoarthritis had anything to do with the immune response. You've literally got bone rubbing against bone. It's not going to feel good.


Both OA and RA involve some inflammation (-itis means inflammation). RA is more T-cell-driven inflammation (and clinically visible), while OA is more macrophage-driven. Mechanical wear still makes the joint unhappy at the cellular level; you just don't see it big and red as a symptom like in RA.


Is it not illegal in the US to break up a company to isolate liabilities?


It's not illegal, just (possibly) shady, but there are ways to link the former company's liabilities to the purchaser in some situations in some jurisdictions. That may apply here, but that's for a court to decide.

https://kddk.com/2015/07/30/successor-liability-in-the-purch...


It's not specifically against a law, but debtors who got shafted can choose to sue the "old" and "new" companies under a few broader laws, basically alleging "I had a valid contract with the old company but this sale is a sham transaction to get out of the contract and 'NewCo' is unjustly enriching themselves by screwing us 'OldCo' debtors." IANAL but my sense is such a case can be won but is far from a slam dunk, and it will cost money and take time. Debtors will have to decide if they are out enough money for it to be worth sinking more money into recovering it. This kind of move might also be an aggressive escalation tactic in a hardball negotiation with debtors unwilling to renegotiate on acceptable terms. It's possible that the OldCo/NewCo people doing this may choose to leave certain assets in OldCo to make legal challenges less likely to prevail than if they'd completely emptied out OldCo.

Other impacts can include future potential NewCo lenders being pretty leery about getting involved with the same people. It's also not a great look for the founder(s)/senior execs in terms of future resume - unless there are extenuating circumstances which justify doing it.

An example can be something like a fundamental disagreement between co-founders who are major shareholders. In that scenario this may not be to shaft debtors but rather for the majority co-founders, investors and key employees to 'dump' a minority non-cooperating co-founder who's no longer involved with the company, has a "change of control" veto and won't sell their shares but can't stop an asset sale. Basically the board approves the sale and the key execs/employees all vote with their feet. The original OldCo shareholders still own those shares, they're just worthless without the people, IP, assets, etc. In such a case, the non-cooperating shareholder might have grounds to sue but one defense can be a solid paper trail showing the company treated them fairly, offered to buy out their shares at fair market value and was basically forced into this as the only alternative.


This is why I said ostensibly. I think it should be assumed the financial parts were done on the up and up, such that disclosures and the like waive a lot of the concerns that would make it illegal.

There are non-financial liabilities, as well.


I'm not too impressed with this article, since it doesn't really give a definition of computing; it just picks a few similarities between what we see as computing (in the practical sense) and what cells do.

It's a shame, because there *has* been a lot of deep work done on what kind of computer life is. People often use the Chomsky hierarchy (https://en.wikipedia.org/wiki/Chomsky_hierarchy) to classify the different types of computers and automata, from Type-3 (regular languages, finite automata) up to Type-0 (recursively enumerable languages). Importantly, a classical Turing machine is Type-0 on the Chomsky hierarchy. Depending on what parts you include from a biological system, you could argue it's anywhere from Type-3 to Type-0.
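A toy contrast between two of those levels, with languages that are my own illustrative picks: a Type-3 recognizer needs only a fixed finite state, while a Type-2 recognizer needs a stack.

    def is_type3_ab_star(s: str) -> bool:
        # Finite automaton (Type-3): accepts (ab)* with two states.
        state = 0
        for ch in s:
            if state == 0 and ch == 'a':
                state = 1
            elif state == 1 and ch == 'b':
                state = 0
            else:
                return False
        return state == 0

    def is_type2_anbn(s: str) -> bool:
        # Pushdown-style check (Type-2): a^n b^n needs unbounded
        # counting, which no finite automaton can do.
        stack, seen_b = [], False
        for ch in s:
            if ch == 'a' and not seen_b:
                stack.append(ch)
            elif ch == 'b' and stack:
                seen_b = True
                stack.pop()
            else:
                return False
        return not stack

    print(is_type3_ab_star("ababab"), is_type2_anbn("aaabbb"))  # True True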

Interestingly, in her PhD thesis, the well-known geneticist Aviv Regev showed that certain combinations of enzymes with chemical concentration states are enough to emulate the pi-calculus, and are therefore Turing-complete! https://psb.stanford.edu/psb-online/proceedings/psb01/regev....


This is the kind of evolved computer science that was going on when I was a teenager. Have an upvote eig!

My addition: it's funny that, for all the speculation we get in "hard cognitive science" (RIP), and despite the big insights we get from Godel, Turing, and Russell, many/most undergraduates and even post-graduates still haven't internalized Wittgenstein's work, especially the Tractatus. I feel like it gets us to: "the questions you're asking about how life works and the questions about what is at the core of logic and mathematics (language) are definitely related, but not in any of the fundamental ways you hope they are..."

For the uninitiated-- try reading the thing in one sitting. It takes about an hour:

https://wittgensteinproject.org/w/index.php/Tractatus_Logico...


The Aviv Regev paper you link was recently recommended to me as a useful reference for something. It was a nice surprise to see that Regev's thesis advisor was Ehud Shapiro, known to the Prolog community from his co-authorship of one of the good Prolog books (The Art of Prolog, with Leon Sterling - https://mitpress.mit.edu/9780262691635/the-art-of-prolog/). Indeed, Regev's thesis (and the paper above) propose a system based on a Flat Concurrent Prolog.

Shapiro was also the author of one of the two PhD theses that were a major influence to Inductive Logic Programming, a field at the intersection of logic programming and machine learning.

A lot of the kind of "deep work" you mention used to be done in the logic programming and ILP community in times past, before everyone seemingly switched to neural nets and statistical machine learning.


While I think most of the examples are incredible...

...the technical graphics (especially the text) are generally wrong. Case 16 is an annotated heart, and the anatomy is nonsensical. Case 28, with the tallest buildings, has decent images but the wrong names, locations, and years.


Yeah I think some of them are really more proof of concept than anything.

Case 8 Substitute for ControlNet

The two characters in the final image are VERY obviously not in the instructed set of poses.


Yes, it's the Gemini Flash model, meaning it's fast and relatively small and cheap, optimized for performance rather than quality. I would not expect mind-blowing capabilities in fine details from this class of models, but even so, it's sometimes just surprisingly good in this regard.


Worth noting that Elon only gets the $1T if he can deliver >1M Robotaxis, >1M "AI bots", and bring Tesla's market cap to $8.5T.

Looks like they are really doubling down on the 2018 compensation package and the idea that Elon's attention is worth more than anything.

More details on the milestones can be found in the SEC statement: https://www.sec.gov/Archives/edgar/data/1318605/000110465925...


Those targets don’t seem like they would generate in excess of $1 trillion in profits. Not entirely sure they would generate that much in revenues, so why wouldn’t Elon just buy that many robotaxis and AI bots himself as a profitable strategy to unlock his comp package?


Cumulative profits have come nowhere near his last pay package either, but that's not what they care about; they want the market cap to go up.


According to the chart in the linked document, Elon’s 2012 compensation plan captured 8% of the increased shareholder value. His 2018 plan captured 8.9% of the increased shareholder value, and this 2025 plan captures 13.3% of the increased shareholder value. I guess it is getting harder and harder to get him to focus.

Also, I don’t see anywhere in this incentive package where Tesla has to continue manufacturing EVs, or any vehicles at all. A joint venture with someone that provides the vehicle, ICE or EV, while Tesla adds the AI self driving tech, slaps their logo on the hood, and puts it into service via their own channels will count toward these incentives.


Good that it's not just market cap; otherwise I suspect he'd find some way to get there.

I also don't doubt that, if it came down to it, they could claim 1M robotaxis with some weasel wording ("Supervised" or "Robotaxi capabilities"), whatever state it's actually in; the robots will be trickier.


Anything to keep him from starting a viable third party of any kind.

