Hacker News | syadegari's comments

While not a community itself, Terence Tao's blog (https://terrytao.wordpress.com/) is a good source of high-quality, clearly explained maths. As you may know, he has a very broad range of interests, and there might be something for you to pick up and get involved in. He has also been involved in several collaborative math communities, which he writes about at length on his blog. I recall the Polymath project and, recently, a project involving the Lean theorem prover that even showed up on the front page of HN.


Tangentially relevant to this is Perl Secrets, a list of operators and constants discovered by various users: https://github.com/book/perlsecret/blob/master/lib/perlsecre...


Does anyone know why the bright stars in the image do not show the hexagonal diffraction pattern that is present in published Webb images?


I presume it's not a live view of what Webb is seeing but existing images / visualisations from other sources pointing in the same direction Webb is right now.


The background image is not data from Webb; it is a near-infrared image from the 2 Micron All-sky Survey (2MASS). See @skybrian's comment: https://news.ycombinator.com/item?id=40013769


My understanding is that the 6-pointed spike diffraction pattern only occurs on stars, and that the bright points without them are galaxies.


+1!! This may sound weird, but I can use this as a mood booster when life gets tough. Side note: I might be biased, because I also find it useful to think about the vastness of space to trick my brain into falling asleep (I watch a lot of astronomy videos).


“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.”

Per Wikipedia:

The observable universe contains as many as an estimated 2 trillion galaxies and, overall, as many as an estimated 10^24 stars – more stars (and earth-like planets) than all the grains of beach sand on planet Earth. The estimated total number of stars in an inflationary universe (observed and unobserved) is 10^100.

IMO, a trillion is a number the human mind has trouble conceiving. We understand it only in an abstract sense - if you try to imagine what a trillion stars looks like in front of you, or a billion, or a million even, most likely you're imagining at best tens of thousands. 10^24 is orders of magnitude more abstract:

1,000,000,000,000,000,000,000,000
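One way to make a trillion concrete is to time it. A quick back-of-the-envelope sketch (the one-count-per-second rate is my own assumption, purely for illustration):

```python
# How long would it take to count to a trillion, at one count per second?
SECONDS_PER_YEAR = 3600 * 24 * 365.25  # Julian year, in seconds

trillion = 10**12
years = trillion / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # roughly 31,688 years
```

And that is just 10^12; the 10^24 star estimate is a trillion times larger still.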


I have always found the analogy of all the grains of sand on the beaches of Earth to be something that helps accurately convey large numbers to myself and others. Some things I've always wondered about this analogy:

1 – at what depth are they referring to? I.e., if you're at a beach and dig down two feet you're still coming in contact with sand (and grains of sand). Do these count also?

2 – if you wade out into the ocean and go underwater, there are also more grains of sand under the water (are these included?). (what about all the grains of sand on the various vast deserts around earth?)

For number two I would assume NO, as the analogy says all the grains of sand on the Earth's beaches.


As per the parent's numbers, it's way, way more than all the grains of sand anywhere on Earth.


I started doing the math based on some assumptions, but this guy came up with 500 quadrillion grains:

https://science-atlas.com/faq/how-many-grains-of-sand-are-th...

Which is 5x10^17 versus 10^24.

Way more stars!
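The comparison is easy to check; a quick sketch using the two figures quoted above (5×10^17 grains vs 10^24 stars):

```python
# Stars in the observable universe vs. grains of beach sand on Earth,
# using the estimates quoted above.
stars = 10**24
sand_grains = 5 * 10**17  # ~500 quadrillion

ratio = stars // sand_grains
print(f"{ratio:,}x more stars than grains of sand")  # 2,000,000x
```

So roughly two million stars for every single grain of beach sand.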


Forget the universe, just consider the size of the Milky Way. If the Milky Way was shrunk down to the size of United States, the Earth would be smaller than the gap between the ridges in a fingerprint.
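The analogy checks out numerically. A sketch using round figures (Milky Way diameter ~100,000 light-years, a ~4,500 km wide United States, Earth diameter ~12,742 km; these round numbers are my assumptions, not from the comment):

```python
LIGHT_YEAR_M = 9.4607e15                 # metres in one light-year
milky_way_m = 100_000 * LIGHT_YEAR_M     # galactic diameter
us_width_m = 4.5e6                       # rough coast-to-coast width of the US
earth_diameter_m = 1.2742e7

scale = us_width_m / milky_way_m
scaled_earth_m = earth_diameter_m * scale
print(f"Earth scales to {scaled_earth_m * 1e9:.0f} nanometres")
# ~61 nm -- far smaller than the ~0.5 mm gap between fingerprint ridges
```

At that scale the Earth is comparable to a large virus, which is well below anything a fingerprint could resolve.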


The point Julia makes about `grep` resonates with me a lot! I have the same issue (call it a problem if you want) with `ps`. There is only one invocation of ps I know of (`ps aux`), and if I want to change that I have to either look up the options in the man page or google it.


ps is bad in general: the default view is almost never what you want, and the many minor formatting options make for a complicated man page. But it is especially a shitshow on Linux.

fsf: which ps option style should we use (BSD, System V, Solaris, SGI)?

also fsf: well, that's a tricky one... why not all of them?

The Linux ps man page is a wild mess.
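For anyone who hasn't run into it: the Linux (procps) ps man page really does document three coexisting option syntaxes. `ps aux` and `ps -ef` give roughly the same listing via two of them, and GNU long options are a third:

```shell
# BSD style: options without a dash
ps aux

# UNIX/System V style: options with a single dash
ps -ef

# GNU style: long options with two dashes
ps --pid 1 --format pid,comm
```

Mixing the styles in one invocation is where most of the man page's complexity (and most of the surprises) comes from.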


Oh, that's a nice one, thanks for mentioning it. I'm personally used to "ps afx", but the "u" does give some quite useful info... and it's compatible with "f"! So I guess I'll add "ps aufx" to my repertoire.


faux is easier to remember


Does anyone know why they decided to buy Arm in the first place? If they needed tight integration with their GPUs and wanted to move away from x86, couldn't they come up with Arm-based solutions like Apple did?


Nvidia absolutely could come up with their own ARM-based solutions like Apple did. Guess where all the people who know how to do that work? Arm. If nothing else, Arm offers a great engineering team to accelerate Nvidia's plans and it also means Nvidia can push new ideas into the latest ISAs much easier. A similar thing happened a few years ago with Imagination Technologies who produced Apple's GPU IP. Except instead of Apple buying them, Apple built an office next door and poached the entire engineering team. Leaving ImgTec as a deeply scarred company that eventually got sliced up and sold off.


Nvidia has their own totally custom ARM cores. That team was leftovers from Transmeta.


Yes, I imagine they could. I wonder if they acted out of fear that someone would come along, gobble up ARM, and do to them what everyone is now scared of Nvidia doing to everyone else.

Now the question is, who will that "someone" be next? What kind of suitor is fit for ARM?

The curse of ARM being so successful and incredibly crucial to many pervasive industries, yet seemingly unable to go it alone.


I always assumed NVIDIA wanted to license IP through Arm just like Arm does.


nVidia is trying to become a full-stack company.

They have GPUs. Got network capabilities with Mellanox. Add ARM knowledge on top and you have a complete platform building capability.

TL;DR: nVidia just wants to dominate the whole stack. Like Apple, but for data center / scientific / AI / HPC, etc.


I am an NVIDIA employee in a niche of the HPC business. This is not an NVIDIA official opinion.

HPC is nice, but when you hear Jensen getting really excited, it’s not about dominating some niche like that, it’s about a vision of the shiny sci-fi high tech future, and actually delivering the tech to make it real.

So don’t just look at HPC to understand the NVIDIA ambition. Start at edge computing; imagine a world with ubiquitous autonomous robots (cars and drones and otherwise). Think of the onboard chips driving their vision and speech recognition models: That’s a great place for ARM and NVIDIA chips together, whether as one company or two. Watch a recent keynote and see how all the rest of the tech fits into place as part of that: 5G signal processing chips, for instance, something you might gloss over if you’re not in telecom. You don’t need a roadmap to see how it is all connected in support of this world of the future.

(I certainly don’t have the roadmap, either, I just watch the keynotes and help shuffle bits.)


> Start at edge computing;

It seems pretty clear that this is what they're thinking of. They want to be able to license an integrated architecture that includes power-efficient computing and a powerful ML engine. They've been investing so heavily in this space for a reason.

What I can't figure out is why this is such a big deal to regulators. Nvidia doesn't manufacture these things (aside from Jetson, I believe? Not 100% clear on this). They license IP. And this is IP that I think the world would really like to have.

Currently the only player in this space is Apple. They've built their own integrated silicon with their perpetual ARM license that is now giving them a huge market advantage, and will continue to do so until there is another competitor. The R&D required to compete with a cash-liquid >2.5 Trillion dollar company is just not feasible for any of the other major players at present. Nvidia/ARM opens doors for tons of other companies.

I also think it's foolish to think that Apple won't try to expand this tech offering well beyond personal computers and tablets. They will expand to IoT/Edge devices and services. But the difference is they won't be licensing their IP to other manufacturers, they will be building it themselves (or contracting Foxconn to) and keeping everything in their walled garden.

Guess I'm just frustrated that of all ridiculous acquisitions and anticompetitive nonsense I've seen in the past decade, THIS is the one getting smothered.


> What I can't figure out is why this is such a big deal to regulators.

Because when you own all the IP, you can cut your competitors off by revoking licenses to them, and it'll instantly kill a huge ecosystem from Raspberry/OrangePi to Ampere A1 and everything in between.

I'm not sure nVidia would make such a drastic move, but I'm sure that they'll move strategically to ensure their leadership, which is understandable from a corporate PoV, but it'll be very bad for everybody else.

This is not a big deal, it's a huge deal, and I'm happy that we're here as of today.

nVidia can of course license ARM to embed and/or further improve upon this, or they can use any other ISA or come up with their own. I'm sure they're capable of this, and it'll be much better in the long run for everyone.

> I also think it's foolish to think that Apple won't try to expand this tech offering well beyond personal computers and tablets. They will expand to IoT/Edge devices and services. But the difference is they won't be licensing their IP to other manufacturers, they will be building it themselves (or contracting Foxconn to) and keeping everything in their walled garden.

nVidia's walled garden is no smaller than Apple's. Considering how friendly nVidia was towards OpenCL, I'm guessing they'll keep roughly the same distance from Vulkan for GPGPU applications, keeping CUDA the only thing that runs with any meaningful performance on their hardware. On the open-driver front, they're equally friendly. So it's more like the pot calling the kettle black here.


At least on the networking side, nvidia's merchant-silicon HW nature is quite evident. They have a very marginal SW stack (at this point still trying to beat the dead horse of Cumulus and making the weakest of investments in SONiC) and basically nothing meaningful at all beyond that. They keep approaching friends trying to sell their ToRs, but it's not happening outside of HPC.

They seemingly don't see any value in SW. Architectures like end-to-end designs (DPU->Network->DPU->PCIe) can be great, but without SW to make them consumable it's DOA outside of dedicated clusters.


Is it just me, or does the wording of the article sound too generic/unspecific to anyone else?



It's good to see that the physical escape key is back!


The lack of it on the 2016 MBP made me re-assign the Caps Lock key to Esc, and in fact that's the correct place for it anyway - much more natural than as an extra F-key.
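For reference, a sketch of how the Caps Lock → Esc remap is typically done outside macOS (on macOS it lives under System Preferences → Keyboard → Modifier Keys); the `caps:escape` option is a standard xkb option, but whether it suits your setup is up to you:

```shell
# X11: make Caps Lock act as Escape for the current session
setxkbmap -option caps:escape

# To undo, reset the options
setxkbmap -option
```

Note this only lasts for the session; to make it permanent, put it in your X startup file or desktop environment's keyboard settings.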


As a Vim user, I'd highly recommend this to anyone — even if you have a physical ESC key.


On every keyboard the escape key is at the top left.


Incorrect: https://catonmat.net/why-vim-uses-hjkl-as-arrow-keys

The keyboard vim was designed for had ESC where tab is on modern keyboards.


Still no number pad though.


Why would you want two sets of number keys on the keyboard? Except for the case of one-handed exclusive numeric input.

The biggest problem with a number pad on a laptop, is it puts the rest of the keyboard off-center from the screen. No way around that I can think of.


> Why would you want two sets of number keys on the keyboard?

Because I (and other professionals) use them. What kind of question is that?


I hate number pads on laptops. They never quite match the layout of the number pad on a proper 101-key desktop keyboard, I can't ever actually touch type on them, and they shift the letter block of the keyboard and the touchpad off-center.

I'd much rather use a generic $10 USB number pad if I really had to do a bunch of data entry. Or sit down at a desk and use a full size monitor and keyboard for that task.


Well obviously, but again, why? Is it strictly for one handed data entry, or do you find it easier to move your hand from the home row to the numpad when entering a number mixed with text? (I.e., is it that common for touch typists to not touch type numbers on the top row?)


Cannot wait to hear what they're going to say about "reinventing the keyboard"!


If you're on desktop you can use eww (the Emacs web browser).

