This is such a positive and wholesome example - both the mail itself (taking time to learn the project's preferences), the proposed fixes (better function names, explanatory comments), and, at a higher level, the humility with which it was done and the time he invested in a project he isn't part of.
Not only that, but it really illustrates that even well-known, famous engineers can start contributing to a project with simple pull requests that just rename variables or add comments to code for clarity.
Not every pull request needs to be introducing complex/intricate code, and yet it still has value to the project (and the community).
The parent comment seems to suggest its author is not a frequent reader of NetBSD/OpenBSD mailing lists. This message is not at all atypical of what one would routinely see on these lists from regular contributors. Perhaps John Carmack spent some time reading the OpenBSD mailing lists before posting. It just shows that in more popular projects, or whatever the projects the comment's author is accustomed to, the expectations placed on contributors can be lower.
I am actually subscribed to -tech, and an OpenBSD user for around 15 years.
Usually when mailing list messages or PR comments are posted to HN, they are about something very negative, either the attitude of the author (random examples - mails by Linus, mails by Poettering, and sometimes also mails by Theo) or the content of the request it replies to. And the OpenBSD community does have something of a negative reputation for harsh communication.
Just wanted to emphasize it, since seeing good examples is much more educational, IMHO, than seeing only negative ones.
An interesting contrast from the Torvalds approach. I seriously think this is a huge part of why John Carmack is so popular. He's always kind to people, and very keen on sharing useful information.
My favorite developer contributing to my favorite operating system! This is like Christmas. I’ve been contributing $ to OpenBSD since the 3.x release days, and reading John Carmack’s “.plan” file since Doom 2, when they ran Digital Unix!
Did you know: you have John Carmack to thank for first porting X11 to OSX? Thank you John!!
> Did you know: you have John Carmack to thank for first porting X11 to OSX? Thank you John!!
Is there anything written about the background to this? Wikipedia doesn't talk about him ever working for Apple (although maybe that's just not mentioned).
I discovered it by accident from a source file header many years ago. “That can’t be THE John Carmack?!” I said to myself. It is! There are many web citations, too.
I don't think Carmack ever worked for Apple. The XQuartz release notes say Carmack ported XFree86 (X server for x86 machines) to OS X; my guess would be that he did a limited port to get what he needed to run the Doom/Quake engine?
XFree86 wasn't x86 only, but the name does hint at how it did target PCs starting in a time when the norm for X would have been to run commercial Unix on non-PC hardware.
yep, was going to say that this is probably the real source behind the mystery. Early OS X is basically NeXT when Jobs came back to Apple. Doom was developed on NeXT, so id Software and Carmack already had a history of using BSD. They also released Quake for Linux around 1996, which may have used X11 (I cannot recall, maybe it just depended on 3dfx/nvidia without X11 at the time... it's been so long)
Yeah, but the "normal" BSDs just ship an X server out of the box; no need to port anything. (It remains curious to me that Apple bothered writing Quartz from the ground up, actually; seems like it would've been easier to build on top of X.)
Building on X gets you very little other than compatibility with the X ecosystem, something that had little to no value to Apple; they certainly didn't want to promote OS X as some kind of X server in place of writing quality Mac apps.
Note that with current X, HiDPI, multi-monitor, and a tear-free experience still have problems, and they are handled via a morass of legacy extensions of varying design quality (and this is aside from the drivers issue). When OS X was started, none of these existed except maybe DBE and a nascent RANDR (there was no COMPOSITE, no RENDER, and RANDR was very primitive). I think there was the Xinerama crap that no one uses anymore because it is terrible.
So they would have had to develop all of that, and for what benefit? What does X get you out of the box? A drawing model that was outdated even by 1995 standards. And network transparency? Every time these discussions come up, somebody brings this up, and I wonder what drugs they are on. The love some have for the X network model has always baffled me, because it is terrible: it has virtually no practical usability for modern drawing models and compositing, and worst of all, it isn't robust. Lose your TCP connection, goodbye session! X hits #1 and #2 of the "Fallacies of distributed computing" pretty hard.
RDP, VNC, SPICE are what we use today, for very good reasons.
By writing Quartz from the ground up, Apple accomplished "every frame is perfect, by design" in the year 2000, which we in the free world are only getting now by switching to Wayland.
I share your excitement! I've never contributed to OpenBSD but have been running and following the distro for ages. I have a few mugs and t-shirts and support when I can. I'm hoping this may bring some fans of his over to the distro who wouldn't have run into (or contributed to) it before.
I tried many source contributions to OpenBSD when I was in school, but only one was accepted: a fix for a “house purchase overflow bug” in the command-line Monopoly game from the BSD games collection, lol :)
I stopped hacking on it some years ago, but always contribute $$ when I’m well employed (currently not, sigh)
I’ve yet to work at a $JOB with anyone else who wishes to use OpenBSD, even in the places it is most appropriate. My most recent employer had someone underqualified as the devops lead, so we had to use pfSense, an endless source of frustrating issues that couldn’t be addressed, but that I knew exactly how to address with OpenBSD. Such is life. The most experienced people seem to have the least control, a common complaint in all industries, I’m sure.
I always encourage everyone to use many different Unixes; it provides a much deeper understanding of everything you use and a much better perspective on what a distribution is and how to select the correct one, even if you end up being forced into Linux anyway.
But just like how I always end up using Python and JavaScript and Ubuntu Linux everywhere: unless you’re in charge, it’s much like school, and the whole team has to succumb to the lowest common denominator, the lowest often being the boss or lead, who does very little hands-on work but feels that deciding which Linux and which language to use is their best contribution, lol
> My most recent employer had someone underqualified as the devops lead, so we had to use pfSense, an endless source of frustrating issues that couldn’t be addressed, but that I knew exactly how to address with OpenBSD [...]
I, for one, think your employer and the “DevOps lead” were advocating for the wrong technology. pfSense software is many things, to many people, but it’s not written for a DevOps environment.
And you can tell your ex-employer that i said so. /s
It was for managing multi-site WAN, P2P radio, satellite, and VPN links, plus VPN endpoints for employees. I can’t help that IT infrastructure departments are erroneously adopting the term “devops” for themselves :(
Anyway, best wishes Jim. Shucks, I ordered the first- and second-gen PcEngines and Winston CM11a radios from netgate.com long ago, and many since. Thank you for running a proper store!
Is there any more context to OP's post? I guess someone just noticed it on the mailing list, but it's not clear whether John Carmack does this with all code everywhere in the universe. And then probably still has some interesting coffee-break conversations...
On a different note: I am something of an Ubuntu fan and try to be more involved there. But then again, the xBSDs, where x = {Free, Open, Net, ...}, are one of those things that always prods the back of your mind. It's a bit like topics in mathematics that always catch my attention, only to be forsaken for more mundane tasks. Then again, mathematics is a bit like the art of making everything mundane.
>Is there any more context to OP's post? I guess someone just noticed it on the mailing list, but it's not clear whether John Carmack does this with all code everywhere in the universe.
I remember a story about Carmack going off the grid with a laptop and OpenBSD for a couple of weeks, and him being delighted about their documentation and the ability to build things without access to the web.
Carmack is a hero in the C community, and OpenBSD is also all about C and classic Unix, so he might just feel at home there.
I don't see any additional context, but this does appear to be a new thing. There are two commits credited to Carmack in the OpenBSD source, dated 5/16/2020 and 5/17/2020.
To me this kind of stuff is fan junk news. I get that even hackers need celebrities, but the front page of HN is a little extreme for a thing like this without context. If there were more context, like John C having rewritten half of BSD, then we'd be talking; but this is very celebrity-grocery-list to me.
I get what you're saying, but I think it's great for people to see that an old-school hard core programmer puts effort into improving documentation and names. I've worked with a lot of people who think programmers can and should be too good to invest time and thought into changes like that.
Also to see that he would be so tactful about asking if the changes are welcome, even though he is who he is and knows it. I like to think there are people in their twenties or teens or even younger browsing HN and subconsciously absorbing the fact that this is how a programming god behaves.
So many celebrities are celebrities for banal, superficial reasons.
Calling this a "very celebrity grocery list" story is a disservice.
Carmack's one of the most accomplished programmers in history in terms of innovation, skill, and longevity. His words and opinions also carry a fair bit of weight, even today. His contributions and technology choices matter in a way that a celebrity's grocery list does not.
If there's an injustice here, it's the fact that there are many Carmack-level coders out there, toiling away in relative obscurity, who don't have their contributions trumpeted in this way.
I'd bet everyone on this thread would grab their cheerleading squad and their pom-poms if John Carmack visited them in person. They wouldn't be able to contain themselves and would burst into tears of joy.
'Extreme' may be an understatement here, but it seems like HNers love seeing the second coming of King Midas, where everything he touches turns to gold.
I’m sorry you feel this way. Maybe if you tried contributing more positively to FOSS in some way and received some accolades yourself, you would not feel so spiteful about the attention Carmack receives.
I think you will find Carmack has contributed more to the hacker ecosystem than anyone else if you take even a very shallow peek.
I would suggest starting by reading the source code to Doom; it is surprisingly approachable and educational in several fields of math, geometry, data structures, and computer science. There are even books to help you understand it. Heck, there are a dozen or more books about id Software and John Carmack, so there are many authors and readers who feel very differently from you; maybe you could consider why. There are many things in our field that would have happened eventually no matter who did them, but much of Carmack’s work really would not have happened without him. He really is in a league of his own.
It’s really amazing to me that you feel so negatively toward people who have such affection for one of the hacker community’s greatest contributors. I can only guess that you might be very unhappy; for that I’m very sorry, and I hope it turns around for you very soon.
I would suggest just installing OpenBSD and trying it (maybe in a VM?); one of the selling points is the simplicity of the setup process. It can be a one-evening project.
But that's the thing. The real benefits from OpenBSD come when it's the only system you need. Who cares how simple and elegant the guest system is, if you need to maintain a linux installation to host it?
A project on the scale of OpenBSD can't possibly support all the hardware that's out there. And the system is under active development, so there are constant regressions in hardware support. That's fine for John Carmack: he can fire up ed and fix a kernel panic, that's how he winds down after work. But the vision that attracts me to OpenBSD is a box in a closet that Google doesn't have the key to, which sits there for 6 months receiving my email and serving my homepage, takes 10 minutes to update, then sits there for the next 6 months; maybe once every 5 years I'd need to read a manpage and port my smtpd.conf to a new syntax. Hardware regressions are what stop that from happening.
The OpenBSD developers are getting what they want, and good luck to them. The shame is that a little bit of coordination, where they chose one model of ThinkPad every other year and committed to supporting it for a decade, would make the system so much more useful for so many other people.
> But that's the thing. The real benefits from OpenBSD come when it's the only system you need. Who cares how simple and elegant the guest system is, if you need to maintain a linux installation to host it?
I don't think he's suggesting that as the long term way to use it but as an introduction to the system.
Yeah it generally works pretty happily with most hardware in modern X/T Thinkpads. Happily running it on my X270. It's missing the 802.11ac and Bluetooth support but I'll live. I bought a BT-W2 so that my Bluetooth headphones will work and OpenBSD handles it swimmingly.
There's very little that I can't do with it versus my Linux workstation, at least for my needs.
The Arch wiki is great. Reminds me of the old Gentoo wiki from way back when Gentoo was the new hotness. I'm not even an Arch user but documentation as to how things like Window Managers work is fantastic.
In my experience, the first step in trying out a new system is installing it, configuring everything (from network connectivity to various peripherals), installing your favorite software and so on.
This gives a good idea of how to work with the system, how good the documentation is (as another poster already mentioned), and how simple and consistent everything is.
Installing in a VM is not the best differentiator, since on many OSes this experience has been streamlined and everything usually works out of the box, but the customization part is still a good test.
Of course, in the end you might arrive at a setup similar to other OSes: running a browser in a window manager plus a bunch of other common tools (IDEs, editors, productivity apps). But the way you get there, and the number of problems you encounter along the way, is a major part of using an OS.
I think you can get a good impression in a few hours and see how it's different. (For example, I had a similar experience with NixOS: in a few hours you see how radically different it is and learn whether it fits you.)
I recently installed FreeBSD on my laptop. It has been a pretty good experience so far. The main reason I went with FreeBSD over OpenBSD is that, of the BSDs, NVIDIA only supports FreeBSD. FreeBSD is also pretty simple (at least compared to Linux), so it wins there as well.
Well, it is a bit of a grey zone, but I have gotten Vulkan to work, and I think NVENC works as well. The solution is rather hacky, though: it extracts libraries from the Linux driver and loads them when you run programs. A look at the code makes it seem like CUDA is getting loaded as well, so it may work.
Nvshim is an amazing project, but it is a far cry from official Nvidia support for CUDA/NVENC/NVDEC.
(All Nvidia would have to do is build another binary; it's still chatting with the same driver blob internally so it's not really clear to me why they don't bother compiling it for FreeBSD.)
They could even do it with less churn, because with https://github.com/freebsd/freebsd/blob/master/sys/conf/opti... FreeBSD easily achieves binary compatibility with earlier versions. So there's no need to mess around with DKMS or their own thing. Or at least less so.
Ahem. Tell that to all the binary drivers for network cards and RAID controllers which relied on that functionality, and which I happened to use not all that long ago.
GP's link is to the COMPAT_FREEBSD binary compatibility for earlier versions of FreeBSD, oddly. I don't think it (or linux emulation) has anything to do with Nvidia's needs re: CUDA/NVENC.
It means you could have a module from some vendor, or something similar to the NVIDIA installer for Linux (which is technically nothing more than a batch-compiled makefile), spitting out a kernel module built for version 1.2.3 while you are running 2.4.5. It worked. I don't know why it shouldn't now.
Edit: I don't have any FreeBSD systems in use right now, so I don't know if they changed the defaults. What I remember is that compatibility with at least the last two versions was compiled into GENERIC, so you didn't even have to compile your own kernel when that backwards range was enough.
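For anyone unfamiliar, those compatibility knobs are plain kernel config options. A rough sketch of what the relevant GENERIC lines look like; the exact COMPAT_FREEBSD* set varies by release and architecture, so treat this as illustrative rather than any specific release's config:

    # Illustrative excerpt in the style of FreeBSD's GENERIC config
    options     COMPAT_FREEBSD32    # Compatible with 32-bit binaries
    options     COMPAT_FREEBSD10    # Compatible with FreeBSD 10 binaries
    options     COMPAT_FREEBSD11    # Compatible with FreeBSD 11 binaries

With those compiled in, a binary built for an older release keeps running on a newer kernel without recompilation, which is the backwards range being described above.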
Similar thing in NetBSD, a loooong time ago; even their own driver for the 3Com 905 relied on the equivalent of that functionality. That caught me by surprise, because I used NetBSD like Gentoo, found some COMPAT_foobarjurassicBSD option, wondered why I should use that at all, and had no working net. Fun! :-)
That's not how the nvidia binary blob works in FreeBSD. Instead, there is some compiled-from-source-code OS portability glue, which is compiled against the target version of the FreeBSD kernel. Then the binary blob links the portability glue.
I believe something similar is done in Linux. The difference in experience may come from FreeBSD maintaining a stable KBI over a release version (e.g., stable/12), while Linux aggressively does not maintain any KBI stability; hence the need for DKMS/akmods.
Sigh. Been typing too fast as usual. I know that's not the way the NVIDIA thing works; instead it does the glue thing which interfaces with the binary blob, like you mentioned. But it could (instead of that)! That's what I meant to say.
There was actually some period of time when the Tegra division (I think) contributed to Nouveau a bit (!) but that didn't really grow into anything good.
It depends on your use case, I guess. If my intent were to use FreeBSD as my daily-driver desktop/laptop OS, I'd avoid NVIDIA like the plague, because without CUDA their unique selling point does not exist, but the hassle of some "switcheroo black magic" possibly persists.
Which I wouldn't have with an Intel GPU, along with less power draw. Even AMD is often supported in a better way, be it open-sourced or relying on their binary blob.
"Stable" meaning it changes, in this context, right? FreeBSD changes ABI compatibility with every version.
If you mean that they have old software (and old == stable, if you think Debian), then that's because of BSD licensing vs. the GPL. The GPL can take from BSD, but the other way around is not possible.
For me, stability means uptime and infrequency of application crashes. Unfortunately, OpenBSD didn't meet that requirement on my (standard desktop) machine; it would frequently kernel panic on resume.
On the other hand, I found that OpenBSD has absolutely outstanding reliability and stability on hardware like my ThinkPad X220.
> "Stable" meaning it changes, in this context, right? FreeBSD changes ABI compatibility with every version.
Every major version, right? From becoming -Stable to ending its support lifecycle, I'd think a FreeBSD version and its ABIs are supported as long as many Linux distro LTS releases.
The major difference being that a statically compiled binary from Linux 2.4 will work on Linux 5.5.
The only difference will be any shared libraries and potentially the libc. But speaking from a purely technical perspective, the libc is not Linux.
Apples and oranges. FreeBSD provides the userland too, so an ABI change there is not going to break all the things it might in a Linux distro if the kernel broke its ABI.
I know that they change (well, reserve the right to change) syscalls with every major version. But that's not a big deal, since only the base system is supposed to use syscalls directly; everything else goes through libc.
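A minimal sketch of that distinction, assuming a Unix-like system with the syscall(2) wrapper; getpid is just an arbitrary stand-in call. The libc path is the one everything outside the base system is expected to use:

    /* libc_vs_syscall.c: contrast a libc wrapper with a raw syscall.
     * The libc call survives kernel syscall-table changes; the raw
     * SYS_getpid number is part of the kernel ABI. */
    #include <sys/syscall.h>    /* SYS_getpid */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Portable: libc translates this into whatever the kernel wants. */
        printf("via libc:    %ld\n", (long)getpid());

        /* Kernel-ABI-dependent: issues the raw syscall number directly. */
        printf("via syscall: %ld\n", (long)syscall(SYS_getpid));

        return 0;
    }

Both lines print the same PID today; the point is that only the second would break if the kernel renumbered or changed the syscall.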
More of what I learned using Linux 0.99 applies to present-day OpenBSD than present-day Linux, and I can’t think of any breaking changes specific to the Linux side that weren’t clear regressions (except apt and dpkg. Those rock.)
Seems a little dismissive of BSD systems, given how widely deployed they are in comparison to Plan 9 or ZX Spectrums. As far as I understand, Netflix uses FreeBSD fairly extensively, and there are powerful tools like pfSense built on BSDs too, which are also used fairly extensively as far as I know.
I think there's a difference between using BSD to build an appliance and running BSD as a day-to-day system.
pfSense is a wonderful firewall appliance. FreeNAS is a wonderful storage appliance. If you need pf or native ZFS, a BSD is probably your best way to get it.
If you want to run a random headless server to mess around with, you are probably going to have a harder time with any of the BSDs than you would with even bleeding-edge Linux distros. If you want to run a graphical desktop with 3D acceleration and HD video you will almost certainly have a harder time than any Linux user.
---
There are, of course, people who are happily using BSD-based desktops right now and I am certainly not denying them anything, but even they'd have to admit they would have been able to get most of the same experience with a lot less effort on a modern Linux.
> If you want to run a random headless server to mess around with, you are probably going to have a harder time with any of the BSDs than you would with even bleeding-edge Linux distros
Actually, I became a BSD user after just trying OpenBSD out of curiosity on my headless server after years of running Linux. The experience was so damn good.
> There are, of course, people who are happily using BSD-based desktops right now and I am certainly not denying them anything, but even they'd have to admit they would have been able to get most of the same experience with a lot less effort on a modern Linux.
It's the opposite for me. Modern Linux distros are a pain; I can't find one I really like. Whenever I ask people to recommend a distro that feels like OpenBSD, I get recommendations from people who obviously haven't used OpenBSD in two decades. Maybe they tried FreeBSD two decades ago, and then figured that some old Linux distro was kinda similar..
Frankly, I don't think the Linux way of duct-taping a bunch of third-party packages together will ever produce a distro that comes close to being as pleasant as OpenBSD. At the very least, they'd have to heavily patch things and diverge from upstream.
Just because OS X has a very outdated BSD-derived userland does not make it a BSD. The OS X kernel is a Mach microkernel and shares no code with the BSD projects. This is a far cry from the *BSDs, which can directly trace all of their code and history back to 4.4BSD.
I didn't understand what that syntax meant in the code John was talking about, so I figured I would share my findings for other C newbies like myself. (If I am incorrect, please tell me!)
This:
int (*alloc_attr)(void *c, int fg, int bg, int flags, long *attrp);
means it's declaring a function pointer: a variable that can point to any function that returns int and has a matching parameter list.
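A minimal self-contained sketch of the pattern, in case it helps; the struct and function names below are made up for illustration and are not the actual OpenBSD code:

    /* funcptr.c: the function-pointer-in-a-struct pattern from above.
     * All names here are hypothetical, not from the OpenBSD tree. */
    #include <stdio.h>

    struct display_ops {
        /* Same shape as the declaration being discussed: a pointer to
         * a function returning int with this exact parameter list. */
        int (*alloc_attr)(void *c, int fg, int bg, int flags, long *attrp);
    };

    /* A concrete function whose signature matches the pointer type. */
    static int
    my_alloc_attr(void *c, int fg, int bg, int flags, long *attrp)
    {
        (void)c;
        (void)flags;
        *attrp = (long)((fg << 8) | bg);    /* pack fg/bg into one value */
        return 0;
    }

    int
    main(void)
    {
        struct display_ops ops = { .alloc_attr = my_alloc_attr };
        long attr;

        /* The caller invokes through the pointer without knowing which
         * implementation it got; this is how driver frameworks use it. */
        if (ops.alloc_attr(NULL, 7, 0, 0, &attr) == 0)
            printf("attr = 0x%lx\n", attr);
        return 0;
    }

This is why such structs work as a poor man's vtable: each driver fills in its own implementations and the framework just calls through the pointers.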
I have read many posts and articles by John Carmack, as well as viewed or attended many of his talks. In short, I have nothing but respect for him as a developer, thinker and human being in general. Consequently, if he's contributing to OpenBSD, that can only be a good thing in my opinion.
I’ve always liked Carmack’s walking the walk. It’s common on Internet forums (like this one) to talk about how some code is shit or whatever. Carmack submits patches and sees them through. An ideal to aspire to.
His Twitter is a great replacement for his planfiles. One of the few I follow.
> I can’t vouch for the actual algorithms, but the software engineering seems fine.
I also looked at the code a bit, and it seemed like typical bad natural-sciences / electrical-engineering grad student code that could just as easily have been written in MATLAB or Python (or, as Carmack suggested, Fortran).
I guess I’m in good company in thinking it was ok-ish, so now I’ll ask.
Why are people so upset with this code base / the choice of C++?
Politics. “Anything that supports a position I don’t like, should be attacked with full force. Anything less than perfect must disqualify the entire position.”
I think there are two pathologies. The first is that a lot of programmers have very specific and utterly unshakable views on what constitutes good or acceptable code. These are rooted in their experiences but often have no foundation in evidence or science (because there is very little good research on this).
The second issue is politics: there are a lot of people who hate the idea that data science, or science generally, can provide inputs to policy decisions (in business and government). I see modern political groups as:
- traditionalists : protecting what they have, policy aimed at preserving status quo
- capitalists : untrammeled use of capital power to maximize capital return
- idealists : interested in higher causes/morality (as they see it) and unconcerned about other impacts or concerns
- technocrats : concerned about taking the highest utility actions (measuring utility in various often unconnected ways).
Capitalists have held the whip hand for 50 years; idealists (trade unionists, safety campaigners, etc.) have been in full-on retreat; traditionalists have managed to align with capitalists (being co-opted with nice things like country estates). Technocrats have swayed in and out of influence: the crisis of 2009 really smashed people's faith in economists, for example, while the first Gulf War brought logistics and management science to the fore... but the war on terror has smashed that up now.
Data scientists are mostly technocrats but often are compromised/partially capitalists (because... money). In every case that I have seen where capitalists are thwarted by data science they react with fury - because at the moment they are very unused to being thwarted.
Recently, though, idealists have been grabbing more of the mic, and they have a large contingent of data scientists of their own, i.e. university scientists. This is why climate-change modellers are so hated!
"He co-founded the video game company id Software and was the lead programmer of its games Commander Keen, Wolfenstein 3D, Doom and Quake and their sequels."
Being now a quarter of the way through Doom Eternal, I wonder how canon that is. It seems Doomguy is the direct descendant of a human-but-not warrior race? So B.J. Blazkowicz being human (maybe?) doesn't quite fit. They're doing such an awesome job expanding the Doom mythology that it'd be tight to watch them go back and explore Keen more, silly a game though it may have been.
Ah, but there is so much value in fresh eyes. Miod, who has been involved with OpenBSD development for 20 years and has surely seen those function names a thousand times, provided the historical reason (they thought they might need to allocate memory) but agreed they never ended up using that.
The fresh eyes don't have the history and are thus able to point things out that the older eyes inherently accept.
As someone who is not following closely: did he ever mention why he is looking into alternative OSes? And why OpenBSD instead of, say, NetBSD or FreeBSD? I assume he already has plenty of experience with Linux.
While I can't say for sure: during a hole-up-for-a-week-to-learn retreat a couple of years ago, he settled on neural networks and OpenBSD. He commented favorably on its opinionatedness. I suppose he liked the system and decided to contribute.
I feel this quote of his sums up the initial draw:
"Despite not having actually used it, I have always been fond of the idea of OpenBSD — a relatively minimal and opinionated system with a cohesive vision and an emphasis on quality and craftsmanship. Linux is a lot of things, but cohesive isn’t one of them."
He wrote a while back about why he chose OpenBSD to get more familiar with Unix during a coding retreat. tl;dr: the offline documentation and the quality of the source code were major draws when working with slow or no internet.
I didn't check the link, but keep in mind he worked for Oculus, which was acquired by Facebook. I'm not sure it was a requirement from Facebook, but it was probably some choice made around that.
With Linux, you get fewer officially supported games than on the Mac, but at least it's on people's radar. Few computer users know BSD is a thing, though most have heard of someone using Linux. BSD is strictly a techie/advanced-hobbyist system.
I think I’d swap “NetBSD” and “OpenBSD” in your public-consciousness hierarchy.
I’m a (very happy) NetBSD user of nearly 20 years. It’s excellent; I stick with it for reasons. Even I, though (with humour), generally think that OpenBSD gets the general audience's attention for Theo, h4x0ring, and bomb-proof security, while NetBSD is there in case you inherit a VAX. ;)
Here is your daily reminder to read "Masters of Doom", a book about the early days at id Software. It's an amazing look into a really cool time in gaming and software as a whole.
It's a great book, mostly about John Carmack and the guys at id!
This thread reminded me that Carmack has been a free software advocate in the past, and my citation is, quite amusingly, that book:
> It was February 8, 1998, and Carmack was about to put his brain to the test: counting cards in blackjack. This had become something of a new fascination of his. “Having a reasonable grounding in statistics and probability and no belief in luck, fate, karma, or god(s), the only casino game that interests me is blackjack,” he wrote in a .plan file. “Playing blackjack properly is a test of personal discipline. It takes a small amount of skill to know the right plays and count the cards, but the hard part is making yourself consistantly [sic] behave like a robot, rather than succumbing to your ‘gut instincts.’ ” To refine his skills before the trip, Carmack applied his usual learning approach: consuming a few books on the subject and composing a computer program, in this case one that simulated the statistics of blackjack dealt cards.
> His research proved successful, netting him twenty thousand dollars, which he donated to the Free Software Foundation, an organization of like-minded believers in the Hacker Ethic.
On his recent appearance on Joe Rogan, he talks about free software and about opening up older engine source when they released new stuff. I think he might have mentioned icculus too, but I don’t recall.
Well yes, it wasn't the part that interested me about that paragraph, but if you want the whole thing...
> His research proved successful, netting him twenty thousand dollars, which he donated to the Free Software Foundation, an organization of like-minded believers in the Hacker Ethic. “Its [sic] not like I’m trying to make a living at [blackjack],” Carmack wrote online after his trip, “so the chance of getting kicked out doesn’t bother me too much.” It didn’t take long for him to find out just how he’d feel. On the next trip, Carmack was approached by three men in dark suits who said, “We’d appreciate if you’d play any other game than blackjack.”
> The others at the table watched in disbelief. “Why are they doing this to you?” a woman asked.
> “They think that I’m counting cards,” Carmack said.
> “They think you can remember all those different cards?”
> “Yeah,” Carmack replied, “something like that.”
> “Well, what do you do?”
> “I’m a computer programmer,” he said, as he was escorted out the door.
Wouldn't surprise me. I've read some stuff on card counters and seen a few videos; it seems like there's a whole other side of successful card counting beyond the technical skill, where you have to read the social situation, since any pit boss worth their salt can spot a card counter a mile away. Which I'm guessing Carmack didn't really care to do, since he was just on an intellectual kick.
Good card counters always work as a team; you can't get far solo, as it's pretty easy to tell when you're betting 20 dollars one hand and 500 another.
You usually have a few smaller players just making fixed bets who call over a high roller to do the big betting when the count is high, who then just leaves when the count goes low.
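For anyone wondering what "the count" is: a toy sketch of the Hi-Lo system, which is what most intro material teaches. The +4 signaling threshold below is made up for illustration; real teams also convert to a "true count" by dividing by the decks remaining:

    /* hilo.c: toy Hi-Lo running count. 2-6 count +1, 7-9 count 0,
     * tens and aces count -1; a high count favors the player. */
    #include <stdio.h>

    /* rank: 2..10 at face value, 11 for an ace (J/Q/K are 10) */
    static int
    hilo_value(int rank)
    {
        if (rank >= 2 && rank <= 6)
            return 1;
        if (rank >= 7 && rank <= 9)
            return 0;
        return -1;    /* tens, faces, aces */
    }

    int
    main(void)
    {
        int seen[] = { 2, 5, 10, 3, 11, 6, 9, 10, 4 };    /* cards dealt */
        int count = 0;
        int i;

        for (i = 0; i < (int)(sizeof(seen) / sizeof(seen[0])); i++)
            count += hilo_value(seen[i]);

        printf("running count: %d\n", count);
        /* The spotters bet flat; the big player is signaled in only
         * when the count is high (this threshold is arbitrary). */
        puts(count >= 4 ? "signal the big player" : "keep flat betting");
        return 0;
    }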
Rumor has it that the security cameras now have software that tallies the count automatically and flags people who bet up when the count is up. Coupled with player identification (facial recognition / player rewards cards) and correlating player times, casinos are starting to automate this. All rumor, of course, but it sounds plausible if you're willing to throw a few million at the problem.
Can confirm that the facial-recognition cameras and correlating faces to blackjack play are not rumor. To abuse a joke: "There are more cameras in a casino than in an East Village camera close-out store." :-)
Back in 2005 I was approached by one of the casino groups that was looking for a director of technology to manage the team that did "camera analytics". Seemed like a pretty neat job technically.
So just go to Europe? I'm pretty sure the GDPR prevents this kind of recognition.
So I looked up the privacy policy of Holland Casino, and there is no mention of facial recognition, only of CCTV for security purposes (under which I wouldn't expect prevention of adverse losses to fall) and a seven-day storage period for camera footage. They are not explicit, but they suggest they do not use camera data for profiling purposes.
More like you need to leave before they notice you and make the decision to ban you (once they make that decision it doesn't matter if you've left already or not; you ain't coming back either way).
No, but casinos are private businesses with the right to refuse service to anyone, and they tend to exercise that right against people they believe to be counting cards. Card counters are not a protected class.
If I remember correctly, that is true in Nevada but not in Atlantic City. There they have to let you play even if they suspect you of counting cards.
Of course that just means they will employ other means to prevent you from counting cards (larger shoes, continuous shuffling, etc).
As we're here, can someone explain the difference between counting cards and being actually good at the game of blackjack?
I've always assumed the two are really the same thing. If you get good at the game, that implies you're using an intuition for the probabilities of certain events based on the history of the cards.
I suppose the casinos don't have to justify this, or have any moral quandaries about it.
It's not illegal. But the casino is private property, and if you don't follow their instructions to stop counting and they ask you to leave, now it's trespassing.
If you had said that not many poker rooms ran No-Limit Hold'em in 1998, you would have been right. But poker rooms and poker (Limit Hold'em and Seven-Card Stud) were easily accessible in casinos in 1998, at least in Las Vegas.
No-Limit Hold'em is what took off in the early 2000s with TV/WSOP popularity, and it quickly became basically the only game in town by the mid-2000s.
I randomly selected this book off a shelf in my high school library when I had to pick a book to do a report on for a bullshit elective. I procrastinated as usual, so on the Sunday before it was due, I figured I had less than 24 hours and might as well look up the Cliff Notes and barf 1000 lines onto paper. For whatever reason, I decided to at least try reading the first chapter, and I couldn't stop reading. I went from planning to do the bare minimum to trying way too hard to do a good job and somehow convince everyone it was a good book. Now, working as a developer, I credit that book with getting me into game development and programming in general. That book was life-changing for me.
Counterpoint (just to manage expectations - you should read it for sure!): I was hyped to read the book, but ended up finding it a bit tedious. The technical parts were not technical enough (sometimes flat-out wrong, or maybe, naive sounding?) and the rock star parts got repetitive (even a lot of the same phrases used over and over). Still great fun, but not as good as I'd hoped.
I'd recommend Fabien Sanglard's Wolfenstein and Doom books (https://fabiensanglard.net/gebbwolf3d/). These go into WAY TOO MUCH technical detail (in a great way!) but don't cover any of the hilarious rock-star antics.
My secret wish is that someone would rewrite Masters of Doom to be 25% more Game Engine Black Book, and fix up some of the prose!
>I'd recommend Fabien Sanglard's Wolfenstein and Doom books (https://fabiensanglard.net/gebbwolf3d/). These go into WAY TOO MUCH technical detail (in a great way!) but don't cover any of the hilarious rock-star antics.
The Doom book does mention Romero getting locked in his office one day and Carmack cutting down the door with a battleaxe, but I didn't mind knowing that detail. (There's a picture, too.)
I hope they bring in some of the people that made Halt and Catch Fire work so well. The Romero id Software years were filled with just enough drama and chaos to make it work.
HCF was great. I can't recommend it highly enough to anyone who hasn't seen it yet, if you have any interest in the history of the computer industry at all. And I say that not because of how historically accurate it is, or isn't. It's just plain good drama. I just finished watching the entire series for the second time, and I wouldn't be shocked if I wind up watching it all a third time at some point in my life.
The caveat being to stick through it for the first season, which is a little rough around the edges and overwrought. From there, it really takes off and becomes fantastic.
That's a fair point. But I wouldn't want to suggest to anyone that Season 1 is bad. There's a lot of good stuff there. But the show does get progressively better as it moves along.
I also often long for that magical feeling of everything being new and exciting again (to you). Like the computer was waiting for you to just press the right sequence of buttons...
I obviously can't guarantee getting that feeling back in full, but I've been able to experience remnants of it by just poking around with completely unfamiliar tools/languages/servers/systems that have little to no use in business. Maybe even obsolete.
Basically, look for anything that you wouldn't be tempted to turn into a job or use to make any real money. Just enjoy it for what it is. Write hacky code. Press all the wrong buttons. Make it do things it was never meant to do. Do stupid things with it just for the sake of doing stupid things. Have fun just for the sake of it being fun.
I really enjoyed the book, especially the part between Softdisk and Quake when the two Johns combined to form a great team and productivity was super high.
That's neat to see. This may not be a big story but as a fan of OpenBSD who doesn't currently have time to read the CVS log I appreciate it.
OpenBSD source is one of the cleanest I've had the pleasure to work with. You (generally) won't be scoffed at for submitting a semantic change like this, fixing typos, or the like. It certainly isn't perfect but the bar feels higher than elsewhere. Clean code and as importantly the documentation kept in sync.
First I have to have a stable OS. Then I have to make kernel updates to support the kind of process management I want. Then I have to fix the other kernel bugs affecting my update. Then I have to modify the network drivers. Then I have to ... 5 years later what was I even working on in the first place?
Oculus wouldn't have had that buzz without Carmack pushing the original Rift DIY kit before their Kickstarter. He was already commenting on Palmer's DIY creations in the community where the ideas for the Rift were born.
It was more or less a community project, because without community input in those forum threads Palmer wouldn't even have considered glasses; he was deciding between VR glasses and a 3D monitor in his first posts there and got nudged toward glasses by a member, lol. He just never stopped absorbing information and kept iterating on his franken-HMDs and prototypes, and one day, when he had already reached a good level, Carmack showed up.
Most people who actually work in AI know that AGI being possible is still an IF, and IF it is possible, it is centuries or even further away (basically some indeterminate time frame, because we don't even know if we're on the right path). The techniques we're using today are likely not going to be applicable to AGI. John Carmack is very smart, but I wouldn't place him on the same pedestal as any of our great historical polymaths, and IMO if a single person is going to make any progress on AGI, it's going to be someone like Leibniz or Euler.
Not to be judgmental, and to support your argument, but two points. First: I don't think we know how smart he is, and I don't think it's that important. Euler was, to me, a balanced person, as opposed to an extreme person like Grothendieck. So, to each their own.
Second point: I think almost everyone being promoted for their involvement in AI or AGI is probably irrelevant in the longer run. Self-driving cars are at this point something of a social problem rather than a clear mathematical problem. So whether we solve the current list of business-related AI problems is probably immaterial for mathematics, though important for public opinion. I haven't read or seen any projects that are even remotely close to AGI, unless someone is hiding something.
Whether some person X is working on AGI, I guess, matters only in the social or business sense.
Well, what we do know is that current AI methods are very task-specific, and there is nothing that generalizes well to many tasks. If it were happening in the next century, we would at least have some idea of how to start generalizing to many tasks with a single model, or something comparable to that. We had the Analytical Engine in the 1830s, and we didn't see the Turing machine or the lambda calculus until the 1930s.
It is extremely hard (read: impossible) to predict what is going to evolve exponentially or asymptotically in the future. Look at computing and aerospace, for example. Take the state of those two domains and ask someone in 1900 where they think we would be in 2000; I am pretty sure they would have been quite far off.
Working at an AI lab myself, while I agree with your statement about the state of AI today, I don't know any of my colleagues who would be confident saying it's 100+ years away.
> If it were happening in the next century, we would at least have some idea
Just like at the beginning of the 20th century we knew ~0 about quantum physics and less than 50 years later A-bombs were getting dropped.
Or how, at the same time no plane could achieve sustained/controlled flight (and the goal was merely to stay above ground) and less than 70 years later not only were we leaving Earth's atmosphere: a man was walking on the moon.
We have no idea how long it will take to achieve AGI, let's leave it at that.
> IF it is possible, it is centuries or even further away
Just out of curiosity, would you have been willing to bet that something most people will agree on being AGI will not be developed before 2221 at the earliest?
Because given the history of science and technology since 1821, that seems like a statement that's just drawn out of thin air. "I don't know" would make sense. "Definitely not in the next 200 years" does not make much sense.
That's an interesting perspective: why do you believe that it will be an individual who will make a breakthrough, as opposed to a web of Carmack-like folks?
I actually don't believe an individual will make the breakthrough; that was more of an "if an individual did do it" statement. IMO it's actually going to be the result of a lot of methodical work by an army of researchers over a really long period of time.
I am not saying that he will solve AGI on his own (he did quit rocket science, after all), but you are greatly underestimating the impact he can have on the field. He almost single-handedly bootstrapped two multi-billion-dollar markets by making notable technology leaps and finishing things at the right time.
That must count for something. I would bet he cannot solve AGI, but he will surely leave something inspiring behind.
There is basically none. People on HN have this weird god-worship of John Carmack. He's a great engineer and businessman, but a top-tier theoretical researcher? He has less to offer than most PhD students in these fields. It's kind of insane the cult following he has.
This is really a great coincidence. I was talking to a friend of mine two days ago about code that we have read for pleasure. I said that the Doom source code was so surprising because the logic was so clean and elegant. One felt that the code did the most obvious thing to do, rather than using unnecessary cleverness.
It wasn't immediately obvious to me what this change was actually about (perhaps I skimmed the comments too quickly), but I found this reply somewhat illuminating:
Take care to observe how John Carmack went about it, though. He described the issue from his perspective, gave a clear example, didn't blame anyone, and asked before making any changes at all. Someone replied suggesting that the refactoring be performed separately from other changes. It's hard to give flak to someone taking a measured approach like this.
Is it possible Oculus has a big need for such an OS, and this is the cause of the contributions? Or is this Carmack being a hobbyist, simply making contributions for their own sake?
He's basically retired at this point and working on things he finds interesting (which was AGI as of a few months ago; there was a blog post where he went into more detail).
I thought it was funny that the first reply comes from someone whose email address was @id.au. (Pretty sure this is unrelated to Id Software but quite a coincidence!)