
And a Gunship sticker. After looking at some of these I wish there was an optional field people could add so others with overlapping interests could follow a blog/social media/etc.

Are SpaceWalk [1] or Moku Pona[2] what you're looking for?

1. https://tildegit.org/sloum/spacewalk

2. https://github.com/kensanata/moku-pona


In the past I've donated old hardware to OpenBSD [1] and would love to donate to them directly, but they aren't registered as a 501(c)(3) in the US, so I can't claim the deduction on my taxes (yes, I know, I am not 100% altruistic).

Instead I donate to FreeBSD and support OpenBSD in an ancillary way through OpenBSD Amsterdam [2]. Which, yes, is also not tax exempt, but does come with a nice OpenBSD VM.

1. https://www.openbsd.org/want.html

2. https://openbsd.amsterdam/


I haven't read it, but High Noon[1] comes up in recommendations about Sun Microsystems history.

1. https://archive.org/details/highnoon00kare


Great, thanks for the pointer! I see it was published in 1999, so I imagine it’ll be a good time-capsule read too, even if it predates the dot com bubble burst and the eventual Oracle acquisition, though maybe that’s where the “Larry Ellison lawnmower” talk fills in well.


5 years ago I blew out a knee running, and we ended up getting a Peloton (coincidentally at the end of Feb 2020) since the doctor said it would put less stress on my knees.

Used their service some, but I 3D printed a phone holder for the handlebars. Now instead of sitting in a chair watching videos and scrolling HN, I do it with an elevated heart rate for 30 minutes.

Sure it's not the "right" way to exercise, but I've lost weight (in combination with an improved diet), have more energy, and feel less guilty about screen time.


Applying the principle of least resistance to working out is the right way. The best workout will always be the one you can do consistently.

https://matt.might.net/articles/hacking-strength/


Years ago a coworker asked me if it was okay to take his multivitamin before going to bed.

I replied, "Taking them at all is the hurdle. Don't worry about not getting 5% of your multivitamin, because you're still getting the other 95%."

Starting something is the biggest hurdle.


This. Most people don't finish [definition of your race] not because they give up but because they don't try.


It is the right way to exercise. I think that the modern "if it is fun and pleasant then it does not count enough" mindset is what keeps people from exercising, from learning, etc.


Why isn't it the "right" way to do exercise? I do something similar with my "zone 2" cycling workouts, where I throw a movie on Netflix while I mindlessly pedal at a fixed power output. It's a great way to get in some exercise, especially in the Winter months when it's hard/unappealing to get outside.


I have a bad ankle/foot that is causing me trouble when I run so I switched to cycling. I never had knee problems when running but I now get knee pain from cycling...


It could be an issue with your position (seat too high or too low, cranks too long, or, if you are using clipless pedals, maybe you need a wider stance, ...). I would get a professional bike fit.


I've been reading The Innovators[1], which includes early computing history; coincidentally, I just finished the section on the Mother of All Demos yesterday.

1. https://en.wikipedia.org/wiki/The_Innovators_(book)


It's not a best practice, but for the last 10 years I've run my home server with a smaller, faster drive for the OS and a single larger disk for bulk storage that I choose using Backblaze Drive Stats. None have failed yet (fingers crossed). I really trust their methodology and it's an extremely valuable resource for me as a consumer.

My most recent drive is a WDC WUH722222ALE6L4 22TiB, and looking at its stats in this report (albeit only a few months of data), along with the overall trend for WDC, gives me peace of mind that it should be fine for the next few years until it's time for the cycle to repeat.


Take these stats with a grain of salt.

I am becoming more and more convinced that hard drive reliability is linked to the batch more than to the individual drive models themselves. Often you will read online of people experiencing multiple failures from drives purchased from the same batch.

I cannot prove this because I have no idea about Backblaze's procurement patterns, but I bought one of the better drives on this list (ST16000NM001G) and it failed within a year.

When it comes to hard drives, or storage more generally, a better approach is to protect yourself against downtime with software RAID and backups, and pray that if a drive does fail it does so within the warranty period.


>Often you will read online of people experiencing multiple failures from drives purchased from the same batch

I'll toss in on that anecdata. This has happened to me several times. In all these cases we were dealing with drives with more or less sequential serial numbers. In two instances they were just cache drives for our CDN nodes. Not a big deal, but I sure kept the remote hands busy those weeks trying to keep enough nodes online. In a prior job, it was our primary storage array. You'd think that RAID6 + hot spare would be pretty robust, but 3 near-simultaneous drive failures made a mockery of that. That was a bad day. The hot spare started doing its thing with the first failure, and if it had finished rebuilding before the subsequent failures, we'd have been ok, but alas.


This has been the "conventional wisdom" for a very long time. Is this one of those things that get "lost with time" and every generation has to rediscover it?

Like, 25+ years ago I would've bought hard drives for just my personal usage in a software RAID, making sure I didn't get consecutive serial numbers but ones that were very different. I'd go to my local hardware shop and ask them specifically for that. They'd show me the drives / serial numbers before I ever even bought them for real.

I even used different manufacturers at some point when they didn't have non-consecutive serials. I lost some storage because the drives weren't exactly the same size even though the advertised size matched, but better that than having the RAID and extra cost be for nothing.

I can't fathom how anyone that is running drives in actual production wouldn't have been doing that.


It's inconvenient compared to just ordering 10x (or however many) of the same thing and not caring. The issue with variety, too, is that different performance characteristics can make the array unpredictable.

Of course, learned experience has value in the long term for a reason.


I had to re-learn this as well. Nobody told me. Ordered two drives; they worked great in tandem until their simultaneous demise. Same symptoms at the same time.

I rescued what could be rescued at a few KB/s read speed and then checked the serial numbers...


I personally like to get 1 of every animal if I can.

I just get 1/3 Toshiba, 1/3 WD, 1/3 Seagate.


Nearly every storage failure I've dealt with has been because of a failed RAID card (except for thousands of bad Quantum Bigfoot hard drives at IUPUI).

Moving to software storage systems (ZFS, StorageSpaces, etc.) has saved my butt so many times.


Exactly this.

I mostly just buy multiple brands from multiple vendors. And size the partitions for mdadm a bit smaller.

But even the same model, where it's 2 each from Best Buy, Amazon, Newegg, and Micro Center, seems to get me a nice assortment of variety.


Same thing I did, except I only wanted WD Red drives. I bought them from Amazon, Newegg, and Micro Center. Thankfully none of them were those nasty SMR drives; not sure how I lucked out.


Well, to me the report is mostly useful to illustrate the volatility of hard drive failure. It isn't a particular manufacturer or line of disks; it's all over the place.

By the time Backblaze has a sufficient number of a particular model and sufficient time has lapsed to measure failures, the drive is an obsolete model, so the report cannot really inform my decision for buying new drives. These are new-drive stats, so I'm not sure it's that useful for buying a used drive either, because of the bathtub-shaped failure rate curve.

So the conclusion I take from this report is that when a new drive comes out, you have no way to tell if it's going to be a good model or a good batch, so better to stop worrying about it and plan for failure instead, because you could get a bad/damaged batch of even the best models.


I looked at this last report and I came to the same conclusion I did in the first report: Seagate drives are less reliable than WD.


> I am becoming more and more convinced that hard drive reliability is linked to the batch more than to the individual drive models themselves.

Worked in a component test role for many years. It's all of the above. We definitely saw significant differences in AFR across various models, even within the same product line, which were not specific to a batch. Sometimes simply having more or fewer platters can be enough to skew the failure rate. We didn't do in-depth forensics on models with higher AFRs, as we'd just disqualify them and move on, but I always assumed it probably had something to do with electrical, mechanical (vibration/harmonics), or thermal differences.


My server survived multiple drive failures. ZFS on FreeBSD with mirroring. Simple. Robust. Effective. Zero downtime.

Don't know about disk batches, though. I took old second-hand drives. (Many different batches due to procurement timelines.) Half of them were thrown out because they were clicky. All were tested with S.M.A.R.T., which took about a week. The ones that worked are mostly still around. Only a third of the ones that survived S.M.A.R.T. have failed so far.


I didn't discover ZFS until recently. I played around with it on my HP Microserver around 2010/2011 but ultimately turned away from it because I wasn't confident I could recover the raw files from the drives if everything went belly up.

What's funny is that about a year ago I ended up installing FreeBSD onto the same Microserver and ran a 5 x 500GB mirror for my most precious data. The drives were ancient, but not a single failure.

As someone who never played with hardware RAID, ZFS blows my mind. The drive that failed was a non-issue because it belonged to a pool with a single vdev (a 4-disk mirror). Due to the location of the server I had to shut down the system to pull the drive, but yeah, I think that was 2 weeks later. In the old days I would have had to source another drive and copy the data over.


ZFS is like magic.

Every time I think I might need a feature in a file system it seems to have it.


IME heat is a significant factor with spindle drives. People will buy enterprise-class drives, then stick them in enclosures and computer cases that don't flow much air over them, leading to the motor and logic board getting much warmer than they should.


I have four of the drives mentioned, and the one that did fail had the highest maximum temperature according to the SMART data. It was still within spec, though, by about 6 degrees Celsius.

The drives are spaced apart by empty drive slots and have a 12cm case fan cranked to max blowing over them at all times.

It is in a tower, though, so maybe it was bumped at some point and that caused the issue. Being in the top slot, that drive would have felt the greatest effect from a bump. I doubt it though.

Usage is low and the drives are spinning 24/7.

Still I think I am cursed when it comes to Seagate.


Heat is also a problem for flash. If you care about your data, you have to keep it cool and redundant.


With the added complication that the controller should be kept cool, but the flash should run warm.

The NVMe drives in my servers have these little aluminium cases on them as part of the hotswap assembly. They manage the temperature differential by using a conductive pad for the controller, but not the flash.


This. My new Samsung T7 SSD overheated and took 4TB of kinda priceless family photos with it. Thank you Backblaze for storing those backups for us! I missed the return window on the SSD, so now I have a little fan running to keep the thing from overheating again.


This is why it’s best practice to buy your drives from different dealers when setting up RAID.


>It's not a best practice, but for the last 10 years I've run my home server with a smaller, faster drive for the OS and a single larger disk for bulk storage that I choose using Backblaze Drive Stats. None have failed yet (fingers crossed). I really trust their methodology and it's an extremely valuable resource for me as a consumer.

I also have had multiple drives in operation over the past decade and didn't experience any failures. However, unlike you, I didn't use Backblaze's drive stats to inform my purchase. I just bought whatever was cheapest, knowing that any TCO reduction from higher reliability (at best, around 10%) would be eaten up by the lack of discounts on the "best" drive. That's the problem with n=1 anecdotes. You don't know whether nothing bad happened because you followed "the right advice", or you just got lucky.


> WDC WUH722222ALE6L4 22TiB

Careful... that is 22 TB, not 22 TiB. Disk marketing still uses base 10. TiB is base 2.

22 TB ≈ 20 TiB
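
For anyone who wants to check the conversion, a quick back-of-the-envelope in Python (22 TB is the marketed, base-10 capacity; the TiB figure is base 2):

    tb = 22 * 10**12          # marketed capacity: 22 TB, base 10
    tib = tb / 2**40          # the same bytes expressed in TiB, base 2
    print(f"{tib:.1f} TiB")   # -> 20.0 TiB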


Nobody should ever have peace of mind about a single drive. You probably have odds around 5% that the storage drive fails each cycle, and another 5% for the OS drive. That's significant.

And in your particular situation, 3 refurbished WUH721414ALE6L4 are the same total price. If you put those in RAIDZ1 then that's 28TB with about as much reliability as you can hope to have in a single device. (With backups still being important but that's a separate topic.)
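
For what it's worth, here is roughly where a number like 5% per cycle comes from. The ~1% AFR and ~5-year cycle below are assumptions for illustration, not figures from the Backblaze report:

    afr = 0.01                       # assumed ~1% annualized failure rate
    years = 5                        # assumed length of one replacement cycle
    p_fail = 1 - (1 - afr) ** years  # chance the drive fails at least once
    print(f"{p_fail:.1%}")           # -> 4.9%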


>You probably have odds around 5% that the storage drive fails each cycle

What do you mean by cycle?


"My most recent drive [...] it should be fine for the next few years until it's time for the cycle to repeat."

The amount of time they stay on a single drive.


Drive manufacturers often publish the AFR. From there you can do the math to figure out what sort of redundancy you need. Rule of thumb is that the AFR should be in the 1-2% range. I haven't looked at BB's data, but I'm sure it supports this.

Note, disk failure rates and RAID or similar solutions should be used when establishing an availability target, not for protecting against data loss. If data loss is a concern, the approach should be to use backups.
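
A minimal sketch of that math for a two-drive mirror; the AFR and rebuild window below are assumptions for illustration, not numbers from any spec sheet, and the estimate ignores correlated (same-batch) failures discussed elsewhere in the thread:

    afr = 0.015           # assumed 1.5% annualized failure rate per drive
    rebuild_days = 2      # assumed time to rebuild/resilver after a failure

    # Chance that at least one of the two drives fails in a given year.
    p_any = 1 - (1 - afr) ** 2

    # Crude chance of data loss: one drive fails, then its partner
    # also fails during the rebuild window.
    p_loss = 2 * afr * (afr * rebuild_days / 365)

    print(f"at least one failure per year: {p_any:.1%}")   # ~3.0%
    print(f"data loss per year: {p_loss:.2e}")              # ~2.5e-06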


You picked a weird place to reply, because that comment is just saying what "cycle" means.

But yes, I've done the math. I'm just going with the BB numbers here, and after a few years it adds up. The way I understand "peace of mind", you can't have it with a single drive. Nice and simple.


I assume by "cycle" they are referring to mtbf/afr


I am reasonably confident that "time for the cycle to repeat" is the cycle of purchasing a new drive and moving to it.

Whether that's right or wrong, when I talked about 5% failure chance per cycle that's what I meant. And 5% is probably an underestimate.


Ahh. That % would depend on fleet size. AFR should be under 1% for most drives.


I'm estimating about five years, and the context is a single drive.


My understanding is that with the read error rate and capacity of modern hard drives, statistically you can't reliably rebuild a raid5/raidz1.
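
For reference, this is the usual back-of-the-envelope behind that claim, assuming the 1-per-1e14-bits unrecoverable read error spec printed on many consumer drives and a 12TB rebuild; the reply below argues the spec shouldn't be taken at face value:

    import math

    ure_per_bit = 1e-14        # quoted URE spec: one error per 1e14 bits read
    rebuild_tb = 12            # assumed amount of data read during a rebuild
    bits = rebuild_tb * 1e12 * 8

    # Poisson approximation of (1 - p)^n for tiny p:
    # the probability of reading everything back without a single URE.
    p_clean = math.exp(-ure_per_bit * bits)
    print(f"{p_clean:.0%}")    # ~38%, which is why the spec looks suspect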


Not an expert, but I've heard this too. However, if this IS true, it's definitely only true for the biggest drives operating in huge arrays. I've been running a btrfs raid10 array of 4TB drives as a personal media and backup server for over a year, and it's been going just fine. Recently one of the cheaper drives failed, and I replaced it with a higher-quality NAS-grade drive. It took about 2 days to rebuild the array, but it's been smooth sailing.


The bit error rates on spec sheets don't make much sense, and those analyses are wrong. You'd be unable to do a single full drive write and read without error, and with normal RAID you'd be feeding errors to your programs all the time even when no drives have failed.

If you're regularly testing your drive's ability to be heavily loaded for a few hours, you don't have much chance of failure during a rebuild.


I'm sure you're aware, but consider putting in another drive for some flavor of RAID; it's usually a lot easier to rebuild a RAID than to rebuild data!

Edit: By "some flavor" I mean hardware or software.


RAID doesn't cover all of the scenarios an offsite backup does, such as a massive electrical power surge, fire, flood, theft, or other things causing total destruction of the RAID array. Ideally you'd want a setup that has local storage redundancy in some form of RAID plus an offsite backup.


In fact, for home users backup is WAY more important than RAID, because your NAS being down for the duration of a restore is not that important, but data loss is forever.


For essential personal data you're right, but a very common use case for a home NAS is a media server. The library is usually non-essential data - annoying to lose, but not critical. Combined with its large size, that usually makes it hard to justify a full offsite backup. RAID offers a cost-effective way to give it some protection, when the alternative is nothing.


A number of people I know don't do any offsite backup of their home media server. It would not result in any possibly-catastrophic personal or financial hassles, struggles, or real data loss if a bunch of movies and music disappeared overnight.

The amount of personally generated sensitive data that doesn't fit on a laptop's onboard storage (which should all be backed up offsite as well) will usually fit on like a 12TB RAID-1 pair, which is easier to back up than 40TB+ of movies.


Same here: I use RAID 1 with offsite backups for my documents and things like family pictures. I don't back up downloaded or ripped movies and TV shows; I just redownload them or search for the Blu-ray in the attic if needed.


I think there's a very strong case to be made for breaking up your computing needs into separate devices that specialize in their respective niches. Last year I followed the 'PCMR' advice and dropped thousands of dollars on a beefy AI/ML/gaming machine, and it's been great, but I'd be lying to you if I didn't admit that I'd have been better served taking that money and buying a lightweight laptop, a NAS, and a gaming console. I'd have had enough money left over to rent whatever I needed on RunPod for AI/ML stuff.


Having to restore my media server without a backup would cost me around a dozen hours of my time. 2 bucks a month to back up to Glacier with rclone’s crypt backend is easily worth it.


Have you checked the costs for restoring from Glacier?

It's not the backing up part that's expensive.

I would not be surprised if you decided to spend the dozen hours of your time after all.


AWS Glacier removed the retrieval pricing issue for most configurations, but the bandwidth costs are still there. You pay $90 to retrieve 1 TB.


The retrieval cost is less than 1 hour of my time and I expect less than 10% chance I'll ever need it.


How are you hitting that pricing? S3 "Glacier Deep Archive"?

Standard S3 is $23/TB/mo. Backblaze B2 is $6/TB/mo. S3 Glacier Instant or Flexible Retrieval is about $4/TB/mo. S3 Glacier Deep Archive is about $1/TB/mo.

I take it you have ~2TB in deep archive? I have 5TB in Backblaze and I've been meaning to prune it way down.

Edit: these are raw storage costs and I neglected transfer. Very curious as my sibling comment mentioned it.
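
A quick sanity check on those rates (per-TB prices as quoted above; they drift over time, and transfer/retrieval fees are left out):

    per_tb_month = {
        "S3 Standard": 23.0,
        "Backblaze B2": 6.0,
        "S3 Glacier Instant/Flexible": 4.0,
        "S3 Glacier Deep Archive": 1.0,
    }
    tb_stored = 2   # assumed: roughly what's sitting in deep archive here
    for tier, price in per_tb_month.items():
        print(f"{tier}: ${price * tb_stored:.2f}/mo")
    # Deep Archive at ~2 TB lands right around the "$2 a month" figure above.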


Yup, deep archive on <2TB, which is more content than most people watch in a lifetime. I mostly store content in 1080p as my vision is not good enough to notice the improvement at 4K.


> more content than most people watch in a lifetime

The average person watches more than 3 hours of TV/video per day, and 1 gigabyte per hour is on the low end of 1080p quality. Multiply those together and you'd need 1TB per year. 5TB per year of higher quality 1080p wouldn't be an outlier.
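
Spelling out that multiplication (3 hours/day and 1 GB/hour are the low-end assumptions above):

    hours_per_day = 3      # average viewing time cited above
    gb_per_hour = 1        # low-end 1080p bitrate assumption
    tb_per_year = hours_per_day * gb_per_hour * 365 / 1000
    print(f"{tb_per_year:.1f} TB/year")   # ~1.1 TB; at ~5 GB/hour it's ~5.5 TB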


Holy crap! I watch like, maybe a couple of movies a month and two or three miniseries a year?

Is that including ads too? And sports/news?

EDIT: Wait, are these "average person" or "average American?"


Average person.

https://uk.themedialeader.com/tv-viewing-time-in-europe-ahea...

https://www.finder.com/uk/stats-facts/tv-statistics

https://www.eupedia.com/forum/threads/daily-tv-watching-time...

"In a survey conducted in India in January 2022, respondents of age 56 years and above spent the most time watching television, at an average of over three hours per day."

https://www.medianews4u.com/young-india-spends-96-min-per-da...

For China I'm seeing a bit over two and a half hours of TV in 2009, and more recently a bit over one and a half hours of TV plus a bit over half an hour of streaming.

Yes it includes ads, sports, and news.

Personally I don't watch a lot of actual TV but I have youtube or twitch on half the time.


That assumes disks never age out and arrays always rebuild fine. That's not guaranteed at all.


For the home user backing up their own data, I honestly think that RAID has limited utility.

If I have 3 disks to devote to backup, I'd rather have 1 local copy and two remote copies than 1 local copy with RAID and 1 remote copy without.


It's super useful for maintenance, for example you can replace and upgrade the drives in place without reinstalling the system.


If it's infrequently accessed data then yes, but for a machine that you use every day it's nice if things keep working after a failure and you only need to plug in a replacement disk. I use the same machine for data storage and for home automation for example.

The third copy is in the cloud, write/append only. More work and bandwidth cost to restore, but it protects against malware or fire. So it's for a different (unlikely) scenario.


I end up doing this too, but I make sure the "single data disk" is regularly backed up offsite as well (several times a day; zfs send makes it easy). One needs an offsite backup anyway, and as long as your home server's data workload isn't too high and you know how to restore (which should be practiced every so often), this can definitely work.


I switched to TLC flash last time around and no regrets. With QLC the situations where HDDs are cheaper, including the cost of power, are growing narrower and narrower.


It really depends on your usage patterns. Write-heavy workloads are still a better fit for spinning rust due to how much harder they are on flash, especially at greater layer depths.


Plus, SSDs apparently have a very dirty manufacturing process, worse than the battery or screen in your laptop. I recently learned this because the EU is starting to require reporting CO2e for products (mentioned on a Dutch podcast: https://tweakers.net/geek/230852/tweakers-podcast-356-switch...). I don't know how a hard drive stacks up, but if the SSD is the worst of all of a laptop's components, odds are the hard drive is better, so one could make the decision to use one or the other based on whether an SSD is needed rather than just tossing it in because it's cheap.

Probably it also matters whether you get a bulky 3.5" HDD when all you need is a small flash chip with a few GB of persistent storage. The devil is in the details, but I simply didn't realise this could be a part of the decision process.


If this is really a significant concern for you, are you accounting for the CO2e of the (very significant) difference in energy consumption over the lifetime of the device?

It seems unlikely to me that in a full lifecycle accounting the spinning rust would come out ahead.


The figure already includes the lifetime energy consumption, and it's comparatively insignificant. The calculation even includes expected disposal and recycling!

It sounded really comprehensive, besides having to make assumptions about standard usage patterns, but then the usage is like 10% of the lifetime emissions, so it makes a comparatively small difference whether I'm a heavy gamer or leave it to sit and collect dust: 90% remains the same.

> If this is really a significant concern for you

It literally affects everyone, I'm afraid, and simply not knowing about it (until now) doesn't stop the warming either. Yes, this concerns everyone, although not everyone has the means to do something about it (like buying the cleaner product).


Um, no. Not unless you're still running ancient sub-1TB enterprise drives.

It turns out that modern hard drives have a specified workload limit [1] - this is an artifact of heads being positioned at a low height (<1nm) over the platter during read and write operations, and a "safe" height (10nm? more?) when not transferring data.

For an 18TB Exos X18 drive with a specified workload of 550TB read+write per year, assuming a lifetime of 5 years[2] and that you never actually read back the data you wrote, this would be at max about 150 drive overwrites, or a total of 2.75PB transferred.

In contrast the 15TB Solidigm D5-P5316, a read-optimized enterprise QLC drive, is rated for 10PB of random 64K writes, and 51PB of sequential writes.

[1] https://products.wdc.com/library/other/2579-772003.pdf

[2] the warranty is 5 years, so I assume "<550TB/yr" means "bad things might happen after 2.75PB". It's quite possible that "bad things" are a lot less bad than what happens after 51PB of writes to the Solidigm drive, but if you exceed the spec by 18x to give you 51PB written, I would assume it would be quite bad.
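
The arithmetic behind those numbers, taking the figures above at face value (550 TB/yr workload rating, 5-year warranty, and 18 TB capacity for the Exos; 10 PB random / 51 PB sequential write ratings for the 15 TB Solidigm):

    hdd_tb, workload_tb_per_yr, years = 18, 550, 5
    lifetime_tb = workload_tb_per_yr * years
    print(f"{lifetime_tb / 1000} PB total")                # 2.75 PB
    print(f"{lifetime_tb / hdd_tb:.0f} drive overwrites")  # ~153

    ssd_tb = 15
    for label, pb in [("random 64K writes", 10), ("sequential writes", 51)]:
        print(f"{label}: ~{pb * 1000 / ssd_tb:.0f} overwrites")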


ps: the white paper is old, I think head heights were 2nm back then. I'm pretty sure <1nm requires helium-filled drives, as the diameter of a nitrogen molecule is about 0.3nm


Hopefully you have two of these drives in some kind of RAID mirror so that if one fails, you can simply replace it and re-mirror. Not having something like this is risky.


Wasn't the issue with large drives that the remaining drive has a high chance of failure during re-silvering?


That may be true for pools that never get scrubbed. Or for management that doesn't watch SMART stats in order to catch a situation before it degrades to the point where one drive fails and another is on its last legs.

With ZFS on Debian the default is to scrub monthly (second Sunday) and resilvering is not more stressful than that. The entire drive contents (not allocated space) has to be read to re-silver.

Also define "high chance." Is 10% high? 60%? I've replaced failed drives or just ones I wanted to swap to a larger size at least a dozen times and never had a concurrent failure.


If you're doing statistics to plan the configuration of a large cluster with high availability, then yes. For home use where failures are extremely rare, no.

Home use is also much more likely to suffer from unexpected adverse conditions that impact all the drives in the array simultaneously.


Just triple mirror with cheap drives from different manufacturers.


No RAID 0 for the bulk storage? What’s your disaster plan?


Surely you mean RAID 1? Or 5, 6, 10 perhaps?


restic + rclone to cloud storage for data I care about, the majority of the data can easily be replaced if needed.


That’s exactly how I do it.


The disaster plan is always a backup (away from the location) or out-of-house replication; RAID is NOT a backup but part of a system that keeps uptime high and hands-on work low (like redundant power and supplies).

Disaster = Your DC or Cellar is flooded or burned down ;)


I still miss my X61s daily.

The TrackPoint was great simply because I didn't need to move my hands anywhere else when using it; I could keep my fingers on the home row and still move the pointer.


This can be done on a Mac: you use your thumb at the top of the trackpad while your fingers stay on the home row.


You can't scroll with just your thumb, while you can scroll with a TrackPoint while holding down the middle mouse button. Using your thumb on a trackpad with your hand positioned above the keyboard is also less accurate and restricted to a narrower range of motion.


Not remotely the same experience. One never picks up the pointer finger during input with the nub. With the thumb on the trackpad one looks like a crab, constantly twitching and lifting.


One of my favorite video essays on this is "Nintendo - Putting Play First" by Game Maker's Toolkit [1]. It goes into how, when making a game, Nintendo first determines the mechanic they want to focus on (jumping, throwing a hat, shooting paint, etc.), figures out how to make it fun, then builds and iterates on the idea.

It's how they can keep putting out essentially the same games that nonetheless feel completely different.

1. https://youtu.be/2u6HTG8LuXQ


I can't tell you how much respect I have for this mindset. Like them burning a heap of money on Metroid Prime 4, for years, and then coming out with an announcement along the lines of "sorry guys, this sucks, so we've chucked it out and started again because we only do things right, see you in another 3-4 years when it's ready."

It pays dividends, because they just don't ship junk, so everything they DO ship sells extremely well.


Some stuff they have sells well: Smash, Zelda, Pokemon. Metroid sells a lot less well.


How does that relate to this discussion?


This is the right mindset. It makes your customers trust you.


Mostly true, but Everybody 1-2-Switch was pretty close to being junk though.


GMTK is popular, but he's mostly talking out of his ass. He's got zero industry experience and most gamedevs I know personally clown on his takes constantly. Unless he references specific Nintendo interviews where they talk about their design process, I have doubts about this video containing an accurate description of how Nintendo does things.


At least in this video, all the interviews and documents that they base their claims/opinions on are listed in the description, so you can easily also peruse them if you doubt the interpretation.


I've seen some of his videos, but I'm not that familiar with GMTK. But they did release a game, and it was by all accounts "Very Positive"/pretty good.

https://store.steampowered.com/app/2685900/Mind_Over_Magnet/


You should have watched the video before you shat on it.

Yes, he references specific Nintendo interviews in the video. Frequently, in fact, and in detail.


Most games are pretty bad, so this tracks I guess.

Need more Larians in the world.


His videos are great!


This always made sense to me. Think of Super Mario Bros. No way you come up with something like that from a top-down design document. Probably slapped Mario on a screen, played with the physics a bunch, and threw a lot of different stuff at the wall to see what stuck before they came up with the final product.


Not sure about the original game, but at least since the 3D era, Miyamoto is on record saying that when making a new Mario game, one of the first steps is making sure it's fun just to goof around with Mario alone in an empty flat void and mess with whatever new abilities they are thinking of giving him.


"what Andy giveth, Bill taketh away"

https://en.m.wikipedia.org/wiki/Andy_and_Bill's_law


Now it's AMD + Nvidia vs. games, web apps, and "AI".

