Tiny Core Linux 13.0 is a full Linux desktop in 22 MB (adafruit.com)
351 points by tsujp on July 4, 2022 | 190 comments


> full Linux desktop

> The Core Project is a highly modular based system (...) It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop typically with wired internet access.

That is not a full desktop, and the [The Core] project doesn't say it is.

That being said, the concept of a "full desktop" is somewhat loaded. Today we have a somewhat unreasonable expectation that a "basic desktop" includes at least one, but sometimes several, browsers, office suites, multimedia viewers, and editors.

Back in the 90's people did not expect the computer to come with any such software applications built-in.


The problem with these tiny projects is that you need a web browser. And good luck browsing the web on old PCs. You can use Lynx and the like, but you know what I mean.

If you need a second life for an old PC, it may be better to repurpose it as a server or something like that.


> And good luck browsing the web on old PCs. You can use Lynx and the like, but you know what I mean.

Hello Electron, where everything is a browser :)

Yes, even if your system is lean, that doesn't mean a lot if the application you're running on top still needs a lot of TFLOPS and GBs of fast RAM.

I'm trying to use a 2GB, 2-thread Atom netbook here as a dumb terminal of sorts for other systems, and it's a pain for anything that is not a very basic, limited remote shell. The closest I got to a "working system with browser" was cheating using mosh + browsh on another computer.
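
For the curious, that trick is a one-liner, assuming browsh is installed on the remote machine (the hostname here is a placeholder):

    # run the browser remotely; mosh ships the text-mode rendering back
    mosh user@beefy-host -- browsh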


I have Debian 11 on a 2GB laptop running Xfce and it's very usable. The key is to install a 32-bit distro which can be a bit challenging on 64-bit UEFI systems. I have a dummy 64-bit install just for the bootloader and then installed the 32-bit system alongside it.


Instead of 32-bit i686, consider the x32 ABI. It uses 32-bit pointers (good for memory) but has access to all the x86-64 registers, has SSE2 as a minimum requirement, etc., which is great for performance. It's the best of both worlds.
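
A quick way to check whether your toolchain and kernel actually support x32 (a sketch; assumes a multilib gcc with the x32 libraries installed):

    echo 'int main(void){return 0;}' > t.c
    gcc -mx32 t.c -o t
    file t    # should report "ELF 32-bit LSB executable, x86-64"
    ./t       # only runs if the kernel was built with CONFIG_X86_X32=y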


My understanding - backed by https://en.wikipedia.org/wiki/X32_ABI - is that x32 has seen very little adoption and even Linux upstream has considered removing it. It appears to be available on Gentoo because of course it is, Debian if you jump through hoops (https://wiki.debian.org/X32Port), and some embedded toolchains (I'm not running Yocto on my laptop). Are there any other accessible options?


If your hardware is not too old, that is. I still have 2 PCs that don't have 64-bit support. They still run Xubuntu 16.04 just fine, but I have not upgraded or used them since it went out of support.

Which distro offers x32? (Haven't checked, might be a stupid question...) I understood it was a great idea at the time, but implementation took a while and the world had moved on when it was finally ready.


> Which distro offers x32?

A lot of modern distros are still packaging for i686, but actually installing it might be a bit of a pain. I think your safest bet is to go with an OS like Debian that's sure to offer ample support for older systems, or you could go for broke and run a distro like Gentoo/Arch/NixOS that has package manifests/build instructions for each tool and pray that nothing breaks (spoilers: it will).

So, temper your expectations; 32-bit systems aren't a huge priority nowadays, but I'm certain you could put together a usable config if you choose the right base system. Or just keep the machines as they are, I'm sure Xubuntu still runs fine.


x32 is different from i686


Have you tried any of the VNC class of programs?

They give you a virtual screen onto other computers, even servers. Effectively a private Zoom session.


> Effectively a private Zoom session.

I just wanted to let you know that this comparison absolutely threw me. Of course you're right, they both let you "screen share" but the way you applied a 2022 metaphor to explain 90s tech made me feel a little weird and old.

I guess nano is kind of like a private google docs.


You kids and your new fangled nano.

Back in my day it was pine and pico. We did SMTP both ways. And we liked it!

Actually, it weirds me out that the default editor on most Linuxes now is a pico clone. Some don’t even come with a vi out of the box.

I started with emacs in college because I liked LISP. But once I was a sysadmin, I learned vi fast because it was the common denominator between Solaris, HP-UX, AIX, BSD, Linux, etc. and I was in a group that absorbed teams that used all of them. …OK, to be fair, our stuff was BSD :).

My first instinct on a fresh/new to me host is to vi. And even when I type nano, I can’t stop my hands from doing vi and get peeved. Fine ‘apt install vim’ or whatever.


> Actually, it weirds me out that the default editor on most Linuxes now is a pico clone. Some don’t even come with a vi out of the box.

It really annoys me, because I learned vi and never learned pico/nano, but now when I e.g. `sudo systemctl edit foo@bar` on a new box I end up in a nano session and have to figure out how to quit and get back in using vi.


Yup. You don’t want to pass EDITOR or VISUAL through sudo so update-alternatives it is.
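
For reference, something along these lines works on Debian-family systems (a sketch; foo@bar is the unit from the example above, and passing the variable through sudo depends on your sudoers policy):

    sudo update-alternatives --config editor    # pick vi/vim system-wide
    # systemctl edit honors its own variable, settable on sudo's command line:
    sudo SYSTEMD_EDITOR=vi systemctl edit foo@bar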

It’s weird. I know distros are trying to be friendly to new users. But aren’t we past “new users” now for server OSes?

I’d get it on a Raspberry Pi or whatever, where you’re going after kids.

But really, the learning curve and foreign nature of the tools is what got me hooked on Unix in 1997. That and not getting cryptic error messages from Win NT that I could actually dig all the way down and troubleshoot. >:[


To be fair, there's not much to learn. You only need to know Ctrl-O and Ctrl-X, for which they have a cheat sheet at the bottom.


I'd take that over being a new Ubuntu user. Using the visudo command and trying to do anything.


That was my experience with Solaris and Linux in the late 90s. It being foreign to me was what got me hooked. You could install pico or learn to use Unix tools.

2 days of frustration followed by Eureka! Repeat.

I still do that process now just with other stuff. (Why is my program doing X? Why the F am I getting a RST?) I think that’s what hooks people that are really into this world.

I do actually understand nano as a default on a “friendly” desktop, Raspberry Pi, etc. targeting new users. I’m just a grumpy old man.


A typewriter is effectively a keyboard hardwired directly to a printer.


That's true of the modern electric typewriter. A normal typewriter is a printer with manually actuated print heads for dedicated characters.


Looks like you haven't heard of typeball and daisy wheel printers.

In any case, we were discussing metaphors, not technical details.


That already worked smoothly around 2000, over a 33.6 kbit/s modem. And with lossless compression!

I often think of that when we share terminals over Google Meet these days and it takes tens of seconds until the encoding artifacts of red fonts have faded away enough for the text to become readable again. 100 times more bandwidth for a worse experience; that must be progress...


> If you need a second life for an old PC, it may be better to repurpose it as a server or something like that.

I'd go as far as a third life. This comment sounds like it was written by someone for whom Windows is not the first life (good for you!). To me, restoring an old PC to something usable day-to-day with Linux is only effective if you're switching away from Windows or switching away from the modern web.


>need a web browser

You'd be surprised how many people here (and many FOSS zealots in general) live in their own little bubble of command-line-only existence, eschewing modern things like JavaScript and social media and only wanting to take part in that which is easily done from a text terminal.


> The problem with these tiny projects is that you need a web browser. And good luck browsing the web on old PCs.

Browsing the web is the last thing I do on an old PC. These old computers are for entertainment. The modern Web is a sh*tty experience.


You can browse the web on a ten year old machine just fine.


While clean-reinstalling macOS last month, I found my recovery partition was lost, and booting via internet recovery landed me in OS X Mavericks.

Needless to say, I couldn't even sign in with my Apple ID via the browser or App Store, because the world has moved on and those browsers can't handle modern webpages. (I had to use my phone concurrently to help me out, downloading rescue stuff and moving it over via USB or the terminal.)


Ten year old software is very different from ten year old hardware.


The parent (you replied to) discussed the state of browsers:

> The problem with these tiny projects is that you need a web browser. And good luck browsing the web on old PCs [...]

As I can imagine, a lot of things will not run on 10-year-old hardware once you get limited by the last base OS you could install.


In the context of Linux, ten year old hardware is widely supported, and will not limit what base OS version and software is installable, though its capabilities might affect usability. We're not really discussing OS X here, where planned obsolescence is such a huge factor.


That's not really a Linux problem as far as I can tell. You can install a five-minute old Debian-testing on 10 year old hardware without too many surprises.


My main private laptop is an HP EliteBook 8440p from 2012 and it works perfectly fine with the modern web and streaming, currently running Fedora 36.

It came out with a Core i7 and supports up to 16GB of RAM. I upgraded it to 12GB and an SSD a few years ago.

Caveat: it was a model from the professional line, came with a free RAM slot, and its disk is easily swappable. The same could not be said of a model from the consumer line that would have had its memory maxed out at 2GB.


8.5 years old for sure, and with Ubuntu. I improved it a little (32 GB and two 1 TB SSDs) but I bet that the original 8 GB + HDD would be OK for browsing. Browsers got faster and that helps compensate for sites that got slower.


I recently was upgraded away from those exact "old" specs at work, and I largely disagree. Win10 just doesn't work with HDDs. I'm sure the mandatory 3rd-party AV doesn't help either. But logging in was enough of a source of pain that I ended up using my iPhone with an HDMI dongle and wireless accessories.

It's upgraded now to an overly powerful web browsing machine. I'm a bit happier.


I agree. Win 10 + HDD = frustration.


Win 10 even with an SSD is a frustration.


4GiB is plenty for browsing if you run uBlock Origin and disable Javascript by default.


Well, if you don't run uBlock Origin (or, at bare minimum, another less efficient tracker blocker), no amount of RAM can really save you once you cross the threshold of 60+ open tabs.


If you want to save resources (and that's what talking about 2GB machines is), you just don't keep 60+ tabs open. You cannot have your cake and eat it too. I cannot see how anyone can jump around between 60 tabs every day. If I have more than 10-20 tabs open after a working day, I know that the day went in a very unorganized way and I probably have more open tasks than in the morning when I started. It happens because I have plenty of unused RAM on my work machine, but it's nothing I'd defend. Maybe a smaller machine would just force me to organize my work better.

Bookmarks are there for organizing stuff one will need at some time later. Keeping tabs open does not seem to serve any special purpose. Unless you want to save cycles to render it again, but I don't remember many pages I would like to use frequently that even a very old PC does not bring up in decent time. The local weather forecast loads rather slowly, but that I want to reload every time I visit it anyway.


> I bet that the original 8 GB

Second-source data I collected from the Steam Hardware Survey (a biased group with better than average specs) for 2012 seems to indicate that the "average Steam user" had around 5 GB of system RAM.

> HDD would be OK for browsing. Browsers got faster and that helps compensating sites that got slower.

I just remembered this article with surprising conclusions (and it talks about SSDs, not HDDs!), posted on HN a few days ago: https://simonhearne.com/2020/network-faster-than-cache/


A good $500 PC [1] 10 years ago seems to have a 2x3GHz Pentium, 4GB RAM and a 500GB HDD. The $1000 PC [2] from tomshardware was a 4x3.4GHz Core i5 with 8GB RAM, a GTX 670 and a 60GB SSD paired with a 750GB HDD.

The $1000 still sounds totally adequate for surfing, office use and some light gaming, as long as you invest $20-$50 in a bigger SSD (or have some patience when starting software). Machines like this are still sold, they're just smaller and cheaper now. Even the 2012 $500 PC is probably fine if you upgrade the RAM, and even without that upgrade is not much different from some of the mini-PCs sold today [3]

1: https://www.tomshardware.com/reviews/gaming-pc-overclocking-...

2: https://www.tomshardware.com/reviews/build-a-pc-overclock-be...

3: https://psref.lenovo.com/Detail/ThinkCentre/ThinkCentre_M70q...


Exactly $499 US is all you need, even today, to get a pretty damn good machine in my book: 4.4GHz 12th-gen i5, 4K OLED, 8GB DDR5, 256GB PCIe 4.0 SSD... from, of all places, Best Buy?

Now trust me, Best Buy is pretty much the last place I would look to buy a computer these days, but I have to give them credit for value here, even though this system comes with crappy Intel graphics.

https://www.bestbuy.com/site/asus-zenbook-14-2-8k-oled-lapto...


I’m a fan of 10-year-old hardware. In 2012 I was handed an i7 3770 with 8 GiB of RAM and a git URI for WebKit and told go forth and port! I think the box cost $750 and I know it ran the latest Debian. This machine is still my daily use machine with today’s latest Debian, more HDD, some SSD for ccache, and triple the RAM. The original configuration was so nice I bought a duplicate for home use. That one has had its video card upgraded and the HDD replaced with SSD, it gets used somewhat differently. The graphics upgrade was to match it to a 21x9 screen I splurged on. And it runs Windows 10 decently enough.


Absolutely - I am typing this on a 2012 Windows 7 PC, 8GB RAM and an SSD. CPU has a Geekbench 5 single-core score of 400 or something, i.e. in no way fast. But surfing the web is snappy and the computer works fine.


If you limit yourself to sane, simple web 1.0 pages such as Hacker News, sure. If you need to visit anything using an unoptimized 100+ MB of JavaScript? Good luck.


For a 20 year old machine, sure. But any machine that was decent 10 years ago should be at least okay today. My 2008 Thinkpad handles all but the heaviest websites without too much trouble, as long as I don't have too many tabs open at a time.


Agreed, I am using a 2009 Toshiba laptop with Pop! on it, I did some cheap upgrades (maxed RAM to 10GB, used a spare SSD to replace the old HD), it's perfectly usable.


This is a fair point, the machine I am using as a reference was pretty bad at best even when it was new (a 2012 netbook).


Are you trying to tell me that you can't browse the modern web on an Ivy Bridge-era system?


Either it's not that bad, or people are lacking patience.

I have a Phenom II desktop next to me that browses the web just fine. Yes, even YouTube and infinite scrolling pages.


I'd really like to see that "not that bad".

> or people are lacking patience

If it's not instant, it's a bug. I guess this mindset explains why modern software tires me so much...

There's absolutely no "patience" to have with machines whose CPUs can process a dozen Wikipedias' worth of text per second.


> If it's not instant, it's a bug

If only developers felt the same way. I'm also exhausted by a lot of modern software. There's not much I can do about the modern web though. I've learned to be patient. Stockholm syndrome.

It was a pretty powerful machine back then - quad core, 16GB RAM, SSD, 1GB GPU. Not a fast machine today by any means, and not my primary machine, but more than adequate for checking email and watching a tutorial on something.

Stick to fast software and it is still a decent experience. Is two or three seconds of loading before the 15 minute YouTube video a long time to wait?


Same here. I have a Phenom II X6 desktop and it has no problem browsing the modern web running either Linux or Windows 10.


Yes: remember that the netbook trend died in 2013. A netbook is an Ivy Bridge-era system.


Those were awful, especially the first gen, but I have fond memories of them since it was the first time young me could afford anything new and shaped like a laptop.

The MSI Wind U100 with an Atom N270 CPU would even overclock. It didn't help, but it would do it.


If that's allowed, you can't browse the web on an Alder Lake-era system.


I was on my good old 2500K until 2019. It was perfectly fine, and I only upgraded because I had the itch.


How is Hacker News a 1.0 website? It's full of JS and dynamic elements. It might be aping the style of the 1.0 web, but it is very much a web 2.0 site.


> It's full of JS

152 lines of sparse JS amounting to 5 kB uncompressed? Not exactly what I would call "full of JS".

For comparison, your average hastily made, JS-infested legacy website embeds Moment.js, and that by itself is 19 kB compressed.


Compared to the size of the site, and the features? Yeah, most of them involve some amount of JS. Besides, HN is all dynamic user generated content, that's the crux of web 2.0.


How I perceive it is that web 2.0 is about dynamically loaded content, and as far as I know HN doesn't do this but loads static pages from a server, just like ye olde web 1.0 forum pages would do.


If you go off Wikipedia's definition, it's more about user generated content and HN is definitely that. Might not meet your definition of web 2.0, but it meets this one:

>A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts the first generation of Web 1.0-era websites where people were limited to viewing content in a passive manner.

https://en.wikipedia.org/wiki/Web_2.0


I'm not going to argue which web it belongs to, just comment that that definition is a bit odd. Forums would fit that definition, and have existed for far longer. phpBB itself is over 20 years old even.

I know it's all arbitrary, but it feels like there should probably be a better definition than the ones given.


At the risk of arguing the semantics of what is ultimately a marketing term rather than a technical term, Web 1.0 did actually have JavaScript - it was usually a simpler, more restrained usage, with no XMLHttpRequest.


Even that might not work. If it came with a sufficiently obscure operating system, you run into SSL problems.


Heck, the first gen retina MBP is now 10 years old. It’s definitely still a very capable machine, albeit the Nvidia GeForce 650M GPU is a bit underwhelming, even for non-gaming.


I'm forcing myself to get better at Linux using a 16-year-old laptop. I expected its age to be the second biggest irritation (after my ignorance), but really it's fine. A tiny bit slower than I'm used to, but for most of what I use it for, it's shockingly capable.


Totally agree; today 'desktop' pretty much means 'browser' for many.

I wish someone knew how to build Chromium in a lightweight mode, e.g. restrict it to only 1 or 2 tabs and remove some fancy but non-mandatory features, so that I could run it in, say, 100MB of RAM.

The bad news is that some websites might be very resource-hungry on their own; not sure what to do about that, maybe the browser could also throttle them?
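
There are some real Chromium flags pointing in that direction, though I doubt 100MB is reachable; a sketch, and whether recent builds still honor each flag varies by version:

    # one renderer per site, cap the renderer count, cap each V8 heap at 128MB
    chromium --process-per-site \
             --renderer-process-limit=2 \
             --js-flags="--max-old-space-size=128"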


This is where things like Browservice come to the rescue, if you have a bigger machine available and only need more independent terminals: https://github.com/ttalvitie/browservice

Also rdp/freenx/x2go/vnc can help.

That doesn't make the old computer a full independent desktop, but it can keep a perfectly good keyboard/trackpad/monitor out of the landfill. And unless you are a dev, 99% of the time you need a browser you already have network access and won't be pointing at localhost, so that faster remote box can be connected through a VPN from anywhere.


You can use a lightweight and modern web browser like Epiphany or Pale Moon.


I don’t know… nowadays I think if I can run a browser, that pretty much is a full desktop…


How much of the browser can it run though? Probably won't be capable of audio/video, for instance.


Oh that would be bliss! Can it remove cookie/newsletter/discount/app promo popups too? :)


Reading this, I’m wondering why we settled on those particular questions, as a civilization. The civilization next door must have a slew of “What is your address?” / “Record your voice!” / “Scream ‘NETFLIX’ to the street” / “Upload your fingerprint to get access to our free content” dialogs.


Extreme view: The user agent needs to be "shrunk down and drowned in a bathtub".


Wouldn't it be easier to simply build a larger bathtub?


You can install web browsers including chromium-browser and firefox via the package manager.

List of packages: http://www.tinycorelinux.net/13.x/x86_64/tcz/
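
For example, with Tiny Core's own loader (a sketch; -w downloads, -i installs, and exact extension names vary by release, so check the list above):

    tce-load -wi firefox.tcz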


Desktop, as in desktop computer, used to mean a particular form factor, namely one that sits on a desk.

If "desktop" is used to mean a GUI resembling a real world desktop then, according to the dictionary definition, it only means the layout, colors, icons, etc., i.e., the GUI, not the applications that may be installed or pre-installed on the computer.

Even in the real world, desktop means the top of a desk not what is placed there.

A "full" desktop, as in "disk full", could mean a desktop that has so many items on it that there is no space for anything else.


> Back in the 90's people did not expect the computer to come with any such software applications built-in.

Having a complete set of mutually compatible applications is literally why Linux distributions exist.


The Stallman copypasta kind of explains this verbatim:

> I'd just like to interject for a moment. What you're refering to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

> Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.

> There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!


That's not actually what I was thinking of, but it's related. I was thinking more of the hard work of making sure you have a compatible set of library versions that every app in the repositories can link against, with whatever patches they need to make that work, to provide a complete working system for the user - including end user apps like mail clients, browsers, editors, the whole shebang.


I expect a full desktop to manage mounting of USB devices for me. My awesome-wm setup doesn't do that :(. (And I know it's on me)


>Today we have a somewhat unreasonable expectation that at least one, but sometimes several browsers, office suites, multimedia viewers and editors to be a "basic desktop".

Why is this considered unreasonable?

I agree that it was unreasonable back in the 90s, 30 years ago, but we also now expect computers to come with a few TB of storage rather than a few GB -- times have changed.


> we also now expect computers to come with a few TB of storage

No, we don't. The modern MacBook, an overpriced elite computer, sells with 256GB of storage. There are plenty of laptops selling with 120GB of storage.

I'd argue that since we migrated from HDDs to SSDs, we expect computers to come with less storage than before. I had a 200GB HDD in like 2005 or something like that.


My point was that expectations have changed over time...

You can get as pedantic as you want over the exact sizes, but someone buying a computer today expects more storage than someone who was buying a computer in the early 90s.


> Modern Macbook which is overpriced elite computer

If you bought an overpriced elite computer, and potentially other parts of that ecosystem of products, why would you not pay for overpriced elite cloud storage as well?

That is very reasonable thinking from Apple.


> Modern Macbook which is overpriced elite computer sells with 256GB storage

It's just that our industry is a joke; imagine if they sold cars with three wheels and charged you extra for the 4th.


The joke is that you can't even buy the 4th wheel later as an add-on; you need to replace your entire car instead.


I reasonably agree with your assertion that times have changed; several of the optimizations we have come to expect from compilers/JITs trade more space for faster execution, and such a tradeoff is often very worth it.

However, if you agree with the proposition of attempting to supply a reasonably "lean core" with extensions: if said lean core is too opinionated, you will, sooner or later, either have to adapt your workflow or work around said lean core.

I think a somewhat similar thing applies to Silverblue ( https://silverblue.fedoraproject.org/ ), and it's very hard to actually use it as intended (only using things inside flatpak/toolbx) without messing with the overlay system. I very often feel the need to replace half of the "provided" applications, and as such it would in fact be better if they were not supplied in the first place.


It is possible to remove things from the stock image. I haven't done it yet, but most common is to remove default Firefox and use Flatpak FF.


Yup. Alpine's rootfs is tiny (~3 MB), and if you don't need firmware you could probably get a desktop image around this size.


Alpine's rootfs images don't include the kernel or any drivers at all, right? Since it's usually run within a container that's a sensible choice, but if you look at https://www.alpinelinux.org/downloads/ their rootfs is 2.6 MB, while even their slimmed-down version meant to run only on virtualized machines is 52 MB, and their standard or netboot versions (which actually include the stuff needed to boot on actual hardware) are over 150 MB.


Even the "virt" Alpine image is unable to successfully install without pulling in extra packages from the internet. setup-alpine fails at the disk step; it depends on syslinux and sfdisk.


Maybe you're thinking about a full desktop environment?


Direct link:

http://tinycorelinux.net/

I have wondered how such a tiny distribution is possible. I'm thinking of the kernel in particular: when I try to compile my own kernel with no module support and only the drivers I need built in ("make localyesconfig" will do this), it comes out at like 10MB compressed. And I am no kernel expert, so it's hard for me to tell which settings I can change and what will happen if I do.

Then, when I boot the most bare-bones system with /bin/sh as init, it is using like 70 MB of RAM doing nothing.

So anyway, I found that you can just grab their kernel config, I'll be curious to see how it differs from more typical configs: http://tinycorelinux.net/13.x/x86/release/src/kernel/config-...
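
For anyone wanting to reproduce that experiment, the workflow is roughly this, run inside a kernel source tree:

    make localyesconfig            # fold the currently loaded modules in as built-ins
    make -j"$(nproc)" bzImage
    ls -lh arch/x86/boot/bzImage   # the compressed kernel image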

Windows 95 ran in 8 MB of RAM. (Well, officially 4 MB for marketing purposes, but no one thought that was actually enough.) I would be really impressed to see a graphical Linux environment that could run in that amount of RAM.


> Windows 95 ran in 8 GB of RAM. (Well, officially 4 GB for marketing purposes, but no one thought that was actually enough.) I would be really impressed to see a graphical Linux environment that could run in that amount of RAM.

Huh? Windows 95, IIRC, will _refuse to even boot_ if you have more than 480Mb of RAM, let alone 4Gb.

Most machines of the time had 4Mb of RAM. Something seriously powerful had 16Mb.

As for running a modern 32bit Linux on something with minimal hardware... The creator of "uARM" [0] says that it's useable, and it uses a "30-pin 16MB SIMM" piece of RAM. But honestly, the speed of the RAM is more important than the size, for that project.

[0] https://dmitry.gr/index.php?r=05.Projects&proj=07.%20Linux%2...


> Most machines of the time had 4Mb of RAM. Something seriously powerful had 16Mb.

Yes I did mean to write MB rather than GB; thanks for the correction.

On that note, I found this 1995 newspaper article (linked from Wikipedia) on the subject. Conclusion was that W95 ran on 4MB, but slowly:

https://archive.seattletimes.com/archive/?date=19950924&slug...

Funny that the writer remarks on having a bunch of Windows open and still having "73% resources free". I think I remember reading in one of the contemporary books (maybe Andrew Schulman's Unauthorized Windows 95) that the dialog that shows the % resources free inflates the number significantly.


I did run Windows 95 with 2MB. That was my main reason to get 2 more sticks totaling 4MB, going up to 6MB in total. The boot process with 2MB took about 20 to 25 minutes.


A long time ago I tried to boot Win 3.11 with 4Mb on a PC XT that I had built (including soldering components). It booted but was horribly slow, so I doubt Win95 was usable on 4Mb.


It was technically usable, but very slow. I ran it on a 486 DX2-66 with 4MB of RAM before upgrading to 8MB of RAM. That made a drastic difference.


Was that 4Mb or 4MB? We had Win 3.11 on a 386 with 1MB RAM, which ran, but slowly. My father paid a large (3rd-world) sum to get it upgraded to 4MB, after which it was fast enough for another 6 years, including Office, WordPerfect, Visual Basic, PageMaker, etc.


An XT? That slowness was probably due to the CPU speed more than anything else.


With 8Mb the speed was OK for me, and most PC XTs at the time were not perceived as slow for office or industrial usage.

People used office software like Multiplan or WordStar without complaining about speed, even on Z80 machines. They complained much more about the ability to connect their brand new printer, or about the slowness and limitations of their floppy disks, etc.


> Conclusion was that W95 ran on 4MB, but slowly

Yeah, I still clearly remember that HDD light never going off if you ran Win95 with 4MB RAM. One of the reasons I had to upgrade my PC.


Today's "tiny" distributions would be considered yesterday's bloatware. One of my first Linux desktops (Slackware Linux) ran on a 486 with 8 megs of RAM, including X11. This was a 1.x kernel. Running emacs would put it into swap.


> on a 486 with 8 megs of RAM,

> Running emacs would put it into swap.

Heh; a time when "Eight Megs And Constantly Swapping" was a literal comment:)


It sure was! I upgraded to 32 megs about 6 months later, and it was amazing. I had never dreamed of so much memory.


X11 without a WM was fine on 2.x as well, with 8 MB of RAM. Loading fvwm or whatever I was using ended up in endless swapping.


I think I was using TWM or a very early version of FVWM.


> Windows 95 ran in 8 GB of RAM. (Well, officially 4 GB for marketing purposes, but no one thought that was actually enough.) I would be really impressed to see a graphical Linux environment that could run in that amount of RAM.

Are you sure? https://devblogs.microsoft.com/oldnewthing/20030814-00/?p=42...

> Windows 95 will fail to boot if you have more than around 480MB of memory. (This was considered an insane amount of memory back then. Remember, Windows 95’s target machine was a 4MB 386SX and a powerful machine had 16MB.


Yes, I wrote GB but meant MB. (I suppose because the former is more common nowadays.)


> Windows 95 ran in 8 GB of RAM. (Well, officially 4 GB for marketing purposes, but no one thought that was actually enough.)

I did :-P. Back in the day I tried to run Windows 95 on my AMD 386DX at 40MHz with 4MB of RAM. It took ages to boot, but it did boot. I also tried Delphi 2 on that installation; it took around 15 minutes to start.


I was surprised they're running a modern kernel like 5.13.x. I imagined it'd be 4.19.x or something older.


"Windows 95 ran in 8 GB of RAM"

Did you mean MB rather than GB?


Yes, thanks. Fixed


I tried "make tinyconfig" (https://tiny.wiki.kernel.org/) and I got bzImage to just under 500 kilobytes. Now, that is only the bare minimum and you would still probably want stuff like amd64, but it does give a good baseline reference. (https://scribe.rip/building-a-tiny-linux-kernel-8c07579ae79d)
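
For reference, the whole experiment is just this (inside a kernel source tree; tinyconfig starts from allnoconfig, so the result boots almost nothing):

    make tinyconfig
    make -j"$(nproc)"
    du -h arch/x86/boot/bzImage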


There's a bunch of nostalgia about what small used to mean, so I'll leave some links to the QNX 1.44MB demo floppy:

http://toastytech.com/guis/qnxdemo.html

https://news.ycombinator.com/item?id=10483653


I was just about to post about this as well. I still don't know how I ended up with this disk, but it got shipped to high-school-aged me. I was blown away at the time!


> There's a bunch of nostalgia about what small used to mean, so I'll leave some links to the QNX 1.44MB demo floppy:

My first "real OS" experience was OS-9, which IIRC booted from a 400KB floppy. I believe Radio Shack sold it on a cartridge as well.

https://microware.com/


QNX was/is simply amazing!

Talk about an OS that should have utterly destroyed DOS and all other CLI-based OSs.

I first encountered it at an oil-terminal company I worked at in the 80's and couldn't believe the features packed into such a small footprint.

Its real-time event handling was top-notch, and it was stable as all hell... in a sane world it would have dominated the market.


DOS was on the way out by the mid 90s. Windows had all the software, other commercial OSs didn’t have a chance. Mac hung on and rebounded ~15 years later.


I remember the QNX floppy blowing my mind even back in the 90's. This was back when Linux needed at least two floppies just to load the kernel and the root fs.


Although people usually cite the QNX demo in these threads, I'm much more impressed today by this: https://news.ycombinator.com/item?id=28515025


And under the hood that did a lot more than most OSs of the day. Or of today, for that matter.


I remember when it was current. It was impressive then as well!


> Ubuntu can barely run with 2 GB of RAM

That's probably if you're running Gnome. I've found that Ubuntu is pretty fast on very old (> 10 years old) equipment if you use i3wm or Openbox. AFAICT the only reason old hardware is a problem with recent distros is graphics. Lighter desktops don't make the same demands. I bought a cheap Dell laptop in 2018 for $300. It came with Windows 10, but there's honestly no way to use it. A default Ubuntu install is no better. Installed Openbox and it flies.
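
For anyone wanting to try the same, it's only a couple of commands on Ubuntu (a sketch):

    sudo apt install openbox obconf
    # then pick the Openbox session from the gear menu on the login screen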


I've got Lubuntu 22.04 on my first-gen 2GB Intel Compute Stick; it came with Windows 8 installed. Definitely getting a little long in the tooth, but I can use it for streaming no problem if I have a reliable WiFi connection. I've been trying to get even lighter environments configured in an image, maybe even a lighter distro, but I only understand so much of the work Ian W Morrison has done for these devices: https://www.linuxium.com.au/


I'd really love an explanation of what these modern desktop environments are actually doing in hardware that requires so many resources. I'm not at all experienced with them, but to me it seems like it's just rendering 2D boxes and couldn't possibly be doing anything difficult.


Well, for one, GNOME is now running JavaScript code for many parts of the UI.


Tiny Core's small enough that I'll often throw the entirety of it in /boot or /boot/EFI on my Linux desktops as a recovery environment.


That sounds very useful! Do you have a writeup or any tips to set that up?


Details, please!

How can I put an image in /boot and load and run it from GRUB?


I'm away from my PCs at the moment, but if you already have a Linux distro installed then you should be able to edit grub.cfg, copy the existing entry for your distro, and punch in TCL's vmlinuz+initrd.

Once I'm back home I'll put up a fuller write-up.


>you should be able to edit grub.cfg

You mean the grub configuration file that shouldn't be edited?

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by /usr/sbin/grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #

It is a pet peeve of mine, but according to the good people who wrote GRUB2, grub.cfg is subject to auto-magic modifications when updating the (main) Linux install, so it shouldn't be edited manually but rather sort of scripted.

For a GRUB2 installed to the hard disk, modifications should be written to /etc/grub.d/40_custom:

https://help.ubuntu.com/community/Grub2/CustomMenus

otherwise (it depends on which Linux distro is the "main" one) it can happen that at the next update grub.cfg is recreated, losing your changes to it.
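
Putting the two halves together, a sketch of what could go at the bottom of /etc/grub.d/40_custom, borrowing the Tiny Core paths from the sibling reply, followed by a run of update-grub:

    menuentry "Tiny Core (recovery)" {
        search --no-floppy --file --set=root /tinycore/vmlinuz
        linux /tinycore/vmlinuz tce=sda1/tinycore/tce
        initrd /tinycore/core.gz
    }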


Right, that's totally what I meant; I totally don't just ignore that warning and reapply my patch if I ever accidentally run grub-mkconfig after initial install ;)

In any case, something to the effect of

    set root=(hd0,1)
    linux /tinycore/vmlinuz tce=sda1/tinycore/tce vga=794
    initrd /tinycore/core.gz
    boot
in GRUB's shell should do the trick no matter what might lurk in grub.cfg.


Yep. :)

And (as a side note) one could make a separate, static .cfg, let's say tiny.cfg, and load it from the shell with configfile, as in: configfile /tiny.cfg.


you know, that's a really clever idea


Right? It sure saved my ass more than once.


22 MB is impressive, although I assume usage will be quite limited. My favorite small distro is still Puppy Linux [0], which has a good size/functionality tradeoff.

[0]: https://puppylinux-woof-ce.github.io/


There is also SliTaz Linux, whose ISO is only 43 MB (https://www.slitaz.org/en/get/#rolling).


An important property of Puppy is that you can access the Ubuntu repos. I don't know what you get with Tiny Core.


I like to think of the TinyCore as (const)Linux.

TinyCore boots from a fixed instance state. Once booted, this state can change as usual, but on reboot it is reinitialized to the defined state.

Persistent changes to the state are done as a 'backup' of whichever aspects changed. The backup is driven by lists -- special text files that define the files and directories to persist.

TinyCore apps are packed in a custom .tcz format and are downloaded from a number of mirrors, or locally if cached.

[0]: TinyCore Concepts http://www.tinycorelinux.net/concepts.html
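
A concrete taste of the list-driven persistence (a sketch; /opt/.filetool.lst and filetool.sh are the stock TCL mechanism, and paths in the list are relative to /):

    echo etc/ssh >> /opt/.filetool.lst   # mark a directory to persist
    filetool.sh -b                       # back it up into mydata.tgz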


I used Tiny Core Linux about 10 years ago in a high-security environment for a data-wipe verification step. It worked great and booted to an immutable ramdisk from a USB 2 stick in seconds. It was also a breeze to set up with drivers.


I distrust writeups like this that don't mention other, similar work. Alpine Linux diskless mode comes to mind[0].

0 - https://wiki.alpinelinux.org/wiki/Installation#Diskless_Mode


The original article was written due to v13.0 being released.



A lot of these pages predate Alpine's existence


> Yeah. That’s right. You can run your entire operating system… from RAM. And, even with only 48 MB… it still runs fast.

I feel old. We had GUI OSs booting from a floppy drive, and RAM disks with the OS + a few apps, in 16MB of RAM some 30-40 years ago.


The Amiga could run in 256KB. 512KB was more common, eventually 1MB, and more was a luxury.


Yeah, the original Mac had 128KB of RAM total. But it was 1-bit black-and-white and only one app at a time.


He tells you about his Gran Torino and you are mentioning a Model T.


If you're into very small Linux desktops, I've had a lot of fun with Oasis: https://github.com/oasislinux/oasis

The full desktop image is 77MB.


I remember DSL - Damn Small Linux, which sat at something like 5-7MB and was pretty feature complete, including a nice package manager.


DSL was near-exactly 50MB (since that's the size of a "business card" CD-R). And yeah, somehow that thing managed to ship with multiple web browsers and a full office suite.

DSL was my first distro (really second, but it was the first I actually installed on a machine - specifically, the Compaq Presario 1210 my mom handed down to me), and I remember it fondly.


I think TinyCore is the same developer


One of the developers, yeah. Bit of a complicated and messy history: https://distrowatch.com/weekly-mobile.php?issue=20090323&mod...

In short: John Andrews created DSL. Robert Shingledecker started contributing patches and got brought in as a developer; he created (among other things) the MyDSL extension system and a bunch of other goodies. DSL started falling apart organizationally, and Andrews "exiled" Shingledecker (in the latter's words) - so Shingledecker carried on with developing Tiny Core Linux instead.


Oh wow, for comparison you can fit about a third of the "typescript" NPM package in that much space.


Be warned, though: the image does not boot on pure UEFI systems without CSM, like recent laptops.


Yeah, it's unfortunate that it does not support UEFI out of the box.


Is it 22 MB because it's 2022? In 2011, the TinyCore ISO was 11 MB.


22MB for the OS. 22GB once you have a few browser tabs open.

I remember when a normal Linux install clocked in at a few hundred megabytes, and running 1080ish fullscreen video was pushing the envelope. Browsers were lightweights then. Now any old junker can play movies but modern web bloat... oh so painful.


It's an interesting exercise to try to strip down Ubuntu or Fedora. You quickly find out that your WM probably relies on X11, which relies on Mesa, which relies on both libgcc and libllvm (because you might want to compile OpenGL shaders, duh!)... yeah there's two gigs right there.
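
On a Debian-family system you can watch that chain unfold (a sketch; the libllvm package name carries a version suffix that changes between releases):

    apt-cache depends libgl1-mesa-dri | grep -i llvm   # Mesa's DRI drivers pull libllvm
    apt-cache rdepends libllvm15 | head                # and see what else drags it in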


Our PDP 11/34 UNIX ran on 256 KB core and 6MB disk. With room to run apps.


Now imagine this running on your phone, with a week or more of battery life.


Related:

Tiny Core Linux 13 Released: Needs Just 46MB of RAM, 50MB of Disk - https://news.ycombinator.com/item?id=30249581 - Feb 2022 (26 comments)

Tiny Core Linux 13.0 released for older or lower-end x86 hardware - https://news.ycombinator.com/item?id=30183435 - Feb 2022 (2 comments)

Tinycore Linux - https://news.ycombinator.com/item?id=25158736 - Nov 2020 (81 comments)

Tiny Core v9.0 Released - https://news.ycombinator.com/item?id=16483880 - Feb 2018 (4 comments)

Tiny Core Linux - https://news.ycombinator.com/item?id=16366807 - Feb 2018 (98 comments)

Creating purpose-built TinyCoreLinux Images - https://news.ycombinator.com/item?id=10525377 - Nov 2015 (32 comments)

Tiny Core Linux - https://news.ycombinator.com/item?id=10308606 - Oct 2015 (10 comments)

Tiny Core Linux 4.7 overhauls the OnDemand system - https://news.ycombinator.com/item?id=4739459 - Nov 2012 (5 comments)

Tiny Core offers a complete Linux solution in 11MB - https://news.ycombinator.com/item?id=1769624 - Oct 2010 (25 comments)

Tiny Core: The Little Distro That Could - https://news.ycombinator.com/item?id=745439 - Aug 2009 (3 comments)


Is Tiny Core Linux open source? I would like to see their package selection configuration. Small distributions are an ideal case study for grokking the internals.


The short answer is 'yes?', although finding the answer was a lot more difficult than I thought, given the many broken links on the TCL site. As usual, Wikipedia provides the following: "Tiny Core Linux is free and open-source software licensed under the GNU General Public License version 2.[4]" - although the [4] link to the FAQ doesn't provide that detail, hence the 'yes?' reply.


> Is tiny-core linux open source?

It's GPL, so... Yes. Their git repositories are currently here. [0]

> I would like to see their package selection configuration. Small distributions are a ideal case study for grokking the internals.

The "TCZ" packaging system works by mounting applications via squashfs images, that then act as overlays. [1]

[0] https://github.com/tinycorelinux

[1] http://tinycorelinux.net/arch_copymode.html
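
Roughly what tce-load does under the hood, per that second link (a sketch; the package name and mount point are illustrative):

    # loop-mount the extension's squashfs image read-only...
    mkdir -p /tmp/tcloop/nano
    sudo mount -t squashfs -o loop,ro nano.tcz /tmp/tcloop/nano
    # ...its contents are then symlinked into the live system under /usr/local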


It's huge! My first Linux distribution fit on four floppies and included Emacs, GCC, and X11.

a.out was a terrible binary format, though.


Is it still using X? What WM is that?


It uses FLWM [0]. Looks like it also still uses X according to that page as well.

[0]: http://tinycorelinux.net/downloads.html


Thanks to whoever keeps working on this. My serotonin levels are increasing just by reading.


In 1997 I only had 12MB of RAM and was pretty happy


I had 64MB and was very happy. I still have the machine, but it now has a whopping 80MB and I'm certainly trying this distro.


Most people were happy with a 640x480 60Hz interlaced CRT display, too.


I distinctly remember checking the size of the C:\Windows folder on Windows 95 and it being around 50MB.


Tiny Core and Micro Core were real saviors for me back in my DC ops days. Insanely useful back when someone needed a quick Linux box with serial capabilities. Probably less useful nowadays with things like console KVM being everywhere.


I'm honestly surprised that someone on Ladyada's staff would link to an article by the highly transphobic and racist Bryan Lunduke. As recently as March of this year he was deliberately deadnaming and misgendering a prominent ElementaryOS developer.

https://www.osnews.com/story/134655/elementary-os-is-implodi...


I still remember when all we cared about were the technical contributions that people made. Now it seems like the ideologists from one extreme side of the political spectrum have infiltrated the technical communities and are destroying them in the same way as they are this country.

Stop with the culture wars identity politics shit: it's what's tearing this country apart.

Of all the days in the year to have to write a comment like this on...!


> I still remember when all we cared about were the technical contributions that people made.

Well yeah, but then Hans Reiser killed his wife and ruined it for everyone.


That actually is a good example of what has changed because AFAIK there was no "cancel Reiser" movement or anything of that sort when it happened, just a lot of surprise and some long-running jokes ("MurderFS", https://en.wikipedia.org/w/index.php?title=Comparison_of_fil... ).


You're not wrong.

Unrelated but also not: That's why the left introduced the "Code of Conduct" everywhere, to enable useful idiots like that one.


This is not relevant to the topic at all.


It's not, but people should be aware of where they get their news sources.


If this is his twitter, it looks like he's changed:

https://twitter.com/BryanLunduke


That looks like a parody account. This is the real one: https://twitter.com/TheLunduke


that's actually really funny.


At this point I can't really tell which is the parody.


I've seen that and I believe he felt he had to do so to maintain his readership after being called out for his bigotry so often of late. I for one don't believe he's changed at all, but I don't personally know the guy so I can't say for sure he hasn't. Anyway, I'll still refuse to read his articles just to be sure I'm not supporting a bigot with page views.


As mentioned above that's apparently a parody account that fooled me. His real Twitter shows he's still the raging bigot he always was.



