I built many such oscillators the same way, but I used a BC107 NPN: collector connected to ground, emitter to the resistor/capacitor network, base left floating. Add an LED in series with a transistor connected this way and you have an LED blinker.
I always assumed ultra-processed means that the food is loaded with preservatives like phosphates or BHT. I guess that's part of it, but maybe the efficiency of digestion should be considered too. I remember Ben Krasnow (Applied Science) measured the calories left in poop: humans are not very efficient at extracting all the calories, which very likely means a large efficiency variance between foods. Extending this further, the calories lost during preparation should be accounted for as well...
So how about: calories * digestion-efficiency - calories you personally need to expend to prepare or acquire it. The higher this number, the more processed the food is. So cane sugar is very bad, unless you personally harvested it.
Bad news for highly paid programmers: basically all their food should be considered ultra-processed, since no physical labor was needed to acquire it.
A better example is astronauts: their diet on the job is 100% ultra-processed food, they perform at a high level, and they have limited access to normal physical activity. But they're hard to study, because radiation and gravity differ so much from Earth that the categorization of food might not be an influential factor at all.
Ironically Bill Gates was big into UNIX, see his Xenix interview, and had they not gotten lucky with the whole MS-DOS deal, maybe they would have kept Xenix and who knows how that would have turned out.
Xenix was also my introduction to UNIX.
However, given our school's resources, there was a single PC tower running it; we had to prepare our examples in MS-DOS using Turbo C 2.0 and API mocks, and take 15-minute turns at the Xenix PC.
> had they not gotten lucky with the whole MS-DOS deal, maybe they would have kept Xenix and who knows how that would have turned out.
Oh, absolutely, yes. It's one of the clearly visible historical inflection points.
My favourites...
• MS wanted to go with Xenix but DOS proved a hit so it changed course.
• DR had multitasking Concurrent DOS on the 80286 in 1985, but Intel's final released chip removed the feature CDOS needed, so it pivoted to FlexOS and RTOSes, leaving the way open to MS and OS/2 and Windows.
• MS wanted OS/2 1.x to be 386-specific but IBM said no. As a result, OS/2 1.x was crippled by being a 286 OS, it flopped, and IBM lost the x86 market.
• Quarterdeck nearly had DESQview/X out before Windows 3: a TCP/IP-enabled, X11-based multitasking DOS extender that bridged DOS to Unix and open systems... but it was delayed, and when it finally appeared it was too late.
• GNU discussed and evaluated adopting the BSD kernel for the GNU OS, but decided to go with Mach. Had it gone for the BSD kernel, there would have been a complete working FOSS Unix for the 386 at the end of the 1980s, Linux would never have happened, and Windows 3 might not have been such a hit that it led to NT.
I got a whole series of articles out of this, titled in honour of Douglas Adams's fake trilogy about God...
Never saw one of those. Tandy computers did exist in the UK, and even here on the Isle of Man there was a single Tandy store. (They weren't called "Radio Shack" here.) But while they sold lots of spares and components and toys, they didn't sell that many computers.
> I had kind of the reverse feeling: when the 486 came out, I knew those expensive SPARC and MIPS workstations were all doomed.
Well, yes. Flipside of the same coin.
Expensive RISC computers were doomed. Arm computers weren't expensive back then: they were considerably cheaper than PCs of the same spec. So for a while, they thrived, then when they couldn't compete on performance they moved into markets where they could compete on power consumption... which they then ruled for 30 years.
This worked in DOS, but was easily ported to Linux.
As for DPMI: I used the CWSDPMI client fairly recently because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when Windows is detected).
The lack of (easy) recursion in CPP is so frustrating because it was always available in assembly languages, even with very old and very simple macro assemblers, with the caveat that recursion depth was often very limited and there was no tail-call elimination. For example, if you need to fill memory:
; Fill memory with backward sequence
macro fill n
    word n
    if n != 0
        fill n - 1
    endif
endm
So "fill 3" expands to:
word 3
word 2
word 1
word 0
There is no way this was not known about when C was created. They must have been burned by recursive macro abuse and banned it (perhaps from m4 experience as others have said).
The other assembly language feature that I missed is the ability to switch sections. This is useful for building tables in a distributed fashion. Luckily you can do it with gcc.
A thing I learned only recently is that the write-protect switch on an SD card is not an electrical switch connected to anything in the card itself: it just moves a lever in the SD socket that opens a contact, and it's up to the system (hardware and software both) to bother to look at it. So on many systems the write-protect switch doesn't even work.
I've been getting tremendous use out of PicoRV32: it works, it's tiny, and for many use cases ("management plane") you just don't need much speed. I work around its slowness by providing things like relatively large communication buffers in the FPGA. I use it in execute-in-place mode from external SPI flash (the FPGA's config flash), but with an instruction cache. It can do floating point via emulation, which is handy for printf.
I've been meaning to update my toolbox to at least a pipelined processor of some sort (to raise the IPC to at least 1), but so far I've had no strong need. For applications that really need CPU power, I use SoC FPGAs like the Zynq.