I know that oil and tungsten are materially different from fish. Marginal costs of digging things out of the ground do in fact tend to increase gradually with scarcity, as do incentives to devote resources to basic research into recycling, consumption reduction, and alternatives.
But teaching undergrads about tungsten and not fish is just propaganda.
Edit: To be clear, the point is not that my professor was idiotically wrong about oil and tungsten specifically. The "idiotic" part is to assume and furthermore imply to young students (who don't know any better) that the general principle holds equally well in all domains. Maybe "arrogant" would be a better word than "idiotic".
Oil and gas are different from metals, though, because some deposits spurt out of the ground, while others require heroic drilling, pumping, and refining efforts that take more energy to realize. This is the concept of energy returned on energy invested (EROEI). When it takes more energy to get the oil out of the ground than is in the oil, it might be better to use the energy for something else and leave the oil in the ground.
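If it helps, the back-of-the-envelope version (my numbers are purely illustrative):

    \text{EROEI} = \frac{E_{\text{returned}}}{E_{\text{invested}}},
    \qquad \text{e.g.}\ \frac{6\ \text{GJ in a barrel}}{2\ \text{GJ to find, drill, pump, and refine}} \approx 3

A barrel of crude holds roughly 6 GJ, so once getting a marginal barrel out costs more than about 6 GJ of energy, EROEI drops below 1 and the well is an energy sink regardless of the dollar price.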
I was directly involved in graphics during this time, as a tech lead at an Activision studio during the Direct3D vs. OpenGL battles, and it's not that simple.
OpenGL was the nicer API to use, on all platforms, because it hid all the nasty business of graphics buffer and context management, but in those days it was also targeted much more at CAD/CAM and other professional use. The games industry wasn't really a factor in road maps and features. Since OpenGL did so much in the driver for you, you were dependent on driver support for all kinds of use cases. Different hardware had different capabilities, and GL's extension system was used to discover what was available, but it wasn't uncommon to have to write radically different code paths for some rendering features based on the capabilities present. These capabilities could change across driver versions, so your game could break when the user updated their drivers. The main issue here was quite sloppy support from driver vendors.
DirectX was disgusting to work with. All the buffer management that OpenGL hid was now your responsibility, as was resource management for textures and vertex arrays. In DirectX, if some other 3D app was running at the same time, your textures and vertex buffers could be lost out from under you, and you'd have to reconstruct everything; OpenGL did that automatically behind the scenes. This is just one example. What DirectX did have, though, was some form of certification, e.g. "DirectX 9", which guaranteed some level of features, so if you wrote your code to a DX spec, it was likely to work on lots of computers, because Microsoft did some thorough verification of drivers and pushed manufacturers to do better. Windows was the most popular home OS, and MacOS was insignificant. OpenGL ruled on IRIX, SunOS/Solaris, HP/UX, etc., basically along the home/industry split, and that's where engineering effort went.
So, we game developers targeted the best supported API on Windows, and that was DX, despite having to hold your nose to use it. It didn't hurt that Microsoft provided great compilers and debuggers, and when the XBox came out, using the same toolchain, that finally clinched DX's complete victory, because you could debug console apps the same way you did desktop apps, making the dev cycle so much easier. The PS1/PS2 and GameCube were really annoying to work with from an API standpoint.
Microsoft did kill OpenGL, but mainly by providing a better alternative. They also sabotaged OpenGL directly, by limiting the DLLs shipped with Windows to OpenGL 1.2, so you ended up having to work around this by poking into your driver vendor's OpenGL DLL and looking up symbols by name before you could use them. Anticompetitive as that technically was, they did provide better tools.
I was also involved in game graphics at that time (preceding DirectX) and don't quite remember it the way you do. Debugging graphics on PC was a pain: you had to use either WinDbg's remote mode (which was a pain to set up to get source symbols) or SoftICE and an MDA monitor. That's just for the regular CPU debugger, because of the fullscreen mode. There was no graphics debugger until DX9. Meanwhile, all consoles could be debugged from the same dev machine, and starting from the PS2 we had graphics debuggers and profilers. Even the OG XBox had PIX, which introduced pixel debugging (though it was a bit of a pain to set up and needed the game to submit each frame twice).
HW OpenGL was not available on the consumer machines (Win95, Win2K) at all. GLQuake used a so-called "mini-driver", which was just a wrapper around a few Glide APIs, and was a way to circumvent id's contract with Rendition, which forbade them from using any proprietary APIs other than Verite (the first HW-accelerated game they released had been VQuake). By the time full consumer HW OpenGL drivers became available, circa the OpenGL 2.0 era, DirectX 9 already reigned supreme. You can tell by the number of OpenGL games released after 2004 (mobile games did not use OpenGL but OpenGL ES, which is a different API).
You must have worked on this earlier than me. I started with DX7 on Windows; before that I worked purely in OpenGL on workstations doing high-end visual simulation. Yes, in the DX7 days we used printf debugging, and for full-screen-only work you dumped to a text file or, as you say, MDA if necessary for interactive debugging, though we avoided that. DX9's visual debugger was great.
I don't remember console development fondly. This was 25 years ago, so memory is hazy, but the GameCube compiler and toolchain were awful to work with, while the PS2 TOOL compile/test cycle was extremely slow and the APIs were hard to work with, though that was more hardware craziness than anything. XBox was the easiest when it came out. Dreamcast was on the way out, but I remember really enjoying being clever with the various SH4 math instructions. Anyhow, I think we're both right, just about different times. In the DX7 days, NVIDIA and ATI were shipping OpenGL libraries which were usable, but yes, by then DX was the 800 lb gorilla on Windows. The only reason OpenGL worked at all was professional applications and big companies pushing against Microsoft's restrictions.
I don't recall any slowness in PS2 development. I dreaded touching anything graphical on PC, though, as the graphics bugs tended to BSOD the whole machine, and rebooting 20+ times a day was not speeding anything up (the Windows machines of the day took their sweet time to boot, not to mention restarting all the tools you needed and recovering your workspace), lol.
This is very neat, but you are delving into a very complex world, as you are well aware. In your video, you have generated static, server-side pages without any JS, where your annotated HTML uses the embedded Go to generate static HTML.
This is much nicer syntactically than using the Go html/template engine, but it seems roughly equivalent in expressive power. Out of curiosity, are you converting your "up" syntax into Go templates, with the Go expressions extracted into compiled Go code referenced by the templates? If so, the way you've transparently handled interleaving (such as HTML elements in the for loop) is really cool.
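For reference, the plain html/template version of that kind of interleaving is something like the sketch below; the data and field names are invented, I'm just showing where the Go lives relative to the markup.

    package main

    import (
        "html/template"
        "os"
    )

    // Item is made-up data just for this illustration.
    type Item struct {
        Name  string
        Price float64
    }

    // The range block interleaves HTML with iteration, but the Go
    // expressions live in compiled code and the template only sees data.
    const page = `<ul>{{range .}}<li>{{.Name}}: {{printf "%.2f" .Price}}</li>{{end}}</ul>`

    func main() {
        t := template.Must(template.New("page").Parse(page))
        items := []Item{{"apples", 1.50}, {"pears", 2.25}}
        if err := t.Execute(os.Stdout, items); err != nil {
            panic(err)
        }
    }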
How would your Go scripting interact with JS? For example, say I have a backend API that the frontend calls into. In your design, would I call out to Go to perform the HTTP request, or would I do this in JS? I'm sure both would work, since the Go request would be handled server side and would simply slow down static page generation, but it seems like calling into Go might not be the right thing to do for a more responsive AJAX app. Do you envision mixing Up/JS in the same pages? Can I do crazy stuff like use JS to insert a variable (by value, probably) into your Go code, or vice versa?
Over the years, I've learned that web front ends are positively bonkers in the kinds of things they want to do, and when you are the developer of any kind of framework, you end up suffering greatly if you insert yourself into the middle of that ecosystem, since you will be asked to support features you never dreamed of. If you support them, the framework gets more use; if you don't, the cool kids move on to other things.
I've tried to tackle a much simpler problem with a project of my own [1], a backend server code generator that makes it easier to implement JSON models from an OpenAPI spec, and I've found that even in this relatively simple concept, a strictly, statically typed language like Go runs into incredible complexity due to the dynamic nature of web frontends and JSON. Outside of trivial strings and numbers, you start running into the limits of the Go type system.
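A concrete (made-up) example of where it strains: a field that can be 0, null, or absent, or a property whose shape varies per item, pushes you from plain structs into pointers and json.RawMessage, and the schema's intent starts leaking out of the type system.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Widget is a hypothetical OpenAPI-ish model, just to show the strain.
    type Widget struct {
        Name string `json:"name"`
        // Optional + nullable: a plain int can't distinguish 0, null, and absent,
        // so you reach for a pointer, and you still can't tell null from absent.
        Count *int `json:"count,omitempty"`
        // Free-form metadata: the static types give up and you carry raw bytes.
        Metadata json.RawMessage `json:"metadata,omitempty"`
    }

    func main() {
        data := []byte(`{"name":"gizmo","count":null,"metadata":{"tags":["a","b"]}}`)
        var w Widget
        if err := json.Unmarshal(data, &w); err != nil {
            panic(err)
        }
        fmt.Println(w.Name, w.Count == nil, string(w.Metadata))
    }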
Anyhow, good luck, this is very cool and it seems like a fun thing to play with.
This really brings back memories of my childhood. I grew up on a small family farm, and all of our butter came from the cream of our own cows. Not only are there huge differences between kinds of butter, there are also huge seasonal differences in the milk the cows produce, and hence in the butter. When dandelions are in bloom, it's going to be yellower and more fragrant, and definitely more sour when the sorrel is sprouting. It was always a surprise what you'd get, even when using the same process.
This is one chemical that is a major part of the scent and flavor, but there are countless others. As an analogy: citric acid is the main "sour" flavor in lemons, but it's not very representative of a lemon, is it?
I agree with you, but citric acid is a bit different, as it's not a volatile aroma compound in a lemon; it's only sour, and you taste it. I couldn't find a specific flavor compound responsible for lemon flavor in a quick Google search, but limonene appears to be responsible for the aroma of oranges...
Alternatives are always good, as is competition. Go applies competitive pressure to others, just as they do to Go.
I really enjoy using Go as my main server backend programming language. It's verbose, but simple, and that simplicity makes it easy to maintain. The CSP model of concurrency/parallelism works really well for writing multi-threaded applications.
In Go, I can pull from a work queue, fan out for parallel processing across multiple threads, and fan in to a smaller pool for some post-processing, really easily. In the HTTP engine, I can service each request with its own goroutine, which lets me write simple, synchronous, linear code, versus breaking up the flow with promises, futures, or callbacks in other languages.
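Concretely, the fan-out/fan-in shape I mean is just channels plus a WaitGroup; a minimal sketch with made-up stage names:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := make(chan int)    // the work queue
        results := make(chan int) // the fan-in point

        // Fan out: a pool of workers all pulling from the same queue.
        var wg sync.WaitGroup
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    results <- j * j // stand-in for real processing
                }
            }()
        }

        // Close the fan-in channel once every worker has drained the queue.
        go func() {
            wg.Wait()
            close(results)
        }()

        // Feed the queue.
        go func() {
            for i := 1; i <= 10; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        // Fan in: a single consumer does the post-processing.
        for r := range results {
            fmt.Println(r)
        }
    }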
To me, subjectively, it's the best backend language when I'm writing something from scratch and don't need to tie into a pre-existing ecosystem. Go exists because there are lots of people like me.
I'm one of those people. I've programmed in a lot of languages, but it wasn't until I was introduced to Go that programming actually became fun rather than more or less a job. Everything about the language just immediately clicked for me, and suddenly I wasn't thinking about the language but rather what I could build (and I felt like I could build almost anything at that point).
Go is about efficient compilation, efficient execution, and ease of programming.
When you are 46 and look back at 36, you realize that you still didn't have the world figured out, and it was so cute how you believed that everything was in your own hands. Now I'm curious what I'll think at 56 in a few years.
These things are quite sensitive, and their heuristics are sometimes wrong. I've been driving at the track for quite a few years, and now and then you see a car deploy airbags for no good reason when cornering really hard, hitting a berm, or cresting a hill. OnStar calling 911 is pretty common, because it decides to do that based on driving style, and on the track you're pretty near the limit.
I've been programming professionally since about 1994, using C++, Java, Scheme, Python, Go, JavaScript and friends.
Today, tools are incredibly better: compilers, debuggers, profilers. I'll take something from JetBrains or Visual Studio any day over what I had available in the 1990s. There were some gems back then, but today, tools are uniformly good.
What has gotten difficult is the complexity of the systems we build. Say I'm building a simple web app in JS with a Go backend and I want users to have some kind of authentication. I have to deal with something like OAuth2, therefore CORS, and auth flows, and to iterate on it I have some code open in GoLand, some code open in VS Code and my browser, and something like Auth0 or Cognito in a third window. It's all nasty.
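To be fair, the CORS part on the Go side is the smallest piece of that pain once you know which headers the browser wants; something like the sketch below (the origin and header lists are placeholders, and in practice you'd probably use a vetted middleware package):

    package main

    import (
        "log"
        "net/http"
    )

    // cors answers preflight requests and tags responses so the browser
    // will let the frontend read them. The allowed origin is a placeholder;
    // don't use "*" if you're sending credentials.
    func cors(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Access-Control-Allow-Origin", "https://app.example.com")
            w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
            w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
            if r.Method == http.MethodOptions {
                w.WriteHeader(http.StatusNoContent) // preflight: headers alone are the answer
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/api/hello", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(`{"msg":"hello"}`))
        })
        log.Fatal(http.ListenAndServe(":8080", cors(mux)))
    }

The OAuth2 flows and the three-window juggling are the part that stays nasty.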
If I'm writing a desktop application, I have to deal with code signing; I can't just give it to a friend to try. It's doubly annoying if it's for a cell phone. If I need to touch 3D hardware, I now have to deal with different APIs on different platforms.
It's all tedious, and it's an awful lot of work to get to a decent starting point. All these example apps in the wild are always missing a lot of the hard, tedious stuff.
Edit:
All that being said, today I can spin up a scalable server on some VMs in the cloud and have something available to users in a week. In the 1990s, if there was a server component, I'd be looking for colo facilities, installing racks, setting up software, and provisioning network connections, and it would take me ten times as long to get to the first prototype. I'd have to write more from scratch myself. Much as some things today are more tedious, on net I'm more productive, but part of that is more than 25 years of experience.
> If I need to touch 3D hardware, I now have to deal with different APIs on different platforms.
I've been programming since the late 70s. Graphics are in many ways WAY easier than they were in the 70s, 80s, and 90s. Sure, there's Metal, DirectX, Vulkan, and OpenGL, but you can still use OpenGL on pretty much all platforms, or use something like ANGLE.
Back in the 80s and 90s, you didn't even have APIs; you just manipulated the hardware directly, and every piece of hardware was entirely different. Apple II was different than Atari 800, which was different than C64, which was different than Amiga, which was different than CGA, and different than EGA, and different than Tandy, and different than VGA, and different than MCGA. NES was different than Sega Master System, which was different than SNES, which was different than Genesis, which was different than 3DO, which was different than Saturn, which was different than PS1, etc... It's only around 2005-2010 that it all kind of settled down into various APIs that execute shaders, and everything became mostly the same. The data and shaders, at an algorithmic level, that you're using for your PC game are the same or close to it on Xbox 360, PS4, PS5, Xbox One, Mac, Linux. Whereas all those previous systems were so different you had to redo a ton more stuff.
On top of which there are now various libraries that will hide most of the differences from you. ANGLE, Dawn, wgpu, WebGL, Canvas, Skia, Unreal, Unity, etc...
It is easier in some ways, agreed. I kinda miss the olden days of doing page flipping in DOS through EMS :) You're right, it was the wild wild west up until D3D and OGL standardized everything, and a big shift happened in the move from fixed-function to programmable pipelines. I love this kind of stuff!
I can still program mode 13h from memory, and I can only imagine the cool, arcane witchcraft you've acquired, having started so much earlier than I did.
To me, at least, OpenGL was my favorite; I mourn its death, but Metal, Vulkan, and DX are close enough. What drives me up a wall is 3D in the browser, since the dispatch cost from JS land is a profound difficulty.
I have zero problems with 3D in the browser, and I love it way more than native. I can edit/refresh way faster than native, it works across all platforms (Linux/Windows/MacOS/iOS/Android), and I can share things with a link; I don't have to build/sign/notarize/distribute for 5 different platforms.
I feel like SGI's IRIS GL (OpenGL's predecessor) was a really sweet spot for ease of touching 3D hardware. It didn't have the features we have today, but it was really easy. Very similar to the easy graphics APIs in Processing or NodeBox (and their ancestors and descendants).
OpenGL literally makes you write a program in a weird language to draw a circle, and communicate with it using byte arrays. Turbo Pascal had nice libraries that took care of that for you on various graphics boards.
The OP's comment was about 3D. Today, browsers have the Canvas API, which makes it trivial to draw on pretty much any platform. Way easier than it was in the past.
Thank you for the eloquent comment. I showed up wanting to post something similar, but you did it better than I ever could.
Can I ask maybe a personal question? Do you worry about becoming obsolete? I'm at the 16-year mark, and I was wondering when's the appropriate time to panic. You seem to have made it about a decade longer than I have, so I figured I'd ask for some tips.
The hardest thing about programming now vs then seems to be staying relevant.
If you ever stop learning and feel your job is tedious, move on. You are inevitably an expert at something after 16 years, and if that's a relevant technology, you're set. If it's not relevant, find something else. Your engineering experience, even if not too relevant, is valuable because it's also a process, not just an outcome, and experienced people can apply the process of engineering to anything.
25+ years in, though, while I'm still an engineer on the org chart, I'm in charge of mentoring a lot of younger engineers, and my job has turned more into keeping them from making big mistakes and helping them grow as engineers, versus producing code myself. Through them, I can get much more done in aggregate than if I sat down and did it myself, even though I'm probably faster than any engineer I mentor.
In my experience, startups are a great place to learn via trial by fire, while big companies are good places to earn some big bucks in between cool startup jobs.
Seconding this - in the last, ummm... almost 30 years I've been in the industry, the whole industry has only expanded, and you gain value as you gain skills. If you feel like you're not learning, not enjoying it, or no longer able to find valuable roles as, e.g., a programmer, branch into infra or security or data tech or any of a bunch of different places that have grown - your expertise in any field inside IT will make you more valuable in others.
As an example, I've recently shifted from manager back to hands-on tech, and then from platform work (building tools for devs) to security - and knowing the engineering space makes me more valuable in the security space. I'll do this for a few years, then look for the next interesting jump. Nothing I've learnt is ever wasted - and I started when everything had to be patched to work on linux and sendmail.cf files were state of the art ;)
Also, flipping between the big three types of workplace - startup, enterprise, and consulting - adds to your understanding of the world and overall value.
When I started out, old engineers weren't a thing - it used to be accepted wisdom that this was a young person's field. That's not true any more, and I doubt it ever will be again - just keep learning, pace yourself (marathon, not sprint), enjoy yourself, and keep expanding your awareness/knowledge.
Oh man, the emotion that comes up for me from PS2 is "anger", since that's what the emotion engine produced in me.
The PS2 had lots of really tricky quirks to deal with. First, VU0 (IIRC, could be the other one) had really limited means of transferring data, and you'd have to stall the CPU to do it, making it very difficult to get any real use out of it. VU1's access to gfx was handy, though.
Next up, the framebuffer operations in the GS (Graphics Synthesizer) were limited to a single function with four terms. The most common blending function you'd want to use (src * src_alpha) + dst * (1 - src_alpha) was simply not possible. In real terms, this means you could not blend things into the framebuffer based on alpha transparency, so you had to do lots of tricks. Ever noticed how shadow casters in PS2 games were always casting pitch black shadows? It's because you could squash to black, but couldn't modulate existing color.
The "scratch pad" was effectively L2 cache that you controlled yourself, and it was critical to manage it properly to get any performance out of the CPU. This was quite tricky.
This era of console games was very interesting. The PS2 was frustrating, while the GameCube was a bit easier, but frustrating in its own way. Matrix math on the GameCube was limited to 4x3, not the normal 4x4; the last row was fixed to the unit W vector. This means you could not project - ever notice how shadows on GameCube games are little smooth ovals? It's because without the ability to project, you could not render a model as a shadow. The GameCube was also weird in that it had little main memory but a whole lot of audio memory, so you'd write a paging engine which used audio memory as general-purpose memory by swapping stuff in and out of it.
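To spell out the constraint in my own notation (a simplification): with the bottom row pinned to the unit W vector, every transform is affine, so w never changes and there's nothing for a perspective divide to work with.

    \begin{pmatrix} x' \\ y' \\ z' \\ w' \end{pmatrix} =
    \begin{pmatrix}
      m_{11} & m_{12} & m_{13} & m_{14} \\
      m_{21} & m_{22} & m_{23} & m_{24} \\
      m_{31} & m_{32} & m_{33} & m_{34} \\
      0      & 0      & 0      & 1
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
    \quad\Rightarrow\quad w' = 1

A perspective projection needs a bottom row more like (0, 0, 1, 0), so that w' = z and the divide by w' shrinks distant points; that's exactly the row you weren't allowed to touch.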
Then came the XBox. It had Visual Studio, and you wrote code as you would for a PC. You could even debug in a debugger instead of with printouts! It was glorious.
> This means you could not project - ever notice how shadows on GameCube games are little smooth ovals? It's because without the ability to project, you could not render a model as a shadow.
I remember Star Fox: Assault having shadows that match the character's motion. I've linked a clip below:
I was over-simplifying. Where there's a will, there is a way. If you have a matrix with a 0 for the Z term in each column, you will squash the model onto the XY plane, as it is here. What you cannot do with the last row fixed to the unit W vector is perspective transformations, that sort of thing. Notice in that Star Fox game that the shadow isn't changing in size as the character jumps up and down relative to the light. Also, where is the light? It looks like shadows are cast by some virtual directional light in the middle of the level. You could not do this for a point light.
You can totally fake this by shearing the model sideways and squashing one coordinate to zero. You render it twice, once as a shadow, once the normal way. Sometimes you even build a separate, much simpler shadow model. There's a lot of special-case trickery that goes on in games. I was thinking of the little round circles since they're really cheap to compute, but yes, you could do shadows in limited cases.
What the 4th row being the unit W vector really prevents is projections. What Star Fox shows are non-projective shadows. Anyhow, this is graphics nerdery that is no longer relevant, and people faked it well enough.
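For anyone following along, the shear-and-squash trick for a fixed light direction d is still affine, so it fits under that constraint (a rough sketch; signs depend on your conventions):

    S =
    \begin{pmatrix}
      1 & 0 & -d_x/d_z & 0 \\
      0 & 1 & -d_y/d_z & 0 \\
      0 & 0 & 0        & 0 \\
      0 & 0 & 0        & 1
    \end{pmatrix}

It shears each vertex along the light direction and drops it onto the z = 0 plane, with the bottom row untouched. A planar shadow from a point light, by contrast, needs the receiving plane's coefficients in that bottom row, which is exactly what the hardware wouldn't let you set.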
:) IDK - I thought it was fun figuring that stuff out. The modern consoles still have their challenges and the mysteries are still there if you want to get very low level.
Read up on the Simon/Ehrlich wager. Nobody knows the future, but your professor was demonstrating a solid economic principle.