Hacker News

I don't know about the present state, but certainly 10 years ago you would have had to be out of your mind to use anything but a Mac in an audio setup used for important live performances - the audio and MIDI stacks on Windows were a mess compared to those in OS X.


As a former sound engineer I can say for a fact that this is not true. We stopped using Macs while I was still doing it, back around '98 when we moved for real to digital studios. Depending on what we were doing, the only thing the Macs gave us was bigger purchase and support bills. What Apple had going for them was history, so people didn't know as much about how to support a setup if it wasn't Macs. The non-Apple software was just fine, and the hardware, at the same price point, was way better.


I remember working with some Windows 7 setups and having to deal with stuff like WASAPI drivers and ASIO4All, having to daisy-chain MIDI through the out on synths (which introduced jitter) because there was only one system MIDI in and out, etc. Perhaps there was a way to get these things working smoothly, but on the Mac all this stuff just worked out of the box. My situations were probably edge cases in terms of how much gear was connected to a single computer, but still.

I think you're right that legacy, mindshare/knowledge etc played a big part in it, and I'm sure Windows is much better with this stuff now. Although the handful of studios I've been to in the last few years were still running Macs, but again maybe that's just mindshare.

A funny thing I realized recently while trying to set up OBS for video conferencing on my Mac at home (I occasionally work as a teacher): there is no way out of the box to capture system audio; you either need to do it through external hardware or use a hacky solution like Loopback. On Windows, this "just works".


1998 was before OS X was released and before Apple purchased Logic (via its developer, Emagic). That's when things changed for the better on the Mac, while on Windows things kind of remained the same audio-wise.


This era was kind of a low point for the Mac. The dying days of both PowerPC and MacOS 9, overpriced underperforming hardware, and the slow painful transition to OS X. Things started to improve after 2006 when Apple switched to Intel.


That's 22 years ago - a whole other era. Steve Jobs had just returned, so Macs weren't yet Intel and they weren't yet running OS X (which is built on the NeXT architecture).

In other words, you and the OP may both be right. There were three absolutely massive transitions at Apple between your era and theirs: hardware, software and management quality.


If you stopped using Macs before CoreAudio was invented, I don't really even know what to say, other than I'd have run away from running digital audio on an Apple II, too :)

Your bad luck was that you bailed out of the MacOS audio subsystems just when they started to get good.


"Audio and MIDI stacks on Windows were a mess compared to those in OS X" - what do you mean? That sounds like it's harder to write soundcard drivers for Windows, but the user is not going to be writing their own drivers.

Any user who needs to perform audio with a computer will need a soundcard to get any sort of decent quality: #1 decent-quality inputs and outputs, #2 the correct input and output types like XLR (or fibre optic or whatever), and #3 acceptable latency for live performance.

Soundcards are available for Mac or Windows, and the makers provide drivers which deal with the "audio and MIDI stacks". From a user's perspective they're both fine. After that, you care about the machine's performance: CPU, how much RAM, how fast the RAM is, how fast the SSD is.


It means that Windows/Microsoft doesn't provide a proper pro-audio foundation to driver writers or DAW authors.

MS tried a bunch of technologies in the past (WinMM, MCIWnd, DirectSound, WaveOut, WASAPI, XAudio2, etc.), but none of them ever worked for professional audio.

The proper solution was always to use ASIO, a third-party technology by Steinberg. It works, but it's not integrated with Windows: it talks directly to the audio interface, bypassing the kernel audio stack.

This bypass causes some limitations, such as not being able to use the system mixer (which prevents you from using media players or running multiple audio apps at the same time), or sometimes having different audio interfaces that don't work well with each other.

There are workarounds for those issues, but they have to be handled by the interface manufacturer when writing the drivers. Also, you can't get low latency with the built-in soundcard unless you use something like ASIO4ALL, which is not super stable IME. This sucks when you want to work on-the-go with headphones.

Of course, it can be very stable when you use the right combination of DAW, drivers and sound interface, but when you don't, you have problems.

On macOS and iOS? CoreAudio is native and has super low latency by default. It's mostly plug and play and all apps use it. It even provides APIs for audio plugins and audio routing, so DAW writers don't have to implement those themselves (unless they want to). Do you have multiple audio/MIDI interfaces? macOS has a built-in app (Audio MIDI Setup) to aggregate them.
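To make the latency talk above concrete, here's a rough back-of-the-envelope sketch (plain Python, no audio APIs; the buffer sizes are typical values I'm assuming, not numbers from this thread): one buffer of audio adds buffer_frames / sample_rate seconds of delay, which is why small ASIO/CoreAudio buffers matter for live performance.

```python
def buffer_latency_ms(buffer_frames: int, sample_rate: int) -> float:
    """Delay contributed by a single audio buffer, in milliseconds."""
    return buffer_frames / sample_rate * 1000

# A typical low-latency ASIO/CoreAudio setting: 64 frames at 44.1 kHz
print(round(buffer_latency_ms(64, 44100), 2))   # ~1.45 ms per buffer

# A typical consumer-stack period of ~10 ms: 441 frames at 44.1 kHz
print(round(buffer_latency_ms(441, 44100), 2))  # 10.0 ms per buffer
```

Real-world round-trip latency is higher (input buffer + output buffer + driver and converter overhead), but the ballpark shows why a ~1-2 ms buffer is playable live and a ~10 ms-per-buffer stack quickly isn't.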



