
I'm surprised he doesn't bring up the minor features that lead to huge compile-time issues, like operator overloading, imports being module-sized rather than file- or folder-wide, and type inference in some cases. Not to mention that the language doesn't actually scale that well with core count compared to many, many other programming languages. Throwing 64 cores / 128 threads at a C++ code base speeds up the compile step almost linearly; in Swift it does not, and it tends to effectively max out at around 8 to 16-ish build threads. Even a toy app with 3 views made with SwiftUI on an M1 MacBook is slow to build & run relative to its size!
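To illustrate the type-inference point (this snippet is my own, not from the article): every `+` in the expression below is overloaded across many numeric and string types, and the literals stay untyped until inference finishes, so the type checker has to search a combinatorial space of overload combinations. Expressions only somewhat larger than this are the classic trigger for Swift's "expression was too complex to be solved in reasonable time" error.

```swift
// Each literal could be Int, Double, Float, etc., and each `+` has many
// overloads; the checker must find the one consistent assignment.
// Here it settles on Double because of the `3.0` literal.
let total = 1 + 2 + 3.0 + 4 + 5

// Annotating intermediate types is the usual workaround, because it
// prunes the search space the checker has to explore.
let annotated: Double = 1 + 2 + 3.0 + 4 + 5
```

The common mitigation is exactly what defeats the convenience: writing the types out by hand.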

Honestly, you would have gotten 80–90% of the benefits of Swift by giving Obj-C a new Swift-like syntax and continuing to improve Obj-C as a language. A lot of the ugliness of Obj-C could have been translated away with very simple 1:1 syntactic-sugar macros. And you would actually have fast, responsive compile and indexing times, unlike Swift.



> Honestly, you would have gotten 80–90% of the benefits of Swift by giving Obj-C a new Swift-like syntax and continuing to improve Obj-C as a language. A lot of the ugliness of Obj-C could have been translated away with very simple 1:1 syntactic-sugar macros.

This is a common talking point I hear in the Objective-C community, but nobody has come up with a credible design or implementation of "incrementally evolve Objective-C and drop the C part" beyond stating that it's trivial, etc.



“Hypothetical” being the key word here, though. These proposals are all thin on details, and if it were so easy, someone would’ve made such a language by now.


No one is going to write a detailed estimate to paint your house if you've made it public knowledge you're not interested in your house being painted.


That's a bit bad faith, making a language is never easy and takes a ton of resources.

Swift is the bandwagon at Apple because Chris Lattner started the project and had the clout to push it, and I think it would be politically untenable to make an "ObjectiveSwift" going against that, especially now at Apple.

Nobody but Apple would make ObjectiveSwift anyway, because, for better or worse, ObjC & Swift are languages for the Apple platform and nothing else. If you don't need to make something for Apple, and you're going to do it without major tech-company sponsorship, then you have freedom, and you go make things like Rust, Nim, or Elm instead.

With all of Swift's problems, I would never recommend it for a server-side platform, and its adoption shows that reality.

Because of all of the above, you'll only see hypothetical proposals, no implementations.


> Nobody else but apple would make ObjectiveSwift too, because for better or worst ObjC & Swift are languages that are for the apple platform and nothing else.

What about all the work done to support Swift on Linux? I had thought that was mostly done by the community rather than by Apple itself, but I could be wrong.


It is as successful as the work done to support Objective-C and GNUStep.

It is interesting that it works, and some folks might even create some products that use it, but it won't ever take the world by storm.

Similarly, Mono was never all that relevant, which is why Miguel and others ended up creating Xamarin and focusing on mobile instead.

.NET nowadays has a good story on Linux, because it now matters to Microsoft to make it relevant, and yet most UNIX shops would rather go with Java, Rust, Go, and so on.


I agree that it isn't super commonly used, but I was responding to the assertion that no one would bother making an Objective-C replacement for non-Apple platforms. Clearly some people are interested in spending time extending Apple languages for non-Apple platform purposes, so the idea that the only thing stopping people from making a "better" Objective-C is lack of support for third party platforms seems kind of strange to me. If it really were not that difficult to make a better Objective-C and enough people were interested, it seems like it would have happened regardless of official support by Apple for third party platforms. Either the interest isn't there in the first place, or it's not nearly as easy a problem as suggested.




The main point of Swift was to create a memory-safe language Apple could use across its OSes. Trying to change Obj-C's syntax doesn't help with that goal.


Swift does not provide memory safety for concurrent code. It has comparable pitfalls to Go. (You can use Thread Sanitizer to try and diagnose these issues, but that's best-effort and runtime-only; it does not make your code totally safe.)


Concurrent memory safety is definitely a goal. Try the new ‘-warn-concurrency’ flag to see what I mean; it is comparable to Rust and quite different from Thread Sanitizer. There’s also a new runtime sanitizer this year with Swift intrinsics, not best-effort like TSan was.

That said, Swift is in the tough position of trying to be a lot of things at once to users with competing needs: applications, systems, performance, education, prototyping, etc. While there’s broad agreement that concurrency safety is important, not everybody thinks it is important enough to bury your working build under a thousand errors (though that view is represented).

Ultimately, Swift’s philosophy is that safety is practice, not theory. Some people turn on ‘-warn-concurrency’ and fix their errors; others would rather ignore them and find some escape hatch to squash them, which doesn’t appreciably improve safety; still others might not upgrade at all if that were required, and maybe the ecosystem as a whole becomes less safe for it. Swift feels responsible for these kinds of outcomes.

It’s a tough problem but it does lead to interesting ideas that make safety more practical and productive. Remains to be seen how much of both worlds you can have, but swift/clang/llvm have a long history of doing stuff like that better than you expect.
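(To make the concurrency-checking point concrete, here is a small sketch of my own, not from the comment above: under strict concurrency checking, an actor is the idiomatic way to make shared mutable state race-free, because all access to its state is serialized through the actor. The semaphore is just scaffolding to drive the async code from a synchronous `main`.)

```swift
import Dispatch

// An actor's stored state can only be touched via (implicitly serialized)
// actor-isolated methods, so the compiler can rule out data races on `value`.
actor Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

let counter = Counter()
let done = DispatchSemaphore(value: 0)
Task {
    _ = await counter.increment()
    let v = await counter.increment()
    assert(v == 2)   // both increments went through the actor, in order
    done.signal()
}
done.wait()
```

The same mutable counter as a plain class with no locking would be exactly the kind of code the flag warns about.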


That is true, but the great majority of memory safety issues are not concurrent ones. In practice Swift and Go are huge improvements over C, C++, and Objective-C in terms of memory safety.

I'd also say they are in a reasonable space in the state of the art of safety in general. While they do not make concurrency safe and Rust does, in Rust you need to use unsafe to write things like doubly-linked lists and graphs (or resort to things like indexes-in-an-array), which you can do safely in Swift and Go. So there are interesting tradeoffs all around - our industry has not found a perfect solution here yet.
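As a sketch of the doubly-linked-list point (my own illustration, under the assumption of ordinary ARC semantics): in Swift this is plain safe code, because classes are reference types and a `weak` back-pointer breaks the retain cycle, whereas safe Rust rejects the aliased mutable back-pointer.

```swift
// A node holds a strong reference forward and a weak reference backward,
// so the chain is kept alive by the list head without a retain cycle.
final class Node<T> {
    var value: T
    var next: Node<T>?
    weak var prev: Node<T>?
    init(_ value: T) { self.value = value }
}

final class DoublyLinkedList<T> {
    private(set) var head: Node<T>?
    private(set) var tail: Node<T>?

    func append(_ value: T) {
        let node = Node(value)
        if let last = tail {
            last.next = node
            node.prev = last
        } else {
            head = node
        }
        tail = node
    }

    // Walk the strong `next` chain to collect all values in order.
    var values: [T] {
        var out: [T] = []
        var current = head
        while let node = current {
            out.append(node.value)
            current = node.next
        }
        return out
    }
}
```

No `unsafe`, no arena of indices; the price is accepting reference counting rather than compile-time ownership.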


In my experience post-ARC almost all memory safety issues are thread-safety related.


Except ARC only works for Cocoa-like classes; everything else is traditional C-style memory management.


Yes, but most of the time you’re not using that in your Objective-C code.


If I had Go's build speed in Swift, I wouldn't have minded a clean break language like Swift.


It does now with actor based concurrency


What do you mean specifically about memory safety? A clean-break, new-syntax Objective-C could have dropped the C part of ObjC, added strict nullability, dropped header files, added non-ABI-breaking features like enum ADTs and a bunch of other tweaks, and still kept most of the same compiler code without all the downsides of Swift. Swift became a silver-bullet Homer car as far as languages go.


That would not have helped with use-after-free, though, which Swift does solve.


That's solved by ARC, not Swift.


It was really solved by reference counting, way back when Foundation was introduced. The ARC increment was tiny.


Tiny? It made it so you’d never have to deal with an overrelease ever again…


Yes, tiny.

Not sure what you're on about with the "overrelease" bit, but all ARC did was use additional compiler support to automate things that were already largely automated.

And it caused additional crashes in code that shouldn't even be able to crash:

https://blog.metaobject.com/2014/06/compiler-writers-gone-wi...


For Cocoa like classes, it did not solve anything else for the rest.


My stack-allocated integers don't need it.


Yeah, but those heap-allocated structs surely do.

And although there is now partial ARC support for that scenario, partial is the key word here, as the code needs to obey specific access patterns for it to actually work.


But why on earth would I heap-allocate structs in Objective-C? That's what I have objects for. If I am going to dumb things down to structs, the reason is that I don't want heap allocation.

Unless I have no clue whatsoever as to what I am doing.

As in, I just wrote pure C (no Objective- at all) and for some reason changed the extension of the file to .m

Anyway, not using the solution that is there is not the same as the solution not existing, it certainly doesn't qualify as "the rest".


You "mpweiher" might not do it, but I assure you plenty of enterprise programming cogs do.

They code mostly in a C like way, and only use Objective-C at the level it is required to call into Apple specific APIs.

When using C++, then their code looks like what I call C+.

The same set of people who are to blame for Objective-C's conservative GC never working in practice when mixing frameworks compiled with different modes.


Er...no.

I mean, sure, I believe you that you've seen this. But I've looked at quite a number of iOS/macOS projects and this was never a problem. Not even close. If anything, people were extremely hesitant if not actually afraid of using C features outside of the very basic ones like control structures, arithmetic and assignment.


Doesn't change the fact that ARC doesn't work with structs, and people actually do allocate them on the heap, which is why in 2018 Objective-C got some support for applying ARC to structs as well.

Maybe time to also review the WWDC session and the points made there?


Oh right, good point. Other types of memory safety are more relevant here then.


C++ pays an extraordinary price for the fully parallel builds, concealed in the "One Definition Rule".

The price is, if you violate the ODR the standard says your program is not valid C++ (and thus has no defined meaning) but there is no requirement for a diagnostic (ie a compile or link error) and the build might complete.

This has sometimes been described as "false positives for the question: is this a program?", and it is a pretty serious penalty to pay for the benefit of improved compile times.


Modules mitigate ODR violations to a large degree; violating the rule accidentally will become much harder. Compile times will also be better, at least incremental ones.


Could you have ODR and proper warnings / errors handling the issue when it pops up without much penalty?


No. The ODR allows the compile step to be embarrassingly parallel. Since all the separate definitions of X are, by fiat, identical, we needn't detect inconsistency at all, so there's no interaction between translation units; it's like merge sorting the files, in that you can spin up as many threads or processes as you like and process more source files.

If we decide "Oh, but I do want to detect inconsistencies, my users would want me to warn them about that" then we can't have the parallelization because we need to rendezvous constantly to verify the definitions are consistent.

There are a bunch of tricks that people do today to get some semblance of ODR checking for not too high a cost, and to avoid some ODR pitfalls that might defeat the checks they have. C++ programmers have accepted this danger, if you don't like it then C++ isn't the language for you.


This is a whole series of articles, and he does address type overloading in one of them, which is maybe what you're looking for.

https://belkadan.com/blog/2021/08/Swift-Regret-Type-based-Ov...



