
Really nice explanation. Having read almost all the papers in this endless "classes vs. prototypes" debate, you'd get the feeling that prototype-based object systems win in every aspect (expressiveness, flexibility, whatever), and yet almost all prototype-based languages in the end include crude class implementations (a Class as a single namespace to contain all related traits, a constructor method, and/or a prototypical instance for cloning). Interesting - why so? Is this model ingrained in our brains by our education, or do we really think that way (i.e., in sets of related entities)?
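For illustration, here's a minimal sketch (in Python, with all names hypothetical) of how that "crude class" pattern tends to emerge in a prototype system: objects delegate slot lookups to a parent, and one shared prototype plus a clone function end up playing the roles of namespace and constructor:

```python
# A toy prototype-object model: each object is a dict of slots plus a
# parent to which failed lookups are delegated.

def make_object(parent=None, **slots):
    return {"parent": parent, "slots": slots}

def lookup(obj, name):
    # walk up the delegation chain until some object has the slot
    while obj is not None:
        if name in obj["slots"]:
            return obj["slots"][name]
        obj = obj["parent"]
    raise AttributeError(name)

# The "crude class" pattern: one prototype holds the shared traits
# (a namespace), and cloning it plays the constructor role.
animal_traits = make_object(legs=4, speak=lambda self: "...")

def new_animal(**overrides):          # constructor-like clone
    return make_object(parent=animal_traits, **overrides)

dog = new_animal(speak=lambda self: "woof")
print(lookup(dog, "legs"))            # 4, delegated to the shared prototype
```

Nothing in the object model itself mentions classes, yet `animal_traits` plus `new_animal` is, in effect, a class with a constructor.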


Classes are less flexible so they're easier for programmers to reason about. That makes them better in any situation where you don't need that extra flexibility, which is most of them.


“wins in every aspect (expressiveness, flexibility, whatever)”

One ‘whatever’ where it doesn’t win is robustness. Code will make assumptions about the various kinds of animals and fail when it meets a sheep with 5 legs, or a pig that can fly.

You can solve that by testing for capabilities, but you would have to do that in many places, and make sure to cover all of them.

And yes, that comes at the cost of flexibility. Your class model won’t support flying pigs until you consciously add such support.
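The capability-testing approach could look something like this hedged sketch (Pig/FlyingPig are made-up names); note the check has to be repeated at every call site that cares:

```python
# Testing for capabilities instead of assuming a fixed shape:
# each call site probes the object for what it can actually do.

class Pig:
    legs = 4

class FlyingPig(Pig):
    def fly(self):
        return "wheee"

def travel(animal):
    # capability check, needed at every site that cares about flying
    if hasattr(animal, "fly"):
        return animal.fly()
    return "walks on %d legs" % animal.legs

print(travel(Pig()))        # walks on 4 legs
print(travel(FlyingPig()))  # wheee
```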


All of what you've said applies to classes as well, and even more so. This line of argumentation was used _against_ classes by Lieberman (if I remember correctly).


I love the idea of prototypes, and Self, but I always use classes. There's this idea of the Self prototyping environment you can extend, which is great for prototyping (perhaps), but when it comes to professional programming we want a program to deploy whose internal structure no one really touches, so classes and deployment win out. The same goes for Smalltalk: you don't want to deploy an environment that your users can extend - except perhaps in research environments.

The other thing, I suppose, is multi-developer environments - classes and compilation win out over prototyping because they're easier to split up. There might be other reasons, but that's why I always come back to class-based programming. The idea of prototypes is very enticing, though.

Edit: though I have started to use a pseudo-prototyping environment - breakpoints and evaluate-expression when things get complex - though it's not the same.


> you don't want to deploy an environment that your users can extend

What is so bad about that?

And isn't it common practice around computer games for example, which can be modded and extended by custom user content?


I always used to be a bit lukewarm about the idea of deploying Smalltalk applications for a similar reason - introspection and debug tools that are too powerful, too easy for an end user to open accidentally and be confused by.

But now, switching between a terminal that uses shift-ctrl-c for copy and Outlook/O365 in Chrome, I often accidentally open the developer tools in my email "application"... and wonder how much better off we might be with a solid Smalltalk (or Strongtalk, Self) system than the current mix of crappy web apps and crappy Electron behemoths...

I should note that it's generally possible to strip out/hide and disable most of the introspection stuff from smalltalk applications and ship more end-user style applications.


This was the view of "what personal computing would be like" held by the original LRG team at PARC. There would be no distinction between "user" and "programmer," since using the system would at some level involve some type of programming. All of today's computing culture and the systems we use heavily reflect the opposite view. It's hard for us to imagine, since programming is a kind of scribal trade.


I disagree that all of our computing systems keep users and programmers apart. We’ve got systems like Bash and Excel: even when I’m not doing “programming”, I’m doing programming. I wish more systems had that flexibility. Even when the average user isn’t writing programs in it (like the web), all of the long-lived platforms today are programmable.


> all of the long-lived platforms today are programmable

Sure, programmable by what today we'd call "programmers" but certainly not by "users." This is because programming has become a niche trade (as I said before, like a scribal culture).

I'll give an example. Most users' experience with their host Operating System involves prodigious use of buttons. They know how buttons work and they know how to interact with them. But in all these OSes it is extremely difficult for a user to make a button that does something they want, then, say, place it on their desktop for future use.

In the past there were really good attempts at authorship in computing media (which is more like what the LRG was going for with Smalltalk etc), including Hypercard. In that system, you could pop open a button and see how it worked, in a comprehensible scripting language. You could copy the button and paste it somewhere else. You cannot do any of this with buttons in the major OSes. The capability simply isn't there.*

The only option available to "users" is to learn a full fledged, general purpose programming language with all the pitfalls that entails, which means learning build systems and all the rest. At that point they have to become what today we call a "programmer," ie a scribal-programmer.

Our dominant systems have been explicitly designed for a strict delineation between scribal-programmers and consumer-users. That's the rub.

* - AppleScript is a so-so attempt at this, but even that has been allowed to die on the vine.


I have a dream of such a system: http://sergeykish.com/live-pages - edit in the browser, simple code, inspectable with its own controls. It's fun, it's not enterprisey, it's unpredictable, and sadly not polished (and not published).


Yes, probably - but I suppose the limit there is controlling what they can extend? The other thing with games is performance - prototype-based languages are generally slower.

I'm not against them - these are reasons I can see for not using prototype languages.

Edit: I suppose, too, that the environment a developer wants isn't the environment a game user wants - a game user just wants to extend a few things, and having the whole Self environment, for argument's sake, would be fairly intimidating, I'd imagine.


The nicest games to mod are the ones with languages that allow reflection to replace anything and everything.


A good trade-off is to have a prototype-based object system but use it as a class-based one. You benefit from a rigid system structure and organization like Smalltalk's (it is easier to build good tools for it), but can still use prototype-based features during debugging and where they really make sense.
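As a rough sketch of that trade-off (hypothetical names, Python standing in for a prototype language): keep all shared behavior in one prototype per "class", but patch a single instance while debugging without touching its siblings:

```python
class Proto:
    """Tiny prototype object: a slots dict plus delegation to a parent."""
    def __init__(self, parent=None, **slots):
        self._parent, self._slots = parent, slots
    def get(self, name):
        if name in self._slots:
            return self._slots[name]
        if self._parent is not None:
            return self._parent.get(name)
        raise AttributeError(name)
    def set(self, name, value):
        self._slots[name] = value

# Class-style discipline: all shared behavior sits in one prototype...
Point = Proto(describe=lambda self: "a point")
p1, p2 = Proto(parent=Point), Proto(parent=Point)

# ...but while debugging, one instance can be patched in isolation.
p1.set("describe", lambda self: "the broken point I'm inspecting")
print(p1.get("describe")(p1))   # the broken point I'm inspecting
print(p2.get("describe")(p2))   # a point (unaffected)
```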


It might be interesting for you to look at this issue from a functional programming point of view.

With FP you can simulate both class based approaches and prototype based approaches. The slightly 'warped' perspective of the FP lens might give you more insights into your question.

(I can expand, if you are interested.)
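For instance (a hedged sketch, not from any particular paper): if an object is modeled as a function from message names to behaviors, then a "class" is an ordinary constructor function and a "prototype" is an ordinary fallback function:

```python
# FP view: an object is a function from message names to behaviors,
# so both OO styles reduce to plain function composition.

def class_based(x):                 # a "class" is a constructor function
    def dispatch(msg):
        if msg == "get":
            return lambda: x
        raise AttributeError(msg)
    return dispatch

def prototype_based(proto, overrides):  # a "prototype" is a fallback function
    def dispatch(msg):
        return overrides[msg] if msg in overrides else proto(msg)
    return dispatch

counter = class_based(41)
tweaked = prototype_based(counter, {"get": lambda: 99})
print(counter("get")())   # 41
print(tweaked("get")())   # 99
```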


I don't think I've ever seen any purported examples of the expressiveness and flexibility of prototype-based OO where the advantage actually comes from being prototype based rather than class based. Rather, the flexibility comes from being a dynamic language with a first-class metamodel. Or, to put it more concretely: in what way is Self more expressive or flexible than Smalltalk?


It's the Rule of Least Power in action. [0] You don't want to have extreme degrees of polymorphism the vast majority of the time.

[0] https://en.m.wikipedia.org/wiki/Rule_of_least_power



