The Object Model of Self (github.com/pavel-krivanek)
144 points by xkriva11 on April 3, 2020 | 30 comments


Anyone interested in prototype-based languages should also check out Io [1], which is a really clever implementation that doesn't get enough love. And unlike Self, it will work with your favorite teletype emulation environments.

[1] https://iolanguage.org/


I've always been really troubled by Io's `clone` not actually cloning the object but creating a sub-object / subtype instead. I've also been bothered by the prototypes being an array (so manipulating prototypes is not really a thing).
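
A minimal JavaScript sketch of the distinction (names are illustrative, and Object.create is only a rough stand-in for Io's clone):

    // Io-style "clone": the result delegates to the original, so
    // later changes to the original still show through.
    const account = { balance: 0 };
    const sub = Object.create(account);   // a sub-object, not a copy
    account.currency = 'USD';
    console.log(sub.currency);            // 'USD' - still linked

    // Self-style copy: a one-time snapshot, no lingering link.
    const copy = { ...account };
    account.currency = 'EUR';
    console.log(copy.currency);           // 'USD' - independent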

Oh, and it uses the same lingo as JavaScript, which is a net negative. I think the Self lingo makes much more sense: a Self prototype is the object you copy in order to get more instances, and once the copy is done there is no relationship between the prototype and the object, just as there is no strict relationship beyond kinship between your car and its original prototype. I guess "template" might have been an even better term there.

The shared behaviour (and potentially data) lives in traits and mixins.


Really nice explanation. Having read almost all the papers in this endless "classes vs prototypes" debate, you get the feeling that the prototype-based object system wins in every aspect (expressiveness, flexibility, whatever), and yet almost all prototype-based languages in the end include crude class implementations (a Class as a single namespace to contain all related traits, a constructor method, and/or a prototypical instance for cloning) - interesting, why so? Is this model ingrained in our brains by our education, or do we really think that way (i.e. in sets of related entities)?


Classes are less flexible so they're easier for programmers to reason about. That makes them better in any situation where you don't need that extra flexibility, which is most of them.


”wins in every aspect (expressiveness, flexibility, whatever)”

One ‘whatever’ where it doesn’t win is in robustness. Code will make assumptions about various kinds of animal and fail when meeting a sheep with 5 legs, or a pig that can fly.

You can solve that by testing for capabilities, but you would have to do that in many places, and make sure to cover all of them.

And yes, that comes at the cost of flexibility. Your class model won’t support flying pigs until you consciously add such support.
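
Concretely, such a capability test might look like this in JavaScript (a rough sketch; the animals are made up, and every call site would need the same guard):

    // Ask the object what it can do instead of what it is.
    function tryToFly(animal) {
      if (typeof animal.fly === 'function') {   // capability check
        animal.fly();
      } else {
        console.log('stays on the ground');
      }
    }

    tryToFly({ fly() { console.log('a flying pig!'); } });  // a flying pig!
    tryToFly({ legs: 5 });                                  // stays on the ground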


All of what you've said is applicable to classes as well, and even more so. This line of argumentation was used _against_ classes by Lieberman (if I remember correctly).


I love the idea of prototypes, and Self, but I always use classes. There's this idea of the Self prototyping environment you can extend, which is great for prototyping (perhaps), but when it comes to professional programming we want a program to deploy whose internal structure no one really touches, so classes and deployment win out. The same goes for Smalltalk: you don't want to deploy an environment that your users can extend - except perhaps in research environments.

The other thing, I suppose, is multi-developer environments - classes and compilation win out over prototyping because they're easier to split up. There might be other reasons, but that's why I always come back to class-based programming; the idea of prototypes is very enticing, though.

Edit: though I have started to use a pseudo-prototyping environment - breakpoints and evaluate when things get complex - though it's not the same.


> you don't want to deploy an environment that your users can extend

What is so bad about that?

And isn't it common practice around computer games for example, which can be modded and extended by custom user content?


I always used to be a bit lukewarm about the idea of deploying Smalltalk applications for a similar reason - the introspection and debugging tools are too powerful, too easy for an end user to open accidentally, and too easy to be confused by.

But now, switching between a terminal that uses shift-ctrl-c for copy, and Outlook/O365 in Chrome, I often accidentally open the developer tools in my email "application"... And wonder how much better off we might be with a solid Smalltalk (or Strongtalk, Self) system -- rather than the current mix of crappy web apps and crappy Electron behemoths...

I should note that it's generally possible to strip out/hide and disable most of the introspection stuff in Smalltalk applications and ship more end-user-style applications.


This was the view of "what personal computing would be like" held by the original LRG team at PARC. There would be no distinction between "user" and "programmer," since using the system would at some level involve some type of programming. All of today's computing culture and the systems we use heavily reflect the opposite view. It's hard for us to imagine, since programming is a kind of scribal trade.


I disagree that all of our computing systems keep them apart. We’ve got systems like Bash, and Excel. Even when I’m not doing “programming”, I’m doing programming. I wish more systems had that flexibility. Even when the average user isn’t writing programs in it (like the web), all of the long-lived platforms today are programmable.


> all of the long-lived platforms today are programmable

Sure, programmable by what today we'd call "programmers" but certainly not by "users." This is because programming has become a niche trade (as I said before, like a scribal culture).

I'll give an example. Most users' experience with their host Operating System involves prodigious use of buttons. They know how buttons work and they know how to interact with them. But in all these OSes it is extremely difficult for a user to make a button that does something they want, then, say, place it on their desktop for future use.

In the past there were really good attempts at authorship in computing media (which is more like what the LRG was going for with Smalltalk etc), including HyperCard. In HyperCard, you could pop open a button and see how it worked in a comprehensible scripting language. You could copy the button and paste it somewhere else. You cannot do any of this with buttons in the major OSes. The capability is not there.*

The only option available to "users" is to learn a full fledged, general purpose programming language with all the pitfalls that entails, which means learning build systems and all the rest. At that point they have to become what today we call a "programmer," ie a scribal-programmer.

Our dominant systems have been explicitly designed for a strict delineation between scribal-programmers and consumer-users. That's the rub.

* - AppleScript is a so-so attempt at this, but even that has been allowed to die on the vine.


I have a dream of such a system http://sergeykish.com/live-pages - edit in the browser, simple code, inspectable with its own controls. It's fun, it's not enterprisey, it's unpredictable, and sadly not polished (and not published).


Yes, probably - but I suppose the limit there is controlling what they can extend? The other thing with games is performance - prototype languages are slower in general.

I'm not against them - these are reasons I can see for not using prototype languages.

Edit: I suppose too, the environment a developer wants isn't the environment that a game user wants - a game user wants to extend just a few things, but having the whole Self environment, for argument's sake, would be fairly intimidating, I'd imagine.


The nicest games to mod are the ones with languages that allow reflection to replace anything and everything.


A good trade-off is to have a prototype-based object system but use it as a class-based system. You can benefit from a rigid system structure and organization like Smalltalk has (it is easier to make good tools for it), but use prototype-based features during debugging and where they really make sense.
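
To illustrate in JavaScript (a sketch; the Logger is hypothetical):

    // Day to day: rigid, class-style structure.
    class Logger {
      log(msg) { console.log(`[info] ${msg}`); }
    }
    const logger = new Logger();

    // While debugging: drop down to the prototype machinery and
    // patch just this one instance, leaving the "class" untouched.
    logger.log = msg => console.log(`[trace] ${msg}`);
    logger.log('only this instance is affected');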


It might be interesting for you to look at this issue from a functional programming point of view.

With FP you can simulate both class-based and prototype-based approaches. The slightly 'warped' perspective of the FP lens might give you more insight into your question.
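
For instance, in JavaScript (a rough sketch; makeCounter and the copy-and-override trick are just illustrations):

    // Class-like: a constructor function closing over private state.
    const makeCounter = () => {
      let n = 0;
      return { inc: () => ++n, value: () => n };
    };

    // Prototype-like: derive a new object by copying an existing
    // record and overriding one slot.
    const counter = makeCounter();
    const loud = { ...counter, inc: () => { console.log('inc!'); return counter.inc(); } };
    loud.inc();                    // inc!
    console.log(counter.value());  // 1 - the derived object shares its parent's state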

(I can expand, if you are interested.)


I don't think I've ever seen any purported examples of the expressiveness and flexibility of prototype-based OO where the advantage actually comes from being prototype based rather than class based. Rather, the flexibility comes from being a dynamic language with a first-class metamodel. Or, to put it more concretely: in what way is Self more expressive or flexible than Smalltalk?


It's the Rule of Least Power in action. [0] You don't want to have extreme degrees of polymorphism the vast majority of the time.

[0] https://en.m.wikipedia.org/wiki/Rule_of_least_power


I really like Perl's setup, where objects are nothing more than namespaces/packages (usable and helpful in non-OO Perl) plus one function: bless() [1].

It makes it very easy to follow exactly how the OO works - what it is, and what it is not.

[1] https://perldoc.perl.org/functions/bless.html
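
For comparison, the nearest JavaScript analogue (a loose analogy, not actual Perl semantics) would be tying a bare record to a namespace of functions after the fact:

    // A "package" is just a namespace of functions...
    const Point = { describe() { return `(${this.x}, ${this.y})`; } };

    // ...and the bless() step ties a plain record to it.
    const p = { x: 1, y: 2 };
    Object.setPrototypeOf(p, Point);
    console.log(p.describe());   // (1, 2)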


Came here to read a psychology article...


I learned the object model of Smalltalk at university and after that I couldn't take most class based OOP languages seriously anymore.

Everything felt like a hack.

Prototypical OOP was the only savior left, haha.


Very nice article. It seems most developers don't like how the prototype-based object model works in JS. I wonder how the JavaScript object system compares to Self, or a similar language like Io?


I learned several prototype OO languages (Self, LambdaMOO and variants) before I learned JS back in the mid-90s. I even wrote my own compiler and VM for a language of my own. My take is that JS's implementation of prototypes isn't terrible, but it's also not always clear what it's doing. Like many other things in JS, the semantics are often odd (strange scoping rules, functions as constructors in certain contexts only, etc.)

Many of the warts have been smoothed over time, but JS got a reputation early on for being an ugly hack language, and unfortunately that also rubbed off on the idea of prototype OO. I remember hearing a lot of griping about how JS isn't a "real OO" language (!#@!@#!@) because it only had prototypes and not classes. And a lot of rejoicing when classes were proposed for the language. Which to me was a sign of defeat.

Prototypes are more expressive and can express classes but classes cannot express prototypes.
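
A quick JavaScript sketch of the easy direction - a "class" built out of nothing but prototypes (illustrative only):

    // A "class" is just a prototype plus a maker that attaches
    // per-instance state and delegates shared behaviour.
    const Dog = {
      speak() { console.log(`${this.name} says woof`); },
      make(name) {
        const dog = Object.create(this);  // delegate to the "class"
        dog.name = name;                  // per-instance state
        return dog;
      }
    };

    Dog.make('Rex').speak();   // Rex says woof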

FWIW I used to talk to and correspond with Steve Dekorte (author of Io) back when he was starting out. Looks like he's stopped working on it, which is unfortunate. It was always a nice language, though not suited for my purposes. And I think he chose a bad name as it wasn't easily Googleable. I'd like to see what he could do with it now in the era of Wasm...


I agree, the interface is arcane, but once it clicks... And that may be a problem - a world divided between those who have grasped that Object.__proto__ === Function.prototype and those who haven't. I've heard similar stories in XSLT land.

What if the interface matters? Just recently I discovered how to clear up some of the confusion (the snippets below assume the redefinitions that follow them):

    Object
    Object.proto === null

    object = new Object
    object.proto === Object
    object.toString === Object.toString

    fun = new Function
    fun.proto === Function
It's almost Io (just don't touch 'constructor'):

    Object = Object.prototype
    Function = Function.prototype

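    // looks like a Sweet.js macro: `new X` expands to `new X.constructor`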
    syntax new = function (ctx) {
      let ident = ctx.next().value
      return #`new ${ident}.constructor`
    }

    // bonus point - make it more like io
    Reflect.defineProperty(Object, 'proto',
      Reflect.getOwnPropertyDescriptor(Object, '__proto__'))

And Io is great!


+1 for highlighting Io [0]

I've never done any real work with Io, only read about it. First came across it in Seven Languages in Seven Weeks [1]. Compared to JS, it seems like a much more coherent realisation of prototypes. It recognises the difference between "types" - descriptions of things - and "objects" - exemplars of those descriptions. But the only difference (that I recall) is that types are capitalised by convention and objects aren't. On my "todo list" for further exploration at some point.

[0] http://iolanguage.com/

[1] https://pragprog.com/book/btlang/seven-languages-in-seven-we...

EDIT: fixed book title & corrected case


If I recall correctly, Steve was very influenced by Lua. So he wanted to make a compact and embeddable interpreter but with nice consistent OO semantics, like Smalltalk or Self.

(My interest in prototype OO languages back then was specifically as authoring languages for shared virtual worlds so I wanted something with security baked in. The language I implemented accomplished this through very strong encapsulation, among other things.)

Like I said above, it'd be nice to see something like Io done with Wasm in mind, as a cleaner alternative to JS, say.


> I wonder how the JavaScript object system compares to Self, or a similar language like Io?

It's limited, confusing and muddled. The prototypal stuff was really there for ease of implementing an object system (until ES6 the prototype was the red-headed stepchild of the language, standards-wise), and constructors were tacked on for familiarity with Java, but actual inheritance support (actually supporting JS-level subtypes) was half-assed and very confusing before ES6. Basically, pre-ES6 it was a complete mess unless you used non-standard extensions (e.g. `__proto__`).

ES6 both dramatically surfaced the prototypal inheritance (Object.create, Object.getPrototypeOf, Object.setPrototypeOf, Object.defineProperty) and made it even more of an implementation detail of the "class" syntactic sugar.

Nowadays you can pretty much use JS as a class-based language… and most people do, because frankly it's not worth messing around with the underlying prototypal system; you'll just confuse both your colleagues and your editor.


I think you're mixing up your timelines. ES5 brought `Object.create` and other prototype helper methods. This was a heyday, of sorts, for JS as a prototypal language, with Crockford and others trying to explain how prototypes differ from inheritance-based classes. However, even then the C++/Java model had become too well ingrained, especially at certain very large tech companies.

The `class` keyword was introduced in ES6, which is indeed a kludge on top of prototypes.
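
For anyone who missed that era, the Crockford-ish pattern looked roughly like this (a sketch of the style, not any particular library):

    // ES5 prototypal style: no constructors, no classes -
    // objects beget objects.
    var cat = {
      speak: function () { console.log(this.name + ' meows'); }
    };

    var tom = Object.create(cat);   // the ES5 (2009) helper
    tom.name = 'Tom';
    tom.speak();                    // Tom meows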


> I think you're mixing up your timelines. ES5 brought `Object.create` and other prototype helper methods.

You’re right, I did. So the actual timeline was somewhat less dim than I remembered.

> The `class` keyword was introduced in ES6, which is indeed a kludge on top of prototypes.

I wouldn’t say that it’s a kludge. If anything, it’s more of a realisation of the original vision of JavaScript (which you could fairly call a kludge): using a prototype system to underlie a language approaching Java semantics.

Which may well be the dumbest way to approach it, as you get the flexibility of a class-based system and the performance of a prototype-based system rather than the other way around, but at the same time JS’s prototype system was kneecapped from the start.



