When I was a TA, I'd have students constantly try to add me on Facebook. The conversation would go nearly exactly like you'd describe:
> Are you on Facebook? Is this your profile?
> Yes, but I'm not gonna accept your friend request.
The reply was usually a bit snarky with a bit of a chuckle, and that did it for most college-aged people. The few times someone persisted, I explained that Facebook is an aspect of my social life, not my professional life, and I intended to keep it that way.
This doesn't account for the whole picture. The tax benefit is dwarfed by the losses in Social Security that people at and below the poverty line will suffer.
If health care costs in the U.S. were the same as in the next most expensive OECD country, they would be $1 trillion lower. The Trump tax cuts amount to $150 billion a year, so just by bringing health care costs in line with the second most expensive country, the government could probably cover the entire shortfall without anyone needing to lose anything. It's possible to make smart cuts to Medicare and Medicaid spending without affecting quality of care or coverage.
Knowing Republicans, though, that's not how it will pan out. They aren't known for making smart cuts; they typically reduce benefits and expenditures while leaving the profiteering in place.
> How can you blur the line? It's either online multiplayer or it isn't, there's not really an in-between. You can't "kind of" have other people connected to your game. They're either there or they're not.
Dark Souls does precisely this, in a fantastic way, through two major mechanisms. The first is shared player "graffiti": players can write messages that are then propagated into others' games. The second is shared "death ghosts," which play back a player's last moments before death in others' games as a ghost you can't interact with. These make it feel like you're playing with others in a large world without actually putting you in the same instance.
Not to mention phantoms, where you can occasionally see other players' ghostly images moving around the world.
Other subtle cross-over features exist too. In Dark Souls 1 you are able to hear other players ring a bell after defeating an early boss. A rare mob can also spawn in your world when a player dies with many souls (currency), or loses a powerful item.
It definitely does much to make the world feel less lonely.
It seems like this could be fairly easily resolved by management taking a rigid "we pay everyone the same, deal with it" position, perhaps up to and including a formula for salary computations.
You don't have to pay everyone the same. Just be ready to justify why one person gets paid more than another. CEOs have pay transparency, but somehow the world doesn't end.
Went to your profile out of curiosity to see if your employer is public. The linked site in your profile is... odd. Has the site been compromised or am I missing the joke?
Shareholders hire CxO-level people to deal with the day-to-day minutiae of running the company. There are other metrics (see SEC filings) that help shareholders gauge whether CxO people are doing their job.
Get ready to see a) people gaming the formula and b) talent leaving because they feel like they are paid the same as people who contribute less.
Edit: Not that it isn’t currently gamed and not that this doesn’t already happen: I’m pro pay transparency but there aren’t any magic fixes here, this is a really complicated social issue.
Something like Buffer's formula (https://open.buffer.com/transparent-salaries/) isn't really "gameable", as far as I can tell. And the counter argument might be that you'd attract talent that appreciates the transparency.
I'm curious what talent would appreciate the transparency? The only thing I can come up with is "average" talent.
The high performers will view it as a hard cap on compensation and look elsewhere, in my experience at least. Obviously this can't be entirely correct, but it's hard for me to get into the mental space of someone who wants to be paid exactly as much as the peer sitting next to them. In my career I've always wanted to outperform the next guy - and be compensated accordingly.
I say this having been both a high performer and an underperformer. In the hard personal years when I was underperforming, I would have loved transparency and knowing what to expect. In the years I was "crushing it," I'd have set myself back financially by a decade+ had I simply been okay with the average compensation for my position.
I mean, as long as we're trading unverifiable anecdata, I've known high performers who were being underpaid because they were recently out of school or came out of non-traditional backgrounds. I suspect they would have appreciated this kind of system.
And the only people I can think of who would prefer the existing opaque system are those who benefit from it: management, and people receiving outsized salaries for some anti-meritocratic reason.
That was me! I can say without a doubt I quickly learned to only work for small companies where I reported to the owner directly. This enabled me to get 50% raises YoY when starting out - where my corporate job was limited to the typical "well, that wouldn't be fair to the rest of the group" style politics.
I would say it's vastly more important to the hypothetical vulnerable high-performer than the guy graduating MIT or Harvard working for Google. They start at an extremely high salary and will do fine no matter what. The high performing high school dropout needs rapid massive raises just to eventually get to par with the other group - assuming similar performance.
I can almost guarantee you that if salaries were transparent it would have been a huge scandal in a few of the companies I worked at since I was making 2-3x the wages of those next to me. It would have been untenable for those managers to pay me that, even though/if I were worth it - they'd have done nothing but deal with politics and fallout from it. I still believe I was underpaid in most of those positions compared to the folks they had working there.
Egos are easily bruised. When that 19 year old high school dropout is making more than the 45 year old with a degree people start to complain. Loudly. They don't even look at work output or performance - it's utterly irrelevant to most.
I see both sides to this, but I'm relatively certain salary opaqueness helped me through the start of my career. Now? Maybe not as much. It's much harder to stand out at an exceptional level once you reach a certain point.
Varying wages based on arbitrary criteria like geographic location goes against the principles that equal work deserves equal pay and that compensation should be based on contribution [1].
> Location base: For the other 65% of the base, we factor in each location's cost of living using Numbeo together with data from Payscale and Glassdoor, which we then use to have a base salary for that particular location (say New York or Cape Town).
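For what it's worth, the shape of that kind of formula (a fixed portion of base pay plus a location-adjusted portion) is simple to express. This is a hypothetical sketch; all numbers and parameter names are made up for illustration, not Buffer's actual figures:

```python
# Hypothetical sketch of a Buffer-style salary formula: 35% of the
# role's base is fixed, 65% is scaled by a cost-of-living index.
def salary(role_base, location_index, experience_multiplier=1.0):
    # location_index: 1.0 for the reference city, lower for cheaper ones
    located_base = role_base * (0.35 + 0.65 * location_index)
    return located_base * experience_multiplier

print(salary(100_000, 1.0))  # reference city: the full base
print(salary(100_000, 0.6))  # cheaper city: 35% fixed + 65% scaled down
```

A formula like this is hard to game precisely because every input is public and mechanical.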
Ok, but that's not really my problem. That's bad management, because they're apparently carrying around a bunch of dead weight and not doing anything to correct that problem.
I'm not sure why you think that. At least in Germany, if an employee can't do the work they are obligated to do by their employment contract, they are in violation. After they have been notified of the violation (or if it is obvious, e.g. because they didn't even go to work), it is their duty to correct that. If the employee is unable to satisfy the requirements, they can be terminated with an appropriate notice period.
The process may not be fast, but for a useless employee, it would be difficult to claim that the termination was unjustified, so the outcome is essentially guaranteed. I would call that easy (as opposed to simple).
Yes ... and this is a major drain on EU companies that makes them less competitive than companies with "at will" employment. They can't fire low performers easily, and they can't afford to take a gamble on people, since firing them is so difficult if they don't work out.
That's mostly propaganda and fear of "socialist" Europe. It's actually quite easy to remove people for poor performance, assuming it really is poor performance and not an attempt to dodge redundancy obligations.
If it's so easy to fire in the USA, why is HR so paranoid about it?
In the UK you can be fired within the first two years without reason (unless it's connected to a protected characteristic). After two years you can still be fired - the company just has to follow a simple procedure of giving you some warnings before firing you.
Employment tribunals are not free and are intimidating to use, so many people who are wrongfully dismissed don't seek justice.
Having had to fire someone who had been at a company for more than two years, I can say it was anything but a simple procedure: it involved a performance improvement plan which lasted interminably (and actually sapped a really surprisingly large amount of my own time...) and it became an awful lot more complicated when it turned out the guy was on anti-depressants. In the end, HR suggested I take the guy to the pub for lunch and they told me a series of things I could say to encourage him to quit but which couldn't possibly be construed as constructive dismissal; thankfully over lunch he told me he'd got another job offer, and I walked him out of the office the following day with a great sense of relief.
That's not a problem with equal salaries. Even if salaries are opaque, underperforming or lazy colleagues lower morale. Incentives can still be given by allowing people to quickly move up the ranks if they perform well.
Paying everyone the same is the right policy if everyone is equally productive. That equilibrium might be reached by the “overpaid” ones getting better (or leaving) or by the “underpaid” ones getting worse (or leaving). One can fairly easily predict which of those scenarios is more likely.
Which is the precise reason I fled union work when I was younger.
I got sick of being compensated the same (typically worse) as the worst employee on the floor, while doing 5-10x the productive output.
All these policies do over time is ensure you lose top-tier talent and eventually get stuck with a lot of middling folks who are content with the status quo. This can be a good or a bad thing, depending on who you are.
I assume gp is referencing the necessity of training the spam filter.
I didn't have big problems with it, except for a monthly student loan payment confirmation email that fastmail refused to classify as not spam no matter how many times I so marked it.
In general, I found gmail's spam filter to be the best. If, however, you don't need to regularly interact with lots of new email addresses, fastmail will work fine for you.
This is exactly it. I found myself having to manually classify spam/notspam for the first time in over a decade.
This might be fine for some, or even most, people. But if, like me, you'd prefer to spend less time on email, not more, it may not be the best choice of email provider.
Nothing against the service; just be prepared to manually train a spam filter.
Don't forget the part where those graduate students do some 80% of the work and make <20% of the income of their supervisor (and, in many cases, <10% or nothing at all). The university hosting the lab also absorbs some 50% of every grant "off the top." A half-million-dollar grant pays a single graduate student less than 20k/year for maybe 5 years, which, in the context of other companies that get these grants, is absurd. Independent research firms working from the same grants pay their employees competitive wages and still accomplish research. If these companies could issue Ph.D.s, academia would shrivel and die: imagine working at a company for 4 years, getting a salary 3-4x what a graduate school would offer, doing real work in a professional setting, doing enough research to write a dissertation, and receiving a degree. The entire incentive to attend a university would melt away.
That's the Anglo-Saxon model. In Sweden/Norway/Switzerland/Netherlands, a Phd student costs at least 100k dollars/year to a lab, with full-pension payments. It's a (relatively well-paid) job here.
I don't think many people (even graduate students and postdocs) realize how much money universities suck in with grants. For every R01 a professor gets, the university gets to add substantial indirect costs (typically greater than 50%). The PIs don't care because their budget is the same, but what it means is that there are fewer grants being handed out. Then the PIs sometimes pay for the "training" that graduate students and postdocs get using their own funds. For a top university, this can be 50K/year.
That's not exactly how it works. If an agency wants to give a $1 million grant, for example, and the PI's university has a 60% overhead rate, then the PI will draw up a $625,000 budget (since 625,000 × 1.6 = 1,000,000). So the overhead definitely matters to the PI; there's just little the PI can do about it after joining a university. Most big universities have similarly high indirect cost rates, which they steadily increase over time.
UC Berkeley (2016): 57%
MIT (2018): 59%
Harvard (2018): 59%
Stanford (2018): 57%
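The budget arithmetic in that example is easy to sanity-check: the total award covers direct costs plus overhead charged as a percentage of direct costs, so direct = total / (1 + rate):

```python
# Total award = direct costs * (1 + indirect rate), so for a $1M award
# at a 60% overhead rate the PI budgets $625,000 in direct costs.
def direct_budget(total_award, overhead_rate):
    return total_award / (1 + overhead_rate)

print(round(direct_budget(1_000_000, 0.60)))  # 625000
```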
This story, from 2013, gives some of the context and history, as well as averages for universities and other research institutions (which can have much higher overheads):
And all that overhead money is going to 3 things: increasing healthcare costs for the burgeoning retiree population of staff, increasing the number of administrators subservient to the king (I mean, provost), and new construction. The political class sees these additional jobs from the last two as the core justification for the government increasing student loans for students and overhead for grants. They don't comprehend the science or frankly give a shit about it. Maybe when they get cancer they'll have a vague sense of interest in their organ system of choice, but that's about it.
Indirect costs cover things that we aren’t allowed to write into grants. And at my institution at least, it’s questionable as to whether or not that indirect rate actually fully covers the cost of research.
In the UK at least, independent research at pharmaceutical companies or Contract Research Organizations typically costs 1.5- to 2-fold what it costs at a University. Mostly because they pay higher wages than PhD students get, but also because they have heavier overheads. They're also typically less nimble as their research programmes are larger with more layers of management.
How, exactly, is commercial research taxed? Only profits are taxed, and research is all expenses, which can be deducted from profits for tax purposes in the year they occur.
In the UK a commercial profit making company, even if it is doing research would still be subject to a form of local property tax called business rates[0]. Whereas a university would likely get relief from such taxes.
They have this in some European countries like Denmark where it's called an "industrial PhD." However, they also compensate graduate students more fairly and seem to have more systematic and closer collaborations between academia and industry.
Such an independent research firm could grant Ph.D.s. Getting a Ph.D. in US universities like where I went is primarily a matter of satisfying a committee of existing Ph.D.s that the student has done suitably creditable work.
A firm with good researchers, whose newly minted students also did good work, would soon have the suitable reputation. And those letters after your name are only as valuable as the reputation of the institution.
At least my university does offer such an option, it's not often used, but sometimes is - if you've done (and published!) an appropriate amount of research in the industry, then you can apply to the committee to defend a thesis and get the degree without doing a PhD program. You still need to write the thesis, though, and it's much easier to do it while being paid as a PhD student/researcher.
As someone with years of experience writing interpreters, this seems a rather sloppy solution to a straightforward problem. There are two canonical solutions, and he uses neither:
1. The first canonical solution, which the author even describes, is persistent environments. Unfortunately, this requires passing the current environment as part of the recursive pattern, which means heavily modifying the existing code. He doesn't do this because it would require revisiting too much.
2. The second canonical solution, which you would find in any modern compiler, is to uniquely rename all the variables. His "resolver" traversal has ample opportunity to do this, and would provide a far cleaner solution (with less space overhead than his expr/scope depth lookup table).
Instead, the author creates a stack of environments and annotates variables with information akin to de Bruijn indices for scopes. Compared to the alternatives above, this is over-engineered and inelegant, and it complicates reasoning about scopes far beyond what's necessary.
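For readers unfamiliar with the renaming approach: the idea, often called alpha-renaming, is to give every declaration a globally unique name during the resolver pass, so shadowing simply disappears. A toy sketch over a made-up expression language (the node shapes are hypothetical stand-ins, not the book's actual classes):

```python
# Toy alpha-renaming pass over a tiny expression language.

class Var:
    def __init__(self, name):
        self.name = name

class Let:  # let name = value in body
    def __init__(self, name, value, body):
        self.name, self.value, self.body = name, value, body

counter = 0

def rename(node, env):
    """env maps source names to their unique names in the current scope."""
    global counter
    if isinstance(node, Var):
        return Var(env[node.name])
    if isinstance(node, Let):
        value = rename(node.value, env)    # initializer sees the OUTER binding
        counter += 1
        fresh = f"{node.name}_{counter}"
        inner = {**env, node.name: fresh}  # shadow only within the body
        return Let(fresh, value, rename(node.body, inner))
    return node  # literals pass through unchanged

# let a = 1 in (let a = a in a): the inner initializer reads the outer a.
tree = Let("a", 1, Let("a", Var("a"), Var("a")))
out = rename(tree, {})
print(out.name, out.body.name, out.body.value.name, out.body.body.name)
# a_1 a_2 a_1 a_2
```

After this pass, every variable reference names exactly one declaration, so the interpreter no longer has to care about scope depth at all.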
As an aside, this assertion is also completely absurd:
> Shadowing is rare and often an error so initializing a shadowing variable based on the value of the shadowed one seems unilkely [sic] to be deliberate.
> Unfortunately, this requires passing the current environment as part of the recursive pattern, which means heavily modifying the existing code. He doesn't do this because it would require revisiting too much.
Right. When I was first writing the code for this interpreter, I implemented it using persistent environments (basically the typical Scheme association list approach).
It worked, but it had some strikes against it:
1. The persistent approach is not right for global variables, which are dynamically bound. So I ended up needing two Environment classes, a Map-based one for the global scope, and then a persistent one for locals. That, of course, also requires an interface so that most code can work polymorphically with both types.
2. The Interpreter class and its visit methods are introduced several chapters before Environment. So all of those preceding visit() methods would have to be redone to take the extra environment parameter, which would also have to be threaded through some other methods.
Storing the current environment in a field helped, but the code for updating that field still looked grungy to me. With a book, every bit of extra boilerplate feels really heavy and I try to keep the code clean and simple.
3. It becomes really unclear why we want persistent local environments. Since we would have to introduce them well before the point in the book where closures can actually cause problems without them, it ended up feeling like the code was poorly motivated.
If I did persistent local scopes when locals are first introduced, there's no way to show the reader a sample program that would break without them; we don't have functions, function calls, or closures yet.
The current organization lets the reader go down a naive obvious-seeming path (and, better, one that reuses all the code we already need for global scopes) and then lets them viscerally experience the problem caused by thinking about blocks as a single scope. Then once they feel that pain, they get the solution.
There are some benefits to the current approach:
1. It gave me an opportunity to show an example of a semantic analysis, and adding another pass to a compiler. Those are generally useful techniques, and I think it's worth walking readers through one.
2. It lets the reader cover more ground. When we get to the second interpreter, it takes a different approach to local variables. Variables are resolved during parsing and stored directly on the interpreter's stack, and accessed by index. The block scopes are discarded and have no runtime representation.
I've tried to add other differences between the two interpreters too, just to reduce the amount of redundancy between them. For example, the Java one lexes the entire file to a list of tokens while the C one lexes on demand, driven by the parser. The Java one is an AST walker, the C one compiles to bytecode, etc.
Of course, if you are an experienced language hacker, some stuff in the Java interpreter may seem weird because it's not the "normal" way to do things. (Though, for what it's worth, I have seen plenty of interpreters that do create hashtable-based environments for each lexical scope.)
I hope that seems reasonable. The way environments are represented in the Java interpreter was the most difficult design decision I made. I went back and forth on it a lot and I'm still not certain I made the right choice. But, ultimately, if the book is ever going to exist, I had to just pick and move forward.
> 2. The second canonical solution, which you would find in any modern compiler, is to uniquely rename all the variables. His "resolver" traversal has ample opportunity to do this, and would provide a far cleaner solution (with less space overhead than his expr/scope depth lookup table).
I considered adding an exercise to do effectively that, but I felt it might be reaching a little too far for a first-time language implementer.
> As an aside, this assertion is also completely absurd:
>
> > Shadowing is rare and often an error so initializing a shadowing
> > variable based on the value of the shadowed one seems unilkely [sic]
> > to be deliberate.
What is absurd about it? Did I not word this well? The point I was trying to get across is that code like this is not common:
var a = "global";
fun foo() {
  // Initialize a same-named local variable based on the global.
  var a = a;
}
I can't recall ever seeing code like this in the wild, and if I ever saw it in a code review, I would certainly tell the author to rename the local variable.
Given that, it seems reasonable to me to not take the approach of deferring putting the local in scope until after its initializer has run.
Note that the above (or equivalent) code is a compile error in Java and C#. In JavaScript, using "let", it's an error. In C, it accesses the uninitialized new variable (!).
Honest question, why have local scopes at all? IIRC, Lox is a dynamic object-oriented language.
Python has a flat scope per function, and closures aren't all that common in idiomatic Python code. Objects are much more common.
JavaScript also had a flat scope per function until ES6. On the other hand, closures are very idiomatic in JavaScript, and objects are a little weak.
Looking at the previous chapter [1], from someone familiar with OOP, the makeCounter and Point examples just seem to be awkward ways of writing classes, no?
Particularly for an educational language, why have two ways of doing the same thing?
> Python has a flat scope per function, and closures aren't all that common in idiomatic Python code.
Closures are common in "idiomatic" Lox code. Or, at least, I want Lox to be a language that doesn't have pitfalls around using closures and local variables.
Because Lox is syntactically C-ish, I also think it's important that its scoping rules roughly follow C and friends.
> JavaScript also had a flat scope per function until ES6.
Right, which is evidence that not having local scopes was a mistake. Adding "let" was a very big deal and wouldn't have been done unless the semantics of "var" really were not what users wanted.
I talk a little more about function scope in the context of implicit declaration here:
> Looking at the previous chapter [1], from someone familiar with OOP, the makeCounter and Point examples just seem to be awkward ways of writing classes, no?
>
> Particularly for an educational language, why have two ways of doing the same thing?
I think it's possible to go too far down the rabbit hole when deciding two things are the "same". Since you can implement integers using functions [1] why have both numbers and functions? You can implement control flow using closures and dynamic dispatch, so why have "if" [2]?
The pragmatic answers are:
1. Users probably don't think of functions and classes as "the same" even though only one is required for Turing completeness.
2. Most interpreters do not implement one in terms of the other, so having both lets us show the implementation techniques that are unique to each.
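(The "implement integers using functions" aside refers to Church numerals, where a number n is encoded as "the function that applies f n times." A minimal sketch:)

```python
# Church numerals: a number n is "the function that applies f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Decode by counting applications: use f = "+1" starting from 0.
    return n(lambda x: x + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3
```

Possible in theory, hopeless in practice, which is exactly the pragmatic point.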
Are there any examples of the book of closures that couldn't be better written as classes? (Or perhaps you disagree about the Counter/Point examples.)
I'm not sure I agree about "let" being evidence of local scopes being necessary. I believe the primary problem with "var" is the "hoisting" behavior, i.e. roughly speaking code "later" can affect variable references now.
In Python this isn't as big of a deal, because you get an UnboundLocalError for code like this:
x = 1
def f():
    print(x)  # UnboundLocalError: the assignment below makes 'x' local
    x = 2
    print(x)
So instead of a silent bug, you'll get a crash, which is easy to debug. In JavaScript I believe similar examples will lead to subtle bugs, which "let" fixes. Also in JS "x = 1" is global but "var x = 1" is local.
-----
"let" also added local scopes, but I don't believe it proves that you needed them. Anecdotally, coming from C/C++ to Python, I've never missed local scopes, simply because I keep my functions small.
Mild tangent: Here is something I realized recently about the perennial large functions vs. small functions debate: It depends on whether you're a C programmer or not.
You can find some documents by John Carmack and Jon Blow arguing for large functions. That's because they are C programmers, and in C there is a very large cost to creating a function: reasoning about ownership/memory management (and the lack of ability to return non-primitives without out-params.) It's easier to keep things all in one function, especially if you're only calling the function in one place.
In Java, JavaScript, and Python, there is no such cost. Garbage collection takes care of it for you. You can return as big an object as you like. And indeed the idiomatic style in those languages is small functions.
So I claim that local scopes are much less necessary in Java, JavaScript, and Python. C doesn't have true functions, which makes local scopes quite useful.
-----
I understand your point about going overboard making things the same (there was an old Paul Graham post pondering getting rid of integers in his defunct Lisp dialect ARC).
But I think it's a different story for classes and closures. There is obviously an efficiency problem with integers as functions, just like there is an efficiency problem in Haskell with strings being lists of characters. In engineering you obviously care about efficiency.
I know this is programming language heresy, but I honestly don't see a real need for closures if you have classes. (And if you have the Dart-like or Wren-like constructor initialization shortcuts for classes, which I plan to add to the language I'm designing.)
I'm sure I'm biased because I'm coming from C and Python, and neither really has closures. (Python didn't have the ability to mutate variables in an enclosing scope until the 'nonlocal' keyword was introduced several years ago -- and I've never seen it used in the wild.)
JavaScript is the only language I've really used with closures and I don't use it that often (and Scheme in college, but doesn't really count). I just looked through some JS code of mine, and I pretty much always have an explicit model of the page state and pass it explicitly using a dependency injection style. With closures, state is implicit. I don't like the fact that your outer function can have 8 locals but only 2 of them are captured, and you have to search up to see which ones they are.
There's one use of closures for an onclick handler, but I would solve that with __call__ in Python (C++ also has this as "functors", but Java doesn't).
-----
I also asked a similar question about prototypes vs. classes here [1], which was a pretty good discussion.
I liked the Wren design because it's like JavaScript/Lua but with classes instead of prototypes.
And I liked your red function/green function post about async, which is another important use of closures. And there was a recent Ryan Dahl interview [2] where he admitted that the Go style was better for servers, and conceded the "callback pyramid" problem with closures.
-----
So what's left for closures then? If one agrees that Counter/Point are naturally classes (which you might not), if you want to make an explicit model of state in GUI code (e.g. Elm advocates this strongly), and if you believe in the Go-style async is better, then what are some natural uses of closures? This is an honest question -- as I said I could be biased coming from languages that don't have them.
If closures just "fell out of" implementing classes, I would probably implement them in the language I'm designing. But this chapter shows that there are some non-trivial issues so I'd be inclined to leave them out.
My pet theory is that the industry learned how to use classes "correctly" around 2005 or so. From perhaps 1995 to 2005, you were more likely than not to encounter a mess. (Although Go might be late to the party [3].)
This is another contrarian opinion, but I actually think classes relate more strongly to functional programming than closures (though closures have more of a historical relation). Classes are more rigorous about state (random local vars aren't captured), and functional programming is also about being rigorous about state. I use classes but I think of it like functional-programming-in-the-large [4]. There is a false dichotomy between FP and OOP -- the modern styles of both are converging (explicit state params, dependency injection).
Sorry for the long post -- tl;dr I would like to see some examples of code in the book which are more natural for closures than classes :)
tl;dr #2 -- If you have a short syntax for initializing class members for constructor params, and if you have a way of bridging classes and functions, like __call__/operator(), -- then I claim you don't really need closures.
> Are there any examples of the book of closures that couldn't be better written as classes?
The book itself doesn't really have a lot of "representative" Lox code since it's so focused on implementing the interpreters themselves. But certainly, in other languages with first-class functions, callbacks are very common and would be painfully cumbersome to do with classes. Note, for example, that Java has long supported anonymous inner classes, but still added lambdas later because the former were so annoying to use in many cases.
> Anecdotally, coming from C/C++ to Python, I've never missed local scopes, simply because I keep my functions small.
C and C++ have local scopes. Would you expect this to not have an error?
int main() {
    for (int i = 0; i < 10; i++) {
        // ...
    }
    printf("%d\n", i); // <-- ?
}
> in C there is a very large cost to creating a function: reasoning about ownership/memory management (and the lack of ability to return non-primitives without out-params.)
There is also the overhead of the call itself. In the kind of performance-critical code often written in C/C++, that can matter too. Good compilers will inline when it makes sense, but those heuristics aren't perfect.
> I know this is programming language heresy, but I honestly don't see a real need for closures if you have classes.
You don't need them, but once you get used to them, they sure are handy. Implicitly closing over locals in the surrounding scope is a little magical, but it's a really convenient kind of magic that generally seems to help more than it harms.
All the world's Smalltalk, Scheme, C#, Ruby, Dart, Scala, Swift, JavaScript, Kotlin etc. programmers probably aren't wrong in liking them. (Although, as a language designer, it's of course fun and potentially rewarding to deliberately try to get off the beaten path.)
> if you believe in the Go-style async is better, then what are some natural uses of closures? This is an honest question -- as I said I could be biased coming from languages that don't have them.
The bread and butter use cases I see are:
1. Modifying collections. map(), filter(), etc. are so much clearer and more declarative than imperatively transforming a collection.
2. Callbacks for event handlers or the command pattern. (If you're using a framework that isn't event based, this may not come up much.)
3. Wrapping up a bundle of code so that you can defer it, conditionally execute it, execute it in a certain context, or do stuff before and after it. Python's context managers handle much of this for you, but then that's another language feature you have to explicitly add.
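The three bread-and-butter cases above might look like this in Python. This is a minimal sketch with invented names, not code from the book:

```python
# 1. Transforming collections: the predicate closes over `threshold`.
threshold = 10
big = list(filter(lambda n: n > threshold, [3, 14, 15, 9, 26]))

# 2. Callbacks: the handler closes over `events` and mutates it in
#    place (no rebinding of the outer name required).
def make_handler():
    events = []
    def on_click(name):
        events.append(name)
        return events
    return on_click

# 3. Deferring a bundle of code so it can run with work before and after.
def with_logging(thunk):
    trace = ["before"]
    trace.append(thunk())
    trace.append("after")
    return trace
```

Calling `make_handler()` gives you an `on_click` that records events across calls without the caller ever seeing or managing the list.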
I totally agree with first-class functions, and I probably agree with the Python-style ability to read outer variables (especially when the inner function doesn't survive longer than the outer call).
What I don't agree with is capturing locals to rebind them -- this is the explicit vs. implicit state argument.
So for #1 map/filter, I don't really see this as a use case for closures. It's more about function literals and first-class functions.
#2 I am on the fence about... I would be interested in examples. Like I said, with my somewhat limited JS experience, I understand why people like them, but I think you can do OK without mutating the surrounding scope. There's a distinction between calling a mutating method on an object in the surrounding scope, and actually rebinding something in the surrounding scope.
#3 might be convincing although I would need examples. The Go-style defer is scope-based which seems more limited than general closures. Python's context managers are sort of a syntactic sugar/protocol around using certain kinds of classes -- much in the same way that iterators are.
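On the point that context managers are "syntactic sugar/protocol around using certain kinds of classes": this sketch (names invented) shows the class under the sugar:

```python
# A context manager is just an object with __enter__/__exit__; the
# `with` statement is sugar over calling them at entry and exit.
class Suppressing:
    """Swallows a given exception type, similar to contextlib.suppress."""
    def __init__(self, exc_type):
        self.exc_type = exc_type

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Returning True tells `with` the exception has been handled.
        return exc_type is not None and issubclass(exc_type, self.exc_type)

with Suppressing(KeyError):
    {}["missing"]  # raised KeyError is swallowed by __exit__
print("still running")
```

No closure is involved here at all, which is exactly the point being made: the deferral and before/after behavior can live in an explicit object instead of captured state.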
The other problem with closures is that they're not really a single language feature: the details differ across languages. There is more than one capture issue like the loop one you mentioned in another comment. And C++ now has closures, but there are a few different capture options with regard to lvalues and rvalues that I don't remember at the moment. It doesn't feel that solid to me, but I'll continue to play with it, and this chapter and some of the comments will certainly help.
On the one hand, it seems like a ton of code has been written in C++, Java, and Python without closures. On the other hand, C++ and Java both added closure-like features in the last decade, and Python added nonlocal, so that's probably a trend. (But like I said, I've never actually seen anyone use nonlocal; it feels like something done "for completeness" rather than based on actual usage.)
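To make the read-versus-rebind distinction above concrete, here is a minimal sketch of what `nonlocal` actually does in Python (names invented):

```python
# Reading an enclosing local needs no declaration at all.
def make_greeter():
    greeting = "hello"
    def greet(name):
        return greeting + ", " + name
    return greet

# Rebinding one requires `nonlocal`; without it, `count += 1` below
# would raise UnboundLocalError at runtime.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment
```

The asymmetry is deliberate on Python's part: implicit reads are considered harmless, but mutating an enclosing scope is explicit state and must be marked.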
> I can't recall ever seeing code like this in the wild, and if I ever saw it in a code review, I would certainly tell the author to rename the local variable.
This is somewhat common in Go, especially when capturing a for-loop-scoped variable in a closure. It's even recommended in the official FAQ [1].
That's because Go chose to reuse the loop variable in each iteration instead of binding a fresh one each time.
C# figured out that was a mistake years ago and fixed it in 5.0 [1]. Dart has always bound a new variable in each iteration. I don't know why Go made the choice they did.
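Python happens to share this pitfall, so it makes a compact illustration of what reusing the loop variable does to closures (a sketch, not from the thread):

```python
# Each closure captures the variable `i` itself, not its value at
# creation time, so all three see the loop variable's final value.
stale = [lambda: i for i in range(3)]
print([f() for f in stale])  # [2, 2, 2]

# A common workaround: bind a fresh name per iteration via a default
# argument, which is evaluated when the lambda is created.
fresh = [lambda i=i: i for i in range(3)]
print([f() for f in fresh])  # [0, 1, 2]
```

Binding a fresh variable per iteration, as Dart and post-5.0 C# do, makes the second behavior the default and removes the trap entirely.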
> 1. The persistent approach is not right for global variables, which are dynamically bound. So I ended up needing two Environment classes, a Map-based one for the global scope, and then a persistent one for locals. That, of course, also requires an interface so that most code can work polymorphically with both types.
Can you give an example of what you mean?
"Global variables" are simply variables declared at the top-level scope. They don't need to be "dynamically bound" if you unravel the top level into a sequence of statements that thread their environment (which is a bog-standard practice for interpreters, and how you should handle blocks, too), unless you'd like to be able to reference variables declared below you in the global scope. And that can be resolved by passing a dynamic environment in addition to the lexical one. Or by making a single pass over the global space, grabbing all the declared names, and making that your initial lexical environment. But, of course, this is all speculation without an example.
> I considered having a problem exercise to do effectively that, but I felt like it might be reaching a little too far for a first-time language implementer.
Uniquely renaming variables is, in my opinion, a far simpler concept to grasp than nested scope resolution. It would also be far easier to implement, with notably less code change, while still letting you add another compiler pass and giving the reader ground to cover. The actual downside is that it achieves your goal without needing to explain the whole mess, which means the chapter doesn't get to spend time explaining how to think about resolving variables.
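A unique-renaming pass of the sort described above can be quite small. This sketch uses an invented tuple-based AST, not the book's representation:

```python
# Alpha-renaming over a toy AST: ("num", n), ("var", name),
# ("add", a, b), ("let", name, init, body). Every binding gets a
# globally unique name, so later passes never need nested scopes.
def rename(node, env=None, counter=None):
    env = env if env is not None else {}
    counter = counter if counter is not None else [0]
    kind = node[0]
    if kind == "num":
        return node
    if kind == "var":
        return ("var", env[node[1]])
    if kind == "add":
        return ("add", rename(node[1], env, counter),
                       rename(node[2], env, counter))
    if kind == "let":
        _, name, init, body = node
        init2 = rename(init, env, counter)  # init sees the outer binding
        counter[0] += 1
        fresh = "%s_%d" % (name, counter[0])
        return ("let", fresh, init2,
                rename(body, {**env, name: fresh}, counter))
    raise ValueError("unknown node: %r" % (kind,))
```

After this pass, shadowing like an inner `let x = x` becomes `let x_2 = x_1`, and variable resolution in every later phase is a flat dictionary lookup.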
> code like this is not common:
Your toy example is, obviously, a situation where you should reconsider what you're doing. However, a program such as
lookup :: Env -> Ear -> Value
lookup = ...

f :: Env -> Expr -> Expr
f env exp =
  let lookup = lookup env
  in ...
is a fairly common pattern in automatically curried languages, or languages with functional-style loops (see [0] for such a usage of shadowing). I don't think that your stance is wrong, but I think decrying other stances isn't particularly fair to the language design world, where there are many languages where such shadowing is neither rare nor often an error. Of course, there is some play here insofar as treating function declarations differently from other variables, which is not the case in many languages where this behavior is accepted.
Also, for what it's worth, translating the above code to JavaScript yields a stack overflow error if you invoke lookup inside of `f`.
> unless you'd like to be able to reference variables declared below you in the global scope.
It's this part. Unlike ML, which has what Queinnec calls a "hyperstatic" top-level environment, Lox follows Scheme, where the top-level environment is dynamic. This gives you a nice way to support mutual recursion at the top level.
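Python's module scope is dynamic in the same sense, which makes a quick illustration easy. In this sketch, each function's reference to the other is resolved at call time, so top-level mutual recursion just works:

```python
# `is_odd` does not exist yet when `is_even` is defined, but the name
# is only looked up when `is_even` actually runs.
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

print(is_even(10))  # True
```

A hyperstatic top level would reject `is_even` outright, since `is_odd` is unbound at its definition site; the dynamic approach trades that early error for this flexibility.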
> Or by making a single pass over the global space
That works when running from a file, but not in a REPL session. Lox also supports REPL sessions like:
> fun foo() { return bar; }
> foo();
Undefined variable 'bar'.
[line 1] in foo()
[line 1] in script
> var bar = "ok";
> print foo();
ok
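For comparison, Python's globals behave just like the Lox session above; a minimal sketch:

```python
# As in the Lox REPL: `bar` is resolved when `foo` runs, not when
# `foo` is defined.
def foo():
    return bar

try:
    foo()  # NameError: `bar` doesn't exist yet
except NameError as err:
    print("error:", err)

bar = "ok"
print(foo())  # ok
```

The late lookup is what lets a REPL session define a function first and its dependencies afterward.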
> And that can be resolved by passing a dynamic environment in addition to the lexical one.
True, that would avoid the need for a single polymorphic environment type, but again, it's still more code complexity.
> is a fairly common pattern in automatically curried languages, or languages with functional-style loops (see [0] for such a usage of shadowing).
Interesting, I wasn't aware of that. I can reword it by saying something more like "shadowing is usually in error in imperative languages".