
You don't need 3000W; 1kW is plenty. I have a Yuba Mundo (one of the biggest long-tail cargo bikes), and my Bafang motor tops out around 1kW. That's enough even for the biggest hills here in Bloomington (which is quite hilly).


I tried this too, and it was total garbage for me as well (a similar refusal, plus other failures).


No, this is exactly what is meant by soundness. Using the `any` type in TypeScript can result in a value whose static type is `number` actually being a string at runtime; that is unsoundness.
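Here's a minimal sketch of the same failure, using Python's `typing.Any` under a static checker such as mypy (an analogous illustration, not TypeScript itself; the names are mine):

    from typing import Any

    x: Any = "hello"
    n: int = x    # the checker accepts this: Any is assignable to anything
    print(n + 1)  # TypeError at runtime: n is actually a string, not an int

The program type-checks, but the `int` annotation on `n` is a lie at runtime; that gap between the static claim and the dynamic reality is what unsoundness means.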


The predictions from this post have almost entirely turned out to be wrong. Chez Scheme upstream decided to merge in Racket's changes entirely, and to make the lead developer of Racket (Matthew Flatt) a core Chez developer. These days Matthew is the most active Chez developer, and over the past few years half of the serious Chez committers have been people who came from Racket.


It's not true that you need CPS to implement first-class continuations. There are plenty of slow ways to do it, and even if you want to be fast there are multiple viable approaches. Dybvig describes a number of options in his thesis: https://www.cs.unc.edu/xcms/wpfiles/dissertations/dybvig.pdf
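As one illustration of a simple non-CPS strategy, here's a sketch in Python (which has no native continuations) of one-shot escape continuations built from exceptions. `call_ec` is a hypothetical name modeled loosely on Racket's `call/ec`; this covers only single-use, upward continuations, not full `call/cc`, but it requires no CPS conversion anywhere:

    class _Invoke(Exception):
        def __init__(self, tag, value):
            self.tag, self.value = tag, value

    def call_ec(f):
        # f receives an escape continuation k; calling k(v) aborts
        # back to this call_ec, which then returns v.
        tag = object()  # unique tag so nested call_ec calls don't interfere
        def k(value):
            raise _Invoke(tag, value)
        try:
            return f(k)
        except _Invoke as e:
            if e.tag is tag:
                return e.value
            raise  # belongs to an enclosing call_ec

    def product(xs):
        def body(k):
            acc = 1
            for x in xs:
                if x == 0:
                    k(0)  # escape immediately, skipping the rest
                acc *= x
            return acc
        return call_ec(body)

    print(product([2, 5, 0, 7]))  # 0, without multiplying past the zero

Full first-class continuations additionally have to be re-invocable and to survive the return of the frame that created them, which is where strategies like stack copying and heap-allocated frames come in.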


Sam will definitely know more about this than I will, so if he contradicts me, listen to him.

If I am not mistaken, the Racket language does not convert to CPS during compilation. Instead, when you want to capture the continuation, I think you just get a pointer to the stack frame that you want. All I know for sure is that it uses something called A-normal form, which is kind of like SSA in some ways, and that a continuation is two machine words (64- or 32-bit, depending on your architecture).


The main implementation of Racket today is built on top of Chez Scheme, which uses the techniques described by Dybvig that I linked to.

Indeed, the earlier implementation of Racket doesn't convert to CPS, but it does use something like A-normal form. There, continuations are implemented by actually copying the C stack.
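For readers who haven't seen it, A-normal form just names every intermediate result so that evaluation order is explicit. A rough illustration (the functions here are placeholders to make the snippet runnable, and t0..t2 are hypothetical temporaries):

    def g(x): return x * 2
    def h(y): return y - 1
    def f(a, b): return a + b

    x, y = 3, 5

    # Direct style: nested sub-expressions.
    direct = f(g(x) + 1, h(y))

    # ANF: one trivial operation per binding.
    t0 = g(x)
    t1 = t0 + 1
    t2 = h(y)
    anf = f(t1, t2)

    assert direct == anf

Because every operand is already a variable or a constant, control flow and evaluation order are completely explicit, which is the property that makes ANF resemble SSA.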


The post doesn't mean there are 20 unemployed people, but that there are 20 interchangeable pest control contractors who all do an adequate job, and so the one they fired was easy to replace.


No, macros and eval are quite different. You can see this for example in Python or JavaScript, which have eval but not macros.
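A minimal illustration of the difference: `eval` runs code held in an ordinary runtime value, while a macro would transform source syntax before the program runs. That's why a function (or `eval`) can't express something like a `swap` that rebinds its arguments; it sees only values, never the caller's syntax:

    # eval: ordinary evaluation of a runtime value, at run time
    x = 21
    print(eval("x * 2"))  # 42

    # No macros: swap receives the *values* of a and b, not the names,
    # so it can't rebind the caller's variables; the caller must do it.
    def swap(a, b):
        return b, a

    a, b = 1, 2
    a, b = swap(a, b)
    print(a, b)  # 2 1

A Lisp-style `swap!` macro would rewrite the call site itself into `a, b = b, a`, which no run-time mechanism can do.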


You can make macros in Python: https://github.com/lihaoyi/macropy (note that that project was started for a class taught by Sussman)

There's also a PEP to make them first-class: https://peps.python.org/pep-0638/


That's a different meaning of first-class from Strachey's definition of a first-class citizen[1] - i.e., one that can be passed as an argument, returned from a function, or assigned to a variable.

Syntactic macros are still second-class, like Lisp macros, but an improvement over text-replacement style macros.

For something macro-like that is first-class, there are fexprs[2] and operatives (from Kernel[3]). These receive their operands verbatim, like macros, so they don't require quotation to suppress evaluation, and fexprs/operatives can be passed around like any other value at runtime (a sketch follows the references below).

[1]:https://en.wikipedia.org/wiki/First-class_citizen

[2]:https://en.wikipedia.org/wiki/Fexpr

[3]:https://web.cs.wpi.edu/~jshutt/kernel.html
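To make the operative idea concrete, here's a toy evaluator sketch (hypothetical names throughout; this is not Kernel, just the shape of the concept). An Operative receives its operand expressions unevaluated, plus the caller's environment, and yet is an ordinary value:

    class Operative:
        def __init__(self, fn):
            self.fn = fn  # fn(operands, env) -> value

    def evaluate(expr, env):
        if isinstance(expr, str):       # symbols are variable references
            return env[expr]
        if not isinstance(expr, list):  # other atoms are literals
            return expr
        op = evaluate(expr[0], env)
        if isinstance(op, Operative):   # operands passed verbatim
            return op.fn(expr[1:], env)
        args = [evaluate(e, env) for e in expr[1:]]  # applicatives evaluate first
        return op(*args)

    # "if" as an operative: the untaken branch stays unevaluated, no quoting needed.
    def _if(operands, env):
        test, then, alt = operands
        return evaluate(then if evaluate(test, env) else alt, env)

    env = {
        "if": Operative(_if),
        "<": lambda a, b: a < b,
        "/": lambda a, b: a / b,
        "x": 1,
    }
    # (if (< x 2) 42 (/ 1 0)) -- the division by zero is never evaluated
    print(evaluate(["if", ["<", "x", 2], 42, ["/", 1, 0]], env))  # 42

Note that operative support amounts to one extra dispatch case in `evaluate`, and that `env["if"]` is a value like any other: it can be stored in a list, passed to a function, or returned, which is what makes it first-class.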


Strachey defined "first-class objects". This was by analogy with "first-class citizens" in a legal/political sense: they are treated just as well as any other object and face no additional limitations. If we extend the analogy to syntax, then I think it's clear enough that it means a piece of syntax that is treated the same as any other and neither requires special treatment nor imposes additional restrictions.

Thank you for the clarification and the additional information, I think having macros as first-class objects is a cool (but separate) idea.


They aren't that different. Fexprs are essentially additional eval cases.


The big difference between SBCL and Racket today is support for parallelism, and that's about decisions made by both projects a very long time ago. Racket has incrementally added significantly more parallelism over the years, but supporting lightweight parallel tasks that do IO (as in a web server) is still not something Racket's great at.

(Source: I'm one of Racket's core developers.)


This article is mostly whining that evidence-free speculation about how to write good software is no longer publishable in top conferences. And the major evidence cited is that a specific citation style is required, which has been a standard feature of every kind of publishing since forever. I promise (having reviewed many times for the specific conference under discussion) that no one's paper is rejected (or even denigrated) for failing to use the appropriate citation style; people comment on it the same way they would comment on any other style issue.


I think that's a pretty uncharitable take; I thought there were several interesting questions raised by the author:

1. Should conference "service" be something we expect of postdocs (and even PhD candidates) rather than established experts?

> Often, as a result, the PC is staffed by junior, ambitious academics intent on filling their résumés. Note that it does not matter for these résumés whether the person did a good or bad job as a referee! [...] I very much doubt that the submissions of Einstein, Curie, Planck, and such to the Solvay conferences were assessed by postdocs. Top conferences should be the responsibility of the established leaders in the field.

2. Should programme chairs strive to maintain exclusivity of their conference track, or look for important ideas that deserve to be communicated?

> As a simple example, consider a paper that introduces a new concept, but does not completely work out its implications and has a number of imperfections. In the careerist view, it is normal to reject it as not ready for full endorsement. In the scientific view, the question for the program committee (PC) becomes: is the idea important enough to warrant publication even if it still has rough edges? The answer may well be yes. [...] Since top conferences boast of their high rejection rates, typically 80% to 90%, referees must look for reasons to reject the papers in their pile rather than arguments for accepting them.

3. Is computer science suffering from a focus on orthopraxy rather than scientific method?

> What threatens to make conferences irrelevant is a specific case of the general phenomenon of bureaucratization of science. Some of the bureaucratization process is inevitable: research no longer involves a few thousand elite members in a dozen countries (as it did before the mid-1900s), but is a global academic and industry business drawing in enormous amounts of money and millions of players for whom a publication is not just an opportunity to share their latest results, but a career step.

What do you think about these?


I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI's impact on persuasion, propaganda, the information environment, etc., that have not happened.


Could you give some specific examples of things you feel definitely did not come to pass? Because I see a lot of people here talking about how the article missed the mark on propaganda; meanwhile I can tab over to twitter and see a substantial portion of the comment section of every high-engagement tweet being accused of being Russia-run LLM propaganda bots.


Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.

Look into the specific claims and it's not as amazing. Like the claim that models will require an entire year to train, when in reality it's on the order of weeks.

The societal claims also fall apart quickly:

> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.

This is a common trend in rationalist and "X-risk" writers: Write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, then people will always see the article as primarily correct. When you extract out the easy claims and look at the specifics, it's not as impressive.

This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:

> Most of America gets their news from Twitter, Reddit, etc.

Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers only a fraction of the US population are active users.


something you can't know


This doesn’t seem like a great way to reason about the predictions.

For something like this, saying “There is no evidence showing it” is a good enough refutation.

Counterpointing that “Well, there could be a lot of this going on, but it is in secret.” - that could be a justification for any kooky theory out there. Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we’re Cylons. Something we couldn’t know.

The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.

