You're gonna get downvoted, but there is truth in your comment. I wouldn't have made money with Bitcoin if I had paid attention to all the negative comments on HN.
I know I know, just because I profited on Bitcoin doesn't rule it out as a Ponzi scheme or whatever HN calls Bitcoin nowadays. I and many others genuinely believe that Bitcoin provides great value through its underlying technology and also has the potential to become something greater.
> First, because some random idiot on a blog post does not know whether the person "performed poorly at their job" in general. He just has a specific gripe. The other person might have done miracles in other parts of Cortana. There are such things as shipping priorities, and they're not determined by random blog posts or comments.
I'd say the _Product Manager_ of Cortana has performed poorly at their job if the released product (Cortana) is bad. And yes, Cortana is bad.
> Second, because people deserve chances for improving. Anybody at any current position has "performed poorly" at this or another project earlier on. Terminating or transferring them only makes sense if they don't get to improve, which a random blog post can't determine.
Cortana was first released over 3 years ago. That is plenty of time for a chance at improvement.
> That's why companies don't base their decisions on random posts or comments.
Companies should base their decisions on customer feedback, and so far all the feedback I've seen on Cortana is that it's downright terrible.
The difference between autocomplete=off and the rest of your examples is that there are actually positive UX use cases for disabling autocomplete on certain inputs (e.g. when you are an admin editing existing users)
Ad blockers have false positives as well. And there's a use case for blocking the user from closing the tab (onbeforeunload), such as prompting them to save/submit what they're working on. But for all of those, the browser is still in control and the question is what provides the most benefit for the user.
So, along the same lines, it may make sense to improve the UI for autocompleting users, or for hinting about the use of the field, to make it easier for sysadmins. But that shouldn't break the more common case of handling sites that just think they're Too Special or Too Important to allow saving login information.
Videos of monks performing self-immolation are enough proof to me.
These monks practice mindfulness meditation for decades, and as a result, they have such control over their minds that they are literally able to "ignore" the pain of being on fire and sit still through the entire duration.
And that is proof of what? That some forms of mindfulness can overcome some physical or mental stimuli? I think this can drive an increase in self-destructive behaviour just as much as in affirmative behaviour (as your example shows). Why would that be proof of positive mental effects? (I think that's what this general discussion is about, at least.)
It at the very least shows that mindfulness gives you more control of your reaction to your thoughts. A normal human being gets constantly bombarded by random thoughts, a lot of them negative, and it tends to affect his or her behavior. Mindfulness teaches you to be more aware of your thoughts and gives you the choice of allowing them to affect how you behave or not.
I'm pretty sure I could do that too without much practice. Assuming it was an intentional choice, anyway.
Your body can only handle so much pain before it basically shuts off, and then you don't feel much of anything. The real pain comes later, when you're healing.
Burning to death only hurts briefly; once the nerves have sustained enough damage, you can't feel anything. Adrenaline helps block pain as well. Besides, self-immolation is not exclusive to those who have practiced mindfulness.
Ever had a severe accident, like a motorcycle crash, or been run over by a car? The pain comes after, not in the moment the trauma occurs.
Yes, the pain is only there before your nerves get burnt out, but even just a minute of being burned alive is enough to make any normal human being, including you and me, react involuntarily to the pain.
And yes, I have been in high-pain incidents. I recall the pain happening both during and after.
Meditation (at least mindfulness meditation) is meant to rid you of the inner monologue, though. That can be immensely helpful, considering depression is often caused by constant negative thoughts.
I'm disappointed to see RiotJS not getting much attention lately. I've been using it for a B2B SaaS application and it has been intuitive and elegant to use, and easy for any JavaScript developer to pick up.
I tried learning Angular but found it too monolithic, and I'm currently using React Native to develop the mobile apps. If React Native is any indication of how React works on the web, I don't think it compares to RiotJS in terms of developer happiness: React has a steeper learning curve, the code is less readable and less concise, and a lot of things feel unnecessary, whereas RiotJS is significantly more lightweight, more intuitive, and faster to get things working (in my experience).
I've never heard of death by a thousand control variables.
What exactly is wrong with controlling for relevant variables such as education, hours worked, and experience? If you find a combination of variables that rejects the hypothesis, maybe you should look further into it, no? Dismissing it because you think it's not objective for some absurd reason is rather unscientific. "Trying all possible combinations of variables" does not disqualify an analysis from being objective; I'm not sure where you got that idea from.
There is nothing wrong with controls when they're relevant and have a convincing motive behind their inclusion.
What's not okay is sequentially trying all of the possible analyses and stopping the moment you find the exact combination of variables that tells you what you had already assumed to be true. Especially so when simple analyses point to A and you keep adding new variables until you get to B, which is exactly the case here. That is a very well-known abuse of statistics. There is a reason all the well-known and popular information criteria (which measure model quality) include the number of parameters in the model as a penalty term.
And while adding control variables isn't bad per se, there are proper precautions to take when doing so (segmented sampling, non-linearity transformations, even controlled experiments), and they become exponentially more costly the more variables you add. Because these fraudsters have a motive, the model only needs to be as rigorous as necessary to secure their predetermined conclusion. The "keep adding variables" approach almost always ends up as a way to lie with statistics.
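To make the information-criterion point concrete, here's a rough sketch (in TypeScript, purely as an illustration; the formulas are the standard AIC/BIC definitions) of how these scores make every extra parameter pay for itself:

    // Akaike Information Criterion:   AIC = 2k - 2*ln(L_hat)
    // Bayesian Information Criterion: BIC = k*ln(n) - 2*ln(L_hat)
    // k = number of fitted parameters, n = sample size, logLik = ln(L_hat).
    // Lower is better; each added control variable raises k and has to buy a
    // big enough improvement in log-likelihood to be worth it.
    function aic(k: number, logLik: number): number {
      return 2 * k - 2 * logLik;
    }

    function bic(k: number, n: number, logLik: number): number {
      return k * Math.log(n) - 2 * logLik;
    }

    // Hypothetical example: the extra variable barely improves the fit,
    // so the criterion still prefers the simpler model.
    console.log(aic(2, -120.0)); // 244
    console.log(aic(3, -119.8)); // 245.6 (worse despite the "better" fit)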
Wow, I've seen so many comments by climate change deniers using arguments so similar to yours that I literally had deja vu.
The horseshoe nature of politics will never cease to amuse me, though; thanks for the example.
Also, if a hypothesis is supported by simple studies but falls apart under more complex ones, it might be too simplistic a hypothesis. Almost as if sexism (and sexism guilt-slinging) wasn't an entirely black-and-white problem. Who'd have thought?
Climate change deniers are some of the biggest users of the flawed analyses I'm talking about. For example, several of them argue that if you control for urban heat island effects, the magnitude of the human component decreases substantially. Such willy-nilly expansions of model complexity fall apart when modeled more rigorously [0][1].
You'll notice that I've never claimed that new parameters universally make a model worse. All of the common information criteria [2][3][4] can, numerically, show improved model quality as the parameter space grows... it's just unlikely. Such p-value hunting might get you the p-value you're looking for, but it is very unlikely to improve the model.
Sexism is a hard problem. Throwing variables wantonly at models until sexism disappears isn't doing anything for the problem...it's nothing more than a pseudoscientific way of pretending it doesn't exist.
Why? Guys who want to work with little children are discriminated against even more - some parents are even afraid of them. I don't think it was that bad a comparison.
We're not talking about job discrimination here; we're talking about learning something out of personal interest. Being discriminated against in employment is a separate issue that comes later, after you're actually qualified for such a job (or maybe after you've got it).
There are plenty of men, I believe, who would be interested in working with little children but who avoid those professions because of the social stigma. It's sensible to avoid a profession if you think you're going to have a very hard time gaining employment in it, or will suffer a lot socially for it (many people seem to think such men are pedophiles).
But we're talking here about people learning something out of personal interest. You don't need anyone's approval to learn programming, nor many other things. Many things do require more resources to learn; aircraft piloting, for instance. You're never going to do that for free, though you can learn some of it fairly cheaply with a software simulator. Sailing is another that comes to mind; I think it's pretty much impossible to learn without an actual sailboat.

Programming, by comparison, is easy to get into: all you need is a computer and an internet connection, which these days are considered cheap and ubiquitous. The software is all free; you can download absolutely everything you need, including an OS. There's an absolute plethora of websites and forums to learn from (e.g. StackExchange). The barrier to entry is ridiculously low. But you do need time, personal interest, and drive.

Maybe you'll have some trouble gaining employment after you've learned it, especially without a college degree in a related field, but that's another subject. The original article here was about someone already working in programming who didn't bother to learn another language.
Looking at it as objectively as you can, what advantages does dynamic typing have over static typing?
The only potential candidate I can think of is 'more flexibility'.
However, undefined behaviour is not a desirable trait when designing programs, and statically typed languages have ways to provide polymorphic functions without leaving cases unhandled (such as pattern matching on function arguments).
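For instance, here's a small sketch (in TypeScript, just as one statically typed example): a tagged union plus an exhaustive switch gives you a polymorphic function where any unhandled case is a compile error rather than a runtime surprise:

    // A value can be one of two variants; the `kind` tag is what gets matched on.
    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) {
        case "circle":
          return Math.PI * s.radius ** 2;
        case "rect":
          return s.width * s.height;
        default: {
          // If a new variant is added to Shape and not handled above,
          // this assignment stops compiling instead of failing at runtime.
          const unhandled: never = s;
          return unhandled;
        }
      }
    }

    console.log(area({ kind: "circle", radius: 2 }));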
Some may argue that dynamic languages are more readable, but there are languages with static types that are both concise and readable (Elm being a good example), so I wouldn't class that as a benefit.
Some may argue that the speed of prototyping a solution is a benefit, but the time saved putting together a prototype is often negated by the time spent debugging as the prototype matures.
So what advantages am I missing? There must be something that makes dynamic languages popular. What reasons are there to use a dynamically-typed language over a statically-typed one?
Static types limit the expressivity of your code to what the type system is able to prove. Depending on what you're trying to do, you can spend more time arguing with the type system than getting things done.
This really starts happening with a vengeance when you do increasingly lispy things, like creating DSLs that move the language closer to your problem domain by writing programs that write programs.
Consider, e.g., the way ActiveRecord introspects the database schema to enrich your model declarations without further work from you.
Now there are ways to get similar effects in statically typed systems, but it's much more work, particularly if the dynamism comes from the execution environment (rather than compilation).
Okay, can you give me one example of a useful macro that you'd write in a dynamically typed language? I'd like to see what challenges there are in recreating it in a statically typed language.
Also, regarding ActiveRecord, it seems you're hinting at the benefits you get from composability, is that correct?
For me, the benchmark is parsing a heterogeneous data structure in JSON. For example, an array of inventory items. We get around this by shoehorning them all into a common structure, but in a dynamic language with a dynamic datastore we can store them and access them in a manner more native to the problem space.
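A rough sketch of the trade-off (TypeScript syntax, with a made-up inventory payload): the dynamic style just reaches into whatever fields happen to be there, while the typed style makes you declare a union of item shapes up front:

    // Hypothetical heterogeneous inventory payload.
    const json = `[
      { "type": "book",  "title": "SICP",     "pages": 657 },
      { "type": "sword", "name": "Excalibur", "damage": 42 }
    ]`;

    // Dynamic style: no declared shape, any field access is allowed,
    // and a typo only blows up at runtime.
    const looseItems: any[] = JSON.parse(json);
    for (const item of looseItems) {
      console.log(item.title ?? item.name);
    }

    // Typed style: every variant is declared, so access is checked,
    // but a new kind of item means touching the type first.
    // (JSON.parse returns `any`, so this is an unchecked cast in practice.)
    type Item =
      | { type: "book"; title: string; pages: number }
      | { type: "sword"; name: string; damage: number };

    const items: Item[] = JSON.parse(json);
    for (const item of items) {
      console.log(item.type === "book" ? item.title : item.name);
    }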
That looks interesting. Is there an analogous Haskell or OCaml feature to F#'s active patterns?
Regardless, I still think it's a moot point. I understand that static typing can be great for catching bugs sooner rather than later. But doing that takes time and work. Basically the thing that all static typing advocates neglect is that sometimes I just don't want to put that time and work in up front.
Most people would agree they'd want to put that work in before shipping their software to millions of people, but most code is only used by a few people a few times. The line between development and production is blurry. Most of the time, I would much rather run code that mostly works _now_ and has tons of bugs than have to put in more work. Even if it's not much more work, it's still not nothing. I own my computer and tell it what to do. But a compiler rejecting my code b/c there _might_ be an edge case that has an error is unacceptable. Which is why I would love static typing if you could simply turn it off. I know of some research in gradual typing, but I've never seen it in any mainstream languages.
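To sketch the kind of off switch I mean (using TypeScript-style annotations purely as an illustration, not as a claim about any particular language): annotate where you care, and use an escape hatch like `any` where you just want the code to run:

    // Checked where it matters:
    function total(prices: number[]): number {
      return prices.reduce((sum, p) => sum + p, 0);
    }

    // "Typing turned off" for throwaway glue code: `any` tells the checker
    // to stop reasoning about this value entirely, so it never rejects it.
    const scratch: any = JSON.parse('{"prices": [1, 2, 3]}');
    console.log(total(scratch.prices));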
> but there are languages with static types that are both concise and readable (Elm being a good example)
Elm to me embodies everything that drove me to Ruby over functional languages: terseness in what to me are all the wrong places, coupled with too-verbose typing.
> So what advantages am I missing?
The ones you have glossed over: more flexibility, and the ability to be terse while remaining readable. They matter more to many of us than you might think.
What finally sold me on Ruby was when, as an experiment, I rewrote a piece of queueing middleware we were using from C to Ruby, adding a significant number of features while cutting the line count to 10% of the original. Maybe I could achieve similarly compact code with a statically typed language, but the likely candidates, at the time at least, were either extremely verbose or languages I considered absolutely unreadable (Haskell being top of my list of offenders - most of these languages have syntax clearly designed by people inspired by maths, unaware of or not caring that this will push most people away).
I would love a "more static" Ruby, but I would not be willing to lose the terseness or expressiveness or readability to gain it. Maybe Crystal will get the balance right over time, though personally I believe you can get a lot more performance out of Ruby too without a lot of the sacrifices Crystal is making (but it will take a lot of work).
>"Elm's to me embodies everything that drove me to Ruby over functional languages: Terseness in what to me is all the wrong places, coupled with too verbose typing."
Elm can use type inference to work out types. The difference between this and dynamic languages is that it does it at compile time so you pay no runtime overhead.
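A tiny illustration of the point (sketched in TypeScript rather than Elm, but the principle is the same): nothing is annotated, the types are inferred and checked before the program ever runs, and none of it survives to runtime:

    // The compiler infers the types here; no annotations are written.
    const xs = [1, 2, 3].map((n) => n * 2); // inferred as number[]
    const greeting = "hello, " + "world";   // inferred as string

    // Misuse is rejected at compile time, not discovered at runtime:
    // xs.push("four"); // error: string is not assignable to number

    console.log(xs, greeting);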
No, I'm speaking from having read a bunch of Elm code.
> Elm can use type inference to work out types. The difference between this and dynamic languages is that it does it at compile time so you pay no runtime overhead.
I know that. It does not change what I wrote as the outcome is that Elm code still includes more type annotations than I'm willing to deal with.
Yes. There's a reason you practically never find people using those things in real Ruby projects, despite the huge number of such libraries that exist.
I think the real reason might be lack of decent tooling, and developer culture.
Most attempts to add static types to Ruby code don't come with complementary tools that would give us some of the advantages needed to immediately win developer support. Code completion in your editor based on type annotations, for example, would be a huge plus, but I don't know of any tools that give me that in Ruby (if they exist, I'm unaware of them). In most cases, your code will still run regardless, unless you use a separate tool to type-check and are disciplined about its feedback. It's unlikely that your whole team will _always_ run the type checker, and although you can configure it to run on every save, just like you'd do with tests, it's an inconvenience.
In terms of culture, I'm mainly thinking of Rails here, but I can think of more than a few Ruby projects/libraries as well. Ruby is, simply put, a dynamic language. Even if you use typed.rb, you won't get much information about the libraries you'll be using. Code completion is mostly based off comments/documentation, and only in some editors; in many cases it might not be there at all. I also feel that many Ruby developers simply don't like types, and that's the end of the story. I'm convinced that most don't see the advantages. For instance, on teams where we've added Rubocop simply to lint our codebases, I've noticed developers complain about warnings and errors the linter reports whenever they think they know better. I imagine the same happening with type annotations in Ruby codebases. It would become a task that you run before committing or before a merge request (like tests, in some teams). When confronted with a bunch of errors, they'll go into flight mode: "but I've tested this manually and it works, why is this linter complaining, and why is my type checker complaining as well?". Needless to say, the type checker might have just flagged a _potential_ bug for a use case you haven't manually tested yet.
edit: I think tooling can help shift the developer culture aspect. Better tooling provides a better developer experience and in the end that's all we want. Flow and TypeScript are perfect examples (which I adore).
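To make the "potential bug" point concrete, here's a tiny sketch in TypeScript (since Flow/TypeScript are the kind of tooling I have in mind): the checker flags a case the developer may simply never have exercised manually:

    // `find` can return undefined when nothing matches.
    const users = [{ name: "ana" }, { name: "bo" }];
    const admin = users.find((u) => u.name === "admin");

    // With strict checks on, the next line is rejected: `admin` is possibly
    // undefined. Manually testing only with an admin present would never
    // surface the crash.
    // console.log(admin.name);

    // The fix the checker nudges you toward:
    console.log(admin?.name ?? "no admin configured");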
Ruby is dominated by Rails, but Rails sidesteps the issues of undefined state by 'convention over configuration'.
As for other Ruby projects, static typing would help with performance issues. Why developers choose not to add types when hitting performance bottlenecks is not something I fully understand; perhaps it's just seen as normal to rely on C modules to increase performance. That said, Graal/Truffle does seem to offer hope that Ruby performance can be greatly improved.
I understand that there are indeed cases where statically typed languages make a lot of sense. But anecdotally, I use Ruby for web apps, and honestly, across the millions of lines I have written, I have rarely run into issues because the language is dynamically typed. Is it because I am so familiar with this style? I mean, it isn't just me; other Ruby (and previously Perl) developers I've chatted with just don't have the issues I commonly hear described.
> Some may argue that the speed of prototyping a solution is a benefit, but the time saved putting together a prototype is often negated by the time spent debugging as the prototype matures.
Most prototypes fail to gain traction and are discarded well before they mature and maintenance costs start rising.
I'm not sure there's an objective truth in programming language comparison. There's just the right tool for the right job. I love Ruby because it's a joy to write; that doesn't mean you can or should use it for everything.