Hacker News | Elte's comments

Germany here, I have a fixed rate for 30 years.


Fixed rate for the full duration is also standard in France. People only refinance if the new rates are advantageous (and they find a bank which accepts). Variable rates for mortgage are virtually unheard of.


Long ago in high school, I entered a LEGO robot competition with some friends. Tests were line following, collision detection, etc. One of the tests involved the bot being sent on a collision course with a wall. It had to detect running into it and turn around. This was one of the easiest ones to complete, but shortly before the test started we realized that our pressure sensor was malfunctioning and didn't send any more signals. There was no more time to swap it out, I don't even think we had a spare to be honest.

Not wanting to give up points on an easy test, we gauged the distance the bot had to cover in the test, and quickly uploaded some new software. At the start of the test, our bot moved forward for 4 seconds, stopped, then turned around. Full points on that one!

Sometimes things just need to work and we can worry about them working _correctly_ later...


You... Literally made a test defeat device.

In other words, you pulled a Dieselgate, in LEGO form.

Had I been judging your entry, not only would you have sacrificed those points, I'd have disqualified you from the competition on ethical grounds.

There is never an excuse for smoke and mirrors. Never.


Ah yes. I often use Maps when I have a rough idea where to go, but I just need to commit the street names of a few turns to memory. Finding those names indeed tends to be a frustrating experience.


You could ask for directions and look at the steps.


That sometimes works, but the references I'm interested in aren't always the ones directly on the route. For instance, the name of the street just before the one you need to turn onto can be helpful to remember. Also, if it's a list of directions, I no longer have the context of the actual map, so it's a subpar solution regardless.


It's good to know I'm not the only one. Whenever I've had an occasion recently where I thought an LLM might be useful (either I didn't know the framework or I didn't quite understand the subject matter) and asked ChatGPT or Bard, they always just hallucinated crap that didn't work and couldn't be made to work. Just yesterday Bard hallucinated a method that was called `something_that_exists_somethingIAskedAbout()`, switching to camel case halfway through, that obviously didn't exist. For the same problem, ChatGPT came up with an existing method, but modified its parameter list to make it do things it didn't do.

I struggle to think of a single instance where it's been helpful for me, and I keep trying, because I don't enjoy doing useless grunt work any more than the next guy, and I want to at least try to learn to ask it the "right" questions.


I had to do a regex once in the past 3 months. It helped.


I've gone through the process some 4 to 5 times in the past two years or so and never had any issues with it. The delays are annoying, but I get part of my money back no problem.


I currently have YouTube Music, and have looked into getting Premium a couple of times, because I watch enough YT content to want the ads disabled everywhere. But for some reason they won't let me. If I go to the YT Premium page, they tell me there's nothing I can do there and that I have to go to my account page, where of course there is nothing to do either. I suppose I could cancel my YT Music subscription and then try to subscribe to YT Premium, but with notoriously bad customer support and a number of family members on the plan, I don't want to risk that also not working and then having to go through the hassle of getting everybody set up again.

If they want to sell premium so badly, maybe they should actually let me buy it.


The documentation reads like a tutorial, which is fine the first time you read it, and really annoying the next 99 times when you're just trying to find something. My biggest gripe though is that the majority of classes / methods aren't locally documented with comments, or only minimally. If I don't understand how a certain parameter behaves (or even what a function does), I have to go online and search for examples, or look through the docs hoping that it's explained. And don't get me started on Facades, which are a code discovery dead end...

All that being said, my overall experience of working with PHP / Laravel is quite pleasant, probably more so than other technologies I've worked with in recent years. Everything has its issues I suppose.


Yep, the tutorials/guides are really good, but as you say, the details aren't really covered. There are so many examples like "relationsToArray(): Get the model's relationships in array form."[1] Just an expanded version of the method name with no context or detail.

[1]: https://laravel.com/api/10.x/Illuminate/Database/Eloquent/Mo...


The good ol' "you thought these were docs, but actually there's no information gain and no docs, ha, tricked you! But look at how fancy our docs website looks" kind of documentation style.


> The documentation reads like a tutorial, which is fine the first time you read it, and really annoying the next 99 times when you're just trying to find something.

Ah yes, the Ansible approach. I've used it for a decade, and I routinely get lost in its utterly terrible by-example documentation.

They are the gold standard for how not to write documentation.

God, I hate the Ansible docs so much, they are the reason I burned 30% of my Kagi search quota this month.


I think more documentation teams need to know of the concept of Diataxis [1] so they can invest in the 4 different kinds of documentation developers turn to for help when picking up a new technology:

- tutorials;

- how-to guides;

- technical reference; and

- explanation.

[1]: https://diataxis.fr


Thanks for this. This is something I kind of knew but would have been hard pressed to articulate, especially on the spot. Seeing it laid out like this is very useful.


This looks very nice and is something I have been searching for but didn't know existed. Thank you!


See also the C4 model "for visualising software architecture". https://c4model.com/


What is wrong with the Ansible documentation?! Almost all Ansible module documentation pages follow the same structure: a one-sentence synopsis, a list of OS packages needed to be present on the machine where Ansible runs and on the target machine, a table of parameters including aliases, default values and other hints, a list of attributes exported, some notes, and real-world examples.

It doesn't get clearer than that.


Outside of the module documentation, the rest of the Ansible docs are examples. For instance, there is no page where all the ways of accessing inventory variables are listed, or the supported Jinja filters. They are all scattered across a myriad of examples, which you have to read, carefully, to find what you need.

https://docs.ansible.com/ansible/latest/playbook_guide/playb...

On this page there is no quick index of all the functions available, their arguments, and a short summary of how they work. You need to synthesize this information yourself by reading through ALL the examples and hoping your niche use case is listed.
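As a sketch of the kind of quick reference that's missing, here are a few builtin filter idioms you otherwise have to piece together from scattered examples (the filters and the `hostvars`/`inventory_hostname` variables are real Ansible/Jinja2 builtins; the playbook itself and `my_var` are made up for illustration):

```yaml
- hosts: all
  tasks:
    - name: A few common filter idioms
      debug:
        msg:
          - "{{ my_var | default('fallback') }}"    # fallback if my_var is undefined
          - "{{ ['a', 'b', 'c'] | join(',') }}"     # list -> "a,b,c"
          - "{{ hostvars[inventory_hostname].ansible_host | default(inventory_hostname) }}"
```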

There is more than one type of documentation, with different use cases. There's the tutorial/list of examples, which Ansible excels at, and which is ideal for a first-timer reading the docs from cover to cover. Then there's the API reference with a quick index, for intermediate to advanced users who know roughly what they need and just have to find it. Here, Ansible's docs fail dramatically.


I am certainly not one to defend the new Ansible docs, but part of the woes you're describing is due to the fact that they just doubled down on `ansible-galaxy install`-based setups, meaning there isn't "an answer" to which filters are available in Ansible.

The authoritative answer to which filters are currently available in your distribution comes from running `ansible-doc -t filter --list`, which does include a summary line, although for some of them it's "geez, thanks", just like any open-source collection of disparate modules glued together.

I used to actually build the Ansible docs locally with singlehtml because I despised that chopped-up view, but now that they're all "galaxy all the things" it's practically useless again (although I will also say that building it locally and eliding all their tracking bullshit makes the pages load like a bazillion times faster, so... still valuable in that way).


For stuff like the Ansible docs (and other software with lacking documentation), I find that ChatGPT can provide the missing pieces.


I've started to use ChatGPT for semi high-level questions only this week, and I'm with you. It has hallucinated quite a few nonexistent functions, and it's generally unhelpful slightly more often than it is helpful. Just now I asked it why an `<input />` and a `<select />` with the same CSS had different heights (I've managed to avoid CSS for a while, shoot me XD), and gave it the CSS applied. It suggested the default browser styling was the culprit, and I should set the same height to both elements. I replied they already had the same height set. Then it suggested setting the same padding - they already had the same padding. Then it put `box-sizing` in an example and I promptly realized that `<input />` and `<select />` must have different default values for `box-sizing`. I asked if that was correct, and it said yup!

Based on what I've seen elsewhere, I really feel like it should've been able to answer this question directly. Overall this matches my experience so far this week. Not saying it's never useful, just that I regularly expected it to be... better. I haven't had access to GPT-4 yet though, so I can't speak to whether it's better.
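For what it's worth, MDN notes that browser UA stylesheets apply `box-sizing: border-box` to `<select>` and button-like inputs by default, while text `<input>` elements get `content-box`, so identical `height` and `padding` values can still render differently. A minimal sketch of the usual normalization (the values here are made up, not from the thread):

```css
/* Normalize box-sizing so height and padding mean the same
   thing for both controls regardless of UA defaults */
input,
select {
  box-sizing: border-box;
  height: 2.5rem;      /* illustrative values */
  padding: 0 0.5rem;
}
```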


I remember asking it how to remove hidden files with rm and it hallucinated an -a option. Sometimes the hallucination makes more sense than reality.
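For the record, `rm` has no `-a`; hidden files have to be matched explicitly. A common idiom is a glob that excludes `.` and `..` (note it also misses names starting with two dots):

```shell
# Scratch directory with one hidden and one visible file
demo=$(mktemp -d)
cd "$demo"
touch .secret visible.txt

# ".[!.]*" matches dotfiles, but never "." or ".." themselves
rm -f .[!.]*

ls -A    # only visible.txt remains
```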


Googling "why an <input /> and a <select /> with the same CSS have different heights" and pressing "I'm Feeling Lucky" would give you the correct answer in seconds.

The AI is just a great waste of time in almost all cases I've tried so far. It's not even good at copy-pasting code…


Also not OP, but I literally just learned about Hedy [1] today. No experience except from clicking through it for 20 minutes, but it looks quite interesting, taking somebody from a language with a very simple syntax (and limited functionality) to full blown Python, one level at a time, by making the language gradually more complicated (and more powerful).

[1] https://www.hedycode.com/


I also quickly went through the basic tasks in 17 levels of Hedy in about 20 mins. (I just know a little programming.) Hedy is text-based and introduces ideas such as: print, entering variables, if, else, repeat, ... I really liked the gradual approach, which keeps you going forward onto the next level.

There are additional tasks at each level (see tabs at top) which I didn't try. It seems that these tasks are best done from left to right in order to get the basic idea of what is required.


Do kids who start programming with Hedy get confused about when a piece of text is interpreted as a variable vs. a string?

https://www.hedycode.com/hedy/2#default

The way it automatically detects variables within strings seems too magical. OTOH, AIUI, Hedy has been developed alongside research on what works for kids.


IIRC one of Hedy’s unique features is that it gradually increases in complexity as you “level up” including introducing what we’d call “breaking changes”. At level 4, they start allowing and requiring you to quote string literals: https://www.hedycode.com/hedy/4#default
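Roughly, paraphrasing the linked level pages rather than quoting them: at level 1 you write

```
print Hello!
```

while from level 4 onward the same program must quote the string literal:

```
print 'Hello!'
```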


Nice!

I'm excited for my son to try it out once he's gotten comfortable with Scratch.

At the moment, he's more interested in the visual design part of Scratch than the programming, so I probably need to find some cool existing animations to inspire him.


I found it very confusing that the introduction at each level links to the next level but does not tell you to try the exercises. I didn't even realize that the tabs were exercises per level, as I consider tabs a higher level of hierarchy than the previous/next buttons (I expect those to work within a tab, not to switch to a different row of tabs).

And also, why introduce an echo command in level 1 only to drop it in level 2? They could have waited and introduced ask in level 2 or 3 even.

I love the quiz questions though; they even make me, as an experienced programmer, think.


I've got a couple of templates where I've aliased the value of a TypeScript enum into a local variable before using it in the template, so I'm pretty certain at that point I was unable to use the value of the enum directly (that's also how I remember it). However, I just tried it and...it seems to just work now. So thanks for making me revisit that :D.
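For anyone hitting the older behavior: the workaround was to alias the enum onto the component class so the template could reach it. A sketch under assumptions (`Status` and `OrderListComponent` are made-up names; the Angular template side is implied rather than shown):

```typescript
// Enums aren't visible to Angular-style templates by default, because
// template expressions are evaluated against the component instance.
// The classic workaround: re-expose the enum as a class property.
enum Status {
  Active = 'active',
  Archived = 'archived',
}

class OrderListComponent {
  // Alias: the template can now reference `Status.Archived` etc.
  readonly Status = Status;
  status: Status = Status.Active;
}

const cmp = new OrderListComponent();
console.log(cmp.status === cmp.Status.Active); // true
```

With the alias in place, a template comparison like `status === Status.Archived` resolves against the component; without it, the expression has nothing to bind `Status` to.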

