weiliddat's comments | Hacker News

IME strato is OK for cheap VPSes as long as you don’t have high expectations.

5-6 years ago, I remember that scaling up to the 16/32GB tier was miserable and clearly oversubscribed; moving to Hetzner at the same price point brought a huge performance boost (mostly in CPU and disk access speed).


Don’t want to ding strato exclusively; it's likely the case for most $1-$4/month type hosting.

Not bad as long as you know what you’re getting yourself into.


Has anyone tried using HTMX + some realtime query layer like Convex?

Here's a real-time-ish planning poker app written in Go + HTMX in ~500 LoC

App (can take a few seconds to spin up if dormant): https://estimate.work/

Source: https://github.com/weiliddat/estimate-work


I mean I built a pretty featureful P2P planning poker app using React and it's around 1300 lines of typescript.

More, but I don't think it's a mind-blowing difference and I wasn't playing code golf when I wrote it. I wouldn't have used redux if I was!

https://github.com/ceuk/planning-poker


Oh cool, very interesting approach!

Did a quick test. Before this I also used a very ad-heavy p2p solution, and I saw similar issues there. Not sure if you're looking for feedback, but these were all issues I considered before settling on a server-based HTMX long-polling approach (rough sketch after the list below). Getting server + client + realtime-ish features in the context of "just HTMX" + a tiny LoC count is pretty cool (well, I think it's pretty cool :D)

In the WebRTC p2p approach, without some sort of sync protocol that validates the state of data:

- the host must be online / already there to join a room; the host leaving the room means everyone gets kicked!

- if you rejoin a room and don't receive updates, you get a partial view of the data

- if you have data connectivity issues, you get a partial view of the data

- you must have a WebRTC capable browser and Internet connection
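
For contrast, here's a rough sketch of the server-side long-polling idea I mean. This is not the actual estimate.work code (that one is Go + html/template); it's a hypothetical TypeScript/Express version with made-up route and helper names, just to show the shape of it:

  // Hypothetical TypeScript/Express sketch, not the real estimate.work code (that one
  // is Go); all route and helper names here are made up for illustration.
  import express from "express";
  import { EventEmitter } from "node:events";

  const app = express();
  const votes = new Map<string, Map<string, string>>(); // roomId -> (player -> card)
  const roomEvents = new EventEmitter();                 // emits roomId on every change

  // Render the room as an HTML fragment; htmx swaps it in, and hx-trigger="load"
  // on the wrapper immediately re-issues the poll, so a client is always waiting.
  function renderRoomFragment(roomId: string): string {
    const cards = [...(votes.get(roomId) ?? new Map<string, string>())]
      .map(([player, card]) => `<li>${player}: ${card}</li>`)
      .join("");
    return `<div hx-get="/room/${roomId}/poll" hx-trigger="load" hx-swap="outerHTML"><ul>${cards}</ul></div>`;
  }

  app.get("/room/:id/poll", (req, res) => {
    const roomId = req.params.id;
    let done = false;
    const finish = () => {
      if (done) return;
      done = true;
      roomEvents.removeListener(roomId, finish);
      res.send(renderRoomFragment(roomId));
    };
    // Answer when the room changes, or after ~25s so proxies don't kill the idle
    // connection; either way the swapped-in fragment re-arms the next poll.
    roomEvents.once(roomId, finish);
    setTimeout(finish, 25_000);
  });

  app.post("/room/:id/vote", express.urlencoded({ extended: false }), (req, res) => {
    const roomId = req.params.id;
    const room = votes.get(roomId) ?? new Map<string, string>();
    room.set(req.body.player, req.body.card);
    votes.set(roomId, room);
    roomEvents.emit(roomId);              // wake up everyone long-polling this room
    res.send(renderRoomFragment(roomId)); // the voter gets the fresh fragment right away
  });

  app.listen(3000);

Every state change emits on the room's channel, every connected client is parked on the poll route waiting for it, and the returned fragment immediately re-arms the next poll. The server always renders from the authoritative state, which is what sidesteps the partial-view problems above.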


Yeah I set out to build something with WebRTC rather than to build a planning poker app if that makes sense. Just wanted to test out the tech.

Having said that, I do still use this off and on and personally the limitations don't bug me too much. Would be a nightmare for more mission-critical software though


Yeah the default doesn't do a 1:1 display to pixel ratio.

Just to be pedantic, it is integer scaled (from 1440x900 to 2880x1800), but then resampled down to the MBA's native resolution of 2560x1600 via something better than bilinear.
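
Back-of-the-envelope math for that, using the numbers above (just a hypothetical sketch to make the ratios explicit):

  // The "looks like 1440x900" default on a 2560x1600 MacBook Air panel,
  // using the numbers from the comment above.
  const logical = { w: 1440, h: 900 };                     // UI points
  const backing = { w: logical.w * 2, h: logical.h * 2 };  // 2x integer scale -> 2880x1800
  const native  = { w: 2560, h: 1600 };                    // physical panel pixels

  // Rendering happens at the backing resolution, then gets resampled down to the panel,
  // so each framebuffer pixel maps to ~0.89 of a physical pixel rather than exactly 1.
  console.log(backing.w / native.w); // 1.125 -> not a 1:1 display-to-pixel ratio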


The major difference is that in one case you're watching something without interacting with it, while in the other it's responding to your actions; in one your gaze stays relatively still, taking in the entire frame, while in the other your eyes track an object as you manipulate it via some sort of input device.

When tracking motion, your eyes/brain can perceive improvements in motion resolution (how clear the details are on an object moving across the screen) up to around 1000Hz.


Your body and nervous system have input lag on the order of 100ms, and variance on the order of tens of ms, though.


But your eyes can track a moving object (like a car, or a ball, or a cursor or text on a scrolling webpage); they don’t stay 100ms behind it.


That's predictive motion, which isn't the same thing.


> It's much harder to write simple code than complex for reasons that boil down to entropy -- there are simply many more ways to write complex code than simple code, and finding one of the simple expressions of program logic requires both smarts and a modicum of experience.

Also effort: there are smart people who can't be bothered to reduce extraneous load for other people. They already put in the effort to understand it themselves, but they lack the theory of mind to see that it's not easy for others, or can't be bothered to account for that.

> I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal

A good rule of thumb, I find: did the new change make it harder or easier to reason about the topic?

If we go back to the concept of cognitive load, it's fine for cognitive load to go up if the solution is necessarily complex. It's the extraneous part that we should work to reduce wherever possible.


If Grey Beard doesn't relent

Project Manager: "Can we ship an order to multiple addresses? We need it in 2 weeks and Grey Beard didn't want to do it"

Eager Beaver: "Sure"

  if (order && items.length > 1 && ...) {  
    try {  
      const shipmentInformation = callNewModule(order, items, ...)  
      return shipmentInformation  
    } catch (err) {  
      // don't fail don't know if error is handled elsewhere  
      logger.error(err)  
    }  
  } else {  
    // old code by Grey Beard  
  }


... and then that `callNewModule` has weird bugs like mysteriously replacing `+` with spaces; sometimes labels are empty, but only if they're shipped to a specific company; sometimes invoices are generated multiple times for the same shipment; and a year after Sales has already sold this multi-item shipment feature to massive companies, it suddenly stops working because the new module wasn't properly hooked up to auto-renew credentials with a specific service, and the backlog of items marked as shipped but never actually shipped grows by the second...

Of course Eager Beaver didn't learn from this experience, because they left the company a few months ago thinking their code was AWESOME and bragging about this one nicely scalable service they made for shipping to multiple addresses.

Meanwhile Grey Beard is the one putting out the fires, knowing that any attempt to tell the Project Manager "finding and preventing situations like this is why I gave the estimate I did back then" would only be met with skepticism.


Of course, why reuse existing logic when we can (vibe) code new modules and functions from scratch every time we need it!

/s


Exactly. Cognitive load is dynamic, not static, and you can actually hold many more things in working memory than the oft-repeated 3-7 items (that figure is more about trying to memorize and recall unrelated, novel items).

Once you commit a particular concept to long-term memory and it's not "leaky" (i.e. you don't have to think through its internal behavior/implementation details), you now have more tools and ways to describe a whole bunch of lower-level concepts at once.

That's the same feeling programmers who are used to more powerful languages get when they have to write in less powerful ones: instead of using one concept to describe what you want, you now have to use multiple things. The less powerful option only seems easier if you haven't grokked the concept.


While I support the goal of the article, reducing extraneous cognitive load, I think some of the comments, and the article itself, are missing a key point about cognitive load: it depends on the existing mental model the reader/author/developer has about the whole thing. There is no universal recipe for reducing cognitive load, like reducing abstractions or not relying on frameworks.

Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstractions or smart language constructs. Writing code, documents, and comments, and choosing the right design, all depend on who you think is going to interact with those artifacts, and on understanding their likely state of mind when they do, i.e. theory of mind.

What counts as high cognitive load is very different for, e.g., a mixed junior-senior-principal, high-churn engineering team versus a homogeneous team that has worked in the same codebase together for 10+ years.

I'd argue the examples from the article are not high-cognitive-load abstractions, but the wrong abstractions, which resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs, so we don't have to manually reimplement them. They also give a team using the language (a framework in its own way) a common vocabulary for the nouns and verbs related to those constructs. In essence, they reduce cognitive load once the initial learning phase of the language is done.
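
A toy illustration of that "common vocabulary" point (hypothetical snippet, not from the article):

  // Hand-rolled de-duplication: the reader has to simulate the loop to see the intent.
  const emails = ["a@x.com", "b@x.com", "a@x.com"];
  const unique: string[] = [];
  for (const e of emails) {
    if (!unique.includes(e)) unique.push(e);
  }

  // The same thing said with the standard construct: "unique" is the whole point of Set,
  // so once the team has internalized Set, this line carries almost no extra load.
  const uniqueToo = [...new Set(emails)];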

Reading through the examples in the article, what likely went wrong is that the decision to abstract/layer/adopt a framework was made not because of observed, emergent behavior, but because "it sounds cool", aka cargo-cult programming or resume-driven programming.

If you notice a group of people fumbling over the same things over and over again, try introducing a new concept (abstraction/framework/function); if, after the initial learning period, it doesn't improve things or makes them harder to understand, then stop doing it! I know, the sunk cost fallacy makes that difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)



Yes, and I hope I made the point that it's not always the right approach.

> If you've internalized the mental models of the project into your long-term memory, you won't experience a high cognitive load ... If you keep the cognitive load low, people can contribute to your codebase within the first few hours of joining your company.

Yes, if you really do have new joiners often and expect them to be onboarded quickly. That happens in a specific segment of the software industry, not generally.

If you're working with the same people all the time, and it's the nth time someone made a mistake in a huge frontend JS codebase (pre-Angular/React) trying to wrangle with plain events and DOM state, a new framework (like React and co) that abstracts away events and DOM state management was perhaps the right choice.

This way of thinking and abstraction is so popular now that it's taken almost for granted by people who started learning programming for the web recently. For these people, using plain JS might be higher cognitive load, so

> Lots of cool architectures, fancy libraries and trendy technologies were used. In other words, the author had created a high cognitive load for us.

that's not a universal statement.

Do I personally believe there are better levels of abstraction, and simpler frameworks than React, Express, Django, Spring, Laravel, etc.? Yes, but if the developers you are hiring/teaching/working with are familiar with those, and they're working for them, then maybe those are the right choice.

The main statement I agree with in the article (apart from the goal of reducing cognitive load) comes much later, in

> So, why pay the price of high cognitive load for such a layered architecture, if it doesn't pay off in the future? Plus, in most cases, that future of replacing some core component never happens.

where the main problem is having the wrong reasons or wrong choice of architecture. It's not necessarily that abstraction/architecture is bad. I feel like the message of "you should keep it simple" is overemphasized in the rest of the article. Maybe that's in response to overengineering in the industry or the author's experience.



A stake in the ground, as smartphones did: ChatGPT will make the future dumber once its goal of parasitic dependency is accomplished.

Schools and universities are getting obliterated and student performance is tanking. Their mission is inducing parasitic dependence while "helping the masses".

Like Google, they have successfully pulled the football away from Charlie Brown on any responsibility for the quality of the information provided... all the other LLMs are just mirrors of ChatGPT's conception of an AI interaction experience... They just announced that the government is granting OpenAI a perpetual subscription fee for every federal employee (so, as the employer, they would be collecting all of those employees' internal deliberations as ChatGPT logs, which a court stated could not be deleted). It really feels like they are trying to digitize the teachers' union's lock on student thought and automate away the white-collar wealth that nips at the heels of the capital class.

