> WhatsApp is end-to-end encrypted, for instance, and it's used by billions.
and it causes no end of pain when you switch phones (esp. if you lose one).
Of all the chat services I use, Telegram is the only one that NEVER, EVER LOST ANY OF MY MESSAGES. Maybe for some people privacy is more important; for me, never losing a single message under any circumstance is the n°1 baseline requirement for something to even be called a chat app.
Wow I really thought this would be a compile error. The implicit cast here really is a footgun. Looks like '-Wrange-loop-construct' (included in -Wall) does catch it:
> warning: loop variable 'v' of type 'const std::pair<std::__cxx11::basic_string<char>, int>&' binds to a temporary constructed from type 'std::pair<const std::__cxx11::basic_string<char>, int>' [-Wrange-loop-construct]
11 | for (const std::pair<std::string, int>& v: m) {
Why is this a perf footgun? As someone who doesn't write a lot of c++, I don't see anything intuitively wrong.
Is it that iterating over map yields something other than `std::pair`, but which can be converted to `std::pair` (with nontrivial cost) and that result is bound by reference?
It is not a cast. std::pair<const std::string, ...> and std::pair<std::string, ...> are different types, although there is an implicit conversion between them. So a temporary is implicitly created and bound to the const reference. Not only is there a copy; you also have a reference to an object that is destroyed at the end of its scope, when you might expect it to live longer.
I guess this is one of the reasons why I don't use C++. Temporaries are a topic where C++ on one side, and me and C on the other, have had disagreements in the past. Why does changing the type even create another object at all? Why does it allocate? Why doesn't the optimizer use the effective type to optimize that away?
> Why does changing the type even create another object at all?
There's no such thing as "changing the type" in C++.
A function returns an object of type A, your variable is of type B, and the compiler tries to see if there is a conversion from the value of type A to a new value of type B.
Each entry in the map will be copied. In C++, const T& is allowed to bind to a temporary object (whose lifetime will be extended). So a new pair is implicitly constructed, and the reference binds to this object.
Yes and no, you can use c in situations where there's no "assembly", for instance when synthesizing FPGAs. You target flow graphs directly in that case IIRC.
To be honest I've never worked in an environment that seemed too complex. On my side the primary blocker is writing code: I have an unending list of features, protocols, experiments, etc. to implement, and so far the main limit has been the time necessary to actually write the damn code.
That sounds like papier-mâché more than bridge building: forever pasting more code on as ideas and time permit, without the foresight to engineer or architect towards some cohesive long-term vision.
Most software products built that way seem to move fast at first but become monstrous abominations over time. If those are the only places you keep finding yourself in, be careful!
There are a large number of small problems for which we do not need bridges.
As a stupid example, I hate the functionality that YouTube has to maintain playlists. However, I don't have the time to build something by hand. It turns out that the general case is hard, but the "for me" case is vibe codable. (Yes, I could code it myself. No, I'm not going to spend the time to do so.)
Or, using the Jira API to extract the statistics I need instead of spending a Thursday night away from the family or pushing out other work.
Or, any number of tools that are within my capabilities but not within my time budget. And there's more potential software that fits this bill than software that needs to be bridge-stable.
But the person I replied to seemed to be talking about a task agenda for their professional work, not a todo list of bespoke little weekend hobby hacks that might be handy "around the house".
You assume they were talking about a single product. At my job there is an essentially endless amount of small tasks. We have many products and clients, and we have many internal needs, but can't really justify the human capital. I might write 20 to 50 Python scripts in a week just to visualize the output of my code: dead boring stuff like making yet another matplotlib plot, simple stats, sometimes a simple animation. There is no monstrosity being built; this is not evidence of tacking on features or whatever you think must be happening. It's just a lot of work that doesn't justify paying a Bay Area principal engineer salary, in the face of a board that thinks the path to riches is laying off the people actually making things and turning the screws on those remaining, who are struggling to keep up with the workload.
Work is finite, but there can be vastly more available than there are employees to do it for many reasons, not just my personal case.
The vision is "being compatible with protocols used in my field". There are hundreds upon hundreds of those. Example: this app supports more than 700 protocols, pieces of hardware, etc. (https://bitfocus.io/connections) and it is still missing an AWFUL LOT, and only handles fairly basic cases in general. There's just no way around writing the code for each custom bespoke protocol for whatever $APPLIANCE people are going to bring and expect to work, even if each protocol fits neatly in a single self-contained class or two.
I don’t want to imply this is your case, because of course I’ve no idea how you work. But way too often I’ve seen the reason for so many separate features be:
A) as stated by the parent comment, the ones doing requirements management are doing a poor job of abstracting the requirements, and what could be done as one feature suddenly turns into 25.
B) in a similar manner to A, all solutions imply writing more and more code, never refactoring and abstracting parts away.
My guess would be that the long list is maybe not self contained features (although still can be, I know I have more feature ideas than I can deliver in the next couple years myself), but behaviors or requirements of one or a handful of product feature areas.
When you start getting down into the weeds, there can be tons and tons of little details around state maintenance, accessibility, edge cases, failure modes, alternate operation modes etc.
That all combines to make lots of code that is highly interconnected, so you need to write even more code to test it. Sometimes much more than even the target implementation's code.
I don't read this as when open source was invented, but as when it happened for the corporate world. In 2002 it was a very reasonable choice for $BIG_COMPANY to use a proprietary web server, e.g. IIS. In 2008 that would have been really weird.
But why did that make development cheaper? An enterprise copy of Windows with IIS cost maybe a thousand bucks, right? Maybe there were more costs, my knowledge is, y'know, 23 years out of date.
You decide you need a web server. Ask management chain for approval. Ask IT dept for approval. Ask finance for approval for the expense. Contact Microsoft sales. Buy it.
Now you can start developing on it…
With open source it’s not just the cost of software you save, but also potentially all the other bureaucracy that you save due to not having to pay money to do something. You also get a lot of transparency on the technical side about the products you may choose to use.
If MySQL and PostgreSQL had been acceptable choices 25 years ago, our company at the time would've saved SO MUCH money that instead went to fund Larry Ellison's yacht(s).
Both existed, but not in a way anyone could sell to a) customers b) C-staff making the final call.
That’s interesting. I would have said the opposite. I’ve never used any of the social features, but the technical aspects (including integrations) are where the value is.
It does break and go down; and GHA are a real pain in the ass. But the basic hosting and PR workflow are fine.
The PR workflow is fine if you don’t care about stacked PRs, you don’t write reviews, you don’t read nontrivial reviews, and you don’t need the diff viewer.
The site UI has been going downhill over the years. It's become heavy and slow, and the buttons are more and more randomly placed. For example, after you search for something in a repo, getting back to the repo front page requires clicking the most unexpected button.
It's still getting things done, for sure, but no longer pleasant to work with.
I think Github has a nice UI... when the content finishes loading.
That's the real problem with Github these days. Too much critical information behind throbbers that take their sweet time. I find Codeberg much more responsive, despite being an ocean away and having the occasional anti-AI-scraper screen.
Some competitors like Gitlab have reduced friction by offering "Login with Github", so if you've already got a Github account, the bar for signing up for an alternative forge is low.
I help with one of the most popular projects on Codeberg, Fuzzel, and I can say that being on an alternative forge hasn't cost us any issues or feature requests. Indeed, we have plenty!
What is the value of the social network? I discover code by looking for a package in my language via a search engine. Whether it’s GitHub/GitLab/Gitea/etc doesn’t matter as long as it’s indexed by the search engine.
Just a couple of weeks ago, a bogus update was pushed to Ubuntu 24 that completely broke Nvidia support: they pushed a different version of the 580 drivers and user-space libraries.