
For people finding this thread via web search in the future:

screen.studio is macOS screen recording software that checks for updates every five minutes. Somehow, that alone is NOT the bug described in this post. The /other/ bug described here is: their software also downloaded the 250MB update file every five minutes.

The software developers there consider all of this normal except the actual download, which cost them $8000 in bandwidth fees.

To recap: Screen recording software. Checks for updates every five (5) minutes. That's 12 times an hour.

I choose software based on how much I trust the judgement of the developers. Please consider if this feels like reasonable judgement to you.



Yea, it seems like the wrong lesson was learned here: It should have been "Don't abuse your users' computers," but instead it was, "When you abuse your users' computers, make sure it doesn't cost the company anything."


That's a good summary and explains many ills in the software engineering industry.


$8000 for 2 petabytes of traffic is pretty cheap for them also.

There are plenty of shitty ISPs out there who would charge $$ per gigabyte after you hit a relatively small monthly cap. Even worse if you're using a mobile hotspot.

I would be mortified if my bug cost someone a few hundred bucks in overages overnight.


It got one of their customers booted off of their ISP; they did cover that person's overage costs though (and hopefully that person could get their account back).


> their software also downloaded a 250MB update file every five minutes

How on earth is a screen recording app 250 megabytes


Because developers can suck.

I work with developers in SCA/SBOM and there are countless devs that seem to work by #include 'everything'. You see crap where they include a misspelled package name and then fix it by including the right package but not removing the wrong one!


The lack of dependency awareness drives me insane. Someone imports a single method from the wrong package, which snowballs into the blind leading the blind: pinning transitive dependencies to deliver quick "fixes" for things we don't even use or need, which ultimately becomes 100 different kinds of nightmare that stifle any hope of agility.


In a code review a couple of years ago, I had to say "no" to a dev casually including pandas (and in turn numpy) for a one-liner convenience function in a Django web app that has no involvement with any number crunching whatsoever.


Coincidentally, Copilot has been incredibly liberal lately with its suggestions of including Pandas or Numpy in a tiny non-AI Flask app, even for simple things. I expect things to get worse.


There's a ton you can do with sqlite, which is in the Python standard library. You just have to think about it and write some SQL instead of having a nice Pythonic interface.
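For example, a minimal stdlib-only sketch of the kind of group-by people reach for pandas for (table name and data made up):

    import sqlite3

    # In-memory DB; standard library only, no pandas/numpy needed.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    con.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("north", 120.0), ("south", 80.0), ("north", 45.5)],
    )

    # The one-liner aggregation, just written as SQL instead.
    for region, total in con.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    ):
        print(region, total)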


To push back on this: I consider pandas/numpy so crucial to Python as a whole that they are effectively stdlib to me. I wouldn't blink at this, because it would happen sooner or later.

Unless it was absolutely critical that the server have as small a footprint as humanly possible, and it was absolutely guaranteed they would never be needed in the future, of course. However, that first constraint is the main one.


You forgot the "/s"?


Automated dependency resolution has made it so the default is frequently

> Someone imports a single method from the RIGHT package

and hundreds of megabytes come in for what might be one simple function.


and when that fails #pragma once, oh the memories!


>> their software also downloaded a 250MB update file every five minutes

> How on earth is a screen recording app 250 megabytes

How on earth is a screen recording app 250 megabytes on an OS where the API to record the screen is built directly into the OS?

It is extremely irresponsible to assume that your customers have infinite cheap bandwidth. In a previous life I worked with customers with remote sites (think mines or oil rigs in the middle of nowhere) where something like this would have cost them thousands of dollars per hour per computer per site.


> It is extremely irresponsible to assume that your customers have infinite cheap bandwidth

Judging by the price of monitor stands, I wouldn't be surprised for Apple to make such assumptions.


For a long time iOS did not have features to limit data usage on WiFi. They did introduce an option more recently for iPhone, but it seems such an option is not available on macOS. Windows has supported it for as long as I can remember; I used it with tethering.


screen studio is pretty great, it has a lot of features and includes a simple video editor


Or: why on earth do you need to check for updates 288 times per day? It sounds more like 'usage monitoring' than making sure all users have the most recent bug fixes installed. What's wrong with checking for updates once upon start (and caching for a day)? What critical bugs or fixes could have been issued that warrant 288 update checks?
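A rough sketch of the once-a-day version in Python (the endpoint URL and cache path are hypothetical):

    import json, time, urllib.request
    from pathlib import Path

    CACHE = Path.home() / ".myapp_update_check.json"  # hypothetical cache location
    ONE_DAY = 24 * 60 * 60

    def check_for_update(current_version: str):
        # Skip the network entirely if we already checked today.
        if CACHE.exists():
            cached = json.loads(CACHE.read_text())
            if time.time() - cached["checked_at"] < ONE_DAY:
                return None
        # Hypothetical endpoint returning {"latest": "1.2.3"}.
        with urllib.request.urlopen("https://example.com/latest.json") as resp:
            latest = json.load(resp)["latest"]
        CACHE.write_text(json.dumps({"checked_at": time.time(), "latest": latest}))
        return latest if latest != current_version else None

One HTTP request per day instead of 288, and nothing downloads without the user saying so.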


A 250MB download should be opt-in in the first place


> A 250MB download should be opt-in in the first place

I've read on HN that a lot of people have 10Gb Ethernet at home. /s


I got 8 :)


Do you mean 8 homes with 10Gb Ethernet, or 1 home with 8 10Gb Ethernet connections?


I meant 8 of the 10 Gbits :) Sorry, I see it reads a bit weird. 8 Gbit is currently the max for a single line here, but there's huge competition on the horizon, so I expect more soon :)


8 people who have 10Gb Ethernet at home


I read it as 1 x 8Gb connection, but that’s only because I think 8Gb is offered in my area. I’ve limited my current service to 1.5Gb / 1Gb fibre because, well, I only run gigabit Ethernet… so more sounds totally unnecessary.


It sounds right, and this is the kind of thing I'd expect if developers are baking configuration into their app distribution. Like, you'd want usage rules or tracking plugins to be timely, and they didn't figure out how to check and distribute configurations in that way without a new app build.


> they didn't figure out how to check and distribute configurations in that way without a new app build.

Any effort to use their brain shall be drastically punished. /s


> What's wrong with checking for updates upon start once (and cache per day)

For me that would also be wrong if I cannot disable it in the configuration. I do not want to extend startup time.


Wait until you learn about non-blocking IO. And threads.

It's a whole new world out there.


If you're expecting the guys shipping the 250MB bloated app to get this right, I might have a bridge to sell you.


Pretty snarky and useless comment. It is clear I also mean, for example, not using bandwidth for that.


They probably just combined all phoning home information into one. Usage monitoring includes version used, which leads to automatic update when needed (or when bugged...).


Unpacked, it's actually 517M on disk:

   517M  ─┬ Screen Studio.app                     100%
   517M   └─┬ Contents                            100%
   284M     ├─┬ Resources                          55%
   150M     │ ├── app.asar                         29%
   133M     │ └─┬ app.asar.unpacked                26%
   117M     │   ├─┬ bin                            23%
    39M     │   │ ├── ffmpeg-darwin-arm64           8%
    26M     │   │ ├── deep-filter-arm64             5%
    11M     │   │ ├─┬ prod                          2%
  10.0M     │   │ │ └── polyrecorder-prod           2%
    11M     │   │ ├─┬ beta                          2%
  10.0M     │   │ │ └── polyrecorder-beta           2%
  10.0M     │   │ ├── hide-icons                    2%
   9.9M     │   │ ├─┬ discovery                     2%
   8.9M     │   │ │ └── polyrecorder                2%
   5.6M     │   │ └── macos-wallpaper               1%
    16M     │   └─┬ node_modules                    3%
    10M     │     ├─┬ hide-desktop-icons            2%
  10.0M     │     │ └─┬ scripts                     2%
  10.0M     │     │   └── HideIcons                 2%
   5.7M     │     └─┬ wallpaper                     1%
   5.7M     │       └─┬ source                      1%
   5.6M     │         └── macos-wallpaper           1%
   232M     └─┬ Frameworks                         45%
   231M       └─┬ Electron Framework.framework     45%
   231M         └─┬ Versions                       45%
   231M           └─┬ A                            45%
   147M             ├── Electron Framework         29%
    57M             ├─┬ Resources                  11%
  10.0M             │ ├── icudtl.dat                2%
   5.5M             │ └── resources.pak             1%
    24M             └─┬ Libraries                   5%
    15M               ├── libvk_swiftshader.dylib   3%
   6.8M               └── libGLESv2.dylib           1%


Is it normal to include the Electron framework like that? Is it not also compiled with the binary? Might be a stupid question, I'm not a developer. Seems like a very, very heavy program to be doing such a straightforward function. On MacOS, I'm sure it also requires a lot of iffy permissions. I think I'd stick with the built-in screen recorder myself.


F** Electron


So looks like the app itself is about 10MB but there are multiple copies of it, a bundled ffmpeg and all kinds of crap like wallpaper?


I'm not sure why there are both app.asar and app.asar.unpacked. I did just run `npx @electron/asar extract app.asar` and confirmed it's a superset of app.asar.unpacked. The unpacked files are mostly executables, so it may have something to do with code signing requirements.
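The superset check is easy to script; a rough sketch in Python (paths assume the asar was extracted to ./extracted):

    import os

    unpacked = "app.asar.unpacked"
    extracted = "extracted"  # output of: npx @electron/asar extract app.asar extracted

    # Every file in app.asar.unpacked should also exist inside the extracted asar.
    missing = []
    for root, _, files in os.walk(unpacked):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), unpacked)
            if not os.path.exists(os.path.join(extracted, rel)):
                missing.append(rel)

    print("superset confirmed" if not missing else f"missing: {missing[:5]}")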


Looks like a one-person shop; lots of things not optimised.


As I recall, it’s an Electron app. I just checked and the current version of Google Chrome is 635 MB, with its DMG being 224 MB.

So yes, it’s insane, but easy to see where the size comes from.


firefox is only about 200MB less.


“Only” is doing some heavy lifting there. 200 MB is a lot, both in absolute and relative terms. It means Firefox is a full third smaller.

Regardless, that’s absolutely irrelevant to the point that this app’s size is explained by Chromium’s (and thus Electron’s) size.


Tauri has been a thing for a while; it baffles me that people still choose Electron without a good reason to do so.

Also, webapps are just great nowadays; most OSes support installing PWAs fairly decently, no?

ffs


Tauri is not as platform-agnostic as Electron is because it uses different web views depending on the platform. I ran into a few SVG-related problems myself when trying it out for a bit.

For example, on Linux, it uses WebKitGTK as the browser engine, which doesn't render the same way Chrome does (which is the web view used on Windows), so multi-platform support is not totally seamless.

Using something like Servo as a lightweight, platform-independent web view seems like the way forward, but it's not ready yet.


> Tauri is not as platform-agnostic as Electron

I suspect the real reason electron got used here is that ChatGPT/Copilot/whatever has almost no Tauri example code in the training set, so for some developers it effectively doesn't exist.


> on Linux, it uses WebKitGTK

It's about time Linux desktops adopted some form of ${XDG_WEB_ENGINES:-/opt/web_engines} convention so web-based programs can fetch their engines as needed and play nice with each other.


It has: /dev/null /s


We're talking about a macOS app. Platform agnosticism is irrelevant.


It's relevant in the broader context, cross-platform is a significant reason people choose Electron, and lighter alternatives like Tauri still have some issues there.


seconded -- tried to use tauri for a cross-platform app but the integrated webview on android is horrendous. had to rewrite basic things from scratch to work around those issues, at which point why am I even using a "cross-platform" framework?

I imagine if you stick to desktop the situation is less awful but still


> Tauri is not as platform agnostic as Electron is

Found this a few months ago: https://gifcap.dev/

Screen recording straight from a regular browser window, though it creates GIFs instead of video files. Links to a git repo so you can set it up locally.


Thanks, I didn't know about Servo; hopefully we'll get there. Electron really is bloated, and any app using it eats my RAM no matter how much of it I have.


> Also webapps are just great nowadays most OS support install PWA's fairly decently no?

I would say no, and some are actively moving away from PWA support even if they had it before.

Plus, electron et al let you hook into native system APIs whereas a PWA cannot, AFAIK.


There’s never a good reason to choose Electron.


The app itself is probably much bigger than 250MB. If it is using Electron and React or another JS library, like a million other UIs, the dependencies alone will be almost that big.


For context, the latest iOS update is ~3.2GB, and the changelog highlights are basically 8 new emojis, some security updates, some bug fixes. It makes me want to cry.


That 3.2G is some sort of compressed OS image though, right? So it’d be a roughly constant size regardless of whatever changes or updates it brings.


Just my hypothesis: some software includes video tutorials accessible offline. A short but uncompressed high-res video can easily get big.


It was probably written by the type of programmers who criticise programmers like me for using "unsafe" languages.


You probably deserve to be criticized if you think this is the culprit.


"How can I make this about me and my C/C++ persecution complex?"


I don’t use their software, but if someone does, they should be able to decompile it.


It's an electron app.


I would bet money it's electron


I would be so embarrassed about this bug that I would be terrified to write it up like this. Also admitting that your users were forced to download 10s or 100s of gigabytes of bogus updates nearly continuously. This is the kind of thing that a lot of people would just quietly fix. So kudos (I guess) to blogging about it.


Not everyone even has an Internet connection that can reliably download 250MB in 5 minutes.

Yes, even in metropolitan areas in developed countries in 2025.


It's doable even on very long range ADSL, though I guess there are still some dialup users.


250MB in five minutes works out to about 6.7 megabits/second, plus overhead. Many DSL circuits exceed this, but not all.


Most DSL I’ve seen has been way slower than that. If you’re close enough to the infrastructure for those speeds, you can likely get cable etc.

1.5 megabits/s is still common, but Starlink is taking over.


Not dialup. Just bad last-mile wiring, as far as I can tell.

Apparently such service is still somehow available; I found https://www.dialup4less.com with a web search. Sounds more like a novelty at this point. But "real" internet service still just doesn't work as well as it's supposed to in some places.


I struggle to get close to 6mbps on good days... some of us are still stuck on DSL monopolies.


My current Airbnb has only cellular-backed WiFi, which would struggle to download 250MB at peak times.


Germany?


Canada. But yes, I've heard the stories about Germany, and Australia too.

In point of fact, I can fairly reliably download at that rate (for example I can usually watch streaming 1080p video with only occasional interruptions). The best case has been over 20Mbit/s. (This might also be partly due to my wifi; even with a "high gain" dongle I suspect the building construction, physical location of computer vs router etc. causes issues.)


Microsoft Intune WUDO has a similar bug, costing my department €40,000 per month in internal charges for firewall log traffic from blocked TCP 7680 requests: 86,000 requests per day per client, 160 million per day in total. MS confirmed the bug but did nothing to fix it.


> MS confirmed the bug but did nothing to fix it.

They are building features right now. There are a lot of bugs which Microsoft will never fix, or fixes only after years (double-click registered on single mouse clicks; clicking "x" to close a window also closing the window underneath; GUI elements rendered black because a monitor isn't recognized; etc.).


How? Do you investigate each blocked packet as a separate alert?


Yes, all packets get logged (metadata only). Otherwise we wouldn’t know there is an issue.

Those packets consume bandwidth and device utilization too, but that is a flat fee, whereas log traffic is billed per GB, so we investigated where the unexpected growth came from.


It's probably their way of tracking active users without telling you so, so it makes a lot of sense to "check for updates" as frequently as possible.


Little Snitch catches these update check requests, and I realize now that it should have an additional rule attribute for *how often* a request to this endpoint should be allowed (LS should allow throttling, not just yes/no).


murus+snail?


Obviously five minutes is unnecessarily frequent, but one network request every five minutes doesn't sound that bad to me. Even if every app running on my computer did that, I'm not sure I'd notice.


People complaining about 5 minute update checks hopefully don't use Windows 10/11.

A while ago I did some rough calculations with numbers Microsoft used to brag about their telemetry, and it came out to around 10+ datapoints collected per minute, though probably sent at a lower frequency.

I also remember them bragging about how many million seconds Windows 10 users used Edge and how many pictures they viewed in the Photo app. I regret not having saved that article back then as it seems they realized how bad that looks and deleted it.


Try installing Adobe's Creative Cloud and/or any of its related products. I ultimately set up an instance of AdGuard just to block Adobe's insane telemetry traffic.


Pi hole adobe checks lol


> but one network request every five minutes doesn't sound that bad to me

Even if it is made to the CIA/GRU/Chinese state security? /s


When I built an app that “phones home” regularly, I added the ability for the backend to respond to the client with an override backoff that the client would respect over the default.


Seems like the proper fix would have been to remove the file from the server when they realized the increased traffic. Then clients would just fail to check the update each time and not tie up bandwidth.


Wish people would actually do things like this more often.

Plenty of things (like PlayStation's telemetry endpoint, for one of many examples) just continually phone home if they can't connect.

The few hours a month of PlayStation uptime show 20K DNS lookups for the telemetry domain alone.


Why not just use HTTP Retry-After? Then you can use middleware/proxy to control this behavior. The downside is that system operation becomes more opaque and fragmented across systems.


Because the client in this case is not a browser.


There is a standard HTTP header for this: Retry-After.


Could you expand on what an "override backoff" is?


The client might have a feature to retry certain failures, and it’s using a particular rate, probably not retrying n times one right after the other in rapid succession. This is called backoff.

The server can return an override backoff so the server can tell the client how often or how quickly to retry.

It’s nice to have in case some bug causes increased load somewhere, you can flip a value on the server and relieve pressure from the system.
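A minimal sketch of the client side in Python, using the standard Retry-After header as the override channel (the URL is hypothetical, and this only handles the seconds form of the header, not the HTTP-date form):

    import urllib.request
    from urllib.error import HTTPError

    DEFAULT_DELAY = 24 * 60 * 60  # client default: check daily

    def next_check_delay(url: str) -> float:
        """Honor a server-supplied Retry-After override, else use the default."""
        try:
            with urllib.request.urlopen(url) as resp:
                override = resp.headers.get("Retry-After")
        except HTTPError as e:
            # 429/503 responses can carry Retry-After too; respect it under load.
            override = e.headers.get("Retry-After")
        if override and override.isdigit():
            return float(override)  # server override wins over the default
        return DEFAULT_DELAY

Flip the header value on the server and every client backs off on its next contact, no new build needed.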


Exactly. Without going too deep into the architecture, the clients are sending data to the backend in real time, but often that data is not actionable during certain periods, so the backend can tell the clients to bundle the data and try again after a certain amount of time, or just discard the data it's currently holding and try again later (i.e. in 5/10/n seconds)


Thanks for your responses. I’m used to "throttle"; that seems to be a synonym, right?


sure, you could say throttle.


Presumably the back end could tell the client not to check again for some amount of time. Sounds similar but different to cache TTLs, as those are passive.


From the article:

> Add special signals you can change on your server, which the app will understand, such as a forced update that will install without asking the user.

I don't like that part either.


Several months ago I was dealing with huge audio interruption issues - typical sign of some other, blocking, high-priority process taking too long.

Turns out Adobe's update service on Windows reads (and I guess also writes) about 130MB of data from disk every few seconds. My disk was 90%+ full, so the usual slowdown related to that was occurring, dragging disk I/O down to around 80MB/s.

Disabled the service and the issues disappeared. I bought a new laptop since, but the whole thing struck me as such an unnecessary thing to do.

I mean, why was that service reading/writing so much?


That's only half as bad as a certain company that had all their users download an unwanted OS upgrade on the theory that one day they might click the install button by accident.

"We will stop filling your drives with unwanted Windows 14 update files once you agree to the Windows 12 and 13 EULAs and promise to never ever disconnect from the internet again."


Every 5 minutes is too often yes, but it hardly matters for a tiny HTTP request that barely has a body.

So yes it should only be once a day (and staggered), but on the other hand it's a pretty low-priority issue in the grand scheme of things.

Much more importantly, it should ask before downloading rather than auto-download. Automatic downloads are the bane of video calls...


I don’t know this software, but my sense is that this is exactly the type of functionality you'd want in order to bypass users' rejection of metric sharing, by parsing update-request metrics instead. But maybe you are right and the developers really do believe you can’t go more than 5 minutes on an out-of-date version…


Well designed software does not poll for anything - everything is event based.

In this case, that means an update should have been sent by some kind of web socket or other notification technology.

Today no OS or software that I'm aware of does that.


So your conclusion is all software that polls is badly designed?

Keeping a TCP socket open is not free and not really desirable.


Most platforms offer other notification channels, e.g. Web Push. Those truly are free.


No, those are abstraction over a TCP socket and introduce more complexity than you'd need for something like this. There's nothing wrong with occasionally polling for updates.


Web push, FCM, APNS, etc are free because they only have a single systemwide TCP channel open - and that channel is already open whether or not your app chooses to use it.

Your app can also be ready to receive notifications even when the app isn't running - using zero RAM. Inetd on Linux allows similar stuff (although no ability to handle ip changes or traverse NAT makes it fairly useless in the consumer world).

This stuff is important because polling dominates power use when idle - especially network polling which generally requires hundreds of milliseconds of system awakeness to handle tens of network packet arrivals simply for a basic http request.

Did you know, a typical android phone, if all polling is disabled, has a battery life of 45 days?


An android phone on airplane mode has a battery life of 45 days?


Airplane mode and the scheduler disabled, yes (i.e. so apps don't wake up every 5 minutes and attempt to contact the network).

It's actually required by the qualification process for lots of carriers. The built in apps have pretty much no polling for this reason.

During the qualification test, it's actually connected to both LTE and WiFi, but not actually transferring any data.

They cheat a little - the phone is not signed into a Google account, which makes pretty much all Google apps go idle.


How does a user disable the scheduler?


That's quite OEM-specific, but usually 'battery saver' does it.


That's a lot of abstraction to implement a simple update check and I suspect it's very much not worth it to save a minuscule amount of battery life on a laptop. This is ignoring that you're incorrect about how Web Push works, so you'd need that extra TCP connection anyway and at that point, there's no point in bothering with Web Push. FCM is the same deal (and now you get to pay for extra stuff for a trivial problem, woo) and APN seems like the wrong solution to auto-updates.

Just poll every launch or 24 hours and move on.


* 12 times per hour per user.




> To re-cap: Screen recording software. Checks for updates every five (5) minutes. That's 12 times an hour.

The tone might be somewhat charged, but this seems like a fair criticism. I can’t imagine many pieces of software that would need to check for updates quite that often. Once a day seems more than enough, outside of the possibility of some critical, all-consuming RCE. Or maybe once an hour, if you want to be on the safe side.

I think a lot of people are upset with software that they run on their machines doing things that aren’t sensible.

For example, if I wrote a program that allows you to pick files to process (maybe some front end for ffmpeg or something like that) and decided to keep an index of your entire file system and rebuild it frequently just to add faster search functionality, many people would find that to be wasteful both in regards to CPU, RAM and I/O, alongside privacy/security, although others might not care or even know why their system is suddenly slow.


For contrast: Chrome, a piece of software which has a huge amount of attackable surface area, and lives in a spot with insane stakes if a vulnerability is found, checks for updates every five hours, last I read.


No one is commenting on the actual bug. The fact that it auto-downloads 250MB updates is user-hostile. On top of that, checking every 5 minutes? What if I'm on a mobile connection?

Why not just follow every Mac app under the sun and prompt if there's an update when the app is launched and download only if the user accepts?


I think the critique here is not directed at one individual, the person who actually wrote the code. That would be OK; it can happen. Here we are talking about the most valued company in the world, which presumably has many architects, designers, and literally an army of testers… and it still makes such a brutal error.



