I agree with the sentiment; I also want to control where things are installed. But the framing of the technical problem here is totally backwards.
The operating system or application manager within the operating system (what does flatpak consider itself?) should decide where all application state goes. The application shouldn't ever prompt the user for this; it should just assume a path inside a sandbox. That path inside the sandbox will get mapped to wherever outside the sandbox, and that's where the user exercises control.
We already see this pattern emerging with docker images. Everything assumes `/data` is a good place to store things, and `/config` is a good place to read configuration from. I want every application to do this. If I want it to store state, then I'll decide to map those to directories that are persisted.
But what about for state that the application doesn't really "own"? e.g. I want to open a PDF in an editor.
The PDF is in my documents folder, and I don't want to expose all of my documents to the application in its sandbox.
Fine grained access to single files should be given out using a file picker. The application manager passes in a socket to the application sandbox. The application connects to that socket using a known hard-coded path. It sends a message (client->server) over the socket, the listening file picking process opens a new GUI window to prompt the user to select a file. The user picks a file and a file descriptor is sent over the socket to the application (server -> client).
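The handoff described above can be sketched with a Unix-domain socket and Python's `socket.send_fds`/`recv_fds` (Python 3.9+). This is a minimal sketch, not any real portal protocol: the socket path, the `open-file` message, and the function names are all hypothetical, and `chosen_path` stands in for whatever the picker's GUI would return.

```python
import os
import socket

# Hypothetical well-known socket path inside the sandbox.
PICKER_SOCKET = "/run/app/picker.sock"

def request_file(sock_path: str = PICKER_SOCKET) -> int:
    """Application side: connect, ask for a file, receive a descriptor."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(b"open-file")  # client -> server request
        # One ancillary message carrying at most one file descriptor.
        msg, fds, flags, addr = socket.recv_fds(sock, 1024, maxfds=1)
        return fds[0]

def serve_one_pick(listener: socket.socket, chosen_path: str) -> None:
    """Picker side: `chosen_path` stands in for the user's GUI selection."""
    conn, _ = listener.accept()
    with conn:
        conn.recv(1024)  # read the request
        fd = os.open(chosen_path, os.O_RDONLY)
        socket.send_fds(conn, [b"ok"], [fd])  # fd crosses the sandbox boundary
        os.close(fd)  # close our copy; the client now holds its own
```

The point of the pattern is that the application never learns the file's real path or gains access to the surrounding directory; it receives only an open descriptor to the single file the user approved.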
Spoken like a Linux software developer, I suppose. As a Windows user of generic software (image editing, video players, games, etc.), I want to be able to control such crap. I have a media player that can easily fill a small HD with its mindlessly huge DB, for instance. Rather than manually cleaning periodically, or upgrading my $y$tem, it's easiest to say "Software, store your data here".
These techniques are pretty general purpose. They aren't merely a cultural artifact of Linux or FOSS.
Anyone trying to sanely deal with authorization is going to stumble upon sandboxing, capabilities, the principle of least authority, etc. if they look hard enough.
Configuration should go in a defined place: /etc and ~/.config on Linux, the registry and %APPDATA% on Windows. A common location makes management, synchronization, and backups easier, and space is rarely a concern for configs. Cache directories should also go in a defined place: /var/cache and ~/.cache on Linux, %LOCALAPPDATA% on Windows.
But application files have a huge size range depending on the assets the program needs (typical sizes range from the tens of MB to the tens of GB, with large outliers in either direction). I have multiple tiers of storage (a terabyte of SSD, multiple TB of HDD, tens of TB of network storage) and allocate my software to the desired storage tier depending on my needs.
And this isn't just a thing on Windows; Android does the same by letting you move apps to the SD card, provided you have one. Management is just greatly simplified in that case because you have at most two meaningful storage locations on an Android device, while a desktop or laptop might have any number of them.
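The conventional locations in the comment above can be sketched as a small resolver. This follows the XDG Base Directory fallbacks on Linux and the usual environment variables on Windows; the function names and the `app` parameter are just illustrative.

```python
import os
import sys

def config_dir(app: str) -> str:
    """Conventional per-user config location for an application."""
    if sys.platform == "win32":
        return os.path.join(os.environ["APPDATA"], app)      # %APPDATA%
    # XDG: honor the override, else fall back to ~/.config
    base = os.environ.get("XDG_CONFIG_HOME",
                          os.path.expanduser("~/.config"))
    return os.path.join(base, app)

def cache_dir(app: str) -> str:
    """Conventional per-user cache location for an application."""
    if sys.platform == "win32":
        return os.path.join(os.environ["LOCALAPPDATA"], app) # %LOCALAPPDATA%
    base = os.environ.get("XDG_CACHE_HOME",
                          os.path.expanduser("~/.cache"))
    return os.path.join(base, app)
```

Note that neither function ever asks the user anything: the override, if the user wants one, is expressed once via the environment rather than per-application dialogs.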
But I also want to be able to decide to not adhere to that standard when it gets in the way. It's my machine, there's no reason why I can't make these decisions myself.
You mean how on linux "make install" just installs to whatever directory (/usr/bin/, /usr/local/bin/, /opt/?), and if you want to change it you have to do ./configure --prefix=whatever?
Firejail on Linux and Sandboxie on Windows mean I can let programs install wherever they want, and I always know it'll be inside some other top-level directory I specify.
Probably better to not let programs spray all over your filesystem anyway.
Ah yes, because people should be forced to use some random third-party program because developers went out of their way to write their own installer that lacks a feature every standard installer has had since the 1980s.
After many years of working with computers, I'm still confused about what exactly installing means. Whenever I have to set up something by hand (e.g. some SDK), there's a good chance I'll spend hours trying to make everything work everywhere, mostly dealing with environment variables: where to set them, and how to make sure all the tools that need to use the thing see it. Not to mention the slight differences between operating systems. Maybe I just suck at this.
Well the problem with that is that the past few generations of laptops have taken a page from chromebooks and only give you 128GB of storage, maybe 256GB if you're lucky, unless you add more, assuming that's even possible. Storage has never been cheaper, yet most laptops ship with barely enough storage for a base Windows install.
Same here. I don't care where the application goes; I care where the data goes. One of my initial annoyances when I started using PlatformIO was that it assumes a default workspace directory and as a new user, it took me forever to figure out where it was putting my files. I have a standard directory structure that I use and this was really messing me up.
I used a Win10 debloater last year, about 2 weeks after install. It deleted Minecraft, which is apparently a UWP app too, and it deleted my world, because UWP apps apparently store user data in the program directory.
Always choose custom settings when installing anything. Not only do you always get to pick the target location, but often you can avoid installing some additional garbage. I see no problem here, though. Maybe I am too used to these things and some people just never learn, and click on every button that is presented to them?
The most annoying instance of this is installers on Windows that just assume you want to go into `C:\Program Files`, which nowadays requires admin to modify.
This is very annoying on company machines where you may not have admin, since now there's red tape with your IT because the installer was poorly written.
Half the reason I use the WSL is because you at least get "root" on it, so permissions are never an issue
Edit: there may be something lost in translation. This post is in reference to software your IT already approves, which happens to only install to program files.
It's a feature. You shouldn't be installing software on your work computer. Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
Maybe you can tell the difference between report.pdf and report.exe, but too many people can't, so unfortunately we can't let everyone install anything.
> Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
There are not enough IT staff at my organization to do this. They have an approved list of software that may be installed. Some common installations are automated, others are niche-enough that it's DIY.
We don't live in a perfect world where the IT staffing ratio is 1:20 (or whatever arbitrary number you would consider "good"), so this is how my organization does it.
> unfortunately we can't let everyone install anything.
Your IT department should consider giving you your own admin account. But it's their call.
Seems like a bit of an extreme solution for one-off installations that are rare enough to not be worth bothering to automate.
Good example of this is scientific software like Gaussian (a "common" quantum mechanics package): needs admin, expensive and strict license that gets audited. It's approved, but we have a single digit number of people using it. It's just not worth the time to automate a script around an install that only happens once every year or so on average, when they can just temporarily elevate the user.
> You shouldn't be installing software on your work computer. Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
If I actually had to depend on IT to do all that, it would take forever to get anything done.
In a Windows environment this can be managed with AppLocker, or an endpoint management solution, or 3rd-Party tool like Threatlocker.
It becomes less about controlling the users and more about stopping any bad guy dead in their tracks. If nothing but what has been implicitly authorized can execute, then 99% of ransomware attacks will be stopped immediately even after the user clicks the link.
Your company software procurement process shouldn’t be so onerous that people turn to Shadow IT. You have to work with people where they are.
No, that's the default behavior in Windows. If you install to, say, app data it's fine. If you install to program files, you need admin because it is a protected folder.
> The company does NOT want you installing random crap on their machines.
Why do you immediately jump to the conclusion that the post is about installing "random crap?"
Where did I write that it was not approved in advance...?
The post is about requiring admin to install to Program Files. Even if it is an approved piece of software, you're still going to need admin to install it.
I am really more annoyed about config files cluttering up everything. Even Resolve and other big packages have started to clutter the Documents folder on Linux.
Can you be more specific about which platform you're talking about?
On a Mac, I've never wanted an application to go anywhere but the default /Applications. I don't ever recall being asked if I want another location, nor would I want to be.
Is it different on Windows or Linux? And why would you want a different install location?
> Is it different on Windows or Linux? And why would you want a different install location?
On Linux, I want to be able to choose my install location because I may want it installed on removable media, or on a different drive for space reasons, or because I want to keep executables used for a single project with the rest of the files that comprise that project.
The same things apply to Windows, but also on Windows (which I only use at work) I specifically want to avoid the use of the standard locations for things so that I don't have to fight with OneDrive about them. Putting them somewhere else means OneDrive will leave them alone.
Not the OP, but I do sometimes install macOS applications to ~/Applications. This is especially useful at work, where /Applications requires elevated permissions. I used to have to request admin rights for a period of time every time VS Code needed to do something. In ~/Applications, that's not an issue.
Of course, 90% of macOS apps (outside of the App Store) don’t have installers. You just drag the app where you want it, so it’s a moot point.
You've probably been asked about it, but it's often phrased differently than simply asking where to install the app: "Would you like to install theAppInQuestion for all users of this computer (install in /Applications) or just you (install in ~/Applications)?"
You might want to use another drive because there's space there or because it's faster, and what you're installing is a game that's sensitive to load times.
It really depends on the system. On Linux, for example, it is pretty conventional to ask. But the asking is often done by querying some environment variables.
If a program prompts the user for a directory instead of querying the appropriate environment variable, that is a violation of the stated user preference.
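A minimal sketch of that precedence, assuming app data and using `XDG_DATA_HOME` as the stated preference (the function name and `app` parameter are illustrative): consult the environment variable first, fall back to the spec's default, and never open a dialog.

```python
import os

def data_dir(app: str) -> str:
    """Resolve where app data goes, honoring the user's stated preference."""
    # The environment variable, if set, *is* the user's answer;
    # prompting would second-guess it. Otherwise the XDG default applies.
    base = os.environ.get("XDG_DATA_HOME") or os.path.expanduser("~/.local/share")
    return os.path.join(base, app)
```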
More options, more mess. The user will always do something funky, then won't remember it, and then complain to the software publisher that it's not working correctly.
If software allows changing the install location, it should work from everywhere. And there's no excuse for hardcoding Program Files anymore, with multiple package managers on Windows.
I feel your pain as a developer but as a user, sometimes I need the choice. Try to hide it so non-techies people can click Next Next Next but put it in advanced options somewhere.
Not really; the common pattern is to use a well-known default and then allow advanced users to modify their install location. Installation location has been a choice for well over 30 years now, and a lot of the initial problems have been ironed out.
You do realize that most laptops ship with almost zero storage available on the C: drive, right? You have actually looked at the specs of real hardware in use in the real world, right?