I've never understood why people care so much about the linter settings. It's so obviously bikeshedding, just make a choice, run the linter automatically and be done with it. I'm too busy doing actual software engineering to care about where exactly everything goes - I promise after a week you'll just get used to whatever format your team lands on.
> I've never understood why people care so much about the linter settings.
Source code formatting programs are not the same as lint[0] programs. The former rewrites source code files such that the output is conformant with a set of layout rules without altering existing logic. The latter is a category of idempotent source code analysis programs typically used to identify potential implementation errors within otherwise valid constructs.
Some language tools support both formatting and source code analysis, but this is an implementation detail.
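A rough Python illustration of the split (a hypothetical snippet; ruff/flake8 stand in for the linter here and black for the formatter):

import os            # a linter (e.g. ruff or flake8) flags this as an unused import
import sys

def greet(name):
    msg = "hello, "+name   # a formatter (e.g. black) only normalizes the spacing: "hello, " + name
    if name == None:       # a linter warns that comparison to None should use "is None"
        return ""
    return msg

print(greet(sys.argv[0]))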
Thank you! I am almost going out of my mind reading this ~200 comment thread with everyone just casually saying "linter" when they mean "formatter". Do people really not distinguish between these two very different programs?
I guess the line might feel fuzzy to some people, since nowadays many tools bundle both linting and formatting. And with modern IDE integrations, you might not even run them explicitly — the editor just does both automatically in one go.
Lately I've switched to the terms "check" and "fix" because more and more formatters (at least in my bubble of Python and Go) are incorporating fixes. So not just rearranging code and maybe adding a comma here and there.
Formatters, if you want to be specific, are even worse.
They slyly add git noise and pollute your audit trails by just going through and moving shit around whenever you save a file.
And sometimes, they actually insert bugs - string formatting errors are my favorite example.
It's for people who think good code is about adhering to aesthetic ideologies instead of making things documented and accountable.
This is most noticeable in open source contributions. Sometimes I'll get a pull request with like 2 lines of change and 120 lines of some reformatting tool.
> just going through and moving shit around whenever you save a file.
This only happens because the file doesn't already adhere to the rules it's enforcing. These are normally highly configurable, and once your code complies with a standard, the tool prevents future code from pulling you away from that standard.
> And sometimes, they actually insert bugs - string formatting errors are my favorite example.
Do you have a concrete example?
> Sometimes I'll get a pull request with like 2 lines of change and 120 lines of some reformatting tool.
Is your existing code formatting at least consistent?
> You think I accept that?
This is a social issue rather than a technical one. You can tell people in your development readme to use specific style rules, or even a project-wide precommit hook. If your own code is formatted with one of these tools, you can even (to my understanding) set up automated checks on GitHub's side.
But of course you are free to reject any PR you want.
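If it helps, the hook variant can be tiny. A minimal sketch in Python (assuming the project formats with black; substitute whatever formatter you actually use), saved as an executable .git/hooks/pre-commit:

#!/usr/bin/env python3
# Refuse the commit if the formatter would change anything.
# "black --check" exits non-zero when files are not formatted.
import subprocess
import sys

result = subprocess.run(["black", "--check", "."])
if result.returncode != 0:
    print("Formatting check failed; run 'black .' and re-stage your changes.")
sys.exit(result.returncode)

Since hooks in .git/hooks aren't versioned with the repo by default, a CI job running the same check is what actually enforces it for everyone.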
I don't keep examples around to defend my stance on this, sorry.
I left out my largest critique - spacing is semantic both for the compiler and the human.
Often I police the whitespace very thoughtfully, usually in code that also requires long comments for clarity.
I care deeply about maintainability and legibility of code and try to consider future human readers everywhere.
Then the formatter says "haha, fuck that!"
That's my biggest personal gripe with it. It's consistency over clarity, conformity over craft.
This all depends on what kind of code you're writing. Standard backend crud code in python with sqlalchemy? ok, pydantic with linters and formatters. But that should be written by llms these days anyways, if you're still doing it by hand you're doing it wrong.
Honestly the jobs I sign up for demand a kind of care - mostly experimental and frontier work, so I am really frustrated when I'm prevented from exercising my professional judgement and doing what I think is best due to some bureaucratic red tape.
I don't want the Wild West; I want disciplined senior engineers using professional consideration and judgement, not a bunch of onerous, restrictive tools assuming I don't know what I'm doing or why I'm doing it.
The language designers and compiler engineers are the actually competent people in the room, and if they allowed for the flexibility, I'll side with them over some hacked-out set of rewriting regexes from some kid vibe coding on GitHub.
+1 to this. Why throw away an extra dimension of expression available by choices in formatting in the name of consistency? I find it helpful to use blank lines to separate code into "paragraphs". A complex conditional may be made more readable by extra space in some places and not others. A series of single line if statements might be clearer formatted lined up vertically, like a table.
You have it entirely backwards. Enforcing a consistent format is useful precisely because it avoids pointless git noise from different people changing formatting differently as they go.
Running random formatters on random subsets of your code is not a good idea. If you want code in a repo to be formatted a certain way, you need to have one set of settings and enforce it, and yeah, reject anything that just has spurious formatting changes that someone else has run.
> This is most noticeable in open source contributions. Sometimes I'll get a pull request with like 2 lines of change and 120 lines of some reformatting tool.
This wouldn't happen nearly as much if you had a defined set of formatting rules plugged into CI instead of chaos
If the rule is not enforced in CI it isn't a rule. I've made that a mantra for a long time now and it helps. For a while I did verify formatting in CI, but eventually we decided that formatting wasn't as important as getting builds done fast. We still run other tests and linters in CI, and if they break we fix the build, but formatting isn't really that important so we don't care. Yes, our formatting is somewhat a mess, but it isn't really that bad even after a decade of agreeing not to care.
Formatters breaking code is not something that happens in all language ecosystems; I think it's mostly a C++ and occasionally JS issue, but gofmt and many other formatters just don't break code. It's also not really that common anyways.
You can solve the Git noise issue by enforcing formatting in CI and keeping formatter configuration in repo. This is what most high quality open source projects will do. The purpose of this is not about "adhering to aesthetic ideologies", it's about not bothering people with the minutiae of yet another pointless set of formatting conventions. Most developers couldn't give a shit less where you think braces should go, or whether you like tabs or spaces, or whatever else, they care about more important things like data structures and writing more correct code. Having auto formatting enables them to effortlessly follow project norms without needing to, for every single repo they work in, carefully try to adhere to the documented formatting (which usually winds up being inconsistent eventually anyways, in projects without auto formatting, because humans are fallible.)
The reason why people submit code with a huge formatting diff is usually because your project didn't ship a formatter manifest but their editor is configured to format on save. That's because probably most of the projects people work on now do actually use some form of automatic formatting, be it clang-format, gofmt, prettier, black, etc. so it winds up being necessary to special case your project to not try to run a formatter. It's still a beginner's mistake to actually commit and PR a huge reformatting, but it definitely happens by accident to even experienced devs when working on projects that have weird manual formatting.
I break formatting all the time for the sake of clarity.
Sometimes my comments are paragraphs long with citations and things are carefully broken down with interstitial comments and references and then the formatter fucks it all up and the linter says "wah this oblivious pedant rule isn't followed"
The problem is it doesn't treat me like an adult, and I'm not in this industry for dumb nanny tools that scold me because they don't understand things.
> Sometimes I'll get a pull request with like 2 lines of change and 120 lines of some reformatting tool.
The reformatting tools should be CI-enforced so you'll only end up with sudden massive changes like this once when you start using auto-formatters.
Regardless, tell your teammates to separate out formatting changes vs logic changes into separate commits (preferably separate PRs). Since they're auto-formatters it wouldn't even be any additional work, just:
git fetch origin
git checkout origin/main
git checkout -b formatting
./run_the_autoformatter.bash
git commit -a -m "Ran the auto-formatter, which should have been enforced by the CI."
git push -u origin formatting
Before picking up Go with its formatter and format-on-save norm, I mostly worked in contexts where a program would scan your source code and complain about style violations, but not actually fix them. We called those linters.
Because I spend the vast majority of my time with code reading it, and the layout matters to me in terms of how much time it takes for me to read code.
Yes, I can get used to other layouts, but that doesn't mean all layouts are equal to me in terms of how readable they are, and how well things stand out when they should, or blend in when they should.
I recognise this isn't the case for everyone - some people read code beginning to end and it doesn't matter how it's laid out. But I pattern match visually, and read fragments based on layout, and I remember code based on visual patterns.
Ironically, I have aphantasia and don't visualise things with my "mind's eye", but I still remember things by visual appearance and spatial cues better than by text.
That's interesting. I also have aphantasia and I seem to be the only one around where I work that cares one bit about visual presentation. I suspect it's because people with aphantasia have to rely on waiting for visual recognition to happen so often that we get good at it and rely on it more. I'm also known for figuring out a problem and jumping to the exact file and area in the code that's causing it immediately where others apparently have to read through the code again to find it. I just remember it's maybe 2/3 the way through the file and just past that big switch statement and has a one-liner comment above it and has a distinctive shape.
For me, indenting with tabs and aligning with spaces helps me find the code that I'm looking for as does adequate whitespace and a color syntax highlighting editor. Aligning things with distinct columns where it makes sense helps a lot, too.
Some settings have advantages. For example, trailing commas on tables:
[
  'apple',
  'banana',
  'orange',
]
has an advantage over
[
  'apple',
  'banana',
  'orange'
]
Because adding a new line at the end of the table (1) requires editing 1 line instead of 2, and (2) makes the diffs in code review smaller and easier to read and review. So a bad choice makes my life harder. The same applies to local variable declarations.
Sorted lists (or sorted includes) are also something that makes my life easier. If they're not sorted then everyone adds their new things to the end, which means there are many times more merge conflicts. Sorted doesn't mean there are zero, but it does mean there are fewer than with "append to the end". So, just like an auto-formatter is there to save time, don't waste my time by not sorting where possible.
Also, my OCD hates inconsistency. So
[1, 2, 3]
{a, b, c}
Is ok and
[ 1, 2, 3 ]
[ a, b, c ]
Is ok but
[1, 2, 3]
{ a, b, c }
Is not. I don't care which but pick ONE style, not two styles!
Yes, everyone has personal opinions about code vanity. When this becomes a holy war I really start to question the maturity of people on the project. I find that people worry about trivial nonsense to mask their inability to address more valid concerns.
All that really matters is consistency. Let a team make some decisions and then just move forward.
Democracy, strictly speaking, would be to periodically elect the most popular formatting policy once every sensible-time-period.
I’ve seen companies with such a large amount of developer churn that literally one person was left defending the status quo saying “we do X here, we voted on it once in 2019 and we’re not changing it just for new people”. 90% of the team were newcomers.
(The better teams I’ve worked on maintain a core set of leaders who are capable of building consensus through being very agreeable and smart. Gregarious Technocracy >> Popular Democracy!)
And this is my problem. In my last example, the 2 styles are inconsistent. So when the guideline is "all that matters is consistency" then I take that at face value. You though apparently pull back and believe something more messy. Effectively "All that really matters is a consistently applied style, even if that style itself is full of inconsistency".
The same applies to the trailing commas. With them, every line is consistent. Without, the last line is inconsistent with the other lines. So are we applying this rule "All that really matters is consistency" or only applying it sometimes? I would argue whatever heuristic made you say "All that really matters is consistency" should apply to both cases. Consistently apply a style guide, and the style guide itself should be consistent.
> Without, the last line is inconsistent with the other lines.
The last line without a comma imparts the information that this line is expected to be the last line in the list which is something you don't get from it with that trailing comma. It's a small thing, but I don't like putting in that last comma because I'm effectively misrepresenting things for "syntactic sugar".
> All that really matters is consistency. Let a team make some decisions and then just move forward.
Not so! The amount of tokens correlates with perceived code complexity for some. One example is how some people can't unsee or look past Lisp's parentheses.
Another example is how some people get used to longDescriptiveVariableNames but others find that overwhelming (me for instance) when you have something like:
userSignup = do
    let fullName = userFirstNameInput + userLastNameInput
        userName = take 1 userFirstNameInput + take 10 userLastNameInput
    saveToDB userName
The above isn't bad, but imagine variables named that verbosely used over and over, especially in the same line.
Compare it to:
userSignup = do
    let fullName = firstName + lastName
        userName = take 1 firstName + take 10 lastName
    saveToDB userName
The second example loses some information, but I'd argue it doesn't matter too much given the context one would typically have in a function named `userSignup`.
I've had codebases where consistency required naming all variables like `firstNameInputField` rather than just `firstName` and it made functions unreadable because it made the unimportant parts seem more important than they were simply by taking up more space.
It's one of my pet peeves when some senior engineers are bothered more by these coding semantics in a PR than by bigger data model/code architecture issues, and don't call those out.
The problem is when 2 people with the same level of enthusiasm for linter rules but opposing views collide. If there’s nothing more impactful you could be solving and spending energy and time on than arguing those linter rules, then it’s time to question where the project is at and where it is going.
And if there is something more important, then instead of micro-optimizing the rules when there is strong disagreement it’s probably best if one of the parties takes the high road and lives with it so you can all focus on what matters.
I guess that's one reason why opinionated tools like prettier or gofmt are popular. They made all the choices for you, they don't have configurable knobs, so you just learn to live with it.
The bad thing about these sorts of tools is when you work in a shop where multiple platforms are used for development and one of the platforms doesn't support the tool, or the tool fights with other tooling on that platform. You should for example never use pre-commit to enforce line ending style, because git has brain-dead defaults (which is to say, unless you have a .gitattributes file in your repo to prevent it, it will change line endings itself, fighting pre-commit). This just creates confusion and wasted time. In aid of what? So some anal so-and-so can get their way about code formatting, as though it makes everyone else more productive to format code THEIR way. When in fact it makes others LESS productive as they fight "computer says no" format-nazi jobs in CI that don't even report what is "wrong" with the formatting and rely on tooling that they don't have installed to run locally.
Not to mention the overhead of running these worthless inefficient tools on every commit (even locally).
Tools like this just raise the debate from different opinions about formatting to different opinions about workflows. Workflows impact productivity a lot more than formatting.
You should force them to choose from someone else's style. Don't let them tweak individual settings; choose a complete standard and apply it, with both the things they like and the things they don't. A style is useful, the details do not matter that much.
Saying this is clearly superior means you don’t keep your lists sorted. In a sorted list, something is as likely to be added at the beginning as at the end, where this solution has the same problem.
I'm just suddenly slightly terrified someone's going to see this and think it's genuinely a good idea and make it part of the next popular scripting language, where lists are defined by starting commas or something :S
You'll be annoyed to know that your last "not okay" style is what's considered standard in Ruby (although the curly braces have different semantics there: they are either a hash or a code block - which is kind of annoying to me, that they're used for two entirely different things - and never a list/array).
More seriously, I don't like having lists like that in the code in the first place. I don't want multiple lines taken up for just constant values, and if it turns out to require maintenance then the data should be in a config file instead anyway.
Rule of 3: write/change it once or twice (or seldom enough that there's no possible negative impact) and it doesn't need any complexity. More than that... yeah, it probably goes into a config.
> Because adding a new line at the end of the table (1) requires editing 1 line instead of 2, and (2) makes the diffs in code review smaller and easier to read and review.
This judgement is based on a strong personal opinion (which I don't claim to be wrong, but which is also not god-given) about what counts as one change and what counts as two changes in the code:
- If you consider adding an additional item to the end of the list to be one code change, I agree that a trailing comma makes sense
- On the other hand, it is also a sensible judgment to consider this to be a code change of two lines:
1. an item (say 'peach') is added to the end of the list
2. 'orange' has been turned from the last element of the list to a non-last element of the list
If you are a proponent of the second interpretation, the version that you consider to be non-advantageous is the one that does make sense.
But the second interpretation only makes sense if the last item somehow deserves special treatment (over, say, the second-to-last item). Otherwise, you should similarly argue that the previous second-to-last item should also show up in the changes, as it has now turned into the third-to-last item. (So maybe every item in the list should be preceded by as many spaces as there are items before it and followed by as many commas as there are items after it. Then, every change to the list will be a diff of the entire list.)
first item,,,
 second item,,
  third item,
   fourth item
In my experience, special treatment for the last item is rarely warranted, so a trailing comma is a good default. If you want the last item to be special, put a comment on that line, saying that it should remain last. (Or better yet, find a better representation of your data that does not require this at all.)
> But the second interpretation only makes sense if the last item somehow deserves special treatment (over, say, the second-to-last item).
There do exist reasons why this can make sense:
- In an Algebraic Data Type implementation of a non-empty list, the last symbol is a different type constructor than the one to append an item to the front of an existing non-empty list (similarly how for an Algebraic Data Type implementation of an arbitrary list, the type constructor for an initial empty list is "special").
- In a single-linked list implementation, sometimes (depending on the implementation) the terminal element of the list is handled differently.
---
By the way: at work, because adding parameters at the beginning of a (parameter) list of a function is "special" (because in the code for many functions the first parameters serve a very special purpose), but adding some additional parameter at the end is not, we commonly use parameter lists formatted like
I don't care that much about the specific retained options (though my own tastes of the day are obviously the best taste ever in the whole existence of the universe), but having a common linter setting to prevent the noise in every damn PR is a must-have.
Yes, both git and all these PLs are actually damn stupid to take lines at face value instead of doing something more elegant like Ada does. In my 20+ year career I've been offered a project involving Ada only once.
It's hard to come up with something elegant and efficient. It's even harder to make it reach top-tier global presence, all the more when the ecological niche is already filled with good-enough stuff.
Formatters eliminating long lines is a pet peeve of mine.
About once every other project, some portion of the source benefits from source code being arranged in a tabular format. Long lines which are juxtaposed help make dissimilar values stand out. The following table is not unlike code I have written:
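Something in this spirit, say (a made-up Python sketch with hypothetical names, not my actual code):

# key          service          retries  timeout_s  verify_tls
ROUTES = [
    ("auth",     "auth-svc",          3,      2.0,   True),
    ("billing",  "billing-svc",       5,     10.0,   True),
    ("metrics",  "metrics-svc",       1,      0.5,   False),   # the dissimilar values jump out
]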
I've often wished that formatters had some threshold for similarity between adjacent lines. If some X% of the characters on the line match the character right above, then it might be tabular and it could do something to maintain the tabular layout.
Bonus points if it's able to do something like diff the adjacent lines to detect table-like layouts, figure out if something nudged a field or two out of alignment, and then insert spaces to fix the table layout.
I believe some formatters have an option where you can specify a "do not reformat" block (or override formatting settings) via specific comments. As an exception, I'm okay with that. Most code (but I'm thinking business applications, not kernel drivers) benefits from default code formatting rules though.
And sometimes, if the code doesn't look good after automatic formatting, the code itself needs to be fixed. I'm specifically thinking about e.g. long or nested ternary statements; as soon as the auto formatter spreads it over multiple lines, you should probably refactor it.
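In Python/black terms that escape hatch looks like this - black honors "fmt: off" / "fmt: on" marker comments and leaves everything between them alone (the table itself is just a hypothetical example):

# black leaves everything between these markers untouched
# fmt: off
GRADIENT_STOPS = [
    (0.00, 0.00, 0.00),
    (0.25, 0.50, 0.75),
    (1.00, 1.00, 1.00),
]
# fmt: on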
I'm used to things like `// clang-format off` and on pairs to bracket such blocks, and adding empty trailing `//` comments to prevent re-flowing, and I use them when I must.
This was more about lamenting the need for such things. Clang-format can already somewhat tabularize code by aligning equals signs in consecutive cases. I was just wishing it had an option to detect and align other kinds of code to make or keep it more table like. (Destroying table-like structuring being the main places I tend to disagree with its formatting.)
I get what you're saying, and used to think that way, but changed my mind because:
1) Horizontal scrolling sucks
2) Changing values easily requires manually realigning all the other rows, which is not productive developer time
3) When you make a change to one small value, git shows the whole line changing
And I ultimately concluded code files are not the place for aligned tabular data. If the data is small enough it belongs in a code file rather than a CSV you import then great, but bothering with alignment just isn't worth it. Just stick to the short-line equivalent. It's the easiest to edit and maintain, which is ultimately what matters most.
This comes up in testing a lot. I want testing data included in test source files to look tabular. I want it to be indented such that I can spot order of magnitude differences.
Those kinds of tables improve readability right until someone hits a length constraint and has to either touch every line in order to fix the alignment, causing weird conflicts in VCS, or ignore the alignment, and its slow decay into a mess begins.
It's not an either/or though. Tables are readable and this looks very much like tabular data. Length constraints should not be fixed if you have code like this, and it won't be "a slow decay into a mess" if escaping the line length rules is limited to data tables like these.
so you're basically saying "look, this is neat and I like it, but since we cannot prevent some future chap from coming along and making a mess of it, let's stop this nonsense now and throw our hands up in the air - thoughts and prayers is what I say!"?
At best I'd say it's ok to use it sparingly, in places where it really does make an improvement in readability. I've seen people use it just to align the right hand side of a list of assignments, even when there is no tabular nature to what they are assigning.
I like short lines in general, as having a bunch of short lines (which tend to be the norm in code) and suddenly a very long line is terrible for readability. But all has exemptions. It's also very dependent on the programming language.
People have already outlined all the reasons why the long line might be less than optimal, but I will note that really you are using formatting to do styling.
In a post-modern editor (by which I mean any modern editor that takes this kind of thing into consideration, which I don't think any do yet) it should be possible for the editor to determine similarity between lines and achieve a tabular layout, perhaps also with styling for dissimilar values in cases where the table has a higher degree of similarity than the one above.
Perhaps also with collapsing of tables with some indicator that what is collapsed is not just a sub-tree but a table.
It is an obvious example where an automatic formatter fails.
But are there more examples? Maybe it's not a high price to pay. I'm using either the second or third approach for my code and I've never had many issues. Yes, the first example is pretty, but it's not a huge deal for me.
Another issue with fixed line lengths is that it requires tab stops to have a defined width instead of everyone being able to choose their desired indentation level in their editor config.
This is good, and objectively better than letting the random unbounded length of the function name define and inflate and randomize the indentation. It also makes it easier to use long descriptive function names without fucking up the indentation.
To me, that reads fine, but it has lost the property elevation wanted, which was that it's easy to compare the values assigned to any particular parameter across multiple calls. In your version you can only read one call at a time.
So fix your setup? Why should others with wider screens leave space on their screen empty for your sake?
Especially 80 characters is a ridiculously low limit that encourages people to name their variables and functions some abbreviated shit like mbstowcs instead of something more descriptive.
My main machine is an ultrawide, but I usually have multiple files open, and text reads best top-down so I stack files side-by-side. If someone has like, a 240 character long line, that is annoying. My editor will soft wrap and indicate this in the fringe of course but it's still a little obnoxious.
80 is probably too low these days but it's nice for git commit header length at least.
What do you do about the "oh, I'm the only one who cares about [???]? should I just fucking kill myself then?" Many such cases.
>How about doing it because your colleagues, who you presumably like collaborating with to reach a goal, asks you to?
If someone wants me to do a certain thing in a certain way, they simply have to state it in terms of:
- some benefit they want to achieve
- some drawback they want to avoid
- as little as an acknowledged unexamined preference like "hey I personally feel more comfortable with approach X, how bout we try that instead"
I'm happy to learn from their perspective, and gladly go out of my way to accommodate them. Sometimes even against my better judgment, but hell, I still prefer to err on the side of being considerate. Just like you say, I like to work with people in terms of a shared goal, and just like you do, in every scenario I prefer to assume that's what's going on.
If, however, someone insists on certain approaches while never going deeper in their explanations than arbitrary non-falsifiable qualifiers such as "best practice", "modern", "clean", etc., then I know they haven't actually examined those choices that they now insist others should comply with. They're just parroting whatever version of an imagined industry-wide consensus happens to describe their accidental comfort zone. And then boy do they hate my "make your setup assume less! it's the only way to be sure!". But no, I ain't reifying their meme instead of what I've seen work with my own two.
> If, however, someone insists on certain approaches while never going deeper in their explanations than arbitrary non-falsifiable qualifiers such as "best practice", "modern", "clean"
You're moving the goalposts of this discussion. The guy I was responding to said "fix your setup" to another person saying "Your table wrapped for me. The short line equivalent looks best on my screen." That's a stated preference based on a benefit he'd like to achieve.
We are not discussing "best practice" type arguments here.
"Best practice" type arguments are the universal excuse for remaining inconsiderate of the fact that different people interact with code differently, but fair enough I guess
Yes, I don't think we should discourage people from using Python or German just because you don't want to learn those particular languages either.
Working together with others should not mean having to limit everyone to the lowest common denominator, especially when there are better options for helping those with limitations that don't impact everyone else.
So haul your wide monitor around with your laptop, you mean? No.
Just use descriptive variable names, and break your lines up logically and consistently. They are not mutually exclusive, and your code will be much easier for you and other people to read and edit and maintain, and git diffs will be much more succinct and precise.
I soft-wrap so I don't care about line length myself, but I read code on a phone a lot, so people who hard-wrap at larger columns are a little more annoying.
I am at the opposite end. Having any line length constraints whatsoever seems like a massive waste of time every time I've seen it. Let the lines be as long as I need them, and accept that your colleagues will not be idiots. A guideline for newer colleagues is great, but auto-formatters messing with line lengths is a source of significant annoyance.
> auto-formatters messing with line lengths is a source of significant annoyance.
Unless they have been a thing since the start of a project, existing code should never be affected by formatters; that's unnecessary churn. If a formatter is introduced later on in a project (or a formatting rule changed), it should be applied to all code in one go, and no new code accepted if it hasn't passed through the formatter.
I think nobody should have to think about code formatting, and no diff should contain "just" formatting changes unless there's also an updated formatting rule in there. But also, you should be able to escape the automatic formatting if there is a specific use case for it, like the data table mentioned earlier.
Define high? I think 120 is pretty reasonable. Maybe even as high as 140.
Log statements however I think have an effectively unbounded length. Nothing I hate more than a stupid linter turning a sprinkling of logs into 7 line monsters. cargo fmt is especially bad about this. It’s so bad.
Ugh. 80 is the worst. For C++ it’s entirely unreasonable. I definitely can not reconcile “linters make code easier to read” and “80 width is good”. Those are mutually exclusive imho.
What I actually want from a linter is “120, unless the trailing bits aren’t interesting in which case 140+ is fine”. The ideal rule isn’t hard and fast! It’s not pure science. There’s an art to it.
"Since forever" as in, "since the start of electronic computing"; we started printing the programs out on paper almost immediately. The 132 columns comes from the IBM's ancient line printers (circa 1957); most of other manufacturers followed the suit, and even the glass ttys routinely had 132-column mode (for VT100 you had to buy a RAM extension, for later models it was just there, I believe). My point is, most of the people did understand, back even in the sixties, that 80-columns wide screen is tiny, especially for reading the source code.
Printers aside, the VT220 terminal from DEC had a 132-column mode. Probably it was aping a standard printer column count. Most of the time we used the 80-column mode as it was far more readable on what was quite a small screen.
Not only a small screen by modern standards, but the hardware lacked the needed resolution. The marketing brochure claims a 10x10 dot matrix. That will be for the 80-column mode. That works out to a respectable 800 pixels horizontally, and a barely sufficient 6x10 pixels per character in 132-column mode. There was even a double-high, double-width mode for easier reading ;-)
Interesting here perhaps is that even back then it was recognized, that for different situations, different display modes were of advantage.
> There was even a double-high, double-width mode for easier reading
I'd forgotten that; now that was a fugly font. I don't think anyone ever used it (aside from the "Setup" banner on the settings screen).
I think the low pixel count was rather mitigated by the persistence of the phosphor though - there are reproductions of the fonts that had to take this into account; see the stuff about font stretching here: https://vt100.net/dec/vt220/glyphs
Caveat, my personal experience is mainly limited to JS/TS, Java, and associated languages. 120 is fine for most use cases; I've only seen 80 work in Go, but that one also has unwritten rules that prefer reducing indentation as much as possible; "line-of-sight programming", no object-oriented programming (which gives almost everything a layer of indentation already), but also it has no ternary statements, no try/catch blocks, etc. It's a very left-aligned language, which is great for not unnecessarily using up that 80 column "budget".
It’s tricky to find an objective optimum. Personally I’ve been happy with up to 100 chars per line (aim for 80 but some lines are just more readable without wrapping).
But someone will always have to either scroll horizontally or wrap the text. I’m speaking as someone who often views code on my phone, with a ~40 characters wide screen.
In typography, it’s well accepted that an average of ~66 chars per line increases readability of bulk text, with the theory being that short lines require you to mentally «jump» to the beginning of the next line frequently which interrupts flow, but long lines make it harder to mentally keep track of where you are in each line. There is however a difference between newspapers and books, since shorter ~40-char columns allows rapid skimming by moving your eyes down a column instead of zigzagging through the text.
But I don’t think these numbers translate directly to code, which is usually written with most lines indented (on the left) and most lines shorter than the maximum (few statements are so long). Depending on language, I could easily imagine a line length of 100 leading to an average of ~66 chars per line.
> the theory being that short lines require you to mentally «jump» to the beginning of the next line frequently which interrupts flow, but long lines make it harder to mentally keep track of where you are in each line.
In my experience, with programming you rarely have lines of 140 printable characters. A lot of it is indentation. So it’s probably rarely a problem to find your way back on the next line.
I don’t think code is comparable. Reading code is far more stochastic than reading a novel.
For C/C++ headers I absolutely despise verbose doxygen bullshit spreading relatively straightforward functions across 10 lines of comments and args.
I want to be able to quickly skim function names and then read arguments only if deemed relevant. I don’t want to read every single word.
I like splitting long text as in log statements into appropriate source lines, just like you would a Markdown paragraph. As in:
logger.info(
    "I like splitting long text as in log statements " +
    "into " + suitableAdjective + " source lines, " +
    "just like you would a Markdown paragraph. " +
    "As in: " + quine);
I agree that many formatters are bad about this, like introducing an indent for all but the first content line, or putting the concatenation operator at the front instead of the back, thereby also causing non-uniform alignment of the text content.
This makes it really annoying to grep for log messages. I can't control what you do in your codebase but I will always argue against this the ones I work on.
I haven’t found this to be a problem in practice. You generally can’t grep for the complete message anyway due to inserted arguments. Picking a distinctive formulation from the log message virtually always does the trick. I do take care to not place line breaks in the middle of a semantic unit if possible.
Yes, I find the part of the message that doesn't have interpolated arguments in it. The problem is that the literal part of the string might be broken up across lines.
IMO, implicit string concatenation is a bug, not a feature.
I once made a stupid mistake of having a list of directories to delete:
directories_to_delete = (
    "/some/dir"
    "/some/other/dir"
)
for dir in directories_to_delete:
    shutil.rmtree(dir)
Can you spot the error? I somehow forgot the comma in the list. That meant that rather than creating a tuple of directories, I created a single string. So when the `for` loop ran, it iterated on individual characters of the string. What was the first character? "/" of course.
I essentially did an `rm -rf /` because of the implicit concatenation.
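For completeness, the intended version is just the commas (with a trailing one, tying back to the earlier thread), and this class of bug is lintable too - if I remember correctly there's a flake8 plugin (flake8-implicit-str-concat) and equivalent ruff rules that flag accidental implicit concatenation:

import shutil

directories_to_delete = (
    "/some/dir",
    "/some/other/dir",
)
for directory in directories_to_delete:
    shutil.rmtree(directory)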
I prefer storing plain text and displaying locally preferred syntax, to a degree.
With some expressions, like lookup tables or bit strings, hand wrapping and careful white space use is the difference between “understandable and intuitive” and “completely meaningless”. In JS world, `// prettier-ignore` above such an expression preserves it but ideally there’s a more universal way to express this.
And that's fine, as long as whatever ends up in version control is standardized. Locally you can tweak your settings to have / have not word wrapping, 2-8 space indentation, etc.
But that's the core of this article, too; since then it's become the norm to store the plain-text source code in git and share it, but the article mentions a code- and formatting-agnostic storage format, where it's down to people's editors (and diff tools, etc.) to render the code. It's not actually unusual, since things like images are also unreadable if you look at their source, but tools like GitHub will render them in a human-digestible format.
You still have to minimize the wrapping that happens, because wrapped lines of code tend to be continuous instead of being properly spaced so as to make its parts individually readable.
I’d agree with you except for the trend over the last 10 years or so to set limits back to the Stone Age. For a while there we seemed to be settling on somewhere around 150 characters and yet these days we’re back to the 80-100 range.
I've never understood why we still look at the plain text representation of code, and not a visualization of the code that makes more sense.
Note that, in my mind, this visualization is not automatically generated, but lovingly created by humans who wish their code to be understood by others. It is not separate from the code, as typical design documentation is, but an integral part of it, stored in metadata. Consider it an extension of variable and function naming.
There is of course "literate programming" [1], but somehow (improvements of) that never took off in larger systems.
> I've never understood why we still look at the plain text representation of code, and not a visualization of the code that makes more sense.
My guess is it is the same reason why the most common form of creating source code is typing and not other readily available mechanisms:
Semantic density
Graphical visualizations are approachable representations and very useful for introductory, infrequent, and/or summary needs. However, they become cumbersome when either a well-defined repetitive workflow is used or usage variations are not known a priori.
An example of both are the emacs and vi editors. The vast majority of supported commands are at most a few keystrokes and any programming language source code can be manipulated by them.
> I've never understood why we still look at the plain text representation of code, and not a visualization of the code that makes more sense.
I suppose this is because nobody has been able to create good tooling for it (the visualization itself, the efficient editing, etc). You'll have to deal with the text version of it at some point if not all tools that we rely on get a version for the new visualization.
Another hypothesis is that it might not matter this much that we work with text directly after all.
> Note that, in my mind, this visualization is not automatically generated, but lovingly created by humans who wish their code to be understood by others.
If you allow manual crafting there, I suspect you'll need some sort of linting too.
Um, isn't that what Lisp and its children / siblings have been all about?
I've written a bit of Clojure; it has a very clear idea that code is data and data is code. Your code is trivially serializable in your mind and by various tools, and because it is lisp - it all kinda makes sense.
I really wish we lived in a universe where a lisp became the lingua franca of the world instead of javascript, as almost happened with Netscape, but alas ...
The "code is data" aspect of lisp seems orthogonal to how code is still written as text, and btw lisp is still written using text. You still need to indent all these parentheses.
Virtually all programming languages are parsed into ASTs, and these ASTs can be serialized back. This is what formatters/"prettifiers" usually do.
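Python's standard library shows that round trip in miniature, though it's lossy, which is why practical formatters keep comments and other trivia attached (e.g. via a concrete syntax tree) rather than using a bare AST. A minimal sketch (Python 3.9+ for ast.unparse):

import ast

source = "x=1+  2  # compute x"
tree = ast.parse(source)       # text -> AST
print(ast.unparse(tree))       # AST -> text again: prints "x = 1 + 2"
                               # note the comment is gone; comments aren't part of the AST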
We must include the standard I/O definitions, since we want to
send formatted output to stdout and stderr.
<<Header files to include>>=
#include <stdio.h>
@
Not hard to see why nobody really embraced it. And it wasn't helped by the fact that it was published right around the time that best practice was switching toward "don't comment unless absolutely necessary".
First and foremost I was wrong thinking that I was smarter than others — that's not even how intelligence works.
Second I was wrong being so stubbornly pro-tabs / anti-spaces (for example). It doesn't make that much of a difference, so there's no point in being so passionate about it.
And third I was wasting everyone's time (and my persuasion powers) by not choosing my battles more wisely.
My suggestion would be nowadays: let's choose a popular style guide, set up a linter and be done with it.
I learned to love rustfmt but there’s one thing that bothers me: there are a few cases where there are two ways to do something, e.g. a one-line closure can omit the curly brackets, but multi-line closures cannot. Rustfmt prefers to remove those brackets when it can, but I prefer to keep them, which makes editing the code faster since I don’t have a syntax error if I suddenly need a second line.
I can still live with it. And I like the clean, minimal version when I don’t have to edit. Just adding that “style” can have impact beyond how it looks involving ease of editing. And it stinks when your preferences clash with the community.
The problem is that tools like ESLint often come with highly opinionated rules that might not even be applicable all of the time (leading to me having to manually turn them off via annotations).
And there's no centralized idea on best practices.
Recently, I discovered that the ruff linter for Python doesn't like the assert statement, because it does nothing in "optimized" mode and so isn't reliable. But such complaints about unit tests are not particularly useful.
ESLint is the centralized idea I suppose, but getting consensus is difficult.
When it comes to formatting, there's other languages (Go, Python?) that have clear, top-down guidelines applied by tooling, at least for code style. I think that's clever, and besides the odd mailing list post trying to change it because of a personal preference, it minimizes discussions about trivialities over the really important things.
Because 2 vs 4 spaces or line length discussions are ultimately futile; those aren't features, individual preferences don't matter. Codebases have millions of lines and thousands of developers; individual opinions do not matter at scale, consistency does.
Biome has several advantages that make this future unlikely and evidence from similar attempts in other languages seems to support their direction. Such pessimism is unwarranted.
That is true if a good set of linting rules is set up: ones that help discover errors or other code smells that are valid issues in 99% of cases, or pure formatting rules where there is no "correct" thing to do. Linting becomes a problem when it is opinionated and has questionable rationale to begin with, and stands in your way instead of helping you catch issues. Nobody should be fighting linting rules, but sadly that's what often happens.
I’ve never understood why people care so much about the linter. Just let people write code and don’t worry about the linter. I don’t need to fight a linter which makes my code worse when I could just write it in a way that doesn’t suck. I promise it’ll be fine. I’m too busy doing actual software engineering to care if code is not perfectly formatted to some arbitrary style specification.
I feel like style linters are horseshoe theory. Use them enough and eventually you wrap back around to just living without them.
99% of the time the linter is not enforcing correctness, in my experience. It's just enforcing a bunch of subjective aesthetic constraints: which import order, max number of empty lines between statements, what type of string literal to use, no trailing white space, etc.
A non-trivial part of my day is spent dealing with this giant catalog of dinner etiquette. Not all of it is auto-fixable. Also, there are plenty of situations where everyone would agree that violating the rule is necessary (e.g. "no use before define" but you need mutual recursion). Also, sometimes rules are circularly in conflict (e.g. you have to change a line but there is no way to do it without violating the max-line-length rule).
If your linter is enforcing subjective aesthetic constraints, then I'd argue it's not really a linter but a formatter at that point, and it should be automatically fixing all that stuff for you rather than have you do that manually. Things like import order, empty lines, white space etc can all be fixed automatically in most languages I've worked with.
Linters enforcing rules that need to be broken is a pet peeve of mine, and I agree with you there. Most linters allow for using comments to explicitly exclude certain lines from being linted. This should ~never be necessary. If it is regularly necessary, then either you're programming bad (always a possibility!) or the rule has too many false positives and you should remove it.
Does your linter not have a fix command for things like import order? Does your editor not auto trim white space?
To be frank, everyone I've worked with that complained about the linter didn't know much about their tooling. They didn't know about the fix command (even though I put it in the readme and told them about it), they didn't know how to turn on lintfix and prettier on save, wouldn't switch on git hooks and didn't know their lint failed until GitHub said so, and none of the people like this were so productive that it made up for this trait.
The point of linters is so the code looks the same regardless of who wrote it. This way it’s easier to read. Some people have horrible style and linters really help.
I find linters make me faster. Sometimes I’m feeling lazy and I just want to pump out a bunch of lines of ugly code with mappings poorly formatted, bad indents, and just have it all synched up when I save.
I see no reason to accommodate to worthless programmers who aren't able to read or format the code that's sent to them. They can lint it themselves if they want.
It's not about accommodating people, it's about consistency in codebases when many people with different preferences or styles are working on them. You just eliminate the cognitive overhead so people can develop intuition about how the code works and flows.
Good, let huge companies do that. Now this lint madness is pushed onto everybody who wants to program, including individual hobbyists. And many of them are trying to learn coding and get their compiling sabotaged by the linter, not knowing it's something they can opt out of.
Some linters find issues you care about. Forgotten print statements or confusing indentations come to mind. I’ve worked with people who easily forget, and I’m one of them myself.
That may be true for you, but I have worked with plenty of devs who cannot even be consistent with naming conventions in a function, let alone throughout the application.
Don’t get me wrong: modern linters often annoy me, and devs who spend a lot of time fiddling with those settings tend not to be very good programmers. But sometimes having guardrails is necessary.
Everyone has their own opinion of what format doesn’t suck, so without a consistent code format, you’ll have to review diffs where fights over white space are mixed in with the meaningful change.
There's a python linter named `black` and it converts my code:
important_numbers = {
    "x": 3,
    "y": 42,  # Answer to the Ultimate Question!
    "z": 2
}
into this:
important_numbers = {"x": 3, "y": 42, "z": 2} # Answer to the Ultimate Question!
This `black` is non-configurable (because it's "opinionated") and yet, out of some strange cargo cult, people swear by it and try to impose it on everybody.
This is the flip side of "I’ve never understood why people care so much about the linter"!
Why are you caring about formatting? Just write your code, get it working, let Black tidy it up in the standard way. Don't worry about the formatting.
In cases where you're annoyed about some choice the formatter makes, somebody else would be equally annoyed by the choice you would rather make. There is no perfect solution. The whole point is to have a reasonable, plausible default, and to automate it so that nobody has to spend any time thinking about it whatsoever.
Running a standard formatter when code is checked in minimizes the source control churn due to re-formatting. That churn is a pointless waste of time. If you don't run a standard formatter, I guarantee that badly-formatted code will make it into source control, and that's annoying.
I may be unusual in a way I treat my profession and care about my professional output (the code I write), and I take both very seriously.
There's a quote from Steve Jobs (or maybe his carpenter father):
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
When you say "Don't worry about the formatting", what you're saying is "use a piece of plywood on the back," and I'm just not going to do that.
I don't think we'll ever fully agree, but I'd just like to clarify that I value that kind of craftsmanship too!
I just honestly believe that if you fully automate the formatting, the results are better than if you do it painstakingly by hand; better by virtue of being more consistent. It's using the right tool for the job.
Did you read the example pietnas gave? The changed formatting ruined the communicative intent of his code. Formatters do that a lot, and it makes the code unambiguously worse.
I don’t really care about whether the back is plywood or whatever. I don’t know how to write plywood code. I do care about creating clear, readable code that communicates my intent. Sometimes formatters help with that. Often they hinder, as they reflect the arbitrary aesthetic preferences of their creators.
In this case I think the trailing comma is an improvement, so the formatter is steering you towards a better overall solution. However, even if you dislike the trailing comma, it's more important for the formatting to be consistent and robust, so I still think it's better to work within the limitations of the formatter.
I honestly have no idea where I read pietnas. The comment you identified was the one I was referring to.
I care about consistency, but not foolishly so. I suppose an important question is: under what circumstances is consistency undesirable?
I believe that consistency is not an end goal, but a means to achieve clear communication. When being consistent results in less clarity, it should be abandoned in favor of more effective communication.
That’s an obviously terrible formatting change. A format that prevents scoping comments narrowly is absurd. Why not just tuck all the inline comments at the end of the file so the code is denser while we’re at it?
It works the way you want if you add a trailing comma:
important_numbers = {
    "x": 3,
    "y": 42,  # Answer to the Ultimate Question!
    "z": 2,
}
You might complain that that seems a bit obscure, but it only took me 10 or 20 seconds to discover it after pasting the original code snippet into an editor.
The trailing comma is an improvement as it makes the diff clearer on future edits.
Edit to add: occurs to me that I oversimplified my position earlier and it probably looks like I'm trying to have it both ways. I do advocate aiming for clean and clear formatting; I'm just against doing this manually. You should instead use automation, and steer it lightly only when you have to.
For example, I explicitly don't want people to manually "tab-align" columns in their code. It looks nice, sure, but it'll inevitably get messed up in future edits. Better to do something simpler and more robust.
The trailing comma communicates an intent of possibly adding more things in the future. I actually use it quite a lot -- when I have that intent.
In the above example, if I think I have listed all of the `important_numbers`, there is a certain point to not having the trailing comma there.
Here's another terrible example from `black`:
From this:
my_print(f"This string has two parameters, `a` which is equal to {a} and `b` which is equal to {b}",
a=1, b=2)
To this:
my_print(
    f"This string has two parameters, `a` which is equal to {a} and `b` which is equal to {b}",
    a=1,
    b=2,
)
The trailing comma it added makes no sense whatsoever because I can not have an intent of adding more things -- I've already exhausted the parameters in the string!
On top of it, I don't quite get why I need to change the way I write in order to please the machine. Who should be serving whom?
Edit: changed "print" to "my_print" to not have to argue about named parameters of print ("sep", "file" etc.).
Edit 2: here's a variant that `black` has no issues with whatsoever. It does not suggest a trailing comma or any other change:
my_print(f"This string has two params, `a` which is {a} and `b` which is {b}", a=1, b=2)
So the existence of a trailing comma is a product of string length?
Yes, it gets a trailing comma if it's on its own line. That way, when you add/remove arguments in a multi-line call, it's only a one-line diff. This doesn't apply when the diff is only one line anyway.
Who's to say you don't add a new argument to the function in the future, like
my_print(
    f"This string has two parameters, `a` which is equal to {a} and `b` which is equal to {b}",
    a=1,
    b=2,
    color_negative_red=True,
)
> it gets a trailing comma if it's on its own line.
Sorry but it doesn't make any sense to me. If your argument is "a trailing comma is a good thing," it should go into any and all function calls/list declarations/etc. Who's to say I won't add this in the future:
There's a very responsive playground at https://black.vercel.app/ and whatever it does looks strange to me, because the underlying assumptions look inconsistent with each other (to my eye, at least). Specifically, "the length of the string should decide whether there is a trailing comma or there isn't" makes zero sense.
> Sorry but it doesn't make any sense to me. If your argument is "a trailing comma is a good thing," it should go into any and all function calls/list declarations/etc.
No, the argument is quite specifically that a one line diff to add a new argument/element to the end of a list is preferable to a two line diff to do the same thing. The presence of the trailing comma is necessary to achieve that only when elements are on their own line.
print(
    'Hello there from a very long line abcdefghijklmnopqrstuvwxyz',
    sep=' ',
    end='\n',
    file=None,
    flush=False,
)
All of the existing named parameters to the `print()` function are already provided, and that standard function is highly unlikely to change. Should I add another string to `print`, I will have to do it before the named parameters anyway. There is no sense in the trailing comma here, however you look at it.
Edit: sorry for using single quotes, in my 20 years of writing Python it was never an issue, but now with `black` it apparently is.
I think this boils it down to the essence. Whether you use a trailing comma here, and whether you use single or double quotes, is just bike-shedding. If there's an automated tool that can make a consistent choice everywhere, that's worthwhile.
I write perfectly legible code. More legible than a linter would produce, in fact. Because the rules for what is ideal are not so simple as to be encoded in simple lint rules. Sure, it gets like 95%. But the last 5% is so bad it ruins the positives.
If your goal is “code that is easy to read and understand” then a linter is only maybe the first 20%. Lots of well linted code is thoroughly inscrutable.
I 100% believe you. And for god's sake, please use a linter.
British and American spelling are both 100% legible English. But when multiple people coauthor a book, they should stick to one instead of letting each author use their favorite spelling.
I'm sure you write very readable code, but in most companies there are a bunch of devs who completely butcher the codebase with unintelligible bullshit. The linter is the first line of defense against these bozos; unfortunately it must be enforced company-wide.
But (at least for a long time), "run the linter automatically" wasn't available, not until Go's gofmt put the idea into people's heads that they could leave it to a tool. I think there were some formatting tools before then, but e.g. jslint/eslint had a lot of gaps which I unfortunately ended up pointing out in code reviews a lot. Which was nitpicking / bikeshedding, in hindsight.
I agree. Linters are one of the more frustrating aspects of modern dev. It's of such little relevance, and yet it takes up a sizeable portion of my time when I'm going for a merge. Many editors/language combinations don't give automatic linting out of the box, and when they do, I can bet that the rules they infer are different from what the CI pipeline infers.
> I promise after a week you'll just get used to whatever format your team lands on.
Arthur Whitney formats like this:
C vt[]="+{~<#,";
A(*vd[])()={0,plus,from,find,0,rsh,cat},
(*vm[])()={0,id,size,iota,box,sha,0};
If your code was formatted automatically like that, do you think you'd get used to it after a week?
My point is that there is meaning in how code is formatted, and formatting has an effect on understanding for certain people.
I think that at a certain point of "reasonable" and for most "normal" people your statements hold true, but I don't want anyone to think that every person caught up on formatting is just doing it for bike-shedding or other trivial reasons.
I don't know what is actionable if what I say is true, but it feels important to say.
The unreadability of that example has approximately nothing to do with code formatting, which is generally understood to refer to modifying the textual representation of the code while leaving the actual logic more or less unchanged. Can you propose some alternative whitespace or indentation scheme which would make that example significantly more readable?
Same! I have no patience for these kinds of arguments about formatting. I don't care that you don't like what the formatter does; it isn't about you. I've written code in several different languages over the years, and the main takeaway is that I can get used to reading anything. It's so important to pick a standard and follow it. As long as that standard is somewhat sane, I couldn't care less what the actual standard is.
Another argument that's a pet peeve of mine is significant whitespace vs curly braces. It literally doesn't matter. We often get new Python developers coming from a C# background, and the amount of bitching about curly braces is so annoying. Just learn the language bro, it's not that hard.
Which defaults? The programming languages I’ve worked with don’t have defaults for everything related to formatting. Editor defaults don’t work, since not everybody uses the same editor. So you have to make a choice somewhere.
A lot of (relatively) recent languages do have defaults. Go and Rust both come with an auto-formatter out of the box, and defaults that are sane enough to just run with.
Ah yeah, in those cases it is possible to just use the defaults.
Come to think of it, I have worked with Deno, which comes with a formatter (and linter and testing library) and I’m a fan. Saves a couple of dependencies, some config files and a bit of mental overhead when creating a new project.
I'm in the same boat. I have not run into any situations where someone's choice on formatting was bad enough that I couldn't read the code so ... just pick a format / standard and let's go. I'll get used to it if I'm not already.
It's very simple: format code to a standard, preferably the language's default formatting, but it must be a standard that a tool can auto-format to. Now when someone doesn't like that standard, they can auto-format from the project standard to one of their liking for local development, and back again to the project standard when pushing to the project. This can even be done automatically with gitattributes during checkout and commit. But without strictly enforcing an auto-formattable standard this is not possible, and you end up with bikeshedding.
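As a rough sketch of the gitattributes route (assuming a Python project with black as the project-standard formatter; the filter name `pyfmt` is arbitrary and `my-local-formatter` is a placeholder for whatever tool produces your personal style):
# .gitattributes -- send Python files through a content filter
*.py filter=pyfmt

# One-time local setup:
#   "clean" runs when content is staged/committed: working tree -> project standard
#   "smudge" runs on checkout: project standard -> your local style
git config filter.pyfmt.clean "black --quiet -"
git config filter.pyfmt.smudge "my-local-formatter -"
Whether a round trip through two formatters is truly lossless is a separate question, but mechanically this is how the checkout/commit conversion hooks in.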
I've worked with several Development Leads to actually define these. After the initial adjustment period, with everybody's local environment set up properly, no one ever spent time reviewing style and formatting on Pull Requests.
Just decide as a team, auto-apply if possible (less than 5 seconds for big changes), enforce, and be done with it. Stop wasting everybody's time because after weeks you cannot make up your mind about it and also don't tell your team/Lead about it.
I can see why people prefer particular styles that make code easier for them to read, but on that note, with Perl it was just perltidy flags. I can run perltidy on any code anyone here writes and it's easy for me to read; then I can pass it back to whomever and they can run perltidy with their favorite flags and it's easy for them to read. It probably doesn't quite work this way with all languages. I would imagine Python being less flexible in this regard.
Some styles can actively make some people less productive, though; e.g. I really try to avoid Allman braces because I work a lot better with denser code (for a certain definition of dense).
This, however, usually doesn't affect me if the official format for a project is one way or the other, because [drumroll] I just format my tree differently and then format to the official style when I push.
> just make a choice, run the linter automatically and be done with it.
Most people probably do this. These types of discussions (probably) come up when someone else made the choice and other people also need to adhere to this choice. This is important for teams, but sometimes big egos don't want these choices made for them.
I went through this on a few projects, and what surprised me the most was that some devs have very strong opinions about import ordering. I mostly rely on the IDE to manage imports, and most of the time they're not even visible. We had to add a lot of prettier rules to get import orders just right.
A strong reason I enjoy Rust for collaboration is that it's so opinionated, it forces people to focus on solving real problems. I agree that bikeshedding over ESLint and Prettier configs is not a strong use of time.
You're missing what the bike shedding metaphor is about. It's not about having bike sheds or not, it's about coloring bike sheds, which every day bike riders in their right mind really don't give a shit about, because it doesn't affect their life in any tangible way.
No, the original metaphor is they were planning to build a nuclear reactor and they spent significantly more time than expected on the details of the bike shed because it was simple to understand and change, unlike the details of the reactor which were complex and required expertise and had lots of constraints. Who cares what color the bike shed is, we're building a nuclear reactor here!
The point is that in the large picture there are many much more important topics with higher impact to focus on. The company won't make much more money by having consistently formatted code, compared to putting that energy towards new features.
1. Assuming at least one person who cares about linter settings isn't utterly confused or moronic, what are their self-described reasons why they care? People's work styles, brains, and even sensory perception differ in some important ways!
2. As freedom-loving developers [1] who want to make our own choices to help our own styles of work, why should we even have to care about "enforcing" one standard for something that isn't really necessary? This one-standard-per-project thing is a downstream result of a design decision upstream (storing source code as plain text).
3. How should we design languages going forward? This brings the conversation back to the top-level post (which is why we're here -- to think about what languages could be, not to rehash tired old debates, after all): how can we take what we've learned and build better languages -- perhaps ones where the primary source of truth for source code is not plain text?
[1] Slightly tongue-in-cheek. It is one thing to want to have the freedom to do our jobs well; it is another thing to turn this into advocacy for an overarching system such as a political philosophy or various decentralized financial mechanisms and so on. Here, I'm merely referring to the "let me do my job in the way that actually works for my brain" sense.