It happened a bit differently; Atwood and friends simply came out with a standard document and called it "Standard Markdown", which Gruber then refused to endorse. Eventually, after a series of blog posts and some back and forth, they renamed the project "CommonMark", which it is still called today.
I am not sure (of course), but I think Atwood simply thought standardizing this format was so obviously valuable that he didn't consider Gruber might not want to work with him. In retrospect it's kind of nice that it didn't happen; it really keeps everyone incentivized to keep the format simple.
The linked post contains three cases of Markdown syntax (underscores) leaking into the text, where actual italics were likely intended. This is the most basic Markdown syntax element failing to work. The problem CommonMark is trying to solve is not adding new features (the only one they added to Gruber Markdown is fenced code blocks), but rather specifying how to interpret edge cases to ensure the same Markdown code produces the same HTML everywhere.
I understand the goal of the spec. In my experience, once a spec document gets adopted widely enough, there's a strong incentive to add new features to it, which renderers are then compelled to implement. Before you know it, MD is a complicated spec that doesn't serve its original purpose.
In this case a few minor edge cases are really not a big deal compared to that (in my opinion).
I feel like no one serious uses the uncle Bob style of programming anymore (where each line is extracted into its own method). This was a thing for a while but anyone who's tried to fix bugs in a codebase like that knows exactly what this article is talking about. It's a constant frustration of pressing the "go to definition" key over and over, and going back and forth between separate pieces that run in sequence.
I don't know how that book ever got as big as it did; all you have to do is try it to know that it's very annoying and does not help readability at all.
and one which I highly recommend and which markedly improved my code --- the other book made me question my boss's competence when it showed up on his desk, but then it was placed under a monitor as a riser which reflected his opinion of it....
That entire conversation on comments is just wildly insane. Uncle Bob outright admits that he couldn't understand the code he had written when he looked back on it for the discussion, which should be an automatic failure. But he tries to justify the failure as merely the algorithm being sooooo complex there's no way it can be done simply. (Which, compared to the numerics routines I've been staring at, no, this is among the easiest kinds of algorithm to understand.)
The whole thing is really uncomfortable; it's as if, after attempting the sudoku solver, Ron Jeffries sat down for a discussion with Peter Norvig, who was not especially diplomatic about the outcome of the experiment. The section before that, where they're talking about the "decomposition" of Knuth's simple primality tester, is brutal. "Looping over odd numbers is one concern; determining primality is another".
That's a really interesting read. I found myself closer to John on the small-methods part, but closer to UB on the TDD part, even if in both cases I was somewhere in between.
At the very least, you convinced me to add John's book to my ever-growing reading list.
It's like with goto. Goto is useful and readable in quite a few situations, but people will write arrow-like if/else trees with 8 levels of indentation just to avoid it because someone somewhere said goto is evil.
Funny how my Python code doesn't have those arrow issues. In C code, I understand some standard idioms, but I haven't really ever seen a goto I liked. (Those few people who are trying to outsmart the compiler would make a better impression on me by just showing the assembly.)
IMX, people mainly defend goto in C because of memory management and other forms of resource-acquisition/cleanup problems. But really it comes across to me that they just don't want to pay more function-call overhead (risk the compiler not inlining things). Otherwise you can easily have patterns like:
int get_resources_and_do_thing() {
    RESOURCE_A* a = acquire_a();
    int result = a ? get_other_resource_and_do_thing(a) : -1;
    cleanup_a(a);
    return result;
}

int get_other_resource_and_do_thing(RESOURCE_A* a) {
    RESOURCE_B* b = acquire_b();
    int result = b ? do_thing_with(a, b) : -2;
    cleanup_b(b);
    return result;
}
(I prefer for handling NULL to be the cleanup function's responsibility, as with `free()`.)
Maybe sometimes you'd inline the two acquisitions; since all the business logic is elsewhere (in `do_thing_with`), the cleanup stuff is simple enough that you don't really benefit from using `goto` to express it.
In the really interesting cases, `do_thing_with` could be a passed-in function pointer:
int get_resources_and_do(int (*thing_to_do)(RESOURCE_A*, RESOURCE_B*)) {
    RESOURCE_A* a;
    RESOURCE_B* b;
    int result;
    a = acquire_a();
    if (!a) return -1;
    b = acquire_b();
    if (!b) { cleanup_a(a); return -2; }
    result = thing_to_do(a, b);
    cleanup_b(b); cleanup_a(a);
    return result;
}
And then you only write that pattern once for all the functions that need the resources.
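For example (made-up names), a call site then stays trivial:

/* hypothetical callback: all of the business logic lives here */
int copy_a_into_b(RESOURCE_A* a, RESOURCE_B* b) {
    /* ... actual work with a and b ... */
    return 0;
}

int use_it(void) {
    /* 0 on success, -1/-2 if acquisition failed */
    return get_resources_and_do(copy_a_into_b);
}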
Of course, this is a contrived example, but the common uses I've seen do seem to be fairly similar. Yeah, people sometimes don't like this kind of pattern because `cleanup_a` appears twice, so don't go crazy with it. But I really think that `result = -2; goto a_cleanup;` (and introducing that label) is not better than `cleanup_a(a); return -2;`. Only at three or four steps of resource acquisition does that really save any effort, and that's a code smell anyway.
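For comparison, the goto-based version of that same wrapper would look roughly like this (a sketch with the same hypothetical names, not anyone's canonical style):

int get_resources_and_do_goto(int (*thing_to_do)(RESOURCE_A*, RESOURCE_B*)) {
    RESOURCE_A* a;
    RESOURCE_B* b;
    int result;
    a = acquire_a();
    if (!a) { result = -1; goto done; }
    b = acquire_b();
    if (!b) { result = -2; goto a_cleanup; }
    result = thing_to_do(a, b);
    cleanup_b(b);
a_cleanup:
    cleanup_a(a);
done:
    return result;
}

It isn't any shorter, it adds a label per resource, and you have to chase jump targets to see which path cleans up what.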
(And, of course, in C++ you get all the nice RAII idioms instead.)
this rings alarm bells for me: reading it, I worry that a cleanup_c(c) has maybe been forgotten somewhere, since the happy and unhappy paths clean up different numbers of things.
i imagine your python code escapes the giant tree by using exceptions though? that skips it by renaming and restructuring the goto, rather than leaving out the ability to jump to a common error handling spot
> this rings alarm bells for me: reading it, I worry that a cleanup_c(c) has maybe been forgotten somewhere, since the happy and unhappy paths clean up different numbers of things.
The exact point of moving the main work into a separate function is so that you can see all the paths right there. Of course there is no `c` to worry about; the wrapper is so short that it doesn't have room for that to have happened.
The Python code doesn't have to deal with stuff like this because it has higher-level constructs like context managers, and because there's garbage collection.
def get_resources_and_do(action):
    with get_a() as a, get_b() as b:
        action(a, b)
You're assuming function calls or other constructs are more readable and better programming. I don't agree. Having a clear clean-up or common return block is a good readable pattern that puts all the logic right there in one place.
Jumping out of a loop with a goto is also more readable than what Python has to offer. Refactoring things into functions just because you need to control the flow of the program is an anti-pattern. Those functions add indirection and might never be reused. Why would you do that even if it were free performance-wise?
This is why new low-level languages offer alternatives to goto (defer, labelled break/continue, labelled switch/case) that cover most of the use cases.
Imo it's debatable whether those are better and more readable than goto. Defer might be. Labelled break probably isn't, although it doesn't matter that much.
Python, meanwhile, offers you more indirection, exceptions (wtf?), or flags (inefficient unrolling and additional noise instead of just `goto ITEM_FOUND` or something).
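To make the flag complaint concrete, here's a toy sketch in C (names and sizes made up): finding an item in a grid with a goto, versus the same thing with a found flag.

#define ROWS 9
#define COLS 9

/* goto version: jump straight to the handling code once the item is found */
int find_item(int grid[ROWS][COLS], int wanted) {
    int r, c;
    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++)
            if (grid[r][c] == wanted)
                goto item_found;
    return -1;                /* not found */
item_found:
    return r * COLS + c;      /* (r, c) still point at the item */
}

/* flag version: the same logic with the extra unrolling and noise */
int find_item_flag(int grid[ROWS][COLS], int wanted) {
    int r, c, found = 0, found_r = 0, found_c = 0;
    for (r = 0; r < ROWS && !found; r++)
        for (c = 0; c < COLS && !found; c++)
            if (grid[r][c] == wanted) {
                found = 1;
                found_r = r;
                found_c = c;
            }
    return found ? found_r * COLS + found_c : -1;
}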
A colleague recently added a linter rule against nested ternary statements. OK, I can see how those can be confusing, and there's probably a reason why that rule is an option.
Then replaced a pretty simple one with an anonymous immediately invoked function that contained a switch statement with a return for each case.
I guess "anonymous IIFE" is the part that bothers you. If someone is nesting ternary expressions in order to distinguish three or more cases, I think the switch is generally going to be clearer. Writing `foo = ...` in each case, while it might seem redundant, is not really any worse than writing `return ...` in each case, sure. But I might very well use an explicit, separately written function if there's something obvious to call it. Just for the separation of concerns: working through the cases vs. doing something with the result of the case logic.
It just looked way more complex (and it's easy to miss the () at the end of the whole expression that makes it II). And the point of the rule was to make code more readable.
Basically it's a shame that Typescript doesn't have a switch-style construct that is an expression.
And that nowadays you can't make nested ternaries look obvious with formatting because automated formatters (that are great) undo it.
One of the most infuriating categories of engineers to work with is the one who's always citing books in code review. It's effectively effort amplification as a defense mechanism: now instead of having a discussion with you, I have to go read a book first. No thanks.
I do not give a shit that this practice is in a book written by some well respected whoever, if you can't explain why you think it applies here then I'm not going to approve your PR.
Yeah, and all of these philosophies turn terrible when you take them to their limit. The ideas are always good in principle and built on a nugget of truth; it's when people take them as gospel that I have a problem. If they just read the book and drew inspiration for alternative, possibly better, coding styles and could argue their case, that would be unequivocally good.
> I feel like no one serious uses the uncle Bob style of programming anymore (where each line is extracted into its own method)
Alas, there's a lot of Go people who enjoy that kind of thing (flashback to when I was looking at an interface calling an interface calling an interface calling an interface through 8 files ... which ended up in basically "set this cipher key" and y'know, it could just have been at the top.)
Hardcore proponents of this style often incant 'DRY' and talk about reuse, but in most cases, this reuse seems to be much more made available in principle than found useful in practice.
There's also the "it makes testing easier because you can just swap in another interface and you don't need mocks" argument - sure but half of the stuff I find like this doesn't even have tests and you still tend to need mocks for a whole bunch of other cases anyway.
I wonder how hard it would be to build an IDE "lens" extension that would automatically show you a recursively inlined version of the function you're hovering over when feasible and e.g. shorter than 20 lines.
As John Carmack said: "if a lot of operations are supposed to happen in a sequential fashion, their code should follow sequentially" (https://cbarrete.com/carmack.html).
A single method with a few lines is easy to read, like the processor reading a single cache line, while having to jump around between methods is distracting and slow, like the processor having to read various RAM locations.
Depending on the language you can also have very good reasons to have many lines; for example, in Java a method can't return multiple primitive values, so if you want to stick to primitives for performance you inline it and use curly braces to limit the scope of its internals.
It's an extremism to get a strong reaction, but the takeaway is that you should aim when possible to make the code understandable without comments, and that a good programmer can make code more understandable than a newbie with comments.
But of course understandable code with comments simply has much more bandwidth of expression, so it gets the best of both worlds.
I see writing commentless code like practicing piano with only your left hand: it's showing off, and you can get fascinatingly close to the original piece (see Godowsky's Chopin adaptations for the left hand), but of course when you're done showing off, you'll play with both hands.
Great, that's exactly how I feel with any style that demands "each class in its own file" or "each function in its own file" or whatever. I'd rather have everything I need in front of my eyes as much as possible, rather than have it all over the place just to conform with an arbitrary requirement.
I said this at a company I worked at and got made fun of because "it's so much more organized". My takeaway is that the average person has zero ability to think critically.
If those demands made any sense they would be enforced by the languages themselves. It's mostly a way of claiming to be productive by renaming constants and moving code around.
I wonder how hard it would be to have an IDE extension that would automatically show you a recursively inlined version of the function you're hovering over when feasible and e.g. shorter than 20 lines.
I can assure you that I am very serious and I do cut things up almost as finely as Uncle Bob suggests. Where others balk at the suggestion that a function or method should never expand past 20 or so lines, I struggle to imagine a way I could ever justify having something that long in my own code.
But I definitely don't go about it the same way. Mr. Martin honestly just doesn't seem very good at implementing his ideas and getting the benefits that he anticipates from them. I think the core of this is that he doesn't appreciate how complex it is to create a class, at all, in the first place. (Especially when it doesn't model anything coherent or intuitive. But as Jeffries' Sudoku experience shows, also when it mistakenly models an object from the problem domain that is not especially relevant to the solution domain.)
The bit about parameters is also nonsense; pulling state from an implicit this-object is clearly worse than having it explicitly passed in, and is only pretending to have reduced dependencies. Similarly, in terms of cleanliness, mutating the this-object's state is worse than mutating a parameter, which of course is worse than returning a value. It's the sort of thing that you do as a concession to optimization, in languages (like, not Haskell family) where you pay a steep cost for repeatedly creating similar objects that have a lot of state information in common but can't actually share it.
As for single-line functions, I've found that usually it's better to inline them on a separate line, and name the result. The name for that value is about as... valuable as a function name would be. But there are always exceptions, I feel.
The Ruby ecosystem was particularly bad about "DRY"(vs WET) and indirection back in the day.
Things were pretty dire until Sandi Metz introduced Ruby developers to the rest of the programming world with "Practical Object-Oriented Design". I think that helped start a movement away from "clever", "artisanal", and "elegant" and towards more practicality that favors the future programmer.
Does anyone remember debugging Ruby code where lines in stack traces don't exist because the code was dynamically generated at run time to reduce boilerplate? Pepperidge Farm remembers.
Haskell (and OCaml too, I suppose) are outliers though, as one is supposed to have small functions for each single case. It's also super easy to find them, and haskell-language-server can even suggest which functions you want based on the signatures you have.
But in other languages I agree - it's an abomination and actually hurts developers with lower working memory (e.g. neuroatypical ones).
It’s because maths is the ultimate abstraction. It’s timeless, and its corner cases are (almost) fully understood. OK, maybe not, but at least relative to whatever JavaScript developers are reinventing for the thousandth time.
> where each line is extracted into its own method
Never heard of "that style of programming" before, and I certainly know that Uncle Bob never advised people to break down their programs so each line has its own method/function. Are you perhaps mixing this up with someone else?
> Even a switch statement with only two cases is larger than I'd like a single block or function to be.
His advice that follows, to leverage polymorphism to avoid switch statements, isn't bad per se, but his reasoning, that 6 lines is too long, was a reflection of his desire to get every function as short as possible.
In his own words (page 34):
> [functions] should be small. They should be smaller than that. That is not an assertion I can justify.
He then advocates for functions to be 2-3 lines each.
> to leverage polymorphism to avoid switch statements [...] was a reflection of his desire to get every function as short as possible.
That's both true, but a long way away from "every line should have its own method". But I guess the parent exaggerated for effect and I misunderstood them; I took it literally when I shouldn't have.
People have forgotten this, but he did the same with Windows Phone for a while at the very start of his time as CEO. His motto was "mobile first, cloud first", where mobile meant Windows Phone and cloud meant Azure. After some time he gave up and they pivoted into the direction he is now well known for, which was to focus on good developer tooling regardless of OS.
GitHub and VSCode were smart ways to quickly recapture developer mindshare. They felt distinctly un-Microsoft with how open and multiplatform they were.
The Azure Linux friendliness play was essential and smart. Again, Microsoft felt like they were opening up to the world.
But they've backslidden. They've ceded Windows and gaming to their cloud and AI infra ambitions. They're not being friendly anymore.
Microsoft spent a lot of energy making Windows more consumer friendly, only to piss it away with Windows 11.
One evil thing they were doing that they've suddenly given up on: they spent a ton of money buying up gaming studios (highly anti-competitively) to win on the console front and to stymie Steam's ability to move off Windows. They wanted to make Windows/Xbox gaming the place everyone would be. They threw all of that away because AI became a bigger target.
They'll continue to win in enterprise, but they're losing consumer, gamer, and developer/IC support and mindshare. I've never seen so many people bitch about GitHub as in the last year. You'd swear it had become worse than Windows 7 at this point.
>One evil thing they were doing that they've suddenly given up on: they spent a ton of money buying up gaming studios (highly anti-competitively) to win on the console front and to stymie Steam's ability to move off Windows. They wanted to make Windows/Xbox gaming the place everyone would be. They threw all of that away because AI became a bigger target.
No kidding, they totally threw it all away. It used to be that Windows was already the place for gaming. And the Xbox 360 arguably won its generation. But that was a long time ago. Has any Microsoft gaming release exceeded expectations lately? Call of Duty will always sell like hotcakes, but the latest Black Ops is a hot, expensive mess that underperformed last year's title.
Maybe it won some battles in your part of the world, presumably North America. But the PS3 outsold it globally as its contemporary, and even the PS5 passed the 360 in global lifetime sales as of November 2025: https://www.vgchartz.com/article/466599/ps5-outsells-xbox-36...
Microsoft seems to have decided that they can't make all that much money with gaming. But they are underestimating the mindshare they are losing with that.
Do you think they'll continue to win in enterprise? As a casual office user who's had to do some PowerPoint and Word docs recently, I found the experience of using Office 365 truly miserable. All of them are laggy and horrible to use.
I think by moving onto the cloud they've left themselves open to being disrupted, and when it comes it'll be like Lotus Notes, an extremely quick downfall.
They have enterprise users locked in mainly due to Active Directory, for which there is no good replacement, and to some extent SharePoint. There's also Office, of course, and you are right that the migration to web tech isn't well received. I'm thinking of "New Outlook" in particular. They probably plan to EOL classic Outlook when Office 2024 EOLs in 2029. The last stronghold will be Excel. If native Excel ever gets discontinued, then everything Microsoft will have been webshittified™.
Trust me, I really want that to happen, but who has the billions to burn (and the will to use them at that) to build a solid alternative? Most probably, the EU will have a misguided shot at it, out of desperation with the USA, and will subsidise some inadequate local actors. I'm not sure whether it will be good, timely, or sufficient.
Microsoft has never been an end-user-focused company. Almost every successful product they've ever made was to sell to a business for their employees to use. Everything else they seem to either half ass or screw up or lose their passion for at some point.
I think I first came to that realization with Windows Phone 7/8? The UI was cool looking, but functionality was half-baked and third-party app availability was dismal. HOWEVER! You could sign a Windows Phone into an Active Directory/365 account and manage the bloody daylights out of it via group policy, and the tools to do that were SUPER WELL MADE.
Same is/was true of Microsoft Teams - an utter abomination of a chat client, the search is garbage, the emoji and sticker variety sometimes weird, the client itself randomly uses up 100% CPU for no reason and is just generally buggy... but gosh darnit, MS made sure sysadmins could ban memes and use of certain emoji via policy and gave insane amounts of detail to auditing and record keeping. So sure it's a pile of shit to use, but awesome if you wanna spy on your employees and restrict their every move.
Windows is fun because with the enterprise version, they give all that control to the employers, but with the consumer version they give all that control to advertisers, developers, and themselves.
I think this is also why every consumer-focused product they make either fails instantly, or ends up rotting on the vine and failing after whoever evangelized that product leaves the company (possibly being forced out for not being a "culture fit"). Do I have to go on about zune/windows phone/xbox? Or surface? Or the way they randomly dumped their peripherals product line on another company? lol.
I believe Microsoft's biggest achievement is being able to stay relevant for the past 50 years, largely due to enterprise.
If you take a close look as a user, all their products are half-baked in some way (inconsistent behaviors, dark patterns, poor support, etc.), but good enough that they can lock you in and hold your data hostage over time.
> You'd swear it had become worse than Windows 7 at this point.
Do you mean Windows Vista instead? Because Windows 7 was probably the last (half-)decent Windows (no tablet UI though, but no ads in the OS, no ubiquitous telemetry, no account BS).
Yeah, my mistake. I spent the post-XP era on Linux, especially Ubuntu.
I've been using all three major OS families recently and I'm not enjoying my time on Windows. It's so full of ads, and the Linux / Unix bits feel bolted on.
> But they've backslidden. They've ceded Windows and gaming to their cloud and AI infra ambitions. They're not being friendly anymore.
Forget being “friendly”. GitHub has enormous mindshare and frankly quite reasonable pricing (far cheaper than GitLab, for example), but the product just sucks lately. The website, while quite capable (impressively so at times), is so slow and buggy that it’s hard to benefit from any of its capabilities.
It’s gotten to the point where, every time I try a newish capability, I ask myself “how bad can this possibly be,” and it invariably exceeds expectations.
GitHub needs to take a step back and focus on fixing things. Existing features should work, be coherent, and be fast. If it takes longer to load a diff in the web viewer than it takes to pull the entire branch and view the diff locally, something is wrong.
If a coworker reviews my code, I should not be sitting right next to them, literally looking at the same website they’re on, and wondering why they see the correct context for their review comment but I don’t.
I installed Linux (an arch-based distro) last month. There have been some minor issues but nothing worse than what I experienced regularly on Windows recently.
My computer feels fast again and when things randomly break I can at least get to the root cause and fix it myself.
I used to quite like Windows, but it has gotten worse every patch day for years now. The pain of learning a new system is not so bad and at least I own my computer now.
I had been a Windows user since Windows 3.1. More than 3 decades straight. After a few years of working with Linux, I installed Debian on my home PC about a year ago and couldn't be happier since.
I briefly test-drove Windows 2, but have been a solid Windows user since 3.1 too.
I have been forced to use Windows 11 on a succession of work PCs, but I stayed on 10 at home due to the lack of a movable taskbar and the terrible right-click menu in 11.
When Microsoft started pushing hard against remaining on 10 this year, I made the switch - to MacOS. It was an easy decision, since I was finally able to get a MacBook for work, too, so no context-switching required. I run a copy of Win11 in a VM for apps that need it, but find that I rarely have to spin it up.
As a product manager, I cannot imagine the decision-making behind building a product update so shitty that you drive away 35-year customers.
I've been trying out different distros, but I'm still using Windows 10 LTSC as my main OS. I've got 2 additional partitions containing Pop!_OS with COSMIC and Fedora KDE that I've narrowed it down to, but both need just a little more bugfixing to become perfect for me. LTSC is still supported for a while, but if my computer stopped working, I feel like macOS would be a no-brainer for most people.
A while back (Win XP?), I got frustrated with Windows and installed Linux on my dev machine instead. But I still had to run Windows, so I installed VMWare on Linux on that machine and ran Windows in a VM. For whatever reason, Windows was noticeably faster in the VM than running on bare metal. Super bizarre OS.
I think part of it is that creating a process is cheap in Unix-style OSs, and expensive in Windows. Windows doesn't want you to exec gcc, cpp, ld, etc. over and over again. It wants you to run an IDE that does it all in one process.
Visual Studio creates worker threads used for compiling C# that are reused between builds. This can be seen in Task Manager. I ran into a bug where the VS compiler would only work properly for one build and required VS to be restarted.
Visual Studio is like Windows: each version just seems to be getting worse, with more bugs.
Visual Studio invokes just one process (msbuild) for almost all of the compilation, perhaps half a dozen processes at most. Certainly not hundreds to thousands, as typically seen under Linux.
I was a Windows Insider user for a long time... When I was bumped to Windows 11 it borked (I didn't have TPM enabled) and I had to do a full re-install... A few months later, I saw an ad in the start menu search results... that was it. I switched my primary drive over to my Linux install and largely haven't looked back.
Still on Windows for work, but would happily swap. I also use an M1 Air for my personal laptop, but that is probably my last Apple hardware.
I was a fan, user, then developer from the DOS days (pre-Windows 3.0) to Windows 10 without a single gap.
When they threatened Windows 10 EOL last year (?), that's when I took a day to do a clean install of Mint and port my games and LLM tinkering over.
Because I knew MS was doubling-down on the user-hostile experience.
I thought I’d miss Windows but Steam, Wine, and Radeon made it delightful.
Windows is now only on my company-issued laptop. I predict that will also go away, as Windows 11 has introduced backdoors to circumvent company controls and install their BS.
Are you saying that any Linux install you've tried in let's say, the past decade, has actually failed for you? I've not seen that and I've put it on many dissimilar machines with success. I use Ubuntu, and now Kubuntu, perhaps you could name the distro that gave you issues?
Ehh, nothing so strong as "failed". For example in Cinnamon I will occasionally install an app that doesn't have a tray icon. Or if I install an app using a chromium based browser, it doesn't have an icon associated with it. So then I tell claude to fix it. It goes out to the internet and finds a suitable icon and will set it up for me.
Or trying to get Steam to work, which is wildly better than it used to be thanks to proton, but still not quite a perfect experience. For example there's a menu compatibility setting you have to enable for some menus to work, and other menus don't work when you have hover-click enabled in the accessibility settings of Cinnamon. Those weren't fixed by Claude CLI like the icons example, but definitely identified through chats with Claude.
The only "fail" states I get into are when I'm doing homelab power user stuff, setting up ownCloud, configuring Caddy, proxmox, etc. I don't blame Linux for that though.
All in all, I would say Linux is absolutely in a state where I would install it on my parents' computer without the fear I would've had in perhaps 2010.
I highly doubt this. As someone who is pretty active in a lot of beginner Linux communities, it's becoming the case that a lot of issues are caused by users following LLM instructions and creating issues where there were none.
For example, someone will want to configure something and the LLM will give them advice for the wrong distro that's 5 years out of date. If they asked a person or looked on the forums they'd have got what they wanted in a few minutes. Instead they go down a rabbit hole where an LLM feeds them worse and worse advice trying to fix the mountain of issues it's building up.
Tried Linux around 5 years ago - hit many issues, had to learn various commands.
Tried again a few months ago and used various LLMs to configure everything well, troubleshoot, etc.
E.g. when waking from standby and your mouse isn't working, do you want to troubleshoot and learn various commands over an hour, or ask an LLM and fix it within a few minutes?
When creating an on-demand voice-to-text app for Linux, do I learn various commands and dependencies etc. that may take one or many days, or use an LLM to make it within 30 min?
> If they asked a person or looked on the forums they'd have got what they wanted in a few minutes.
Not minutes. In the best-case scenario it is hours; in the worst, it is years to infinity.
You are also not taking into account survivorship bias. You only see the people who couldn't fix their system with AI and need further help. But you are not seeing the huge number of beginners (recently a big influx of those) who were successful in fixing problems using AI the vast majority of the time.
Nowadays, there is no good reason to use a simple search engine to find solutions by manually browsing all the possible links. Just ask Phind/Perplexity/others to explain the problem, give the solution, and provide verifiable links one can check to validate.
I search all the time. If I trusted the AI response I'd be coming away with the wrong answer more often than not. Why would that be a better outcome than spending an extra 30s to get reliable information? Perplexity IS a simple search engine. It's far simpler than Google's engine.
It definitely depends but it's useful for me. In general I find AI pretty useful when you can do a guided search in which you are personally able to discard bad paths quickly before they start polluting the context too much. I have pretty beginner linux skills but I'm quite technical overall and have a decent BS detector, so it's been useful for me.
The quality of Office is very rapidly declining; it seems that the entire team has moved to forcing AI into every feature instead of fixing any issues. The web version is barely usable (esp compared to Google's versions) and the desktop is quickly getting worse seemingly every day.
I have not used Azure for a few years now; back when I did use it, it seemed pretty good.
That applies to all teams, not only Office; even Aspire now has AI on the dashboard, and they proudly made use of AI building the new Aspire CLI experience.
A lot of comments here kind of miss the point, but that's to be expected because you can only really get it when you have the experience. Like hearing a description of a painting will not give you the same emotion as looking at it yourself.
Zig has completely changed the way I program (even outside of it). A lot of the goals and heuristics I used to have while writing code have completely changed. It's like seeing programming itself in a new way.
If managers are pushing a clearly not-working tool, it makes perfect sense for workers to complain about this and share their experiences. This has nothing to do with the future. No one knows for sure if the models will improve or not. But they are not as advertised today and this is what people are reacting to.
We are running mapserver in production in the cloud (AWS lambda) to visualize lots of different data using WMS. We're also doing lots of processing using GDAL in the cloud as well. Compared to ESRI it's amazingly cost effective even considering Amazon's high prices.
nice. If you aren't already familiar, you might be interested in this platform for Dutch geospatial data: https://github.com/PDOK . They use mapserver on the cloud at massive scale, and all of their infra is open.
By definition they are social apps, so it's not usually up to just individuals whether to use them. For example, if I stopped using WhatsApp I'd cut myself off from the majority of my friends and family.
This is probably not true. If it is, if your ties are so weak that they rely on an app, maybe it is ok to let them go and seek stronger social ties elsewhere.