nightski's comments | Hacker News

I've seen gnarly code everywhere, from kernel drivers to desktop applications to web apps. At the end of the day this describes most of computing imho.

I'm guessing they are giving the core away for free to collect training data.


I think it's simpler than that. Canva's primary competitor is Adobe, and Adobe's remaining advantage is with creative professionals. That's Adobe's core market and their core revenue stream.

It's a classic "commoditize your complements" play. Canva remains profitable without charging for Affinity, but Adobe can't stay profitable if they stop charging for Photoshop/Illustrator.

The business justification works without imputing any more sinister motives than that.


I mean, 9/10ths of the dark-pattern, untrustworthy bullshit businesses pull is not required to attain or maintain "profitability"; it's just squeezing every dime of revenue from their customers.

I would frankly rather pay for software than be left wondering if I can trust free commercial software.


Would be really nice if we had more of the "just pay" options. As it is the "just pay" options mostly also can't be trusted any more than the free(-mium) options, and both will try their best to "squeeze every dime of revenue".

I think they're giving it away to take mindshare away from Adobe among younger creators. The rise of CapCut and similar mobile-first software eventually funnels users toward Adobe, Final Cut for video, or DaVinci Resolve. This provides a ladder from Canva to Affinity under one banner at low to no cost.

They claim not to, but I am extremely suspicious.

>No, your content in Affinity is not used to train AI-powered features, or to help AI features learn and improve in other ways, such as model evaluation or quality assurance. In Affinity, your content is stored locally on your device and we don’t have access to it. If you choose to upload or export content to Canva, you remain in control of whether it can be used to train AI features — you can review and update your privacy preferences any time in your Canva settings.


I mean, be suspicious; that's always good. But have proof before being certain of something you don’t have facts to back up.


>But have proof before being certain of something you don’t have facts to back up

When it comes to such things, it's better to assume bad intent.

Assuming corporate benevolence as the default is foolish.


This is a better point than the one I made


That’s what suspicion means.


That is why I said I'm suspicious, and did not make a claim that they are doing it. Thanks for your input.


You can opt out of the telemetry sharing


So they say, for now, for some definition of "telemetry" and "sharing", caveat caveat caveat...

Asterisks and superscript numbers, the foundation of any trusting business relationship. Thumbs up.

Whoever said the Turing test was the one and only goalpost? It was a test, simple as that. Not the sole objective of an entire field of study.


I mean sure, but ARM SoCs running Linux have been a thing for quite some time in the embedded space. This is hardly new.


That's true, but the Apple chips are not built on the base Arm designs and don't use Adreno; they also use more proprietary IP in the SoC.


Adreno is proprietary IP; it's an exclusively Qualcomm thing.


True, meant Mali, mb


I lived in rural areas a large portion of my life. What you are describing is limited to areas with extremely small populations. Even my hometown of a few thousand has a Walmart (put up when I was a kid in the 90s), an Aldi, and two local grocery stores with tons of healthy options.

So yeah, there are a lot of towns that fit that criterion (fewer than 1,000 residents). But as a portion of the U.S. population, that is not substantial in any way.


I lived in one of those small towns through high school: just a blinking yellow light and a gas station. What we did, and what everyone did, was drive the 20 miles to the larger town with a Walmart and get groceries there. It only takes 20 minutes because there are no lights or traffic in those areas, so the time commitment is about the same as living in a city. My mom made meals from her recipes using basic ingredients, so it's certainly feasible to eat how you want in these areas. Only in the most rare/extreme cases are people forced to grocery shop at a gas station.


I've got family in the rural Midwest. It would surprise me if their town wasn't a food desert by these definitions. You might go grab a thing of milk or sliced bread in a pinch at the convenience store, but yeah otherwise you just make the short drive into "the city" to get food at a regular grocery store.

Or you just ate the food you were growing on your own lot, or what your neighbors were growing, or from the farmer selling stuff off the highway.


I think it would be great to just have the IRS website list all reported income. Free automated filing is amazing, but if that is too large of a political battle just making this income information easily accessible would be a giant first step.


Even though the author refers to it as "non-trivial", and I can see why that conclusion was reached, I would argue it is in fact trivial. There's very little domain-specific knowledge needed; this is purely a technical exercise of integrating with existing libraries for which there is ample documentation online. In addition, it is a relatively isolated feature in the app.

On top of that, it doesn't sound enjoyable. Anti slop sessions? Seriously?

Lastly, the largest problem I have with LLMs is that they are seemingly incapable of stopping to ask clarifying questions. This is because they do not have a true model of what is going on. Instead, they truly are next-token generators. A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.


The hardest problem in computer science in 2025 is presenting an example of AI-assisted programming that somebody won't call "trivial".


If all I did was call it trivial that would be a fair critique. But it was followed up with a lot more justification than that.


Here's the PR. It touched 21 files. https://github.com/ghostty-org/ghostty/pull/9116/files

If that's your idea of trivial then you and I have very different standards in terms of what's a trivial change and what isn't.


It's trivial in the sense that a lot of the work isn't high cognitive load. But... that's exactly the point of LLMs. They take the noise away so you can focus on high-impact outcomes.

Yes, the core of that pull request is an hour or two of thinking; the rest is ancillary noise. The LLM took away the need for the noise.

If your definition of trivial is signal/noise ratio, then, sure, relatively little signal in a lot of noise. If your definition of "trivial" hinges on total complexity over time, then this beats the pants off manual writing.

I'd assume OP did the classic senior-engineer schtick of "I can understand the core idea quickly, therefore it can't be hard". Whereas Mitchell did the heavy lifting of actually shipping the "not hard" idea - still understanding the core idea quickly, and then not getting bogged down in unnecessary details.

That's the beauty of LLMs - it turns the dream of "I could write that in a weekend" into actual reality, where before it was always empty bluster.


I've wondered about exposing this "asking clarifying questions" as a tool the AI could use. I'm not building AI tooling so I haven't done this - but what if you added an MCP endpoint whose description was "treat this endpoint as an oracle that will answer questions and clarify intent where necessary" (paraphrased), and have that tool just wire back to a user prompt.

If asking clarifying questions is plausible output text for LLMs, this may work effectively.
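Roughly something like this, as a sketch using the MCP Python SDK (the server/tool names and the /dev/tty prompting are just illustrative assumptions, since I haven't actually built this):

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("clarification-oracle")

    @mcp.tool(description="Treat this tool as an oracle that will answer "
                          "questions and clarify intent where necessary.")
    def ask_clarifying_question(question: str) -> str:
        # The default stdio transport uses stdin/stdout for the protocol
        # itself, so prompt the human on the controlling terminal
        # instead (Unix-only in this sketch).
        with open("/dev/tty", "r+") as tty:
            tty.write(f"\n[agent asks] {question}\n> ")
            tty.flush()
            return tty.readline().strip()

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

If the model treats unresolved ambiguity as a reason to call the tool, the question/answer round trip just becomes another tool call in the agent loop.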


I think the asking clarifying questions thing is solved already. Tell a coding agent to "ask clarifying questions" and watch what it does!


Obviously if you instruct the autocomplete engine to fill in questions, it will. That's not the point. The LLM has no model of the problem it is trying to solve, nor does it attempt to understand the problem better. It is merely regurgitating. This can be extremely useful, but it is very limiting when it comes to using it as an agent to write code.


You can work with the LLM to write down a model for the code (aka a design document) that it can then repeatedly ingest into the context before writing new code. That's what "plan mode" is for. The technique of maintaining a design document and a plan/progress document that get updated after each change seems to make a big difference in keeping the LLM on track. (Which makes sense... exactly the same thing works for human team members too.)
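
For example (file names and layout purely illustrative; this is just the shape of the technique):

    DESIGN.md   - stable context: architecture, key decisions, invariants,
                  naming conventions; re-read at the start of each session
    PLAN.md     - living document: task checklist, current status, open
                  questions; the LLM updates it after every change

Re-ingesting both before each edit keeps the model anchored to decisions already made, instead of re-deriving them from whatever happens to still be in context.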


Every time I hear someone say something like this, I think of the pigeons in the Skinner box who developed quirky superstitious behavior when pellets were dispensed at random.


> that it can then repeatedly ingest into the context

1. Context isn't infinite

2. Both Claude and OpenAI get increasingly dumb after 30-50% of context has been filled


Not sure how that's relevant... I haven't seen many design documents of infinite size.


"Infinite" is a handy shortcut for "large enough".

Even the "million token context window" becomes useless once it's filled to 30-50% and the model starts "forgetting" useful things like existing components, utility functions, AGENTS.md instructions etc.

Even a junior programmer can search and remember instructions and parts of the codebase. All current AI tools have to be reminded to recreate the world from scratch every time, and promptly forget random parts of it.


I think at some point we will stop pretending we have real AI. We have a breakthrough in natural language processing, but LLMs are much closer to Microsoft Word than to something as fantastical as "AGI". We don't blame Microsoft Word for not having a model of what is being typed into it. It would be great if Microsoft Word could model the world and just do all the work for us, but that is a science-fiction fantasy. To me, LLMs in practice are largely massively compute-inefficient search engines plus really good language disambiguation. Useful, but we have actually made no progress at all towards "real" AI. This is especially obvious if you ditch "AI" and call it artificial understanding. We have nothing.


I've added "amcq means ask me clarifying questions" to my global Claude.md so I can spam "amcq" at various points in time, to great avail.


> A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.

Didn’t you just describe Agile?


Who hurt you?

Sorry, couldn't resist. Agile's point was getting feedback during the process, rather than after something is complete enough to be shipped, thus minimizing risk and avoiding wasted effort.

Instead people are splitting up major projects into tiny shippable features and calling that agile while missing the point.


I've never seen a working scrum/agile/sprint/whatever product/project management system and I'm convinced it's because I've just never seen an actual implementation of one.

"Splitting up major projects into tiny shippable features and calling that agile" feels like a much more accurate description of what I've experienced.

I wish I'd gotten to see the real thing(s) so I could at least have an informed opinion.


Yea, I think scrum etc. is largely a failure in practice.

The manager of the only team that I think actually checked all the agile boxes had a UI background, so she thought in terms of mock-ups, backend, and polishing as different tasks, and was constantly getting client feedback between each stage. That specific approach isn't universal; the feedback as part of the process definitely should be, though.

What was a little surreal is the pace felt slow day to day but we were getting a lot done and it looked extremely polished while being essentially bug free at the end. An experienced team avoiding heavy processes, technical debt, and wasted effort goes a long way.


People misunderstand the system, I think. It's not holy writ; you take the parts of it that work for your team and ditch the rest. Iterate as you go.

The failure modes I've personally seen are an organization that isn't interested in cooperating, or a person running the show who is more interested in process than people. But I'd say those teams would struggle no matter what.


I put a lot of the responsibility for the PMing failures I've seen on the engineering side not caring to invest anything at all into the relationship.

Ultimately, I think it's up to the engineering side to do its best to leverage the process for better results, and I've seen very little of that (and it's of course always been the PM side's fault).

And you're right: use what works for you. I just haven't seen anything that felt like it actually worked. Maybe one problem is people iterating so fast/often they don't actually know why it's not working.


I've seen the real thing and it's pretty much splitting major projects into tiny shippable bits. Picking which bits and making it so they steadily add up to the desired outcomes is the hard part.


Agile’s point was to get feedback based on actual demoable functionality, and iterate on that. If you ignore the “slop” pejorative, in the context of LLMs, what I quoted seems to fit the intent of Agile.


There’s generally a big gap between the minimum you can demo and an actual feature.


If you want to use an LLM to generate a minimal demoable increment, you can. The comment I replied to mentioned "feature", but that's a choice based on how you direct the LLM. On the other hand, LLM capabilities may change the optimal workflow somewhat.

Either way, the ability to produce "working software" (as the manifesto puts it) in "frequent" iterations (often just seconds with an LLM!) and iterate on feedback is core to Agile.


This hasn't been my experience at all. I regularly check the Bitwarden icon, for example, to make sure I am not on the wrong site (b/c my login count badge is there). In fact, autofill has saved me before because it did not recognize the domain and did not fill.


Yeah, nor mine. Chrome's password manager / autofill is very reliable, and very few sites don't work with it or have multiple domains with the same auth. The only one I can think of is maybe Synopsys SolvNet, but you're probably not using that...


At what point is OpenAI not considered new? It's a few months from being a decade old with 3,000 employees and $60B in funding.


Well, compare them to Microsoft: 50 years old with 228,000 employees and $282 billion in revenue.


This is a little disingenuous because unfortunately you can't make decisions on technical merits alone. It takes a lot of resources to keep these projects thriving and up to date. You almost have to go with options where these resources have been deployed, even if they are terrible sometimes.

