
Classical professional tools like Photoshop have a lot less potential though. They are very precise, but (I assume) they have barely advanced in the past decade. Tools based on generative AI will probably improve massively over the next few years. Most such tools currently seem to be based on Stable Diffusion, and apparently OpenAI/Midjourney/Google have zero interest in supporting such tools. But this could change soon, e.g. when Adobe tries to compete with the SD ecosystem.

We already see deepfakes (e.g. of Trump or the Pope, recently even videos) that are far beyond what we saw in the years before, indicating that the old professional tools weren't so powerful after all. Now if we extrapolate this a few years into the future...



You assume wrong. What these tools offer is constantly churning, far faster than ever before, and Photoshop has been around for over 30 years. They release updates constantly. Photoshop got AI filters like detail enhancement for zooming a few years ago; automatic object detection, content-aware delete, etc. came a few years before that. That's only what I can recall off the top of my head for Photoshop alone, but it's such a giant environment that even most of their own product people probably couldn't tell you off the cuff. In areas like video compositing, tools like Nuke are shipping these capabilities even more quickly... and they'd better, when the cheap license costs $3500/yr.

As I mentioned in another comment, so much of this hype is based on developers assuming they understand something that they don't. I've indulged in this hubris as a developer but straddling both sides of this line has been illuminating.


Well, Adobe at least seems headed to fully embrace the generative AI hype now:

https://www.adobe.com/sensei/generative-ai/firefly.html

This all sounds very similar to the Stable Diffusion tool chain, though probably cloud-based and with a more intuitive UI on top.


All of these technologies are being integrated into professional toolkits in ways that make sense when they're polished enough to be professionally useful... and for the foreseeable future, that's how it will stay. Beyond the high-volume, low-effort work on places like Fiverr, commercial artists and designers are valuable for their ability to think conceptually and make the artistic decisions about what goes on the screen, where, and why. The how is an implementation detail. Designers dropped Balsamiq in favor of Sketch in no time flat, and then dropped Sketch in favor of Figma even more quickly. Adobe XD, capable and included for free in an ecosystem they already use, is barely in the conversation. These are fields where people readily adopt new technology that suits their needs, but the current tools aren't even in the ballpark.

Being able to quickly generate and iterate on assets is great for inspiration but pretty useless for professional output without fine-tuned, predictable, repeatable controls. These tools will simply integrate with existing professional tools until they can do it better.

Imagine the first person to make an electric saw instead made some automated machine that could cut the wood for a cool-looking flat-pack house somewhere in the neighborhood of your specifications in 5 minutes. The caveats: while it would assemble perfectly, the actual angles of the cuts might be unpredictable (like 40 and 50 degrees rather than 45 and 45), and the layout was never quite what you expected, even if it was OK more often than not. Pros knew those were fundamentally deal breakers for professional work, and remained more professionally useful with their hand saws because those had the required precision, control, and predictability. While enthusiasts were going crazy exploring all the different kinds of oh-so-slightly wonky structures they could generate and predicting the end of carpentry, the old-school saw companies started making circular saws, chop saws, drills, and the like. The market for handyman-built dog houses, sheds, and playhouses would immediately be lost to the automated machine, but I guarantee you that all consequential work would still be done by carpenters with power tools.


I agree that for the foreseeable future designers / commercial artists will still make the overall decisions about "what goes on the screen, where, and why", but no longer for all the smaller-scale details. Generative AI automates creativity at least to some extent, which is qualitatively different from past developments. The example with the island tortoise on the Adobe website is not a real demo, but it doesn't seem far away.



