Sort of. The problem with most integrated GPUs is that they have far fewer dedicated processing cores, and the RAM they share with the system is usually slower than the dedicated memory on a discrete graphics card. Also, with the exception of system-on-a-chip designs, traditional integrated graphics reserved a chunk of system memory for graphics use and still had to copy data to and from it. With newer system-on-a-chip designs we’ve seen graphics APIs, e.g. on macOS, that can work with data in a zero-copy fashion. But for highly parallel graphics workloads, many small cores tend to scale better than a few large ones, so there’s a limit to how far a couple dozen beefy system-integrated cores can take you versus the thousands or tens of thousands of tiny dedicated GPU cores.
The theoretical best approach would be to integrate lots of GPU cores on the motherboard alongside very fast memory/storage combos such as Optane. But reality is very different, because we also want portable, replaceable parts, and we have to worry about silly things like cooling: the trade-off between placing components close together for data efficiency versus spacing them far enough apart that the metal doesn’t melt under the power demands in such a small space. And whenever someone says “this is the best graphics card,” someone inevitably comes up with a newer arrangement of transistors that is even faster.