This is an excellent summary, but it glosses over part of the problem (perhaps because the author has an obvious, and often quite good, solution: DuckDB).
The implicit problem is that even if the dataset fits in memory, the software processing that data often uses more RAM than the machine has. And unlike using too much CPU, which just slows you down, using too much memory means your process is either dead or so slow it may as well be. It's _really easy_ to use way too much memory with e.g. Pandas. And there are three ways to approach this:
* As mentioned in the article, throw more money at the problem with cloud VMs. This gets expensive at scale, can be a pain, and (unless you also pursue the next two solutions) is in some sense a workaround.
* Better data processing tools: Use a tool smart enough to limit memory usage through efficient query planning and streaming algorithms. There's DuckDB, obviously, and Polars; here's a writeup I did showing how Polars uses much less memory than Pandas for the same query (a minimal sketch of the difference follows this list): https://pythonspeed.com/articles/polars-memory-pandas/
* Better visibility/observability: Make it easier to actually see where memory usage is coming from, so that the problems can be fixed. It's often very difficult to get good visibility here, partially because the tooling for performance and memory is biased towards web apps, which have different requirements than data processing. In particular, the bottleneck is _peak_ memory, which requires a particular kind of memory profiling.
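To make the second bullet concrete, here's a minimal sketch (the file and column names are hypothetical) of the same aggregation written eagerly in Pandas and lazily in Polars. The lazy version lets the engine push the filter down and stream the input in batches, which is where the memory savings come from:

    import pandas as pd
    import polars as pl

    # Pandas (eager): the whole CSV is decoded into RAM before the filter
    # runs, so peak memory is roughly the size of the full table plus
    # intermediate copies.
    df = pd.read_csv("transactions.csv")
    result_pd = df[df["amount"] > 100].groupby("customer_id")["amount"].sum()

    # Polars (lazy): scan_csv only builds a query plan; the optimizer can
    # push the filter down and stream the file in batches, keeping peak
    # memory well below the size of the full dataset.
    result_pl = (
        pl.scan_csv("transactions.csv")
        .filter(pl.col("amount") > 100)
        .group_by("customer_id")
        .agg(pl.col("amount").sum())
        .collect(streaming=True)  # newer Polars versions spell this engine="streaming"
    )

Same query, very different peak memory profiles, because only one of the two tools gets to see the whole query before deciding how to execute it.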
In the Python world, relevant memory profilers are pretty new. The most popular open source one at this point is Memray (https://bloomberg.github.io/memray/), but I also maintain Fil (https://pythonspeed.com/fil/). Both can give you visibility into sources of memory usage that were previously painfully difficult to get. On the commercial side, I'm working on https://sciagraph.com, which does both memory and performance profiling for Python data processing applications, and is designed to run in development but also in production.
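As a rough illustration of what that kind of profiling looks like, here's a sketch using Memray's Tracker context manager (process_data and the capture file name are placeholders for whatever your job actually does):

    import memray

    def process_data() -> None:
        # ... load, transform, and aggregate a large dataset here ...
        pass

    if __name__ == "__main__":
        # Record the allocations made while the block is active; the capture
        # file can then be rendered as a flamegraph report showing where
        # memory was allocated at the moment of peak usage.
        with memray.Tracker("process_data.bin"):
            process_data()

If you'd rather not touch the code, Memray also ships a CLI (`memray run` to capture, `memray flamegraph` to report), and Fil is likewise run as a wrapper around your script.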