The problem is that in CPython the only mechanism to leverage multiple cores for CPU-bound workloads is the multiprocessing module. That module suffers from the cost of serializing all objects that are transferred between processes over IPC. Threading in CPython mostly doesn't utilize multiple cores for CPU-bound workloads due to the GIL.
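To make the serialization cost concrete: every object handed to a worker process goes through pickle on the way out and unpickle on the way back in. A quick sketch of what that means for even a modest payload (the numbers here are illustrative, not a benchmark):

```python
import pickle

# A million small ints -- the kind of payload you might fan out to workers.
payload = list(range(1_000_000))

# multiprocessing does this (and the reverse) for every object crossing IPC.
blob = pickle.dumps(payload)
restored = pickle.loads(blob)

assert restored == payload
print(f"{len(blob):,} bytes serialized")  # several megabytes of pickle data
```

That round trip happens per task, which is exactly the overhead the proposal aims to avoid by sharing objects between subinterpreters directly.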
The goal of the project is to achieve good multi-core support without the serialization overhead and to make that support both obvious and undeniable. While very few Python programs actually benefit from true parallelism, it's a glaring gap that I'd like to see filled.
My proposal is a means to an end. I'd be just as happy if the situation were resolved in some other way and without my involvement. However, I've found that in open source, waiting for someone else to do what you want done is a losing proposition. So I'm not going to hold my breath. The only project of which I'm aware that could make a difference is Trent Nelson's pyparallel. I hope to collaborate on that, but I'll likely continue pursuing alternatives at the same time for now. I'm certainly open to any serious recommendations on how to achieve the goal, if you're sincerely interested in making a difference. (I appreciate the mention of gevent and Erlang, which are things I've already taken into consideration.)
As to the details of my proposal, it's very early in the project and the python-ideas post is simply a high-level exploratory discussion of the problem along with a lot of unsettled details about how I think it might be solved in the Python 3.6 timeframe. A more serious proposal would be in the form of a PEP.
Regarding your feedback, your post suggests that you either misunderstood what I said or don't understand the underlying technologies. To clarify:
* the proposal changes/adds relatively little, instead focusing on leveraging existing features as much as possible
* Python's existing threading support would be leveraged
* subinterpreters, which already exist, would be exposed in Python through a new module in the stdlib
* subinterpreters are already highly independent and share very little global state
* the key change is enabling subinterpreters to run more or less without the GIL (leaving that to the main interpreter)
* the key addition is a mechanism to efficiently and safely share objects between subinterpreters
* the approach is drawing inspiration in part from CSP (Hoare's Communicating Sequential Processes)
* think of it as shared-nothing threads with message passing
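The "shared-nothing threads with message passing" model can already be approximated today with plain threads and queues; the difference is that under the GIL this gains no CPU parallelism, which is exactly the part the proposal would change. A rough sketch of the programming model (using `threading` and `queue` as stand-ins for subinterpreters and the proposed sharing mechanism):

```python
import threading
import queue

def worker(inbox, outbox):
    # Each "interpreter" owns all of its own state; the only sharing
    # is via messages passed over the channels (queues here).
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: shut down
            break
        outbox.put(msg * 2)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # [2, 4, 6]
```

Under the proposal, the worker would run in its own subinterpreter without the GIL, so the same message-passing style would actually use multiple cores.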
It will most certainly improve multi-core support. It shares more in common with Erlang's approach than you think. It is neither a hack nor crap on the wall, as you put it.