I worked in a place full of deep learning PhDs, and you'd see people trying to apply reinforcement learning to problems that had known mathematical solutions, or that were standard integer programming problems.
I don't think the issue is just that companies hire people who are awful at ML, it's also that people are trying to shoehorn deep learning into everything, even when it currently has nothing to offer and we have better solutions already. IMHO, we're producing too many deep learning PhDs.
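To make "known mathematical solutions" concrete: a classic assignment problem (who does which task at minimum cost) can be solved exactly, no RL required. A minimal sketch with an invented cost matrix, using brute-force enumeration for clarity; at real sizes you'd reach for the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`):

```python
# Sketch: an assignment problem solved exactly -- no RL needed.
# The cost matrix is made up for illustration.
from itertools import permutations

# cost[i][j] = cost of assigning worker i to task j
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

n = len(cost)
# Enumerate every assignment (fine for tiny n; use the Hungarian
# algorithm for anything real) and keep the cheapest one.
best = min(
    permutations(range(n)),
    key=lambda perm: sum(cost[i][perm[i]] for i in range(n)),
)
best_cost = sum(cost[i][best[i]] for i in range(n))
print(best, best_cost)  # optimal assignment and its total cost
```

The point isn't the ten lines of code; it's that this class of problem has a provably optimal answer, so a learned policy can at best match it.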
This is just my general sense, as a non-expert with more experience of doing than of theory...but the benefit comes from someone knowing the theory AND being able to translate that into revenue.
I think most people view the hard part as doing the PhD, so lots of people value that experience, and because they have it you get an endowment effect: wow, that PhD was hard, I must do very hard and complex things.
To give you an example: Man Group. They are a huge quant hedge fund; in fact, they were one of the first big quant funds. They even have their own program at Oxford University that they hire out of...have you heard of them? Most people haven't. Their performance is mostly terrible, and despite being decades ahead of everyone, their returns were never very good (they did well at the start because they had a few exceptional employees, who then went elsewhere...David Harding was one). The issue isn't PhDs, they have many of them; the issue is having that knowledge AND being able to convert it.
I think this is really hard to grasp because most people expect problems to yield instantly to ML. In most cases they don't, and other people have done valuable work with non-ML approaches that should be built on but isn't, because domain knowledge or common sense is often lacking.
A similar thing happens with people who come out of CS and don't know how to program. They know a bit, but they don't know how to use Git, they don't know how to write code others can read, etc.
Man Group has had respectable returns, especially during the coronavirus period. Nothing amazing, but certainly not terrible. Regardless, there's more to the picture than raw returns: Sharpe ratio, volatility, correlation to the market, etc.
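For anyone unfamiliar with the metrics being named here, a minimal sketch of an annualized Sharpe ratio computed from daily returns. The return series is invented, and the risk-free rate is assumed to be zero for simplicity:

```python
# Sketch: annualized Sharpe ratio from daily returns.
# The return series is invented; risk-free rate assumed zero.
import math

daily_returns = [0.001, -0.002, 0.0015, 0.0005, -0.001, 0.002]

n = len(daily_returns)
mean = sum(daily_returns) / n
# sample variance (n - 1 denominator), then daily volatility
var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
vol = math.sqrt(var)

# annualize by sqrt(252) assuming 252 trading days per year
sharpe = (mean / vol) * math.sqrt(252)
print(round(sharpe, 2))
```

This is why "respectable returns" alone settles nothing: two funds with identical returns can have very different Sharpe ratios depending on the volatility they took on to get there.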
That isn't the case. First, I was talking about multi-decade performance, not how they have done in the last few hours. Second, their long-term returns haven't been good. They lagged the largest funds (largely because their strategy has mostly been naive trend-following). Third, you are correct that their marketing machine has sprung into action recently. But how much do you know about what trades they are making? If you were around pre-08, you may recognize the turn they have taken recently (i.e. diving head first into liquidity-premium trades with poor reasoning and no fundamental knowledge).
And again, the key point was: they have had this institute for how long? A decade plus? Are they a leading quant fund? No. Are they in the top 10? No. Are they doing anything particularly inventive? See returns. No.
How is this any different from developers who insist on using some shiny new web framework, microservice spaghetti, and Kubernetes-overkill infrastructure for their silly little CRUD app?
I don't think it is any different. Overvaluing the latest hotness is extremely common in the tech industry and is one of my least favorite parts of it.
Unfortunately, this is where the incentives of the company and those of the employee diverge. For the employee, choosing a simpler, appropriate model or solution to the problem means not being able to get that next DL job, especially early in their career. I can't bring myself to do resume-driven development, but I understand why people do it.
But you probably don't need a DL job. As my dad always said, as long as you make them/save them money, they'll never fire you.
I know that I (as a DS Lead/Manager) would hire someone who uses an appropriate solution to a business problem above someone who has an intricate knowledge of applying PyTorch to inappropriate problems.