I work in this field, but on nothing state of the art (relatively simple LSTM models and GANs). While I found the article informative, it was also a little depressing to see how far research goes beyond what I am working on. I spend about 8 hours a week outside work hours studying and reading papers, and I still find it difficult to keep up.
I think it makes more sense to focus on the benchmarks. Benchmarks change less often than the underlying algorithms/models, and they are easier to follow.
Once a model performs consistently well on a given benchmark over several years, then it makes sense to get more into the details.
For example, in the NLP field in 2018 there is a focus on multi-task models. Some studies (don't have the refs at hand, sorry) suggest that different models generalize differently (and sometimes better) when trained on several tasks at once.
Anyway, those papers and models are the result of teams of researchers working on the problem full time, with tons of data at their disposal. If any sane individual were able to keep up with the state of the art, it wouldn't be a research field, I guess :-)