I think it always helps to have a project to apply what you're learning to, even if it means coming up with something small. While preparing, I found it helpful to read for at least an hour each morning, then split the rest of the day between learning and "diving in" as I felt like it.

Getting deep into RL specifically wasn't all that necessary for me, since I was just replicating AlphaZero, although reading papers on other neural architectures, training methods, etc. helped with other experimentation.

You may be well past this, but my biggest general recommendation is the book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow", which quickly covers a broad range of statistics, APIs, etc. at the right level of practicality before you go deeper into particular areas (for PyTorch, I'm not sure what's best).
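
For a sense of the level of practicality I mean, here's a minimal sketch of the kind of end-to-end scikit-learn workflow the book's early chapters operate at (the dataset and model here are my own illustration, not taken from the book):

    # Load a dataset, split it, fit a simple model, and evaluate it.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Scale the features, then fit a basic classifier.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))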

Similarly, I was already familiar with the calculus underpinnings, but I did appreciate Andrew Ng's courses for digging into backpropagation and the like, especially the treatment of batching.
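
As a rough illustration of where batching enters the picture (my own NumPy sketch, not material from the courses): a mini-batch gradient step just averages the per-example gradients over the batch dimension before updating the weights.

    import numpy as np

    def minibatch_step(W, b, X_batch, y_batch, lr=0.01):
        """One gradient step for a linear model with 0.5 * MSE loss,
        gradient averaged over the batch."""
        n = X_batch.shape[0]
        preds = X_batch @ W + b            # forward pass
        error = preds - y_batch            # per-example residuals, shape (n,)
        grad_W = X_batch.T @ error / n     # average gradient over the batch
        grad_b = error.mean()
        return W - lr * grad_W, b - lr * grad_b

    # Toy data: a known linear relationship plus noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 3))
    true_W = np.array([1.5, -2.0, 0.5])
    y = X @ true_W + 0.1 * rng.normal(size=256)

    # Shuffle once per epoch, then step through mini-batches.
    W, b = np.zeros(3), 0.0
    batch_size = 32
    for epoch in range(100):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            W, b = minibatch_step(W, b, X[batch], y[batch])

    print(W, b)  # W should approach true_W; b should stay near 0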


