Ah, sorry, I'm probably in a different timezone. As for goals, I know having a specific goal when asking for advice is ideal, but I don't really have one. I'm kind of lost in life and want to expose myself to people who are smart and sort of have it figured out.
Still, it would help to know more about you: your skills, experience, passions, etc.
Not to overgeneralize here, but a lot of the HN participants are technically skilled, even to the level of "nerdish", enjoying making things that amaze and amuse. Where does that fit into your life's experience?
Hey, this is kind of off topic, but it's about Apple, so here goes: does anyone know what happened to the whole CSAM thing from a few months ago? I know they delayed the feature, but IIRC they didn't cancel it completely. Does anyone have any updates?
Am I the only one bothered by the thickness? I mean, the new Macs are going to be amazing machines, but... it's just so un-Apple, you know? Still, gonna pick one up as soon as I can :)
Hard agree. I've been mildly on the side of "enough with losing features in pursuit of thinness" for a good while, and while I appreciate the heft of my 13", I'll let them worry about that for the next New Thing when it comes in Goldest Gold.
Apple's page for the new 16-inch says it's 1.68 cm thick, and the old 2019 16-inch was 1.62 cm, so at least the larger model actually got thicker. I had previously assumed both sizes were at least the same thickness as before.
I'm not bothered by it; if that's the space they need to fit their tech into, then so be it. It would be a shame if anyone made something thinner at the cost of tech specs. Not that that ever happened...
I see. I guess compute-intensive stuff is usually implemented in C++. By the way, if you don't mind, could you share your experience learning RL? I'm struggling through Sutton and Barto's text right now and wondering if I'd progress faster if I just "dive into things." Also, nice project!
I think it always helps to have a project to apply things to as you're learning something, even if it means coming up with something small. While preparing, I found it helpful to read for at least an hour each morning, and then divided the rest of the day into learning vs. "diving in" as I felt like it.
Getting deep into RL specifically wasn't really necessary for me, since I was just replicating AlphaZero, although reading papers on other neural architectures, training methods, etc. helped with other experimentation.
You may be well past this, but my biggest general recommendation is the book, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" to quickly cover a broad range of statistics, APIs, etc., at the right level of practicality before going further into different areas (for PyTorch, I'm not sure what’s best).
Similarly, I was familiar with the calculus underpinnings but did appreciate Andrew Ng's courses for digging into backpropagation etc., especially when covering batching.
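To make the batching bit concrete, here's a minimal sketch (a toy example of my own in NumPy, not anything lifted from the courses) of backprop on a one-hidden-layer net, where the batch dimension just rides along in the matrix ops and the loss gradient is averaged over the mini-batch:

    # Toy batched backprop: 4 -> 8 -> 1 network, ReLU hidden layer, MSE loss.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))                 # 256 samples, 4 features
    y = X.sum(axis=1, keepdims=True) ** 2         # made-up regression target

    W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    lr, batch_size = 0.01, 32

    for epoch in range(200):
        perm = rng.permutation(len(X))
        for i in range(0, len(X), batch_size):
            xb, yb = X[perm[i:i + batch_size]], y[perm[i:i + batch_size]]

            # Forward pass: each row is one sample in the batch.
            h_pre = xb @ W1 + b1
            h = np.maximum(h_pre, 0.0)            # ReLU
            pred = h @ W2 + b2

            # MSE gradient w.r.t. the prediction, averaged over the batch.
            d_pred = 2.0 * (pred - yb) / len(xb)

            # Backward pass: chain rule, one matrix op per layer.
            dW2 = h.T @ d_pred
            db2 = d_pred.sum(axis=0)
            d_h_pre = (d_pred @ W2.T) * (h_pre > 0)   # ReLU derivative
            dW1 = xb.T @ d_h_pre
            db1 = d_h_pre.sum(axis=0)

            # Plain gradient descent step.
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

    final = np.mean((np.maximum(X @ W1 + b1, 0.0) @ W2 + b2 - y) ** 2)
    print(f"final MSE: {final:.3f}")

The only "batching" trick is that dividing the loss gradient by the batch size up front makes every downstream matrix product accumulate a per-batch average automatically.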
I found "Foundations of Deep Reinforcement Learning - Theory and Practice in Python" by Laura Graesser and Wah Loon Keng quite helpful in that it was somewhat like get a excellent summary course in about 6 years of RL advancements. I will say that it's quite forthcoming with the math. Anyway, I just wanted to know how they (not sure exactly who did it first, I just meant people with machines) got RL to play Atarti Pitfall. So take any recommendation I make with a grain of salt.