Hacker News | lemonade5117's comments

The book Advanced Programming in the UNIX Environment also covers something similar, iirc.


The blog looks really nice! Definitely gonna read the other posts when I have time.


Yeah, makes sense. I'm gonna try that out. Thanks!


That makes sense. Do you have any advice on finding local meetups?


Thanks, that's very comprehensive! I'll have to try finding some good podcasts.


Ah, sorry, I'm probably in a different time zone. As for goals, I know having a specific goal when asking for advice is ideal, but I don't really have one. I guess I'm kind of lost in life and want to be around people who are smart and seem to have it figured out.


Still, it would help to know more about you: your skills, experience, passions, etc.

Not to overgeneralize here, but a lot of the HN participants are technically skilled, even to the level of "nerdish", enjoying making things that amaze and amuse. Where does that fit into your life's experience?


I think OP is asking for privacy-related extensions, but here's a fun one that I use: https://chrome.google.com/webstore/detail/nicolas-cage/fjgbn...

It replaces every instance of “god” on your current web page with “Nicolas Cage.” Gets me every time.


I don't understand this. Why would you replace Nicolas Cage with Nicolas Cage?


Sounds like the plot for a Face/Off sequel where Nicolas Cage swaps faces with himself, Nicolas Cage.



I don’t understand this. Why would you replace butt with butt?


Because there's no way that could ever go wrong...

https://news.ycombinator.com/item?id=28869819


Hey, this is kind of off topic, but it's about Apple, so here goes: does anyone know what happened to the whole CSAM thing from a few months ago? I know they delayed the feature, but iirc they didn't cancel it completely. Does anyone have any updates?


Am I the only one bothered by the thickness? I mean, the new Macs are going to be amazing machines, but... it's just so un-Apple, you know? Still, gonna pick one up as soon as I can :)


Personally not. They've been optimising for thinness at the cost of pro-ness for too long, I think. This seems like a more sensible balance to me.


Hard agree. I've been mildly on the side of "enough with losing features in pursuit of thinness" for a good while, and while I appreciate the heft of my 13", I'll let them worry about that for the next New Thing when it comes in Goldest Gold.

For now, MagSafe! HDMI! A keyboard that works!!!


I thought it looked thicker too, but it turns out the new models are even 0.01 cm thinner (if you want to call it that).


Oh wow, it certainly doesn't look that way, but the new ones actually are thinner. That's interesting.


Apple's page for the new 16-inch says it is 1.68 cm thick, and the old 2019 16-inch was 1.62 cm, so the larger model at least actually got thicker. I had previously assumed that both sizes were at most the same thickness as before.


I'm not bothered by it, and if that's the space they need for the tech inside, then so be it. It would be a shame if anyone made something thinner at the cost of specs. Not that that ever happened...


Absolutely not. The 16-inch is a huge laptop even if it's thin. Might as well make it a touch thicker if it means better thermals/battery life.


I can't imagine many people would be bothered by 0.6 mm of extra thickness (1.68 cm vs. 1.62 cm) on the larger model. The smaller model is the same thickness.


I'd much rather they make it a little thicker and get better cooling and features.


Could using something like AlphaZero.jl make it more efficient?

https://github.com/jonathan-laurent/AlphaZero.jl


The engine itself is in C++, but it calls into TensorFlow via Python as a portability/distribution vs. performance trade-off.

Next steps could be using one of Lc0's backends for GPU scenarios, or taking the other side of the trade and using the C++ API for TPU.

There are also the typical CPU and memory optimizations that could be made; some baseline work has been done there, but nothing targeted.
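
In case it helps to picture the bridge: here's a rough sketch of my own of what the Python side of such a setup could look like, with the C++ engine embedding the interpreter and calling in for batched inference. The names (load_model, predict) and the two-headed AlphaZero-style network are my assumptions, not the project's actual API.

    # Hypothetical Python-side bridge for an embedded interpreter to call.
    import tensorflow as tf

    _model = None

    def load_model(path):
        # Load the policy/value network once at engine start-up.
        global _model
        _model = tf.keras.models.load_model(path)

    def predict(positions):
        # Batched inference: positions -> (policy logits, value estimates),
        # assuming a two-headed AlphaZero-style network.
        policy, value = _model(positions, training=False)
        return policy.numpy(), value.numpy()

Batching positions on the C++ side before crossing the language boundary would keep the per-call Python overhead amortized, which I'd guess is most of what makes this trade-off workable.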


I see. I guess compute-intensive stuff is usually implemented in C++. By the way, if you don't mind, could you share your experience learning RL? I am struggling through Sutton and Barto's text right now and wondering if I'll progress faster if I just "dive into things." Also, nice project!


I think it always helps to have a project to apply things to as you're learning, even if it means coming up with something small. While preparing, I found it helpful to read for at least an hour each morning and then divide the rest of the day between learning and "diving in" as I felt like it.

Getting deep into RL specifically wasn't so necessary for me because I was just replicating AlphaZero there, although reading papers on other neural architectures, training methods, etc. helped with other experimentation.

You may be well past this, but my biggest general recommendation is the book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" to quickly cover a broad range of statistics, APIs, etc., at the right level of practicality before going further into different areas (for PyTorch, I'm not sure what's best).

Similarly, I was familiar with the calculus underpinnings but did appreciate Andrew Ng's courses for digging into backpropagation etc., especially when covering batching.
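
To make the batching point concrete, here's a tiny sketch of my own (not from the course) of backprop over a mini-batch for a one-hidden-layer network in plain numpy; the batch dimension just rides along through the matrix products:

    import numpy as np

    rng = np.random.default_rng(0)
    B, D_in, H, D_out = 32, 4, 8, 1        # batch size and layer widths
    X = rng.normal(size=(B, D_in))
    y = rng.normal(size=(B, D_out))
    W1 = rng.normal(size=(D_in, H)) * 0.1
    W2 = rng.normal(size=(H, D_out)) * 0.1
    lr = 1e-2

    for step in range(100):
        # Forward pass: every product carries the batch dimension B along.
        h = np.tanh(X @ W1)                # (B, H)
        pred = h @ W2                      # (B, D_out)
        loss = ((pred - y) ** 2).mean()

        # Backward pass: gradients of the mean squared error; the transposed
        # matrix products sum the per-example gradients over the batch.
        grad_pred = 2 * (pred - y) / (B * D_out)   # (B, D_out)
        grad_W2 = h.T @ grad_pred                  # (H, D_out)
        grad_h = grad_pred @ W2.T                  # (B, H)
        grad_W1 = X.T @ (grad_h * (1 - h ** 2))    # tanh'(z) = 1 - tanh(z)^2

        W1 -= lr * grad_W1
        W2 -= lr * grad_W2

Nothing here is framework-specific; it's just the chain rule with the batch axis carried through.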


I found "Foundations of Deep Reinforcement Learning - Theory and Practice in Python" by Laura Graesser and Wah Loon Keng quite helpful, in that it was somewhat like getting an excellent summary course on about 6 years of RL advancements. I will say that it's quite forthcoming with the math. Anyway, I just wanted to know how they (not sure exactly who did it first, I just mean people with machines) got RL to play Atari Pitfall, so take any recommendation I make with a grain of salt.

