I like _Working Effectively with Legacy Code_ by Michael Feathers. It's a collection of techniques for bootstrapping tests into an existing code base - carefully, incrementally restructuring the code to add enough tests to safely perform deeper structural changes. At the same time, it's ultimately a book about improving legacy codebases (testing is just a major technique), so it focuses on using testing when it's most useful.
(Also, maybe it's just me, but a book that starts from, "Help! I just inherited a C++ project with ten years of rot and half the comments are in Norwegian! How do I even start adding tests to this without breaking it further?" seems a bit more practical than one that starts by applying unit tests to example code designed for convenient testing. It's seldom that easy.)
It definitely helps to start with an environment where tests are easy to write, so that you can focus on the principles of testing without having to learn a lot of mechanics.
Python's built-in `unittest` module is an example of that. It's not perfect, but it lets you form good habits quickly.
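To make that concrete, here's a minimal sketch of what a `unittest` test looks like; the `slugify` function is just a hypothetical example of code under test, not from any real project.

```python
import unittest

def slugify(title):
    # Hypothetical function under test: lowercase, hyphen-separated.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase_is_unchanged(self):
        self.assertEqual(slugify("hello"), "hello")
```

Run it with `python -m unittest` and you get a pass/fail report for free, which is exactly the "did this fail automatically?" habit worth forming.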
And it's not limited to testing Python code. For instance, I've written unittest-compatible classes that basically run other programs as test cases. Either way, you're forced to think about questions like "how do I automatically detect that this failed?" and "is the purpose of this test clear?".
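The original classes aren't shown here, but one plausible sketch of the idea is a `unittest.TestCase` that shells out with `subprocess` and treats the exit code and output as the pass/fail signal (the command and expected output below are purely illustrative):

```python
import subprocess
import sys
import unittest

class ExternalProgramTest(unittest.TestCase):
    # Hypothetical: a subclass would override these with a real
    # command line and its expected output.
    command = [sys.executable, "-c", "print('hello')"]
    expected_stdout = "hello\n"

    def test_program_succeeds(self):
        # "How do I automatically detect that this failed?" --
        # check the exit code and compare the captured output.
        proc = subprocess.run(
            self.command, capture_output=True, text=True, timeout=30
        )
        self.assertEqual(proc.returncode, 0)
        self.assertEqual(proc.stdout, self.expected_stdout)
```

Because it's a normal `TestCase`, any unittest runner can discover and run it alongside ordinary Python tests.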
I personally just sort of figured it out by doing BDD on a few small projects to get a feel for how things go. I'm not 100% test-driven now, but overall my coverage is pretty high. I tend to write Cucumber tests more than straight-up unit tests, as I personally find them more valuable...
Doing is the best way to learn most things in the software world.