That's a tough question. There's no one "right" sample size. It depends on the size of the effect you're trying to measure, how much noise is in the data, whether or not you want to analyze subgroups within the data, and many other factors.
The sample size here, 136, is not bad at first glance; many studies get published with smaller ones. It's large enough for the purposes here, but you'd definitely want to replicate the experiment a few more times.
The rule of thumb is that signal-to-noise increases with the square root of the sample size. This is a brutal curve: to double your precision you need four times the data, which suggests that simply gathering more data is seldom practical.
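You can see the square-root rule directly in the standard error of the mean. A minimal sketch (the noise level sigma = 1.0 is an arbitrary illustrative value):

```python
import math

def standard_error(sigma, n):
    # Standard error of the mean shrinks as 1/sqrt(n),
    # so signal-to-noise grows only as sqrt(n).
    return sigma / math.sqrt(n)

sigma = 1.0
se_small = standard_error(sigma, 136)       # the sample size above
se_large = standard_error(sigma, 4 * 136)   # four times the data

# Quadrupling the sample only halves the noise.
print(se_small / se_large)
```

That ratio is exactly 2: collecting 408 additional observations buys you one doubling of precision, and the next doubling costs 1,632 more.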
Another rule I use myself: "multiply p by 10." In other words, a p-value of 0.05 is about as good as a coin toss. This sounds outrageous, but it seems consistent with reality.
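The "multiply p by 10" heuristic lines up with a simple false-positive-risk calculation. The numbers below (10% of tested hypotheses are real, 50% power) are assumed purely for illustration, not taken from any particular study:

```python
# Rough false-positive-risk sketch. The prior and power values
# here are hypothetical, chosen only to illustrate the arithmetic.
prior_true = 0.10   # assumed fraction of tested hypotheses that are real
power = 0.50        # assumed chance a real effect reaches significance
alpha = 0.05        # the usual significance threshold

false_pos = (1 - prior_true) * alpha   # null effects that cross p < 0.05
true_pos = prior_true * power          # real effects that are detected

# Of everything that clears p < 0.05, what fraction is a false alarm?
risk = false_pos / (false_pos + true_pos)
print(f"{risk:.0%}")  # → 47%
```

Under these assumptions, nearly half of "significant" results are false alarms, which is roughly the coin toss the rule of thumb warns about.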