Wanting to use data is not a valid reason to use data that isn't suitable. If you send out 120 surveys and get 90 back, you can't make assumptions about what the other 30 would have said; you just have to present the data you have.
Eh, it’s trickier than just “go with what you’ve got.”
For example, you should be checking whether the response rate is associated with other factors and incorporating that into your analysis. You might find that you have pretty good data from unhappy students but not from satisfied ones, or vice versa.
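A minimal sketch of the kind of check I mean, assuming you have a roster attribute (here a hypothetical "prior_grade") known for all 120 students plus a flag for who returned the survey; the numbers are invented just to show the mechanics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical roster: a known attribute for all 120 students, plus whether
# each one responded. In this toy setup response probability is tied to grade.
prior_grade = rng.normal(loc=75, scale=10, size=120)
responded = rng.random(120) < (0.5 + 0.005 * (prior_grade - 75))

# Compare the known attribute between respondents and non-respondents.
# A clear difference is a warning sign that the 90 responses you got back
# are not a simple random subset of the 120 you sent out.
t_stat, p_value = stats.ttest_ind(prior_grade[responded], prior_grade[~responded])
print(f"respondents mean grade:     {prior_grade[responded].mean():.1f}")
print(f"non-respondents mean grade: {prior_grade[~responded].mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

If a check like this turns up an association, that's something to report alongside the results, not something to silently "correct" for.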
I mean, it is extremely common to mislead (often unintentionally, and with the best of motives) and to use the performance of having collected data to lend credence to that. The alternative is to be up front about your methodology, which means not making assumptions at multiple stages in the process, and not shading the conclusions by 'looking for other factors' or the like. When you do multiple rounds of 'fixing' the data you are just injecting assumptions about the true distribution, which defeats the entire point of collecting data at all. If you 'know' what the answer should look like, just write that assumption down and skip the extra steps, OR ensure the methodology will allow the data to prove you wrong, or allow the data to show a lack of a conclusion (including by lack of data).
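A toy illustration of what I mean by injecting assumptions, with made-up numbers: if the 30 non-respondents are actually unhappier than the 90 respondents, filling them in with the respondents' average just bakes in the assumption "non-respondents look like respondents" and then reports it back to you as a finding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented satisfaction scores on a 1-5 scale.
respondents = rng.normal(loc=4.0, scale=0.5, size=90)      # what you observed
non_respondents = rng.normal(loc=2.5, scale=0.5, size=30)  # never observed in practice

# "Fixing" the gap by mean imputation just reproduces the respondents' answer.
imputed = np.concatenate([respondents, np.full(30, respondents.mean())])
truth = np.concatenate([respondents, non_respondents])

print(f"respondents only:       {respondents.mean():.2f}")
print(f"after mean imputation:  {imputed.mean():.2f}")   # same as respondents only
print(f"true average (unknown): {truth.mean():.2f}")
```

The imputed dataset looks more complete, but it contains no information beyond the assumption you fed into it.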
I realize I'm taking a very harsh stance here, but I've seen again and again people 'fixing' data in multiple rounds, the effect of which is that any actual insight is removed in favor of reinforcing the assumptions held before the data was collected. When you do this at multiple steps in the process, it becomes very hard to have a good intuition about whether you've done something that invalidates the conclusion (or the ability to draw any conclusion at all).