
Regarding the unhelpfulness of online reviews, my company has problems with manufacturers/sellers writing 5-star reviews of their own product listings (ASINs) on Amazon. We've begun (manually) data mining 5-star reviews to identify whether each 5-star reviewer has any other reviews (or a wish list, to indicate the possibility of a real user account), then calculating the % of reviews written by no-history user accounts. Of the ASINs we've assessed, the gut-level-doesn't-seem-like-heavy-review-fraud listings can be in the 6% range, whereas the looks-like-review-fraud ASINs are above 20%. We're working with Amazon to identify and penalize these manufacturers/sellers, but internally at Amazon the Seller Performance team is separate from their Community (user review) team, so it presents a challenge. It's also hard for them to separate valid complaints from sour-grapes complaints.
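The heuristic described above can be sketched roughly as follows. This is a hypothetical illustration, not the company's actual tooling: the field names (`other_review_count`, `has_wish_list`) are invented, and it assumes the reviewer profiles have already been scraped into a list of records.

```python
# Hypothetical sketch of the no-history-reviewer heuristic described above.
# Field names are illustrative, not Amazon's API.

def no_history_pct(five_star_reviews):
    """Return the percentage of 5-star reviews written by accounts with
    no other reviews and no wish list (possible fake accounts)."""
    if not five_star_reviews:
        return 0.0
    no_history = sum(
        1 for r in five_star_reviews
        if r["other_review_count"] == 0 and not r["has_wish_list"]
    )
    return 100.0 * no_history / len(five_star_reviews)

# Threshold is taken from the ranges mentioned above: ~6% looked normal,
# above 20% looked like review fraud.
FRAUD_THRESHOLD = 20.0

reviews = [
    {"other_review_count": 0, "has_wish_list": False},  # no history at all
    {"other_review_count": 7, "has_wish_list": True},   # real-looking account
    {"other_review_count": 0, "has_wish_list": True},   # wish list = some history
    {"other_review_count": 3, "has_wish_list": False},
]
pct = no_history_pct(reviews)
print(f"{pct:.0f}% no-history 5-star reviewers")  # prints "25% no-history 5-star reviewers"
print("flag for review" if pct > FRAUD_THRESHOLD else "looks OK")
```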


"but internally at Amazon the Seller Performance team is separate from their Community (user review) team"

Indeed, my suspicion is that organizational politics have more to do with the lack of a better rating system than any technical limitation.

The approach I use is to read the 3-star ratings first, before biasing myself with the more extreme ratings. I also check what else the reviewer has rated, and if there's nothing there, I immediately dismiss the review.



