I really like seeing all of these companies like MobileWorks, Exec, Uber, et al. that are solving very challenging problems in a way that is more operational than technical (although technology obviously plays a large role).
I'm impressed with the accuracy of MobileWorks over Mechanical Turk (or pure MTurk; I'm not sure if MW just filters workers heavily). I tried a receipt transcription task where I fed the same instructions and the same photo to both services. Out of the box, MW was 100% accurate while MT was about 90%, for the same cost.
As Anand said, we have a separate workforce. We have a number of algorithmic and social elements that we developed to improve quality. In fact, we have written an IEEE paper on how to improve quality in crowdsourcing.
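To give a flavor of what algorithmic quality control can look like, here is a generic sketch of one standard technique: hidden gold questions used to score and filter workers. The function names, data, and threshold are invented for illustration; this is not our production system.

    # Generic sketch of gold-standard quality control in crowdsourcing.
    # Illustrative only: names, data, and the 0.9 threshold are made up.

    def worker_accuracy(answers, gold):
        """Fraction of a worker's answers on gold questions that are correct."""
        graded = [(q, a) for q, a in answers.items() if q in gold]
        if not graded:
            return None  # no overlap with gold yet; can't score this worker
        return sum(a == gold[q] for q, a in graded) / len(graded)

    def trusted_workers(all_answers, gold, threshold=0.9):
        """Keep only workers whose accuracy on hidden gold questions is high."""
        scores = {w: worker_accuracy(ans, gold) for w, ans in all_answers.items()}
        return {w for w, s in scores.items() if s is not None and s >= threshold}

    # Example: worker "w2" fails the gold check and is filtered out.
    gold = {"q1": "yes", "q2": "no"}
    all_answers = {
        "w1": {"q1": "yes", "q2": "no", "q3": "yes"},
        "w2": {"q1": "no",  "q2": "no", "q3": "no"},
    }
    print(trusted_workers(all_answers, gold))  # {'w1'}

Filtering like this complements redundancy: once low-accuracy workers are removed, you need fewer votes per item to reach the same quality.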
Glad to hear it! This is an important philosophical difference for us. Quality control in a cloud labor service should be the platform's job, not the customer's. The system should work correctly out of the box.
And yes, our workforce is separate from Turk's, which is part of the reason the quality is better.
Where do they hire their workforce from? Are they overseas or American workers?
[edit] The main reason I was asking is that I find it really surprising that you're able to hire skilled/educated workers at an outsourcing rate. It's even more surprising that you're finding them organically through press; I'd imagine most TechCrunch readers are overqualified for that kind of work. Kudos to you for making it happen, though.
TechCrunch is not the only press we get. A TV station in Jamaica covered us and we got a lot of workers from there.
Even when we get press in tech circles, a lot of our worker growth comes from referrals on work-at-home or stay-at-home-mom forums. My guess is that someone reposts about us on the more appropriate forums and that sets the ball rolling.
Having said that, we have seen some very qualified people (who might be reading TechCrunch or MIT Tech Review) join our workforce.
This is awesome stuff. I'm glad they are attacking the accuracy angle, since for a lot of tasks I care much more about accuracy than about saving the last dollar.
Is there a way to achieve "guaranteed" high quality on non-easy tasks?
I have a binary classification labeling problem. The labels are very unbalanced (95% no, 5% yes), and answering each question takes a few minutes of web research.
I ran this job on CrowdFlower with gold-standard data for quality control, using 5 annotators per question. Nonetheless, I received 99% "no" answers (see the toy simulation at the end of this comment).
Right now I have about 1000 instances to label, and there could be more in the future if I am confident I will get high-quality labels.
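To make the failure mode concrete, here's a toy simulation of my situation. The annotator behavior and all the numbers are invented, and plain majority voting stands in for whatever aggregation CrowdFlower actually uses: if annotators default to "no" on anything they don't research, a 5-way majority vote almost never surfaces the rare "yes" items.

    import random

    # Toy simulation of why majority voting collapses on unbalanced labels.
    # Assumed numbers (invented): 5% true "yes" rate, and annotators who
    # only do the web research 20% of the time, answering "no" otherwise.
    random.seed(0)

    def lazy_annotator(truth):
        """Returns the true label 20% of the time, 'no' otherwise."""
        return truth if random.random() < 0.2 else "no"

    def majority(votes):
        return max(set(votes), key=votes.count)

    items = ["yes" if random.random() < 0.05 else "no" for _ in range(1000)]
    predicted = [majority([lazy_annotator(t) for _ in range(5)]) for t in items]

    print("true yes:", items.count("yes"))          # roughly 50 of 1000
    print("predicted yes:", predicted.count("yes"))  # only a handful

Under these assumptions a "yes" needs at least 3 of 5 annotators to actually do the research, which happens only about 6% of the time per true-"yes" item, so the output is ~99% "no" regardless of the true labels.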
Yes: you can achieve high quality with only a few respondents because the workforce is higher quality and the data is reviewed by a human in the crowd.
Drop us a line and we'll set it up: support@mobileworks.com