
Sorry, but was the trained ML model to be implemented and used, as is, in public, like in an airport? Or was it to become the next standard or the next "ML for dummies" book? Or was it just research or an experiment?

If it was an experiment, then let it be. Perhaps the researcher was looking for something else, circumscribing the data, model, whatever to the experiment itself.

> researchers need to be more circumspect about ML algorithms

What entitles you to tell others what to study, or how?



Your entire comment is correct, but it still misses the bigger picture. It's understood that it's much easier to detect features in pictures of white faces than black faces, because lines and shadows are easier to resolve. These lighting differences survive once the image is pixelated, and give PULSE something to lock on to when it attempts the upscale. I'm questioning whether the algorithm even works in cases where these lighting differences are difficult or impossible to resolve.
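To make that concrete, here's a minimal numpy sketch (not PULSE's actual code; the 8x8 "faces" and block-averaging downsampler are toy assumptions) showing how a strong light/shadow boundary survives pixelation while a faint one nearly vanishes, leaving little for an upscaler to lock on to:

```python
import numpy as np

def pixelate(img, factor):
    """Downsample a 2D image by block-averaging, mimicking a low-res input."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy 8x8 "faces": the same vertical edge at high and at low contrast.
high = np.zeros((8, 8))
high[:, 4:] = 1.0                 # strong light/shadow boundary
low = np.full((8, 8), 0.45)
low[:, 4:] = 0.55                 # same boundary, but faint

# After 4x pixelation, peak-to-peak contrast of the strong edge is intact
# (1.0), while the faint edge retains only ~0.1 - barely any signal.
print(np.ptp(pixelate(high, 4)))  # 1.0
print(np.ptp(pixelate(low, 4)))   # ~0.1
```

The point of the sketch: downsampling averages away fine detail but preserves coarse contrast, so whatever contrast the original photo had (or lacked) is exactly what the upscaler gets to work with.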

If the researchers had created a toy, then great: it's a cool project and a neat algorithm. But they didn't create a toy. It's an academic paper attempting to move the needle forward in ML academia. And they are doing the exact same thing as a lot of other researchers: basing their research on old, biased benchmarks. If the bedrock of the field is built on biased data and everyone builds on top of that, research down the line will skew more and more in favor of the bias.

> What entitles you to tell others what to study, or how?

Nothing entitles me. It is my opinion based on the facts in front of me. The ML field has a bias problem: researchers toss an "oh, this is biased" blurb into their papers, then continue using the biased data. Everyone looks at the cool demos, and the research gets slurped up and implemented without regard to the science. More algorithms get built on top of previous biased algorithms.


> doing the exact same thing as a lot of other researchers, which is basing their research on old biased benchmarks

They might have a reason. I can understand if they want to compare their model's results with past experiments. That's normal.

> attempt to move the needle forward

Completely agree, so just let them work.

By the way, I don't see "evil" in these experiments, and I want a 100% bias-free model too, but I wouldn't dare attribute the result to laziness, stupidity, or racism. If I came up with something completely new, I would try to compare it with something that already exists too.



