This project brings back memories. I worked somewhere over 20 years ago where we were working on something just like this (touch displays using cameras). The biggest challenge was definitely the lighting conditions as you mentioned. We tried to rely on natural light but it was too unreliable. Darker skin tones were harder to pick up, and then you had issues with random reflections, light and shadow being cast on the screen, etc., which would make the system detect spurious fingers and touches.
We also had algorithms that used finger shape to detect the location of the pointer and whether you were touching the screen. I saw way too many videos of fingers touching screens back then, so it's funny to see similar video clips here.
It sounds like a fun project. I worked on a vision-based biometric system that used near-infrared light as its light source. NIR was supposed to be more stable than natural light, but we still experienced issues similar to yours. We found that certain problems appeared at different times of day, and the system also struggled to handle the diversity of people.
> But if you are running in the wrong direction, speed is of very little value.
I think of it differently. Speed is great because it means you can change direction very easily, and being wrong isn't as costly. As long as you're tracking where you're going, then even if you end up in the wrong place, you got there quickly, noticed it, and can quickly head in a different direction toward the right place.
Sometimes we take time mostly because it's expensive to be wrong. If being wrong doesn't cost anything, going fast and being wrong a lot may actually be better as it lets you explore lots of options. For this strategy to work, however, you need good judgment to recognize when you've reached a wrong position.
Correct. Admittedly, graphic design is not even my passion, so there's probably lots of room for improvement. But at this point I've grown accustomed to the friendly face. :D
Watering plants is also super easy once you do it regularly. You get a sense of how much water a plant needs just by looking at it and testing the soil (via moisture meter or just by touch). It's quite rewarding realizing how each plant differs.
I think you're spot on. It feels like parts were edited with AI and parts were left alone.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The point this is making is presumably the crux of the problem (Digg cannot survive without trust!), but it's worded so poorly that it's hard to imagine someone sat down and decided these three sentences were the best way to make it.
I was thinking about that recently. Maybe decades from now people will look at things like the Linux kernel or Doom and be shocked that mere humans were able to program large codebases by hand.
I was being a little facetious, but there are things that most people would find tedious today that people in the past put up with. Writing anything long by hand (letters, essays), doing accounting without a spreadsheet, writing a game in nothing but assembly language, using punch cards, typesetting newspapers and books manually...
I've noticed that too and it's not too different from political discussions. At the end of the day, I think the split is really about different values people have, their identity, and justice.
A lot of developers' identities are tied to their ability to create quality solutions as well as to having control over the means of production (for lack of a better term). An employer mandating that they start using AI more and change their quality standards is naturally going to lead to a sense of injustice about it all.
> I think the real divide is over quality and standards.
I think there are multiple dimensions that people fall on regarding the issue and it's leading to a divide based on where everyone falls on those dimensions.
Quality and standards are probably in there, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more likely to forego verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right, for fear of anything blowing up.
Desire for control is probably related, too. If you desire more control in how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you.
This. My aversion to LLMs is much more that I have low risk tolerance and the tails of the distribution are not well-known at this point. I'm more than happy to let others step on the land mines for me and see if there's better understanding in a year or two.
I am a high-quality/craftsmanship person. I like coding and puzzling. I am highly skilled in functional-leaning object-oriented decomposition and systems design. I'm also pretty risk averse.
I also have always believed that you should always be "sharpening your axe". For things like Java development, or anywhere I couldn't use a concise syntax, I would make extensive use of dynamic templating in my IDE. Want a builder pattern? Bam, auto-generated.
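For anyone who hasn't leaned on IDE templating this way: here's a rough sketch (class and field names made up for illustration) of the kind of builder boilerplate a live template might stamp out for you:

```java
// Hypothetical example: builder boilerplate of the sort an IDE template
// can generate from a couple of field declarations.
public class Server {
    private final String host;
    private final int port;

    private Server(Builder b) {
        this.host = b.host;
        this.port = b.port;
    }

    public String host() { return host; }
    public int port() { return port; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String host = "localhost"; // defaults the template fills in
        private int port = 8080;

        public Builder host(String host) { this.host = host; return this; }
        public Builder port(int port) { this.port = port; return this; }
        public Server build() { return new Server(this); }
    }

    public static void main(String[] args) {
        Server s = Server.builder().host("example.com").port(443).build();
        System.out.println(s.host() + ":" + s.port());
    }
}
```

Tedious to type by hand every time, trivial to expand from a template.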
When LLMs came out, they really took this to another level. I'm still working on the problems.. even when I'm not writing the lines of code. I'm decomposing the problems.. I'm looking at (or now debating with the AI) what is the best algorithm for something.
It is incredibly powerful.. and I still care about the structure.. I still care about the "flow" of the code.. how the seams line up. I still care about how flexible and open to extension it is (based on where I think the business or problem is going).
At the same time.. I definitely can tell you, I don't like migrating projects from TensorFlow v.X to TensorFlow v.Y.
> I'm looking at (or now debating with the AI) what is the best algorithm for something.
That line always makes me laugh. There are only two points to an algorithm: domain correctness and technical performance. For the first, you need to step out of the code. And for the second, you need proofs. Not sure what there is to debate.
Not true. There is also cost, in money or opportunity. Correctness and performance aren't binary either -- 4 or 5 nines of reliability, 6 or 7 decimal places of precision, just to name a few. That drives a lot of discussion.
There may be other considerations as well -- licensing terms, resources, etc.
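As a concrete illustration of "correctness isn't binary" (my example, not the commenter's): two algorithms for the same sum can trade cost for precision. Kahan compensated summation does extra work per element to cancel rounding error that a naive loop accumulates:

```java
import java.util.Arrays;

public class SumPrecision {
    // Naive left-to-right summation: cheapest, accumulates rounding error.
    static double naiveSum(double[] xs) {
        double s = 0.0;
        for (double x : xs) s += x;
        return s;
    }

    // Kahan compensated summation: a few extra ops per element,
    // keeps a running correction term for the lost low-order bits.
    static double kahanSum(double[] xs) {
        double s = 0.0, c = 0.0;
        for (double x : xs) {
            double y = x - c;       // apply correction from last step
            double t = s + y;       // big + small: low bits of y are lost
            c = (t - s) - y;        // recover what was lost
            s = t;
        }
        return s;
    }

    public static void main(String[] args) {
        double[] xs = new double[10_000_000];
        Arrays.fill(xs, 0.1);
        // Exact answer is 1,000,000; the naive sum drifts noticeably,
        // the compensated sum stays much closer.
        System.out.println("naive error: " + Math.abs(naiveSum(xs) - 1_000_000.0));
        System.out.println("kahan error: " + Math.abs(kahanSum(xs) - 1_000_000.0));
    }
}
```

Whether the cheap version is "correct enough" depends entirely on the domain, which is exactly the kind of thing worth debating.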
I was using an M1 Mac Mini with only 8GB of RAM to build iOS apps for maybe a year. It's absolutely doable, though it noticeably gets a little less snappy when building projects. When building in Xcode and then switching to Firefox to browse, for instance, I could tell it took slightly longer to switch tabs, and YouTube playback would occasionally stutter if too much was happening.
I was also using an Intel MacBook Pro with 16GB at the time. Doing the same thing there was much smoother and snappier. On the whole, it actually made me want to just use the laptop instead, since it "felt" nicer. (This isn't measuring build times or anything like that, just snappiness of the OS.)