There was no machine vision stuff in the app at that point. Claude suggested a couple of different ways of handling this and I went with the easiest way: piggybacking on the Apple Vision Framework (which means that this feature, as currently implemented, will only work on Macs - I'm actually not sure if I will attempt a Windows release of this app, and if I do, it won't be for a while).
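For context, here is a hedged sketch (not the app's actual code) of roughly how face pose can be pulled out of the Vision framework from Objective-C. The method name, the request revision, and the radians-to-degrees conversion are my assumptions about a plausible setup, not a description of what Claude actually wrote:

```
// Hedged sketch: obtaining face pitch from the Vision framework.
// VNDetectFaceRectanglesRequest revision 3 reports roll/yaw/pitch (in radians)
// on each VNFaceObservation. This method and the degree conversion are
// illustrative assumptions, not the app's actual code.
#import <Vision/Vision.h>

- (void)estimatePitchFromPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    VNDetectFaceRectanglesRequest *request = [[VNDetectFaceRectanglesRequest alloc] init];
    request.revision = VNDetectFaceRectanglesRequestRevision3; // pitch is reported from revision 3 (macOS 12+)

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer options:@{}];

    NSError *error = nil;
    if (![handler performRequests:@[request] error:&error]) {
        NSLog(@"Vision request failed: %@", error);
        return;
    }

    VNFaceObservation *face = request.results.firstObject;
    if (face.pitch != nil) {
        // Vision reports pitch in radians; convert to degrees before handing it
        // to something like the detectNodWithPitch: method shown below.
        float pitchDegrees = face.pitch.floatValue * (180.0f / M_PI);
        [self detectNodWithPitch:pitchDegrees];
    }
}
```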
Despite this being "easier" than some of the alternatives, it is nonetheless an API I have zero experience with, and the implementation was built with code I would have no idea how to write myself - although, once written, I can get the gist. Here is the "detectNodWithPitch" function as an example. That's how a "nod" is detected: the pitch of the face is determined, and a change in pitch is what gets treated as a nod - which, of course, is not entirely straightforward.
```
- (void)detectNodWithPitch:(float)pitch {
    // Get sensitivity-adjusted threshold
    // At sensitivity 0: threshold = kMaxThreshold degrees (requires strong nod)
    // At sensitivity 1: threshold = kMaxThreshold - kThresholdRange degrees (very sensitive)
    float sens = _cppOwner->getSensitivity();
    float threshold = NodDetectionConstants::kMaxThreshold -
                      (sens * NodDetectionConstants::kThresholdRange);
}
@end
```
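The excerpt stops at the threshold calculation, so to make the "change of pitch" idea concrete, here is a rough, hypothetical sketch of how a pitch-change nod detector in this style could work. The baseline smoothing, the state variables, and the nodDetected callback are all my own illustration, not the code Claude generated:

```
// Hypothetical sketch of pitch-change nod detection (not the app's code).
// _baselinePitch and _nodInProgress are assumed instance variables; nodDetected
// is an assumed callback that fires once per completed nod.
- (void)sketchDetectNodWithPitch:(float)pitch threshold:(float)threshold {
    // Track a slowly-moving baseline so gradual head movement is ignored.
    _baselinePitch = 0.95f * _baselinePitch + 0.05f * pitch;

    // The sign convention depends on how pitch is reported; here a positive
    // delta is treated as the head tipping down relative to the baseline.
    float delta = _baselinePitch - pitch;

    if (!_nodInProgress && delta > threshold) {
        // The head has dipped past the sensitivity-adjusted threshold: a nod has started.
        _nodInProgress = YES;
    } else if (_nodInProgress && delta < threshold * 0.5f) {
        // The head has come most of the way back up: count one nod.
        _nodInProgress = NO;
        [self nodDetected];
    }
}
```

The half-threshold on the way back up is a small bit of hysteresis, so one downward dip doesn't get counted as several nods; in practice you'd probably also want a timing window so a slow lean doesn't register at all.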