Training consists of feeding in thousands of pairs of {sparse sample of an actual image, the actual image}. The model's parameters are adjusted until the total difference between the output image and the actual image is minimized across all training images.
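As a rough sketch of that training loop (a toy linear model on random stand-in "images", not the actual reconstruction network, which the talk does not specify), the shape is: sparsely sample each image, run the model on the sample, and take gradient steps to shrink the total output-vs-truth difference across the whole training set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: 50 tiny flattened "images" of 64 pixels each.
n_pix = 64
images = rng.random((50, n_pix))
masks = rng.random(images.shape) < 0.3       # sparse sampling pattern
samples = images * masks                     # the sparse samples the model sees

# Toy linear model; a real pipeline would use a deep network.
W = np.zeros((n_pix, n_pix))
lr = 0.1
losses = []
for step in range(500):
    out = samples @ W.T                      # model output for every training image
    resid = out - images                     # output minus actual image
    losses.append(0.5 * (resid ** 2).sum(axis=1).mean())
    W -= lr * resid.T @ samples / len(images)  # gradient step on the total loss
```

The loop is the point here: the loss summed over all training pairs goes down as the model is adjusted, which is all the quoted description commits to.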
The output images are approximately the same because the model is "looking" at training images at a lower level than we do. The talk says they chop the images up into small pieces, so the model never "sees" the full shapes in the full images; it only sees small local features. I guess it turns out that these smaller pieces are pretty generic, in that they are common between images of black holes and everything else. The curve of an elephant trunk looks similar to the curve of an event horizon if you cut out a small enough piece.
Perhaps if they didn't do this step, the model would be more sensitive to the specific images it's trained on.
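The chopping-into-pieces step can be sketched like this (patch size and stride are my own illustrative choices; the talk doesn't give numbers): the full image is cut into small squares, and only those squares reach the model, so any large-scale shape is invisible to it.

```python
import numpy as np

def extract_patches(image, size=8, stride=8):
    """Chop a 2-D image into small square pieces (non-overlapping here)."""
    h, w = image.shape
    return np.array([
        image[i:i + size, j:j + size]
        for i in range(0, h - size + 1, stride)
        for j in range(0, w - size + 1, stride)
    ])

image = np.zeros((32, 32))      # stand-in for a full training image
patches = extract_patches(image)
# A 32x32 image becomes 16 patches of 8x8; the model trains on these
# pieces and never on the full 32x32 shape.
```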
Katie said [1]: "What you can do is use methods where you do not need any calibration whatsoever and you can still get pretty good results. So here, at the top, is the truth image, and this is simulated data as we are increasing the amount of amplitude error, and you can see here ... it's hard to see ... but it breaks down once you add too much gain. But if we use just closure quantities, we are invariant to that. That has really been a huge step for the project, because we had such bad gains."
[1] https://youtu.be/UGL_OL3OrCE?t=1177
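The gain-invariance she describes is easy to verify numerically for the closure phase, one of the closure quantities. Each measured visibility on a baseline between stations i and j is the true visibility corrupted by the two stations' complex gains, V_ij_meas = g_i · conj(g_j) · V_ij. Around a triangle of stations the gains cancel in the triple product, so the phase of that product is immune to station gain errors. A minimal numerical check (toy values, not EHT data):

```python
import numpy as np

rng = np.random.default_rng(1)

# True visibilities on a triangle of baselines (1,2), (2,3), (3,1).
V12, V23, V31 = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))

# Unknown complex station gains corrupt each baseline measurement:
# V_ij_measured = g_i * conj(g_j) * V_ij
g = rng.uniform(0.5, 1.5, 3) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
M12 = g[0] * np.conj(g[1]) * V12
M23 = g[1] * np.conj(g[2]) * V23
M31 = g[2] * np.conj(g[0]) * V31

# In the triple product the gains collapse to |g0 g1 g2|^2, a positive
# real number, so the closure phase is unchanged by the gain errors.
true_closure = np.angle(V12 * V23 * V31)
meas_closure = np.angle(M12 * M23 * M31)
```

Closure amplitudes play the analogous role for the amplitude errors she mentions: ratios of visibility amplitudes over four stations in which the gain amplitudes cancel the same way.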