A better video explaining the algorithm that was used: Katie Bouman, “Imaging a Black Hole with the Event Horizon Telescope”. She covers a lot of material that I learned at university and on the job.
It seems that they generate an image from a model and then check whether it fits the data, because the data itself is too noisy to work with directly.
In computer vision this is a common method for finding known objects and their variants, but it can give false positives and it oversimplifies the resulting model.
It can only generate simulated images.
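To make concrete what I mean by generating an image from a model and checking it against the data: below is a minimal sketch of that kind of forward-model check, fitting a simple thin-ring model to noisy simulated visibilities by scanning a chi-square. This is not their pipeline; the ring model, the made-up (u, v) coverage, and the noise level are my own assumptions, just to show the shape of the procedure.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)

# Sparse (u, v) sampling distances, standing in for the Fourier-plane
# coverage of a real array (made-up values, units of 1/radian).
uv_dist = rng.uniform(1e9, 8e9, size=200)

def ring_visibility(radius_rad, uv_dist):
    """Visibility amplitude of an infinitesimally thin, unit-flux ring."""
    return j0(2.0 * np.pi * radius_rad * uv_dist)

# "Measured" data: a ring of ~20 microarcseconds radius plus noise.
true_radius = 20e-6 / 206265.0          # microarcsec -> radians
sigma = 0.05
data = ring_visibility(true_radius, uv_dist) + rng.normal(0.0, sigma, uv_dist.size)

# Forward-model check: try many candidate radii, keep the one whose
# predicted visibilities best match the noisy data (chi-square).
candidates = np.linspace(5e-6, 40e-6, 400) / 206265.0
chi2 = [np.sum((data - ring_visibility(r, uv_dist))**2) / sigma**2 for r in candidates]
best = candidates[int(np.argmin(chi2))]
print(f"best-fit ring radius: {best * 206265.0 * 1e6:.1f} microarcsec")
```

The weakness, and my point: you only ever learn which of your own candidate models fits least badly, not whether the model family was the right one to begin with.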
They then averaged the images from the different teams together.
We could bring in our own model and it would be just as good as theirs!
So it is the smoking Snoop model, chosen by a democratic majority.
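For what it is worth, the averaging step itself is trivial; a minimal sketch, assuming the team images are already aligned on the same pixel grid (hypothetical arrays, not their data):

```python
import numpy as np

# Hypothetical reconstructions from four independent teams,
# already registered onto the same 64x64 pixel grid.
rng = np.random.default_rng(1)
team_images = [rng.random((64, 64)) for _ in range(4)]

# The combined image is just the pixel-wise mean of the team images.
combined = np.mean(np.stack(team_images), axis=0)
print(combined.shape)  # (64, 64)
```

Averaging smooths away the differences between the teams, but it cannot tell you whether all of them shared the same systematic bias.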
They also use machine learning, which is known to give false positives. I am not sure why or how they used it; it seems unnecessary.
Their method also seems unable to detect systematic errors (artefacts) or to filter out errors at extreme magnification. I did not see any extreme tests.
Strangely, I do not see her mention gradual steps of magnification, which are necessary for calibration and feedback, and which would also show the plasma beam. As it stands, it is a one-shot result.
They essentially guessed which model seemed the best out of many alternatives, without considering any artefacts.
To be fair, in some images I can almost see the positions of the radio telescopes on Earth (a bokeh artefact). But in another I see a five- or six-sided shape (an antenna artefact?). Correcting for the first would show a star with a tail, I believe.
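To illustrate the bokeh point: with only a handful of stations, the effective point-spread function (the "dirty beam") carries the imprint of the array geometry, so the station layout can leak into the reconstructed image. A minimal sketch with made-up station coordinates and arbitrary units (not the real EHT sites):

```python
import numpy as np

# Made-up 2D station positions in kilometres (NOT the real EHT sites).
stations = np.array([[0.0, 0.0], [3000.0, 1000.0], [-2500.0, 4000.0],
                     [1500.0, -3500.0], [-4000.0, -1000.0]])

# Every pair of stations gives one baseline, i.e. one sample in the (u, v) plane.
baselines = np.array([stations[i] - stations[j]
                      for i in range(len(stations))
                      for j in range(i + 1, len(stations))])

# Dirty beam: response of the sparse (u, v) sampling to a point source.
# With so few samples it is far from a clean point; the array geometry
# shows up directly as sidelobe structure ("bokeh").
grid = np.linspace(-1.0, 1.0, 256)          # arbitrary sky coordinates
l, m = np.meshgrid(grid, grid)
beam = np.zeros_like(l)
scale = 2.0 * np.pi / 1000.0                # arbitrary scaling of the baselines
for u, v in baselines:
    beam += np.cos(scale * (u * l + v * m))
beam /= baselines.shape[0]                  # peak of 1.0 at the centre

print(f"{baselines.shape[0]} baselines; beam peak {beam.max():.2f}, "
      f"worst off-centre response {np.abs(beam[l**2 + m**2 > 0.1]).max():.2f}")
```

Deconvolving that beam out of the image is exactly the step where artefacts like the ones I am describing can be introduced or overlooked.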
Of course, rings can still work in EU, or a rotating plasma ball. The shape can also differ per frequency.
In this discussion I am criticizing their CSI-enhance technology. They probably want to use it everywhere.