>>11054762
>compare different algos
Still no error estimation.
>training data split
Still training data, not real-world data. The learned algorithm can and will fail; it has happened before and will happen again. Imagine training an algo to optimize drug administration for heart disease patients. It works 99.99999% of the time, but in a few cases it will produce a plan that plainly kills the patient. There's no way of quantifying this probability, and hence no way of analyzing or even mitigating that risk.
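Quick sketch of why (numpy, numbers made up): with a true catastrophic failure rate of 1e-7, even a generous held-out set observes zero failures, so the estimate reads "perfect" and says nothing about the tail.

import numpy as np

rng = np.random.default_rng(0)
p_fail = 1e-7                  # true (unknown) rate of the killer plan
n_test = 100_000               # generous held-out split

failures = rng.binomial(n_test, p_fail)
print("observed failures:", failures)        # almost surely 0
print("estimated risk:", failures / n_test)  # 0.0 -> looks "safe"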
>get a solution
Sure, that's what I said. You always get a solution, but you have no means of verification. Splitting off test data is nice and all, but as above, not all your applications will be interpolation.
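Toy numpy sketch (a polynomial regressor standing in for any learned model): a near-perfect held-out score inside the training range tells you nothing once you leave it.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 150)
x_test  = rng.uniform(0, 2 * np.pi, 50)   # held-out, same range

# Degree-7 polynomial as a stand-in for any learned regressor.
model = np.poly1d(np.polyfit(x_train, np.sin(x_train), deg=7))

def rmse(x):
    return np.sqrt(np.mean((model(x) - np.sin(x)) ** 2))

print("held-out RMSE (interpolation):", rmse(x_test))  # tiny

# Same model evaluated just outside the training range.
x_out = np.linspace(2 * np.pi, 3 * np.pi, 50)
print("RMSE (extrapolation):", rmse(x_out))            # blows up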
>hitting local minima
No, mode collapse (call it reward hacking if you like) is the algorithm finding trivial solutions the developer didn't think of. An example is a spider robot that was to be trained to walk from point A to point B while minimizing foot-to-floor contact. What the algorithm did was stumble at the first step, then let the spider walk on its "knees" for zero foot contact.
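Toy version of that spider story (all policy names and numbers invented for illustration): the optimizer only sees the written objective, never the intent.

policies = {
    "walk normally":           {"reaches_B": True, "foot_contact": 120},
    "tip-toe":                 {"reaches_B": True, "foot_contact": 40},
    "fall and crawl on knees": {"reaches_B": True, "foot_contact": 0},
}

def objective(stats):
    # Written spec: minimize foot contact, subject to reaching B.
    # The developer's actual intent ("walk efficiently") is invisible.
    return stats["foot_contact"] if stats["reaches_B"] else float("inf")

best = min(policies, key=lambda name: objective(policies[name]))
print(best)  # -> "fall and crawl on knees"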
>a priori error correction
Ah okay. It means that even before running an algorithm you can give an upper bound on the error it will make. If you know Taylor series: they can approximate functions, and you can also calculate the maximum error such an approximation will make from the remainder term, i.e. from the terms in the series you didn't use (times some constant). This is not possible at all for ML algorithms, unless you already know the function you'd like to approximate.
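Concrete example with exp(x): the Lagrange remainder gives you the bound before you evaluate anything.

import math

# Degree-n Taylor polynomial of exp around 0, evaluated at x.
# Lagrange remainder: error <= e^x * x^(n+1) / (n+1)!
# -- computable in advance, no model run needed.
x, n = 1.0, 5
a_priori_bound = math.e ** x * x ** (n + 1) / math.factorial(n + 1)

taylor = sum(x ** k / math.factorial(k) for k in range(n + 1))
actual_error = abs(math.exp(x) - taylor)

print(f"a priori bound: {a_priori_bound:.2e}")  # ~3.8e-03
print(f"actual error:   {actual_error:.2e}")    # ~1.6e-03, below bound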
Sorry for the wall of text.