With NetTrain[], if you use the option ValidationSet -> Scaled[.1], it will run a validation test on 10% of the training data after each epoch. For example,
(* noisy samples of Exp[-x^2], given as input -> target rules *)
data = RandomSample@Table[x -> Exp[-x^2] + RandomVariate[NormalDistribution[0, .15]], {x, -3, 3, .2}];
(* small scalar-to-scalar regression network *)
net = NetChain[{150, Tanh, 150, Tanh, 1}, "Input" -> "Scalar", "Output" -> "Scalar"];
(* hold out a random 10% of the data for validation *)
trained = NetTrain[net, data, ValidationSet -> Scaled[.1]];
My question is how to get more information about the validation tests performed, i.e. which examples were held out (their indices) and what the resulting metrics were (precision, recall, ...). And are there any (undocumented) suboptions that control how the split/shuffling is chosen for each validation test?
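One partial workaround is to choose the split myself and pass an explicit ValidationSet (explicit data is a documented form of that option), so the held-out indices are at least known up front. A minimal sketch:

(* pick 10% of the indices explicitly so the validation set is known *)
n = Length[data];
valIdx = RandomSample[Range[n], Ceiling[.1 n]];
trainIdx = Complement[Range[n], valIdx];
trained = NetTrain[net, data[[trainIdx]], ValidationSet -> data[[valIdx]]];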
Follow-up: is there any mechanism for breaking out of the training loop if the validation results start to plateau?
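The closest mechanism I have found is TrainingProgressFunction, whose callback can return "StopTraining" to end training early. Here is a sketch of the plateau check I have in mind; I am assuming the callback's association exposes a "ValidationLoss" key and that the {f, "Interval" -> "Round"} form is accepted:

(* sketch: stop once the validation loss has not improved for 10 rounds *)
best = Infinity; stall = 0;
stopIfPlateaued[info_] := Module[{loss = info["ValidationLoss"]},
  If[NumberQ[loss], If[loss < best - 10^-4, best = loss; stall = 0, stall++]];
  If[stall >= 10, "StopTraining"]];
trained = NetTrain[net, data, ValidationSet -> Scaled[.1],
  TrainingProgressFunction -> {stopIfPlateaued, "Interval" -> "Round"}];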
Answer
I'll add to our ToDo list the idea of having the validation indices be available as a property of NetTrain.