After some testing, I still can't train with two GPUs on Windows 10, but I can train with Mathematica 11.3 on Ubuntu 16.04 with two GTX 1080 graphics cards.
The problem is that if I use two cards to train, the model does not train well, as the loss curves in the pictures below show. This is not caused by the BatchSize setting: I have tested BatchSize values from 50 to 250 with TargetDevice -> {"GPU", All}.
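For reference, the batch-size sweep I ran looked roughly like this (a minimal sketch, not my exact code; the "LossEvolutionPlot" property is from the NetTrain documentation, and the specific batch-size list is illustrative):

(* train on both GPUs for each BatchSize, collecting the loss curve *)
lossPlots = Table[
   NetTrain[NetModel["LeNet"], trainingData, "LossEvolutionPlot",
    ValidationSet -> testData,
    BatchSize -> bs,
    TargetDevice -> {"GPU", All}],
   {bs, {50, 100, 150, 200, 250}}]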
Is this a bug?
You may try the MNIST example:

NetTrain[NetModel["LeNet"], trainingData, ValidationSet -> testData, TargetDevice -> {"GPU", All}]
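Here trainingData and testData are the MNIST train/test splits; one way to obtain them is from the Wolfram Data Repository (my assumption about the data source, not part of the example above):

trainingData = ResourceData["MNIST", "TrainingData"]; (* assumed source: Wolfram Data Repository *)
testData = ResourceData["MNIST", "TestData"];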