Given the exasperating RAM needs of my Mathematica program, I contemplated launching remote kernels to run my Mathematica script. This is a two-part question:
Part 1
If I launch one or more remote kernels, is the need for RAM also distributed among the remote computers?
Part 2
I keep getting an error message when I try to launch a remote kernel from the Mathematica GUI.
The kernel c20-0707-23 failed to connect to the front end. (Error = MLECONNECT). You should try running the kernel connection outside the front end.
I have pointed the Kernel Program setting at the correct location on the remote computer. When I try starting a kernel over ssh in a terminal window with:
ssh username@remote.computer /path/to/math
Or
ssh username@remote.computer /path/to/MathKernel
After I enter the password to log in to the remote computer, I see that a kernel has been launched.
However, the same doesn't occur with the front end.
Are there any easy-to-understand tutorials/examples on launching remote kernels without needing the GUI, since I'd like to run scripts? I tried reading this, for instance, and the only thing it did was make me panic, as it was Greek to me. I also read this, but it's not an entirely similar problem.
Here's a sample script that I used to launch local kernels and execute a simple NDSolve command:
#!/usr/local/bin/MathematicaScript -script
CloseKernels[]
LaunchKernels[3]
(* note the comma after the string: without it the string is multiplied
   by the timing result instead of being printed alongside it; also,
   NDSolve itself runs on the master kernel, so the subkernels sit idle *)
Print["AbsoluteTiming reveals: ",
  NDSolve[{y'[x] == y[x] Cos[x + y[x]], y[0] == 1},
      y, {x, 0, 30}]; // AbsoluteTiming]
CloseKernels[]
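In case it helps anyone with the same problem, here is what I have pieced together so far for a GUI-free remote launch, using the SubKernels`RemoteKernels` package that ships with Mathematica. Treat it as a sketch: remote.computer, the username slot, and /path/to/MathKernel are placeholders, the $RemoteCommand template is adapted from the package's default, and it assumes key-based ssh login, since a script can't type a password.

#!/usr/local/bin/MathematicaScript -script
Needs["SubKernels`RemoteKernels`"]
(* ssh template used to start each remote kernel; the slots are
   `1` = host, `2` = link name, `3` = user, `4` = link protocol *)
$RemoteCommand = "ssh -x -f -l `3` `1` /path/to/MathKernel -mathlink -linkmode Connect `4` -linkname '`2`' -subkernel -noinit";
LaunchKernels[RemoteMachine["remote.computer", 2]]  (* 2 kernels on that host *)
Print[ParallelEvaluate[$MachineName]]  (* confirm where the kernels actually run *)
CloseKernels[]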
This question may help answer the other question I linked in the first line.
Answer
Yes, sometimes it is distributed. I can't say for sure when and why, but I have noticed cases where ParallelMap does not distribute all the memory, while ParallelTable does.
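A quick diagnostic for your own code (my suggestion, not a guarantee of where Mathematica will put things): create the data, then compare MemoryInUse on the master against the subkernels.

LaunchKernels[3];
ParallelEvaluate[chunk = RandomReal[1, 10^6];];  (* build a large array on each subkernel and keep it there *)
MemoryInUse[]                    (* bytes held by the master kernel *)
ParallelEvaluate[MemoryInUse[]]  (* bytes held by each subkernel *)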
Basically, Mathematica is trying to make things easy: you don't always want to spell out each and every variable that should be available on a remote kernel, so it does its best to figure out what you need. Mathematica 8 gives you plenty of options for controlling this.
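When its guess is wrong, you can spell things out yourself with DistributeDefinitions (or adjust $DistributedContexts). A minimal sketch, with throwaway names f and a:

LaunchKernels[2];
a = 10;
f[x_] := x + a;
DistributeDefinitions[f, a];  (* explicitly ship both definitions to the subkernels *)
ParallelEvaluate[f[1]]        (* {11, 11}: one result per subkernel *)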
If you're trying to "aggregate" memory on other nodes, then you have to be careful to create the large data on the remote machines, so you avoid shipping the contents of the master kernel over the network to the other machines. (Note that I have run into issues in the past where there seemed to be a limit on how much memory a remote kernel itself could allocate; it couldn't allocate a gigabyte, let alone the 16 gigabytes found in machines today. I don't know whether this is still a problem.)
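To make the create-it-remotely point concrete, here are the two patterns side by side (my own illustration; the sizes are arbitrary):

LaunchKernels[4];
(* transfers data: the big list is built on the master kernel first,
   and its pieces are shipped to the subkernels over the network *)
big = RandomReal[1, 10^7];
ParallelMap[Total, Partition[big, 10^6]];
(* keeps data remote: each chunk is generated on a subkernel, and only
   the small Total results travel back to the master *)
ParallelTable[Total[RandomReal[1, 10^6]], {chunk, 10}]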