
parallelization - Mathematica Parallelization on HPC


I'm currently using Mathematica on a High Performance Computing (HPC) cluster which consists of many compute nodes, each with around 16 cores. I currently run my Mathematica script on 20 of the nodes, each invoking 10 cores and 10 subkernel licenses in parallel, meaning I use 20 MathKernel licenses and 200 subkernel licenses.


The problem is that we have limited MathKernel licenses (36, so for me to be using 20 of them is unfair to everyone else!) but ample subkernel licenses (288). Is there a way I can use just a single (or at least fewer) MathKernel license to invoke the 200 subkernels I need?


Currently in each of the 20 scripts I just have


LaunchKernels[10];
ParallelTable[....];

which launches the 10 local subkernels on each node. But could I perhaps specify different nodes on which to launch subkernels? That way I would only need to launch one MathKernel, which could invoke the 200 subkernels spread across the compute nodes.




Answer



To launch subkernels across several nodes on an HPC cluster, you need to do the following:



  1. Figure out how to request several compute nodes for the same job

  2. Find the names of the nodes that have been allocated for your job

  3. Find out how to launch subkernels on these nodes from within the main kernel
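
For step 3, a minimal sketch of what launching subkernels on named nodes looks like in Mathematica, using the standard SubKernels`RemoteKernels` package. The node names and kernel counts here are placeholders for whatever your grid engine allocated; the underlying ssh setup still depends on your cluster:

```mathematica
Needs["SubKernels`RemoteKernels`"]

(* Launch 10 subkernels on each of two hypothetical allocated nodes.
   RemoteMachine launches kernels via the $RemoteCommand template
   (ssh-based by default). *)
kernels = LaunchKernels[{RemoteMachine["node017", 10],
                         RemoteMachine["node023", 10]}];

(* The usual parallel functions now distribute work over all of them. *)
ParallelTable[$MachineName, {20}]
```

With all subkernels launched from one main kernel like this, only a single MathKernel license is consumed.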


All of these depend on the grid engine your cluster is using, as well as on your local setup, so you'll need to check its documentation and ask your administrator about the details. I have an example for our local setup (complete with a jobfile), which might be helpful for you to study:


https://bitbucket.org/szhorvat/crc/src


Our cluster uses the Sun Grid Engine. The names of the nodes (and information about them) are listed in a "hostfile" which you can find by retrieving the value of the PE_HOSTFILE environment variable. (I think this works the same way with PBS, except the environment variable is called something else.)
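
As a sketch, under SGE the hostfile typically has one line per allocated node, of the form hostname slots queue processor-range. Reading it from the main kernel might look like this (the exact column layout can vary with your setup, so check a real hostfile first):

```mathematica
(* Locate the SGE hostfile; each line is typically
   "<hostname> <slots> <queue> <processor-range>". *)
hostfile = Environment["PE_HOSTFILE"];

lines = Import[hostfile, "Lines"];

(* Extract {hostname, slot count} pairs, e.g. {{"node017", 10}, ...} *)
hosts = {#[[1]], ToExpression[#[[2]]]} & /@ StringSplit[lines]
```

The resulting host/slot pairs are exactly what you need to decide how many subkernels to launch on each node.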



Note that if you request multiple nodes in a single job file, the job script will be run on only one of the nodes, and you'll be launching the processes across all nodes manually (at least on SGE and PBS).


Launching processes on different nodes is usually possible with ssh: just run ssh nodename command to run command on nodename. You may also need to set up passwordless (key-based) authentication if it is not set up by default. To launch subkernels, you'll need to pass the -f option to ssh so that it returns immediately after it has launched the remote process.
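
In Mathematica, the command used to start remote kernels is controlled by the $RemoteCommand template in SubKernels`RemoteKernels`, so the ssh details go there. A hedged sketch follows; the kernel executable name and MathLink options below are assumptions, so inspect the default value of $RemoteCommand on your own installation before changing it:

```mathematica
Needs["SubKernels`RemoteKernels`"]

(* Inspect the default template first. In the template, `1` is the
   remote hostname, `2` the MathLink linkname, `3` the user name,
   and `4` the link-protocol options. *)
$RemoteCommand

(* A possible ssh-based override, using -f so that ssh returns
   immediately after starting the remote kernel (kernel command
   "math" and its options are assumptions for your install): *)
$RemoteCommand =
  "ssh -x -f -l `3` `1` math -mathlink -linkmode Connect `4` \
-linkname '`2`' -subkernel -noinit";
```

Once the template matches your cluster, LaunchKernels[RemoteMachine[...]] uses it transparently.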


Some setups use rsh instead of ssh. To launch a command in the background using rsh, you'll need to do


rsh -n nodename "command >& /dev/null &"

To run the remote process in the background, it is important to redirect the output (both stdout and stderr) because of a bug in rsh (also described in its man page) that won't let it return immediately otherwise.


Another thing to keep in mind about rsh is that you can't rsh to the local machine, so any subkernels that will run on the same machine as the main kernel must be launched without rsh.
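
Putting the rsh caveats together, here is a sketch of a $RemoteCommand for rsh-based setups, with -n and output redirection so rsh detaches cleanly. As above, the kernel command and options are assumptions for your installation:

```mathematica
Needs["SubKernels`RemoteKernels`"]

(* rsh needs -n plus redirection of stdout/stderr to return
   immediately, per the bug described above. *)
$RemoteCommand =
  "rsh `1` -n \"math -mathlink -linkmode Connect `4` \
-linkname '`2`' -subkernel -noinit >& /dev/null &\"";

(* Do NOT use rsh for kernels on the main kernel's own node;
   launch those locally instead, e.g. LaunchKernels[10]. *)
```

Combining local launching for the main kernel's node with rsh (or ssh) launching for the rest covers all allocated nodes.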


See my example for details.

