
differential equations - Improving NDSolve speed for heavily stiff problems


Having looked around the intergoogles and Mathematica.SE, I thought I'd pose a question with a minimum working example.



Here is the situation I am trying to improve:



  1. I am solving a 4th-order nonlinear PDE with NDSolve.

  2. It is stiff, so I use a stiff solver such as BDF or LSODA.

  3. On occasion, I have no choice but to tighten MaxStepFraction to uncomfortably small values.

  4. As a result, the code runs longer than usual (made worse by the fact that it is a stiff equation to begin with).


Is there any way I could improve NDSolve performance/speed?


Here is my minimum example:




$HistoryLength = 0;
Needs["VectorAnalysis`"]
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
Clear[Eq0, EvapThickFilm, h, Bo, \[Epsilon], K1, \[Delta], Bi, m, r]
Eq0[h_, {Bo_, \[Epsilon]_, K1_, \[Delta]_, Bi_, m_, r_}] :=
  D[h, t] +
   Div[-h^3 Bo Grad[h] + h^3 Grad[Laplacian[h]] +
     (\[Delta] h^3)/(Bi h + K1)^3 Grad[h] +
     m (h/(K1 + Bi h))^2 Grad[h]] +
   \[Epsilon]/(Bi h + K1) +
   r D[D[h^2/(K1 + Bi h), x] h^3, x] == 0;

SetCoordinates[Cartesian[x, y, z]];
EvapThickFilm[Bo_, \[Epsilon]_, K1_, \[Delta]_, Bi_, m_, r_] :=
Eq0[h[x, y, t], {Bo, \[Epsilon], K1, \[Delta], Bi, m, r}];
TraditionalForm[EvapThickFilm[Bo, \[Epsilon], K1, \[Delta], Bi, m, r]];
L = 2*92.389;

TMax = 3100*100;
Off[NDSolve::mxsst];
Clear[Kvar];
Kvar[t_] := Piecewise[{{1, t <= 1}, {2, t > 1}}]

(*Ktemp = Array[0.001+0.001#^2&,13]*)
hSol = h /. NDSolve[{
      (* Bo, \[Epsilon], K1, \[Delta], Bi, m, r *)
      EvapThickFilm[0.003, 0, 1, 0, 1, 0.025, 0],
      h[0, y, t] == h[L, y, t],
      h[x, 0, t] == h[x, L, t],
      (* h[x, y, 0] == 1.1 + Cos[x] Sin[2 y] *)
      h[x, y, 0] ==
       1 + (-0.25 Cos[2 \[Pi] x/L] - 0.25 Sin[2 \[Pi] x/L]) Cos[2 \[Pi] y/L]
      },
     h,
     {x, 0, L},
     {y, 0, L},
     {t, 0, TMax},
     Method -> {"BDF", "MaxDifferenceOrder" -> 1},
     MaxStepFraction -> 1/50
     ][[1]] // AbsoluteTiming


A BDF solver limited to order 1 needs about 41 seconds to integrate the equation until failure, while LSODA, allowed up to order 12, does a fantastic job of cutting that down to 18 seconds.
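For concreteness, the LSODA run is the same NDSolve call as above with only the Method option changed (a sketch; LSODA chooses its own order, switching between an Adams scheme of up to order 12 and BDF):

(* Same call as above; only the Method option differs.
   LSODA picks its order itself, up to 12 for the non-stiff (Adams) part. *)
hSolLSODA = h /. NDSolve[{
      EvapThickFilm[0.003, 0, 1, 0, 1, 0.025, 0],
      h[0, y, t] == h[L, y, t],
      h[x, 0, t] == h[x, L, t],
      h[x, y, 0] ==
       1 + (-0.25 Cos[2 \[Pi] x/L] - 0.25 Sin[2 \[Pi] x/L]) Cos[2 \[Pi] y/L]},
     h, {x, 0, L}, {y, 0, L}, {t, 0, TMax},
     Method -> "LSODA",
     MaxStepFraction -> 1/50][[1]] // AbsoluteTiming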




Now, when I tighten MaxStepFraction, I obviously increase the grid density. I am currently running a case with several thousand grid points that has been going for 24+ hours (yes, hours) and still hasn't produced a solution. This was expected: I have previously run cases with a coarser grid that took about 3-4 hours and hogged the RAM (about 3-4 GB, mostly because I am exporting data as .MAT files).


What suggestions could be provided to improve the speed for such a stiff equation?


I have also tried stopping tests, and they don't always help: I'd rather Mathematica end the integration naturally as a result of the overbearing stiffness than artificially through a stopping test (the former has physical significance).


Yes, this question bears some resemblance to this one, but I don't think it's the same.


I have given Parallelize a thought, but it doesn't work on NDSolve. Are there any options I have, either on the Mathematica front, the computing front, or for saving the interpolating function data?
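On the saving front, the kind of export I mean looks roughly like this (a sketch using the InterpolatingFunctionAnatomy functions loaded above; the file name is just a placeholder, and whether dumping the raw grid values is the smartest format is exactly part of my question):

(* Sketch: pull the raw grid and the values on it out of the
   InterpolatingFunction returned above and write them out.
   "hSolData.mat" is a placeholder name. *)
{xGrid, yGrid, tGrid} = InterpolatingFunctionCoordinates[hSol];
hValues = InterpolatingFunctionValuesOnGrid[hSol];
Export["hSolData.mat", hValues];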



Edit:



Using LaunchKernels[n] just before the NDSolve cell didn't do much; my AbsoluteTiming barely even changed.


CloseKernels[];
LaunchKernels[3];
L = 2*92.389; TMax = 3100*100;
.........
......

Edit 2:


By using Table and launching up to 6 kernels, these are the results that I got:




{{1, {19.454883, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}},
 {2, {19.162008, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}},
 {3, {18.919101, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}},
 {4, {20.166785, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}},
 {5, {20.265163, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}},
 {6, {20.556365, InterpolatingFunction[{{0., 184.778}, {0., 184.778}, {0., 282761.}}, <>]}}}



So with more kernels, the performance actually degraded....? Wha...?
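For the record, a single NDSolve call only ever runs on one kernel, so the only way I can see to put the extra kernels to work is to farm out completely independent runs, e.g. a parameter sweep, roughly like this (a sketch; the sweep over m values is hypothetical and just for illustration):

(* Sketch: extra kernels only pay off for independent runs, e.g. sweeping a
   parameter.  The list of m values below is hypothetical. *)
LaunchKernels[6];
ParallelNeeds["VectorAnalysis`"];
ParallelEvaluate[SetCoordinates[Cartesian[x, y, z]]];
DistributeDefinitions[Eq0, EvapThickFilm, L, TMax];
sweep = ParallelTable[
   {mVal, AbsoluteTiming[
      h /. First@NDSolve[{
          EvapThickFilm[0.003, 0, 1, 0, 1, mVal, 0],
          h[0, y, t] == h[L, y, t],
          h[x, 0, t] == h[x, L, t],
          h[x, y, 0] ==
           1 + (-0.25 Cos[2 \[Pi] x/L] - 0.25 Sin[2 \[Pi] x/L]) Cos[2 \[Pi] y/L]},
         h, {x, 0, L}, {y, 0, L}, {t, 0, TMax},
         Method -> {"BDF", "MaxDifferenceOrder" -> 1},
         MaxStepFraction -> 1/50]]},
   {mVal, {0.02, 0.025, 0.03}}];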



Answer



Yes, it is stiff -- but the main issue that I see is that the solution goes wild near the TMax that you specify. That's because you need a super-fine spatial grid to accurately represent what happens when the higher-order terms finally manifest themselves. It's going to take a lot of time and a lot of memory (you can force such a grid with the MinPoints option), and there's no way around it.
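If you want to force that finer grid explicitly rather than indirectly through MaxStepFraction, the spatial discretization options look roughly like this (a sketch to drop in place of the Method option in your call; the point counts and difference order are placeholders, not a recommendation):

(* Sketch: force a minimum spatial grid explicitly.  The numbers are
   placeholders; the time integrator is still given as the sub-Method. *)
Method -> {"MethodOfLines",
  "SpatialDiscretization" -> {"TensorProductGrid",
    "MinPoints" -> 200, "MaxPoints" -> 400, "DifferenceOrder" -> 4},
  Method -> {"BDF", "MaxDifferenceOrder" -> 1}}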

