
differential equations - NDSolve runs out of memory


I need to solve a second order ODE numerically. The ODE depends on two parameters (a, b). Things work fine when 'a' is small, but for large 'a' the solutions oscillate rapidly and Mathematica either takes a very long time to solve or eventually just runs out of memory.


I need to integrate over quite a large range (out to 10,000), and that is part of the problem, but I actually only need the value of the InterpolatingFunction produced at the end point. Is there a way to tell Mathematica that I just want this last point, rather than storing the rest of the (very large) InterpolatingFunction in memory? That is: integrate a certain distance, take the values there as initial conditions for the next leg, integrate to the next point, and so on.


Or is there some other strategy for using NDSolve on such highly oscillatory solutions?
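
To make the "integrate in legs" idea above concrete, here is a rough sketch on a toy equation (y''[x] + y[x] == 0 rather than my actual ODE), written as a first-order system so that both the value and the derivative at each leg endpoint come out as plain numbers; legStep and legs are just illustrative names:

legStep[{x0_, x1_}, {y0_, dy0_}] := Module[{sol, y1, y2},
   (* y'' + y == 0 as a first-order system; only the endpoint x1 is stored *)
   sol = First@NDSolve[{y1'[x] == y2[x], y2'[x] == -y1[x],
       y1[x0] == y0, y2[x0] == dy0}, {y1, y2}, {x, x1, x1}];
   {y1[x1], y2[x1]} /. sol];

legs = Partition[Range[0, 10000, 1000], 2, 1];  (* {{0, 1000}, {1000, 2000}, ...} *)
Fold[legStep[#2, #1] &, {1., 0.}, legs]         (* {y, y'} at x = 10000 *)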


Some definitions:


M=1;
rstar[r_] := r + 2 M Log[r/(2 M) - 1];

$MinPrecision = 45;
wp = $MinPrecision;
ac = $MinPrecision - 8;
λ[l_] = l (l + 1);
rinf = 10000;
rH = 200001/100000;
nH = 200;

The ODE is:


eq[ω_, l_] := Φ''[r] + (2 (r - M))/(r (r - 2 M)) Φ'[r] + 
   ((ω^2 r^2)/(r - 2 M)^2 - λ[l]/(r (r - 2 M))) Φ[r] == 0
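
Written out explicitly, with λ(l) = l(l + 1), this is

$$\Phi''(r) + \frac{2(r-M)}{r(r-2M)}\,\Phi'(r) + \left(\frac{\omega^{2}r^{2}}{(r-2M)^{2}} - \frac{l(l+1)}{r(r-2M)}\right)\Phi(r) = 0 .$$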

Without going into detail about why I have these initial conditions, they are:


HorizonICs[l_?IntegerQ, ω_?NumericQ] := 
 Module[{ΦinrH, dΦinrH},
  Clear[b];
  (* series coefficients of the solution about the horizon r = 2 M *)
  b[0] = 1; b[-1] = 0; b[-2] = 0;
  b[n_] := b[n] = Simplify[
     1/(2 n (n - 4 I ω)) (-2 I ω b[-2 + n] + 2 I n ω b[-2 + n] + 
        l b[-1 + n] + l^2 b[-1 + n] + n b[-1 + n] - n^2 b[-1 + n] - 
        4 I ω b[-1 + n] + 8 I n ω b[-1 + n])];
  uintrunc[r_, n_] := Sum[b[i] (r - 2 M)^i, {i, 0, n}];
  ΦinrH = (Exp[-I ω rstar[rH]] uintrunc[rH, nH])/(2 M);
  dΦinrH = D[(Exp[-I ω rstar[r]] uintrunc[r, nH])/(2 M), r] /. r -> rH;
  {ΦinrH, dΦinrH} (* return the value and derivative at rH, used below *)
  ]

Solve as


ΦinExt[ωω_, l_] := ΦinExt[ωω, l] = Φ /. 
   NDSolve[{eq[ωω, l], 
      Φ[rH] == N[HorizonICs[l, ωω][[1]], wp], 
      Φ'[rH] == N[HorizonICs[l, ωω][[2]], wp]}, 
     Φ, {r, rH, rinf}, WorkingPrecision -> wp, AccuracyGoal -> ac, 
     MaxSteps -> ∞, Method -> "StiffnessSwitching"][[1]];
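
For example, with illustrative parameter values just to show the calling pattern (the definition is memoized, so repeated calls with the same arguments reuse the stored solution):

Φin = ΦinExt[1/2, 2];   (* ω = 1/2, l = 2; placeholder values *)
Φin[rinf]               (* value of the interpolating function at r = rinf *)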

You can also look for another solution, one that has the property of being simple near infinity, and set the ICs there. It is this particular solution that is really slow to compute and causes the memory crash.


Some definitions:


M = 1;
r0 = 5/2;
rstar[r_] := r + 2 M Log[r/(2 M) - 1];
$MinPrecision = 45;
wp = $MinPrecision;

ac = $MinPrecision - 8;
λ[l_] = l (l + 1);
rinf = 10000;
ninfphase = 50;

Set the initial conditions:


Infinitycs = Module[{n = ninfphase, c},
   Clear[c];
   (* expand the equation for v(r) in powers of 1/r and collect coefficients *)
   veqexp = CoefficientList[
     Series[(-2 - l r - l^2 r + 2 (r + I r^3 ω) v'[r] + 
          (-2 + r) r^2 v'[r]^2 + (-2 + r) r^2 v''[r])/r /. 
        {v'[r_] :> Sum[-i c[i]/r^(i + 1), {i, 1, n}], 
         v''[r_] :> Sum[i (i + 1) c[i]/r^(i + 2), {i, 1, n}]}, 
       {r, ∞, n - 1}], r^-1];
   (* solve for the c[i] order by order *)
   Do[c[i] = c[i] /. Simplify[Solve[veqexp[[i]] == 0, c[i]][[1]]], {i, 1, n}];
   Table[c[i], {i, 1, n}]];

InfinityICs[ll_?IntegerQ, ωω_?NumericQ] := Module[{c2},
   Do[c2[i] = Infinitycs[[i]] /. {l -> ll, ω -> ωω}, {i, 1, ninfphase}];
   vtrunc = Sum[c2[i]/r^i, {i, 1, ninfphase}];
   init = 1/r Exp[I ωω rstar[r] + vtrunc] /. r -> rinf;
   dinit = D[1/r Exp[I ωω rstar[r] + vtrunc], r] /. r -> rinf;
   Clear[c2];
   {init, dinit}]
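
For reference, the boundary data built by InfinityICs corresponds to truncating the ansatz

$$\Phi_{\text{out}}(r) \approx \frac{1}{r}\,\exp\!\Big(i\omega\, r_*(r) + \sum_{i=1}^{n}\frac{c_i}{r^{i}}\Big),$$

where n = ninfphase, r_*(r) is the rstar defined above, and the coefficients c_i are the ones determined order by order in 1/r by the Infinitycs block; init and dinit are this expression and its r-derivative evaluated at r = rinf.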

Solve it


Φout[ωω_, l_] := Φout[ωω, l] = Φ /. 
   Block[{$MaxExtraPrecision = 100}, 
     NDSolve[{eq[ωω, l], 
        Φ[rinf] == N[InfinityICs[l, ωω][[1]], wp], 
        Φ'[rinf] == N[InfinityICs[l, ωω][[2]], wp]}, 
       Φ, {r, rinf, r0}, WorkingPrecision -> wp, AccuracyGoal -> ac, 
       MaxSteps -> ∞]][[1]];

Answer




I need to integrate over quite a large range (10,000) and that is part of the problem, but I actually only need the value of the InterpolatingFunction[] produced at the end point. Is there a way to tell Mathematica I just want this last point?




One way to go about it is to have the start and end of the integration interval be identical. Consider the following:


y[5] /. First @ NDSolve[{y'[x] == y[x] Cos[x + y[x]], y[0] == 1}, y, {x, 5, 5}]
0.07731217497157500942

To see that the approach saves space, here's a comparison:


yi = y /. First@NDSolve[{y'[x] == y[x] Cos[x + y[x]], y[0] == 1}, y, {x, 5, 5}];
ByteCount[yi]
1296


yn = y /. First@NDSolve[{y'[x] == y[x] Cos[x + y[x]], y[0] == 1}, y, {x, 0, 5}];
ByteCount[yn]
3288
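
Applied to Φout from the question, the same idea would look roughly like this (an untested sketch that reuses the question's eq, InfinityICs, wp, ac, rinf and r0, drops the Block[{$MaxExtraPrecision = 100}, ...] wrapper for brevity, and keeps only the value at r0):

ΦoutEnd[ωω_, l_] := ΦoutEnd[ωω, l] = 
   Φ[r0] /. First@NDSolve[{eq[ωω, l], 
       Φ[rinf] == N[InfinityICs[l, ωω][[1]], wp], 
       Φ'[rinf] == N[InfinityICs[l, ωω][[2]], wp]}, 
      Φ, {r, r0, r0}, WorkingPrecision -> wp, AccuracyGoal -> ac, 
      MaxSteps -> ∞];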
