differential equations - Position of discontinuous coefficient influences the solution of PDE


This issue was raised in the discussion under this post about heat flux continuity, and I think it's better to start a new question to state it more clearly. Consider the following example:


Lmid = 1; L = 2; tend = 1;
m[x_] = If[x < Lmid, 1, 2];
eq1 = m[x] D[u[x, t], t] == D[u[x, t], x, x];
eq2 = D[u[x, t], t] == D[u[x, t], x, x]/m[x];


Clearly, eq1 and eq2 are mathematically the same; the only difference between them is the position of the discontinuous coefficient m[x]. Nevertheless, the solution of NDSolve is influenced by this seemingly trivial difference if "FiniteElement" is chosen as the method for "SpatialDiscretization":


opts = Method -> {"MethodOfLines", 
    "SpatialDiscretization" -> {"FiniteElement", 
      "MeshOptions" -> {"MaxCellMeasure" -> 0.01}}};

ndsolve[eq_] := NDSolveValue[{eq, u[x, 0] == Exp[x]}, u, {x, 0, L}, {t, 0, tend}, opts];

{sol1, sol2} = ndsolve /@ {eq1, eq2};
Plot[{sol1[x, tend], sol2[x, tend]}, {x, 0, L}]


(plot of sol1[x, tend] and sol2[x, tend])


Apparently sol2 is a weak solution that is only 0th-order (C0) continuous in the x direction.


Further checking shows that sol1 is 1st-order (C1) continuous in the x direction, while for sol2 it is D[sol2[x, tend]/m[x], x] that is continuous:


Plot[D[{sol1[x, tend], sol2[x, tend]/m[x]}, x] // Evaluate, {x, 0, L}]

(plot of the x-derivatives of sol1[x, tend] and sol2[x, tend]/m[x])
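
A quick numeric check of the one-sided derivatives at the interface makes the same point (dx here is just a small illustrative offset): for sol1 the two values should nearly coincide, while for sol2 the right-hand value should be roughly twice the left-hand one, consistent with D[sol2[x, tend]/m[x], x] being continuous.

dx = 10^-4;
{Derivative[1, 0][sol1][Lmid - dx, tend], Derivative[1, 0][sol1][Lmid + dx, tend]}
{Derivative[1, 0][sol2][Lmid - dx, tend], Derivative[1, 0][sol2][Lmid + dx, tend]}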


To make this post a question, I'd like to ask:

  1. Is this behavior of NDSolve intended, or is it a mistake?

  2. Is this behavior controllable? That is, can we predict what's continuous in the solution just from the form of the equation?

Answer



Here is an explanation of what happens. Let's set up the problem once more.


Lmid = 1; L = 2; tend = 1;
m[x_] = If[x < Lmid, 1, 2];
(*m[x_]=2;*)

eq1 = m[x] D[u[x, t], t] == D[u[x, t], x, x];
eq2 = D[u[x, t], t] == D[u[x, t], x, x]/m[x];
opts = Method -> {"MethodOfLines", 
    "SpatialDiscretization" -> {"FiniteElement", 
      "MeshOptions" -> {"MaxCellMeasure" -> 0.01}}};
ndsolve[eq_] := 
  NDSolveValue[{eq, u[x, 0] == Exp[x]}, u, {x, 0, L}, {t, 0, tend}, opts];

Equations 1 and 2 are mathematically the same; however, when we solve them we get different results, as shown here:



sol1 = ndsolve[eq1];
Plot[sol1[x, tend], {x, 0, L}]

(plot of sol1[x, tend])


sol2 = ndsolve[eq2];
Plot[sol2[x, tend], {x, 0, L}]

(plot of sol2[x, tend])


What happens? Let's look at how the PDE gets parsed.


ClearAll[getEquations]

getEquations[eq_] := Block[{temp},
  temp = NDSolve`ProcessEquations[{eq, u[x, 0] == Exp[x]}, 
     u, {x, 0, L}, {t, 0, tend}, opts][[1]];
  temp = temp["FiniteElementData"];
  temp = temp["PDECoefficientData"];
  (# -> temp[#]) & /@ {"DampingCoefficients", "DiffusionCoefficients", 
    "ConvectionCoefficients"}
  ]

getEquations[eq1]

{"DampingCoefficients" -> {{If[x < 1, 1, 2]}},
"DiffusionCoefficients" -> {{{{-1}}}},
"ConvectionCoefficients" -> {{{{0}}}}}

This looks good.


getEquations[eq2]
{"DampingCoefficients" -> {{1}},
"DiffusionCoefficients" -> {{{{-(1/If[x < 1, 1, 2])}}}},
"ConvectionCoefficients" -> {{{{-(If[x < 1, 0, 0]/
If[x < 1, 1, 2]^2)}}}}}


For the second equation we get a convection coefficient term. Why is that? The key is to understand that the FEM can only solve equations of this type:


$d\frac{\partial }{\partial t}u+\nabla \cdot (-c \nabla u-\alpha u+\gamma ) +\beta \cdot \nabla u+ a u -f=0$


Note that there is no coefficient in front of the $\nabla \cdot (-c \nabla u-\alpha u+\gamma)$ term. To get something like $h(x)\,\nabla \cdot (-c \nabla u-\alpha u+\gamma)$ to work, $c$ is set to $h$ and $\beta$ is adjusted to cancel the extra first-derivative term that $\nabla \cdot (-c \nabla u)$ produces when $c$ varies in space.
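
In one dimension this is just the product rule,

$\partial_x\left(h(x)\,\partial_x u\right) = h(x)\,\partial_x^2 u + h'(x)\,\partial_x u ,$

so choosing $\beta = -h'(x)$ cancels the extra first-derivative term and leaves $h(x)\,\partial_x^2 u$.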


Here is an example:


c = h[x];
β = -Div[{{h[x]}}, {x}];
Div[{{c}}.Grad[u[x], {x}], {x}] + β.Grad[u[x], {x}]
(* h[x]*Derivative[2][u][x] *)


In the case at hand that leads to:


Div[{{1/m[x]}}.Grad[u[x], {x}], {x}] - 
Div[{{1/m[x]}}, {x}] // Simplify

(* {Piecewise[{{Derivative[2][u][x]/2, x >= 1}}, Derivative[2][u][x]]} *)

But that is the same as specifying the equation directly in divergence form. In the weak formulation the flux $\frac{1}{m}\,\partial_x u$ (rather than $\partial_x u$ itself) is then the quantity that stays continuous across $x = 1$, which is why sol2 has a kink there:


 eq3 = D[u[x, t], t] == 
Inactive[
Div][{{1/If[x < 1, 1, 2]}}.Inactive[Grad][u[x, t], {x}], {x}];


sol3 = ndsolve[eq3];
Plot[sol2[x, tend] - sol3[x, tend], {x, 0, L}]

I have checked that flexPDE (another FEM tool) gives exactly the same solutions in all three cases, so this behavior is not specific to Mathematica. In principle a message could be generated, but how would one detect when to trigger it? If you have suggestions about this, let me know in the comments. I think it would also be good to add this example to the documentation, if there are no objections. I hope this clarifies the unexpected behavior a bit.
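
As for controlling which quantity stays continuous: the safest route is to state the divergence form explicitly with Inactive, just as in eq3 above, so that NDSolve does not have to guess where the coefficient belongs. A minimal sketch (eq4 and sol4 are illustrative names, not from the code above): keeping the diffusion coefficient at 1 and the material coefficient in the damping term should reproduce sol1, i.e. a solution with a continuous first derivative.

eq4 = m[x] D[u[x, t], t] == 
   Inactive[Div][{{1}}.Inactive[Grad][u[x, t], {x}], {x}];
sol4 = ndsolve[eq4];
Plot[sol1[x, tend] - sol4[x, tend], {x, 0, L}]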

