
equation solving - I can't understand why "FindRoot::nlnum:" shows up in my code


I want to solve a system of equations using FindRoot as follows:


Clear[a]; Clear[c]; n = 2;
SysEqn1 = Table[a[i] + I Sum[(PolyLog[1, E^(I (c[j] - a[i] + 0.5))] +
      PolyLog[1, E^(I (c[j] - a[i] + (2 \[Pi] - 1.2)))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.3))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.4))]), {i, n}], {j, n}];
SysEqn2 = Table[c[i] + I Sum[(PolyLog[1, E^(I (c[j] - a[i] + 0.5))] +
PolyLog[1, E^(I (c[j] - a[i] + (2 \[Pi] - 1.2)))] -
PolyLog[1, E^(I (c[j] - a[i] - 0.3))] -
PolyLog[1, E^(I (c[j] - a[i] - 0.4))]), {i, n}], {j, n}];
SysEqn = Join[SysEqn1, SysEqn2];
startingValues1 = Table[{a[i], -1 + 2 i/n}, {i, n}];
startingValues2 = Table[{c[i], -0.9 + 2 i/n}, {i, n}];

starting = Join[startingValues1, startingValues2];
FindRoot[SysEqn, starting]

The error message from the above code is as follows:


FindRoot::nlnum: "The function value {a[i]+(0. +1.\ I)\ (1.\ Log[1. +Times[<<2>>]]+1.\ Log[1. +Times[<<2>>]]-1.\ Log[1. +Times[<<2>>]]-1.\ Log[1. +Times[<<2>>]]+1.\ Log[1. +Times[<<2>>]]+1.\ Log[1. +Times[<<2>>]]-1.\ Log[1. +Times[<<2>>]]-1.\ Log[1. +Times[<<2>>]]),a[i]+(0. +1.\ I)\ (<<1>>),<<1>>,c[i]+(0. +1.\ I)\ (1.\ Log[1. +Times[<<2>>]]+<<11>>)} is not a list of numbers with dimensions {4} at {a[1],a[2],c[1],c[2]} = {0.,1.,0.1,1.1}."

For the n=2 case I have four equations, so I think the number of starting values that should be determined is also four. But for some reason my code doesn't work. I would really appreciate it if you could help me out with this problem!



Answer



First of all, there seems to be a bug:


PolyLog[1, E^(I (c[1] - a[2] + (2 π - 1.2)))]

(* -1. Log[1 - 2.71828^((0. + 1. I) (5.08319 - 1. a[2.] + c[1.]))] *)

Note that the indices have been converted from Integers to Reals. This is easily fixed with NHoldAll.
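A minimal demonstration of the attribute (a throwaway example, not part of the original code): with NHoldAll set, N no longer touches the arguments of the symbol, so integer indices survive numericization.

ClearAll[a];
SetAttributes[a, NHoldAll];
N[a[2]]
(* a[2] *)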


Second, each equation begins with a[i] or c[i], but the Table iterator in the original code is j, so i is never replaced by a value. Looking at the image posted in the original form of the question, it seems the j iterator for the Sum was misplaced in the edit.
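A toy example (hypothetical, not from the original post) shows why this mismatch produces the nlnum message: when the body of Table never mentions its own iterator, the other index stays symbolic, and FindRoot ends up receiving expressions instead of numbers.

Table[a[i] + c[i], {j, 2}]
(* {a[i] + c[i], a[i] + c[i]} *)

Applying both fixes (NHoldAll and swapping the iterators):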


ClearAll[a];
ClearAll[c];
(* keep the indices symbolic so numericization does not turn a[2] into a[2.] *)
SetAttributes[a, NHoldAll];
SetAttributes[c, NHoldAll];
n = 2;
(* Table now runs over i, Sum over j *)
SysEqn1 = Table[a[i] + I Sum[(PolyLog[1, E^(I (c[j] - a[i] + 0.5))] +
      PolyLog[1, E^(I (c[j] - a[i] + (2 π - 1.2)))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.3))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.4))]), {j, n}], {i, n}];
SysEqn2 = Table[c[i] + I Sum[(PolyLog[1, E^(I (c[j] - a[i] + 0.5))] +
      PolyLog[1, E^(I (c[j] - a[i] + (2 π - 1.2)))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.3))] -
      PolyLog[1, E^(I (c[j] - a[i] - 0.4))]), {j, n}], {i, n}];
SysEqn = Join[SysEqn1, SysEqn2];
startingValues1 = Table[{a[i], -1 + 2 i/n}, {i, n}];
startingValues2 = Table[{c[i], -0.9 + 2 i/n}, {i, n}];

starting = Join[startingValues1, startingValues2];

FindRoot[SysEqn, starting]
(*
{a[1] -> 1.8484 + 0.907529 I, a[2] -> 3.0128 + 1.14394 I,
c[1] -> 1.8484 + 0.907529 I, c[2] -> 3.0128 + 1.14394 I}
*)
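As an optional sanity check (not part of the original answer), the root can be substituted back into the system; all four residuals should come out numerically close to zero.

sol = FindRoot[SysEqn, starting];
SysEqn /. sol
(* all four residuals should be numerically close to zero *)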

