
calculus and analysis - How to take derivative of parameterized coordinate?


Suppose I have a vector in $\mathbb{R}^n$ but $n$ is not known in advance. I want to be able to write functions which operate on the components of that vector, and then I'd like to be able to take derivatives with respect to the components. As an example, consider the relation: $$\frac{\partial}{\partial x_j} \sum_i x_i$$


Under the assumption that the $x_i$ are independent, I want a call to Simplify[] to return 1. Similarly, calling Simplify[] on $\frac{\partial x_i}{\partial x_j}$ should give KroneckerDelta[i,j]. It's not clear how I should represent generic coordinates like this. I've seen this, but I'm not sure it provides an answer. As the linked post suggests, I could do this for a fixed $n$, but that's not the situation I'm working with, especially since I want to see the generic form for any $n$.
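In Mathematica syntax, what I'm hoping for is something along these lines (purely illustrative, not working code; x, i, j, and n are just placeholder symbols):

D[Sum[x[i], {i, 1, n}], x[j]]   (* would ideally give 1 *)
D[x[i], x[j]]                   (* would ideally give KroneckerDelta[i, j] *)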


For reference, it seems that SymPy lets you do something close to this.


from sympy.tensor import IndexedBase, Idx

x = IndexedBase('x')           # note: IndexedBase, not IndexBase
i, j = map(Idx, ['i', 'j'])

x[i]
# x[i]  -- the symbolic component is representable

x[i].diff(x[j])
# ValueError: Can't differentiate wrt the variable: x[j], 1

But despite being able to represent the variables abstractly, I can't seem to differentiate them.
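(As an aside I haven't verified carefully: newer SymPy releases are reported to support differentiating one Indexed object with respect to another, returning a KroneckerDelta, so the following may now work depending on your installed version.)

from sympy.tensor import IndexedBase, Idx

x = IndexedBase('x')
i, j = map(Idx, ['i', 'j'])

# Reported to return KroneckerDelta(i, j) in sufficiently recent SymPy
# versions, instead of raising the ValueError above; verify against
# your installed version.
x[i].diff(x[j])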


Here is another example. Suppose you wanted to calculate the derivative of the entropy with respect to one of the components of the input distribution (again, assuming all the variables are independent). The final form is the same no matter what $n$-simplex the distribution lives on, so you'd like to be able to do this for any dimension.


$$ \frac{\partial H}{\partial p_j} = - \frac{\partial}{\partial p_j} \sum_i p_i \log p_i = - (\log p_j + 1)$$


Problems like this come up in optimization, when you need to provide the gradient and Hessian to numerical algorithms.


Update: Here are two other related posts:



how to differentiate formally?


How to customize derivative behavior via upvalues?



Answer



I think this can be hacked more or less case by case with UpValues; in my opinion this is one of the most flexible aspects of Mathematica.


For instance, if you just want partial derivatives to interact with sums, you can define your own sum function MySum (or perhaps unprotect Sum itself, though I am not sure that is possible) and attach an UpValue to it. The idea is that only the $i = j$ term of the sum depends on x[j], so differentiating the sum reduces to substituting the summation index and differentiating that single term:


MySum /: D[MySum[s_, i_], x[j_]] := D[s /. i -> j, x[j]]

This gives the desired results for


D[MySum[x[i], i], x[j]]
(* 1 *)

-D[MySum[x[i] Log[x[i]], i], x[j]]
(* -1 - Log[x[j]] *)
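As a further check (my own addition, not part of the original answer), the rule also handles a generic summand, since once the index has been substituted, Mathematica's built-in differentiation takes over:

D[MySum[f[x[i]], i], x[j]]
(* f'[x[j]] *)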


The x here can also be replaced by a more general pattern.
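In the same spirit, and as a sketch of my own rather than part of the original answer, you can also attach an UpValue to x itself so that the bare derivative of one component with respect to another evaluates to a Kronecker delta. Note that this only intercepts the literal form D[x[i], x[j]]; it does not teach D that x[i] depends on x[j] inside larger expressions.

x /: D[x[i_], x[j_]] := KroneckerDelta[i, j]

D[x[i], x[j]]
(* KroneckerDelta[i, j] *)

This composes with the MySum rule above: D[MySum[x[i], i], x[j]] still evaluates to 1, because KroneckerDelta[j, j] simplifies to 1.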

