
plotting - Logarithmic scale in a DensityPlot and its legend


I was recently faced with the task of creating a DensityPlot with a logarithmic colour scale, and with providing it with an appropriate legend. Since I could not find any resources to this effect on this site, I'd like to document my solution here.


For definiteness, suppose that I want a plot of the function $f(x,y)=\operatorname{sinc}^2(x)\operatorname{sinc}^2(y)=\frac{\sin^2(x)\sin^2(y)}{x^2y^2}$, which is the diffraction pattern of a square aperture, over a square box extending from -20 to 20 in both directions. The problem with such a function, and the reason a logarithmic scale is necessary, is that it has interesting detail spread over a wide range of orders of magnitude of f. A naive DensityPlot of it will therefore produce either whited-out regions with a sharp boundary against the part of the plot where the contrast is acceptable, or a single bright spot with most of the detail completely lost:


[Image: the two naive DensityPlot outputs, one with PlotRange -> Automatic and one with PlotRange -> Full]


(Images produced by


DensityPlot[Sinc[x]^2 Sinc[y]^2, {x, -20, 20}, {y, -20, 20},
 ColorFunction -> ColorData["DeepSeaColors"], PlotPoints -> 100, PlotRange -> u]

for u set to Automatic and Full respectively.)
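To get a feel for the range of scales involved, compare the central peak with the last row of secondary maxima that still fits inside the box, at x = y = 5.5 π ≈ 17.3; the helper f here is a throwaway definition used only for this quick check:

f[x_, y_] := Sinc[x]^2 Sinc[y]^2    (* throwaway helper, only for this check *)
f[0, 0]                  (* 1, the central peak *)
N[f[5.5 Pi, 5.5 Pi]]     (* about 1.1*10^-5, five orders of magnitude below the peak *)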


Because of this, and particularly for plotting a function that has a zero, any solution to this problem must take as arguments the minimum and the maximum values of the range of interest. That range of interest is intrinsically hard-to-impossible for an automated range finder to obtain, so I'm OK with having to supply those values by hand.




To give a more concrete example of what the goal is, let me nick the final image from my answer:

[Image: the final density plot with a logarithmic colour scale and a legend with log-spaced major and minor ticks]




Answer



Giving the density plot a logarithmic scale must always involve (unless some future version of Mathematica includes it by default) overriding the ColorFunctionScaling of the original plotting command and supplying a custom scaling function. The simplest logarithmic scaling has the form $\operatorname{scaling}(x)=\frac{\log(x/\mathrm{min})}{\log(\mathrm{max}/\mathrm{min})}$, which takes the parameters min and max and maps them to 0 and 1 respectively, the limits of the standard input range of any colour function. This scaling function is implemented as


LogarithmicScaling[x_, min_, max_] := Log[x/min]/Log[max/min]
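As a quick sanity check, with the values I will use further down (min = 0.00003 and max = 1), the end points of the range map to 0 and 1, and their geometric mean maps to the middle of the scale:

LogarithmicScaling[0.00003, 0.00003, 1]            (* 0. *)
LogarithmicScaling[1, 0.00003, 1]                  (* 1. *)
LogarithmicScaling[Sqrt[0.00003*1], 0.00003, 1]    (* 0.5 *)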


To include this in the plotting command, one needs to set ColorFunctionScaling to False and compose LogarithmicScaling with an appropriate ColorFunction, which looks like


plotter[min_, max_] := DensityPlot[Sinc[x]^2 Sinc[y]^2, {x, -20, 20}, {y, -20, 20}
, PlotPoints -> 100, PlotRange -> Full
, ColorFunctionScaling -> False
, ColorFunction -> (ColorData["DeepSeaColors"][LogarithmicScaling[#, min, max]] &)
]
plotter[0.00003, 1]

and produces



[Image: the density plot with the logarithmic colour scaling applied]




Getting the legend to work, on the other hand, is trickier. The standard way to include a legend in a DensityPlot is to set the PlotLegends option to something nontrivial; as the examples in the DensityPlot documentation show, the Automatic setting usually does a good job. For a logarithmic scale, however, the resulting BarLegend needs to be modified.


More specifically, the colour function provided to BarLegend must vary linearly with the input it is given (which runs linearly from the bottom to the top of the scale), and it is the ticks themselves that must be rescaled. This requires the inverse of the scaling function, which is given by $x=\mathrm{min}\,(\mathrm{max}/\mathrm{min})^{\operatorname{scaling}(x)}$. The strategy is to find those values of x for which scaling(x) is 0, 1, or one of a given number of values evenly spread between the two. Thus:



  • We set PlotLegends to BarLegend,

  • we give BarLegend the unadulterated colour function we gave to the DensityPlot,

  • we tell BarLegend to take 0 and 1 as the minimum and maximum values to be fed to the colour function, to generate the full colour spectrum linearly,

  • we generate the list of positions of the ticks using min (max/min)^(Range[0, 1, 1/NumberOfTicks]),

  • we generate the colour-function input they correspond to, and their labels, by mapping {LogarithmicScaling[#, min, max], ScientificForm[#, 2]} & over that list (these last two steps are illustrated in isolation just after this list),


  • and we feed that to the Ticks option of BarLegend.
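Evaluated on their own, the tick positions and the {scaled position, label} pairs built from them come out as follows for the values used in the call below (min = 0.00003, max = 1, five tick intervals):

With[{min = 0.00003, max = 1, NumberOfTicks = 5},
 {LogarithmicScaling[#, min, max], ScientificForm[#, 2]} & /@
  (min (max/min)^Range[0, 1, 1/NumberOfTicks])]

(* the positions come out as roughly {0., 0.2, 0.4, 0.6, 0.8, 1.} along the legend bar,
   and the labels are the corresponding values of f in ScientificForm,
   from 3.0*10^-5 up to 1 *)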


The resulting code looks like


plotter[min_, max_, NumberOfTicks_] := DensityPlot[Sinc[x]^2 Sinc[y]^2
 , {x, -20, 20}, {y, -20, 20}
 , PlotPoints -> 100, PlotRange -> Full
 , ColorFunctionScaling -> False
 , ColorFunction -> (ColorData["DeepSeaColors"][LogarithmicScaling[#, min, max]] &)
 , PlotLegends -> BarLegend[{ColorData["DeepSeaColors"], {0, 1}}
    , LegendMarkerSize -> 370
    , Ticks -> ({LogarithmicScaling[#, min, max], ScientificForm[#, 2]} & /@
        (min (max/min)^Range[0, 1, 1/NumberOfTicks]))]
 ]
plotter[0.00003, 1, 5]

and produces output like


[Image: the density plot together with the BarLegend labelled on a logarithmic scale]


The highlighter colours the Ticks option inside BarLegend in red, but it works just fine as far as I can tell. The error class this is assigned to (visible e.g. by changing the colour setting in Edit > Preferences > Appearance > Syntax Coloring > Errors and Warnings) is "Unrecognized option names". I don't think this is particularly bad, but rather reflects the fact that the highlighter is not perfect, and should not really be expected to be.




Addendum: minor ticks.


While the above is perfectly fine, the ticks do not make it immediately obvious that the scale is logarithmic in the way that appropriately placed minor ticks would. To implement these, the best option is to take advantage of Mathematica's built-in ability to produce nice ticks for log-scale plots.



The essential part of this is to extract, using AbsoluteOptions, the Ticks of an appropriate log-scale plot (a LogLogPlot in the code below). Unfortunately, the linearized coordinates of those ticks are rather inconveniently placed, and have an arbitrary linear scale of their own. The code below is therefore rather long, but I've made it verbose so that hopefully it's clear what's going on.
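Before giving the function itself, here is the kind of structure that extraction returns, using the min and max values from earlier (the exact positions are version-dependent, and in some versions AbsoluteOptions may not return explicit tick data at all):

bareTicksList = First[Ticks /. AbsoluteOptions[LogLogPlot[x, {x, 0.00003, 1}]]];
Short[bareTicksList, 4]

(* a list of entries roughly of the form {position, label, tick lengths, style};
   major ticks carry labels like Superscript[10., -4], minor ticks have "" as their
   label, and those two facts are exactly what the function below keys on *)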


LogScaleLegend[min_, max_, colorfunction_, height_: 400] := Module[
  {bareTicksList, numberedTicks, m, M, ml, Ml, minInArbitraryScale,
   maxInArbitraryScale, linearScaling},
  (* Extract the tick specifications a log-scale plot over {min, max} would use. *)
  bareTicksList =
   First[Ticks /. AbsoluteOptions[LogLogPlot[x, {x, min, max}]]];
  (* Keep only the labelled (major) ticks, as {position, value} pairs. *)
  numberedTicks = (
     Select[
      bareTicksList /. {Superscript -> Power},
      NumberQ[#[[2]]] &
      ]
     )[[All, {1, 2}]];
  (* Smallest and largest labelled values, and their positions in the plot's own
     arbitrary linear coordinate. *)
  m = Min[numberedTicks[[All, 2]]];
  M = Max[numberedTicks[[All, 2]]];
  ml = Min[numberedTicks[[All, 1]]];
  Ml = Max[numberedTicks[[All, 1]]];
  (* Positions of min and max in that arbitrary coordinate, and a rescaling that
     maps them to 0 and 1. *)
  {minInArbitraryScale, maxInArbitraryScale} =
   ml + (Ml - ml) Log[{min, max}/m]/Log[M/m];
  linearScaling[x_] := (x - minInArbitraryScale)/(
    maxInArbitraryScale - minInArbitraryScale);
  (* The legend itself: a thin DensityPlot of y over [0, 1], with the ticks placed
     at the rescaled positions; unlabelled minor ticks get shorter, thinner marks,
     and ticks falling outside [0, 1] are discarded. *)
  DensityPlot[y
   , {x, 0, 0.04}, {y, 0, 1}
   , AspectRatio -> Automatic, PlotRangePadding -> 0
   , ImageSize -> {Automatic, height}
   , ColorFunction -> colorfunction
   , FrameTicks -> {{None,
      Select[
       Table[{
          linearScaling[r[[1]]],
          r[[2]] /. {Superscript[10., n_] -> Superscript[10, n]},
          {0, If[r[[2]] === "", 0.15, 0.3]},
          {If[r[[2]] === "", Thickness[0.03], Thickness[0.06]]}
          }, {r, bareTicksList}]
       , (#[[1]] (1 - #[[1]]) >= 0 &)]
      }, {None, None}}
   ]
  ]
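On its own, the function returns a tall, thin DensityPlot that serves as the legend bar, with the ticks along its right-hand edge; for instance, with the values used throughout this post,

LogScaleLegend[0.00003, 1, ColorData["DeepSeaColors"], 350]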

With this function, then, the code



plotter[min_, max_] := DensityPlot[
  Sinc[x]^2 Sinc[y]^2
  , {x, -20, 20}, {y, -20, 20}
  , PlotPoints -> 100
  , PlotRange -> Full
  , ColorFunctionScaling -> False
  , ColorFunction -> (ColorData["DeepSeaColors"][
      LogarithmicScaling[#, min, max]] &)
  , PlotLegends ->
    LogScaleLegend[min, max, ColorData["DeepSeaColors"], 350]
  ]
plotter[0.00003, 1]

produces the output


[Image: the density plot with the log-scale legend including minor ticks]


