
Posts

Showing posts from February, 2014

graphics - Why is ListDensityPlot unable to plot datasets with extreme ranges

Consider the following dataset: data = Flatten[Table[{x 10^-9, y 10^-9, x^2 + y^2}, {x, -100, 100, 10}, {y, -100, 100, 10}], 1]; If I try to ListDensityPlot this set: ListDensityPlot[data] it does not plot the function. However, if I do the obvious rescale of the coordinates: data2 = Flatten[Table[{x, y, x^2 + y^2}, {x, -100, 100, 10}, {y, -100, 100, 10}], 1]; it has no problem plotting it: ListDensityPlot[data2] The same problem exists for other plotting functions (ListPlot3D, ListContourPlot, etc.). While rescaling the coordinates is a simple fix, is it possible to plot datasets of this sort without first rescaling the coordinates? Answer The reason ListDensityPlot doesn't plot it is that the meshes aren't being generated correctly: ListDensityPlot[data #, Mesh -> All, ImageSize -> 300] & /@ {1, 100, 10^3, 10^4} Now I don't know exactly how to fix this, but my guess is that the mesh function relies on the Delaunay triangulation of the set of points
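A minimal sketch of the rescaling workaround mentioned in the question, assuming only the shape of the density matters (the 10^9 factor simply undoes the 10^-9 in the coordinates; the name rescaled is illustrative):

rescaled = {10^9 #1, 10^9 #2, #3} & @@@ data;  (* scale x and y back to order-one values *)
ListDensityPlot[rescaled]

If the true coordinate values need to appear on the axes, custom FrameTicks can restore the original labels after rescaling.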

list manipulation - How to find the circulating fraction by pattern matching?

Important note: it's not hard to solve this problem, so please explain how to use pattern matching rather than how to find the recurring period of a fraction. We can easily extract real digits from a number, for example $99/700$, using RealDigits[99/700, 10, 24][[1]] or so. The result is {1, 4, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8}. Now I would like to find the recurring period, which is {1,4,2,8,5,7}. This result is quite easy to get via RealDigits[99/700, 10][[1, -1]]. I tried to find the period myself, to practice my pattern-matching skills, but found that I cannot easily do the job with this code: RealDigits[99/700, 10, 24][[1]] /. {Shortest[pre___, 2], Longest[Repeated[Shortest[r__, 3], {2, Infinity}], 4], Shortest[inc___, 1]} /; MatchQ[{r}, {inc, __}] -> {{pre}, {r}, {inc}} I think this code should generate a proper result: pattern matching will find out how to make the recurring

graphics3d - 3D graphics rendering artifact with overlapping planes (z-fighting)

I am trying to render a matrix as a depth map: data = {{1, 1, 1, 1}, {1, 0, 3, 1}, {2, 0, 0, 1}}; ListPlot3D[data, Mesh -> None, InterpolationOrder -> 0, Filling -> Bottom, FillingStyle -> {Opacity[1]}, ColorFunction -> "SolarColors", ViewPoint -> {Pi, Pi, 5}] However, for the matrix element with the lowest value, the height of the corresponding bar in the plot is zero. This results in rendering artifacts (z-fighting). Viewing the graph from below or rotating it makes the problem more obvious: Answer This is not entirely the same, as it changes the coloring and z-scaling, but perhaps something similar may be of help. Essentially, the zero values are lifted by a small increment, while the original z-range is preserved. data = {{1, 1, 1, 1}, {1, 0, 3, 1}, {2, 0, 0, 1}}; ListPlot3D[data /. x_ /; x < .01 -> 0.01, Mesh -> None, InterpolationOrder -> 0, Filling -> Bottom, FillingStyle -> {Opacity[1]}, ColorFunction -> "SolarColors"]

plotting - Tailoring RegionPlot3D with PlotPoints?

This seemingly tame solid gives Mathematica (v9) a bit of a workout if you want to generate a good picture: rinner[y_] = Sqrt[y]; router[y_] = 1; RegionPlot3D[rinner[y]^2 <= x^2 + z^2 <= router[y]^2, {x, -1, 1}, {z, -1, 1}, {y, 0, 1}, AxesLabel -> {x, y, z}, PlotPoints -> 100, PlotStyle -> Opacity[.75], MeshFunctions -> {#3 &}, Mesh -> 5] I kept increasing PlotPoints from 100 to 200 to 300, and things get pretty slow, without much of an improvement in the rendering of the choppy part of the region at the top. Bumping up MaxRecursion and setting PerformanceGoal -> "Quality" didn't seem to help. I tried playing with variations like PlotPoints -> {100, 100, 300} to get better results faster, and this leads to my two questions. What else should I try? (I experimented with RevolutionPlot3D, but I want solids.) Is it possible to tailor the placement of PlotPoints to a subset of either (a) an axis (say, 10x more points in the $z$ direction, but pack
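If version 10's region framework is available (the question is about v9), a hedged alternative sketch is to discretize the solid directly and let MaxCellMeasure control the refinement instead of PlotPoints; the cell-measure value below is an illustrative assumption:

reg = ImplicitRegion[y <= x^2 + z^2 <= 1, {{x, -1, 1}, {z, -1, 1}, {y, 0, 1}}];  (* rinner[y]^2 = y, router[y]^2 = 1 *)
BoundaryDiscretizeRegion[reg, MaxCellMeasure -> {"Length" -> 0.05}]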

version 10 - How to recover a notebook?

I was closing a notebook; luckily Mathematica (10!) asked me whether I really wanted to quit without saving, so I saved. But now when I open that notebook, this is what I see: Is there a way to recover the content of the notebook? EDIT Mathematica showed me this dialog: and I clicked on "Save". EDIT2 As requested: (* Content-type: application/vnd.wolfram.mathematica *) (*** Wolfram Notebook File ***) (* http://www.wolfram.com/nb *) (* CreatedBy='Mathematica 10.0' *) (*CacheID: 234*) (* Internal cache information: NotebookFileLineBreakTest NotebookFileLineBreakTest NotebookDataPosition[ 158, 7] NotebookDataLength[ 28123, 729] NotebookOptionsPosition[ 26564, 684] NotebookOutlinePosition[ 27872, 720] CellTagsIndexPosition[ 27829, 717] WindowTitle->Wolfram Mathematica 10.0 WindowFrame->ModelessDialog*) (* Beginning of Notebook Content *) Notebook[{ Cell[BoxData[ PaneBox["\<\"\"\>", Im

Solving an equation for all possible integer solutions

Problem: A person spends 15 dollars at the store. Eggs cost 1 dollar, milk costs 2 dollars, and bread costs 3 dollars. I'm attempting to create a list of all possible integer solutions to the problem. I attempted to solve it with the Solve function: Solve[1 e + 2 m + 3 b == 15 && e >= 0 && m >= 0 && b >= 0, {e, m, b}, Integers] I'm not solving this exact problem; in theory it's the same thing, just a lot bigger. The above code works and produces an answer, but I get stuck in the larger calculation. Should I go about solving the problem another way? If so, how?
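A hedged sketch of one alternative: for exactly this shape of problem, a single linear equation in non-negative integers, FrobeniusSolve enumerates every solution directly and may scale better than Solve or Reduce; the price list {1, 2, 3} corresponds to {eggs, milk, bread}:

sols = FrobeniusSolve[{1, 2, 3}, 15];  (* all {e, m, b} with e, m, b >= 0 and 1 e + 2 m + 3 b == 15 *)
Length[sols]  (* how many ways there are to spend the 15 dollars *)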

finite element method - How to solve a nonlinear coupled PDE with initial and some boundary values

I would like to solve the following nonlinear coupled PDE with a mix of initial conditions and boundary values: rMax = 0.01; sol = First@NDSolve[{ Derivative[2, 0][g][r, z] + Derivative[0, 2][g][r, z] == u[r, z]^2, Derivative[2, 0][u][r, z] + Derivative[0, 1][u][r, z] == -g[r, z], Derivative[1, 0][u][0, z] == 0.0, Derivative[1, 0][u][rMax, z] == 0.0, u[rMax, z] == 0.0, u[r, 0] == g[r, 0] == Sin[\[Pi] r/rMax], Derivative[1, 0][g][0, z] == g[rMax, z] == 0.0}, {u, g}, {r, 0, rMax}, {z, 0, 0.01}] but I receive the following error message (in version 10.0.1.0): NDSolve::femnonlinear: Nonlinear coefficients are not supported in this version of NDSolve. The offender is the square term u[r, z]^2 in the first equation; without the square, NDSolve[] executes without errors. NDSolve seems to apply the FEM method by default to such problems. I'm wondering why NDSolve[] doesn't switch back to another (propagation-type) algorithm. When I add the option Method -> "MethodOfLines"

probability or statistics - FindDistributionParameters fails with custom distribution?

Context: I would like to find the maximum-likelihood solution for a customized PDF. Let's start with a built-in PDF. Following the documentation, dat = RandomVariate[LaplaceDistribution[2, 1], 1000]; param = FindDistributionParameters[dat, LaplaceDistribution[μ, σ], ParameterEstimator -> {"MaximumLikelihood", Method -> "NMaximize"}] (* {μ->2.27258,σ->0.521354} *) Show[Plot[PDF[LaplaceDistribution[μ, σ] /. param, x], {x, -5, 5}], Histogram[dat, Automatic, "PDF"]] works as expected: it finds a good estimator of $\mu$ and $\sigma$. The problem: Now let me do the same with a customized PDF. Here I just impose that my custom PDF cannot be evaluated before it is given numerical values. Clear[myLaplaceDistribution]; myLaplaceDistribution[μ_?NumberQ, σ_?NumberQ] := LaplaceDistribution[μ, σ] Then dat = RandomVariate[LaplaceDistribution[2, 1], 10]; FindDistributionParameters[dat, myLaplaceDistribution[μ, σ], ParameterEstimator -> {"MaximumLikeli

associations - Using Datasets in Mathematica 10.0

Bug introduced in 10.0 and fixed in 10.1. I am trying to use the Dataset functionality which was introduced in Mathematica 10.0: dataset = Dataset[{ <|"a" -> 1, "b" -> "x", "c" -> {1}|>, <|"a" -> 2, "b" -> "y", "c" -> {2, 3}|>, <|"a" -> 3, "b" -> "z", "c" -> {3}|>, <|"a" -> 4, "b" -> "x", "c" -> {4, 5}|>, <|"a" -> 5, "b" -> "y", "c" -> {5, 6, 7}|>, <|"a" -> 6, "b" -> "z", "c" -> {}|>}] But this gives me an error as shown in the image below: I tried the same syntax in combination with Needs["TypeSystem`"]; but with the same result. How can I resolve this issue?

mac os x - Front End Find feature broken in Version 10 with OSX Mavericks?

I installed Mathematica 10 on my MacBook Pro with OS X Mavericks. I was most frustrated to find that the front-end Find feature doesn't work. Do others have this problem? Is it possible something got messed up during download and installation? Answer Some front end troubles can be solved as described here: On OS X, holding down Shift-Command during startup will reset the caches. This is worth trying when the front end is misbehaving. When having multiple versions of Mathematica installed, some problems can be avoided by using separate configurations for the different front end versions. This can be set on the System tab of the Preferences window.

functions - Partitioning a list when the cumulative sum exceeds 1

I have a long list of, say, 1 million Uniform(0,1) random numbers, such as: dat = {0.71, 0.685, 0.16, 0.82, 0.73, 0.44, 0.89, 0.02, 0.47, 0.65} I want to partition this list whenever the cumulative sum exceeds 1. For the above data, the desired output would be: {{0.71, 0.685}, {0.16, 0.82, 0.73}, {0.44, 0.89}, {0.02, 0.47, 0.65}} I was trying to find a neat way to do this efficiently with Split combined with, say, Accumulate, FoldList, or Total, but my attempts with Split have not been fruitful. Any suggestions? Answer dat = {0.71, 0.685, 0.16, 0.82, 0.73, 0.44, 0.89, 0.02, 0.47, 0.65}; Module[{t = 0}, Split[dat, (t += #) <= 1 || (t = 0) &] ] {{0.71, 0.685}, {0.16, 0.82, 0.73}, {0.44, 0.89}, {0.02, 0.47, 0.65}} Credit to Simon Woods for getting me to think about using Or in applications like this. Performance I decided to make an attempt at a higher-performing solution at the cost of elegance and clarity. f2[dat_List] := Module[{bin, lns}, bin = 1 - Unitize @ FoldLis

calculus and analysis - Differentiate a numerically defined function

My function is f[a_, b_] := NIntegrate[Sqrt[(Cos[t] - a)^2 + b^2], {t, 0, Pi}] I want to calculate g[1, 1], where g[a, b] is defined as g[a_, b_] := Derivative[1, 0][f][a, b] I get the error "The integrand ... has evaluated to non-numerical values ...". Now I can easily calculate the derivative first and not get the error, but I don't want to do that, for a particular reason. I can also use a finite-difference formula that I create myself, but I want to use procedures already defined by Mathematica. Is it possible to avoid this error and calculate the derivative of a numerical integral?
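One common workaround, sketched here under the assumption that a purely numerical derivative is acceptable: guard the function so it only evaluates for numeric arguments, then use ND from the NumericalCalculus` package; fn is just a renamed copy of f with the numeric guard added:

Needs["NumericalCalculus`"]
fn[a_?NumericQ, b_?NumericQ] := NIntegrate[Sqrt[(Cos[t] - a)^2 + b^2], {t, 0, Pi}]
ND[fn[a, 1], a, 1]  (* numerical derivative of fn[a, 1] with respect to a at a = 1 *)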

computational geometry - Cut a polygon by a line

I have a polygon, say Polygon[{{0, 0}, {10, 0}, {10, 10}, {0, 10}}]. I would like to cut it by a line (through two points) and take one half of it. Could you please suggest the functions I should use? (I am very new to Mathematica.) Answer I guess a pretty easy approach is to use HalfPlane with the two points that define your line. RegionDifference should do the rest. RegionDifference[ Polygon[{{0, 0}, {10, 0}, {10, 10}, {0, 10}}], HalfPlane[{{1, 0}, {3, 7}}, {1, 1}]] RegionPlot[%]
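If the other half is wanted instead, a hedged variant of the same idea is to intersect rather than subtract, keeping the same cutting line:

RegionIntersection[ Polygon[{{0, 0}, {10, 0}, {10, 10}, {0, 10}}], HalfPlane[{{1, 0}, {3, 7}}, {1, 1}]]
RegionPlot[%]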

functions - FourierSeries command , for arbitrary period T?

ExpToTrig[FourierSeries[Piecewise[{{-Pi, -2 Pi < x <= 0}, {Pi, 0 < x <= 2 Pi}}], ...] You can see the function goes over $(-2\pi, 2\pi]$. But Wolfram Alpha gives a wrong answer to this, because it computes it as if the function went from $-\pi$ to $\pi$. (The right answer should be $4\sin(\frac{x}{2}) + \ldots$.) So, how can I use this command when the function's period $T$ is not exactly $2\pi$? Answer f[x_] = Piecewise[{{-Pi, -2 Pi < x <= 0}, {Pi, 0 < x <= 2 Pi}}]; T = 4 \[Pi]; fr = FourierTrigSeries[f[x], x, 3, FourierParameters -> {1, 2 \[Pi]/T}] (* 4 Sin[x/2] + 4/3 Sin[(3 x)/2] *)

workbench - MUnit creating a hierarchy of TestSuites

I would like to create a hierarchy of TestSuites. However, it appears that it is not possible to call a TestSuite from another TestSuite. So before I go and reinvent the wheel, I thought I would see how others have accomplished (or would accomplish) this. Details of what I am trying to accomplish: I have grouped packages (and their tests) that provide similar functionality by directory. For illustrative purposes, let's say I have three subdirectories: importers, manipulators, and processes. I have a TestSuite in each subdirectory named TestAll.mt. I also have one in the top-level directory, called TestAll.mt, that I would like to have call the TestAll.mt in the three subdirectories. I would like to be able to run the test suites at both the subdirectory and top levels: running the test suite at the subdirectory level would run only that particular subset of tests, while running the test suite at the top level would run the test suites in all the subdirectories. I would like to avoid having to add an
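One way this is often handled (a sketch, not a tested MUnit recipe): let the top-level TestAll.mt list the individual .mt test files from the subdirectories rather than their TestAll.mt suites, since a TestSuite is essentially a list of test files. The directory names match the ones above; everything else is an assumption:

(* hypothetical contents of the top-level TestAll.mt *)
TestSuite[
 Select[FileNames["*.mt", {"importers", "manipulators", "processes"}],
  FileNameTake[#] =!= "TestAll.mt" &]   (* skip the per-directory suite files themselves *)
]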

Why does a variable become real when using the second argument of Dynamic?

Version 9, on Windows 7. Please compare these two very simple examples: Manipulate[ x, Row[{Manipulator[Dynamic[x], {0, 10, 1/100}]}], {{x, 1}, None} ] and Manipulate[ x, Row[{Manipulator[Dynamic[x, (x = #) &], {0, 10, 1/100}]}], {{x, 1}, None} ] Would you not expect them to work the same way? But in the second case x becomes real, while in the first case it remains rational, as expected. Does anyone know the reason for this? I read about the second argument of Dynamic but do not see anything obvious; I did not read everything about it, but the above behaviour is surprising. The fix is easy: Manipulate[ x, Row[{Manipulator[ Dynamic[x, (x = Rationalize[#]) &], {0, 10, 1/100}]}], {{x, 1}, None} ] I am only asking why it happens, so I can learn more. Answer This is not an answer, but it's too long for a comment. Also, please remember that all of the following are just guesses. My first observation is that the simplified Slider[Dynamic[x, (x=#)&], {0, 1, 1/10}]

Plotting Solution to Heat Equation

By hand, I've solved the heat equation and am looking to make a 3D plot of the solution. My function is $$2\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\sin(nx)e^{-111n^2t}$$ The code I've been trying so far is Plot3D[2*Sum[((-1)^n)/n Sin[n x] Exp[-111 t*n^2], {n, 1, Infinity}], {x, -Pi, Pi}, {t, 0, 100}] It's been running for a while with no output. Any help would be fab :)
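A hedged sketch of the usual fix: the factor $e^{-111 n^2 t}$ makes the high-$n$ terms (and, for any $t$ visibly above zero, even the $n=1$ term) vanishingly small, so a truncated partial sum over a much shorter time window should be indistinguishable from the infinite series; the 50 terms and the range $0 \le t \le 0.01$ are illustrative choices:

u[x_, t_] := 2 Sum[(-1)^n/n Sin[n x] Exp[-111 n^2 t], {n, 1, 50}]
Plot3D[u[x, t], {x, -Pi, Pi}, {t, 0, 0.01}, PlotRange -> All]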

physics - Two bouncing balls in 1 dimension, issues with two different methods?

I'm trying to simulate two balls with the same mass and diameter bouncing one on top of the other under gravity; see the illustration below (not ideal, but it is the best result I've got so far; the numbers are the time in seconds, and the height of the box is 2 meters). However, I'm not interested in the pretty animation, but rather in the variation of the distance with time. My main motivation: I want to see how the period of this function, if it is indeed periodic, depends on the initial conditions. For the initial values $h_2=2$ and $h_1=1.5$, I obtained a periodic dependence with my first method: The first plot is for the positions $y_1$ and $y_2$, while the second plot is for the distance $y_2-y_1$. This looks promising; however, the method is approximate: I'm using a constant finite time step $dt$ and checking the distances between the blue ball and the ground as well as between the blue and red balls. To get the correct simulation, I had to take a very small $dt$,
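For reference, a hedged sketch of an event-driven alternative to the fixed-time-step method: NDSolve integrates free fall and WhenEvent handles the impacts, and for equal masses an elastic collision just swaps the two velocities. The value of g, the time span, and the assumption that WhenEvent applies the listed reassignments simultaneously using pre-collision values are all illustrative assumptions, not the method used above:

g = 9.81; h1 = 1.5; h2 = 2;
sol = First@NDSolve[{
    y1''[t] == -g, y2''[t] == -g,
    y1[0] == h1, y2[0] == h2, y1'[0] == 0, y2'[0] == 0,
    WhenEvent[y1[t] == 0, y1'[t] -> -y1'[t]],   (* lower ball bounces off the floor *)
    WhenEvent[y2[t] == y1[t], {y1'[t] -> y2'[t], y2'[t] -> y1'[t]}]   (* equal masses: swap velocities *)
   }, {y1, y2}, {t, 0, 10}];
Plot[(y2[t] - y1[t]) /. sol, {t, 0, 10}]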

geometry - TriangleMeasurement causing problem when used in Manipulate

On my Windows 10 machine, Manipulate with TriangleMeasurement locks up once I start changing the sliders. Even with SynchronousUpdating -> False and ContinuousAction -> False it still locks up. The code below draws a triangle and then uses TriangleMeasurement to compute the area; it locks up after several movements of the sliders. I wonder if someone could confirm that this is reproducible, or perhaps I'm not setting up Manipulate correctly. Manipulate[ a = {1, 4}; b = {1, s2}; c = {u2, v2}; o = {0, 0}; myTriangle = {EdgeForm[Black], FaceForm[], Triangle@{a, b, c}}; cobArea = TriangleMeasurement[{a, o, c}, "Area"]; Show[ Graphics @ {myTriangle, Line @ {o, c}, Line @ {o, b}, Line @ {o, a}}, Axes -> True, PlotRange -> 5], {{v2, -1}, -0.1, -5}, {{s2, -1}, -0.1, -5}, {{u2, -1}, -0.3, -5}, TrackedSymbols :> True, SynchronousUpdating -> False, ContinuousAction -> False]
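Not a confirmation of the lock-up, but a hedged workaround sketch: if TriangleMeasurement is indeed the culprit, the area of {a, o, c} can be computed with an elementary determinant formula inside the Manipulate body instead; triangleArea is a hypothetical helper:

triangleArea[p1_, p2_, p3_] := Abs[Det[{p2 - p1, p3 - p1}]]/2
(* in the Manipulate body: cobArea = triangleArea[a, o, c]; *)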

mathematical optimization - Inequalities with assumptions and constraints

I'm using Mathematica 8. I've been searching the net without luck for this specific solution: suppose I have an inequality $f(x; M, m) > 0$ where I know that $M > 4m$ and $m > 0$. How can I let Mathematica know this, so that when solving the inequality using Reduce I don't get twenty irrelevant solutions, but instead only solutions where $0 < 4m < M$? I've used assumptions and so on, but without luck. For instance: Assuming[m > 0, Reduce[(x + m)*(x - m) < 0]] produces x \[Element] Reals && (m < -Sqrt[x^2] || m > Sqrt[x^2]) The m < -Sqrt[x^2] branch is unnecessary here, so why does Mathematica write it out? How can we stop it? Answer Although the comments already solved the problem, here is an answer with slight additions: the Assumptions option and the command Assuming (which makes assumptions locally) don't affect Reduce. A good way to check whether a given command (say Solve) is affected by Assuming is to look through the Options for that function and see if Assumptions
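Since Reduce does not pick up assumptions, the usual approach is to conjoin them with the inequality itself; a minimal sketch for the example above:

Reduce[(x + m) (x - m) < 0 && m > 0, x, Reals]
(* equivalent to m > 0 && -m < x < m, with the m < -Sqrt[x^2] branch gone *)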

numerics - Converting to machine precision

There are multiple ways to convert an expression to machine precision, for example: In[1]:= a = Sqrt[2] Out[1]= Sqrt[2] In[2]:= {1. a, 1` a, N@a, SetPrecision[a, MachinePrecision]} Out[2]= {1.41421, 1.41421, 1.41421, 1.41421} In[3]:= Precision /@ % Out[3]= {MachinePrecision, MachinePrecision, MachinePrecision, MachinePrecision} My question is whether or not these methods are absolutely equivalent. Is it just a matter of personal taste which one to use, or are there examples where they behave differently? Answer In terms of speed, N and SetPrecision can be expected to be faster, as they do not involve an unnecessary multiplication. (Conversely, 2` * a would be better than N[2 * a] because the latter does exact multiplication before the conversion.) 1. a and 1` a can be considered identical because they represent the same input. Personally, I have taken to using the latter form for entering machine-precision integers because the syntax better reminds me of the purpose. One can see that N and

list manipulation - Reverse DeleteDuplicates using Information from Tally

I have a list of values, e.g., {1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5}. I delete the duplicates with DeleteDuplicates[{1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5}] {1, 2, 3, 4, 5} I want to perform some calculations on each value and at the end reverse the deletion. From Tally I know how often the elements appear: Tally[{1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5}] {{1, 3}, {2, 2}, {3, 2}, {4, 5}, {5, 1}} Now, from the calculations I have the new list {12, 14, 15, 16, 17}, and I want to reverse the effect of DeleteDuplicates on this list. That is, from {{12,3},{14,2},{15,2},{16,5},{17,1}} I want to get {12, 12, 12, 14, 14, 15, 15, 16, 16, 16, 16, 16, 17}. I want to do this because the calculations take very long and I don't want to compute duplicates twice. Answer tal = {{1, 3}, {2, 2}, {3, 2}, {4, 5}, {5, 1}}; lis = {12, 14, 15, 16, 17}; Flatten @ MapThread[Table[#1, {#2}] &, {lis, Last /@ tal}] {12, 12, 12, 14, 14, 15, 15, 16, 16, 16, 16, 16, 17} Or Flatten[Consta
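A hedged end-to-end sketch of the same idea using a lookup instead of Tally; it also restores the original ordering automatically. Here expensiveCalculation is a hypothetical stand-in for the real (slow) computation:

vals = {1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5};
rules = Dispatch[# -> expensiveCalculation[#] & /@ DeleteDuplicates[vals]];  (* one evaluation per distinct value *)
vals /. rules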

plotting - How do I plot coordinates (latitude and longitude pairs) on a geographic map?

I'm attempting for the first time to create a map within Mathematica. In particular, I would like to take an output of points and plot them according to their lat/long values over a geographic map. I have a series of latitude/longitude values like so: {{32.6123, -117.041}, {40.6973, -111.9}, {34.0276, -118.046}, {40.8231, -111.986}, {34.0446, -117.94}, {33.7389, -118.024}, {34.122, -118.088}, {37.3881, -122.252}, {44.9325, -122.966}, {32.6029, -117.154}, {44.7165, -123.062}, {37.8475, -122.47}, {32.6833, -117.098}, {44.4881, -122.797}, {37.5687, -122.254}, {45.1645, -122.788}, {47.6077, -122.692}, {44.5727, -122.65}, {42.3155, -82.9408}, {42.6438, -73.6451}, {48.0426, -122.092}, {48.5371, -122.09}, {45.4599, -122.618}, {48.4816, -122.659}, {42.3398, -70.9843}} I've tried finding documentation on how to proceed, but I cannot find anything that doesn't assume a certain level of familiarity with geospatial data. Does anyone know of a good resource online
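A hedged starting point, assuming version 10's geo functions are available: GeoPosition accepts the whole list of {lat, lon} pairs, and GeoGraphics chooses a map region that covers the points; pts stands for the list above (only the first few points are repeated here):

pts = {{32.6123, -117.041}, {40.6973, -111.9}, {34.0276, -118.046}};  (* first few of the points above *)
GeoGraphics[{Red, PointSize[Large], Point[GeoPosition[pts]]}]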