
programming - Is learning to use Mathematica useful for pure theoretical research in Mathematics and Computer Science?



I am looking for opinions from Mathematica users about Mathematica itself. After reading the FAQ, I thought that in some sense I do "wish to solve a problem using Mathematica", although I understand that this question is probably too vague. I hope I am not being distracting here.


I see that some researchers (in math and computer science) at my university use Mathematica to carry out intricate calculations. I wonder if mathematical software today is so good that it can do better than humans at symbolic manipulation. This would mean that it would be really worthwhile to acquire some background in using such software, especially in those fields of research where one often has to deal with huge formulas.


Also, two subquestions:



  1. How does one cite the use of Mathematica in a research paper? One could in principle say "beginning from formula X, using Mathematica we get Y". Would that be accepted professionally?

  2. Are these systems foolproof (particularly Mathematica)? That is, is it known that there are no special classes of symbolic-manipulation problems on which the software can end up producing an incorrect result?


P.S. I didn't find a proper tag for my question.



Answer



“I wonder if mathematical software today is so good that it can do better than humans at symbolic manipulation. This would mean that it would be really worthwhile to acquire some background in using such software, especially in those fields of research where one often has to deal with huge formulas.”



I would say that faster and more accurate symbolic manipulation is one of the main points of Mathematica, so yes, I agree with the implication above, and it would be worthwhile to acquire some background.
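As a small illustration (my own sketch, not from the question) of the kind of round-trip check that is tedious by hand but immediate in Mathematica:

```mathematica
(* Integrate symbolically, then differentiate back and confirm we
   recover the original integrand exactly *)
f = x^2 Cos[x];
F = Integrate[f, x];     (* x^2 Sin[x] + 2 x Cos[x] - 2 Sin[x] *)
Simplify[D[F, x] - f]    (* 0 confirms the round trip *)
```

By hand, both the integration by parts and the verification invite sign slips; here each direction is a one-liner.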


Symbolic manipulation appears in many contexts, however, and your header question about its usefulness in "pure theoretical research in Mathematics and Computer Science" is almost a completely separate (and loaded) question in itself, with philosophical and practical overtones well beyond the two subquestions posed. The subquestions do relate tangentially to those overtones, though:




  1. Citations: "beginning from formula X, using Mathematica we get Y" — as a general rule I think this would be acceptable, provided details are given (perhaps in appendices or code attachments) about how it was done and, in particular, how such results can be reproduced. Usually it is the conceptual insights that are of interest, more so than the mechanical manipulation. The other way to cite computational work is to create a related Demonstration and cite it using its own citation syntax (which appears below each Demonstration).




  2. Correctness: For sufficiently large manipulations I think correctness actually becomes much more of an issue if Mathematica (or similar) is not used. The chance of error in doing these manipulations by hand is much higher than from using Mathematica's built-in transformations. I'd go further and say that the chance of an error using these built-in transformations is actually much lower than the chance of an error appearing in published theoretical proofs (usually done entirely by hand). The caveat, of course, is good programming practice built on top of these transformations, together with constant vigilance (some of which Leonid mentioned). Some rules of thumb I've found useful are:





    • For lower dimensions, implement solutions using at least 3 different ("conceptually orthogonal") methods. (The flexibility of Mathematica's language means this is usually not too difficult, and it provides a good margin of error.)




    • For higher dimensions, check for consistency with the lower-order results. (It is usually impractical to refine 3 different implementations, so it is usually sufficient to settle on one for refinement and efficiency improvements when tackling higher dimensions.)




    • Use sanity checks often (graphs, implications of output in relation to obvious truths)




    • Use validation suites routinely (either unit tests in Wolfram Workbench, or custom-made ones in the front end, my preferred method)





    • Cross-reference with other published algorithms/output




    • Keep an open mind that errors are possible given these aren't proofs, but try to categorize possible error sources. One crude, overarching breakdown of error sources:







1) Your algorithm
2) The implementation of your algorithm
3) Mathematica's transformations
4) Your computer system


I’d say 3) and 4) are pretty unlikely error sources; 2) is where most errors occur, hence its focus in the measures above (which can also help in confirming that 1) is a publishable result).
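As a hedged sketch of the "several orthogonal implementations plus a validation suite" idea (a toy example of my own, not from the answer): compute the number of derangements of $n$ objects three conceptually different ways, and let `VerificationTest` assert that they agree on small cases.

```mathematica
(* Three conceptually orthogonal implementations of the same count:
   the number of derangements (fixed-point-free permutations) of n objects *)
derange1[n_] := Subfactorial[n];                 (* built-in combinatorics *)
derange2[n_] := Round[n!/E];                     (* closed form, exact for n >= 1 *)
derange3[n_] :=                                  (* brute-force enumeration *)
  Count[Permutations[Range[n]], p_ /; ! MemberQ[p - Range[n], 0]];

(* Validation suite: all three must agree on the small cases *)
VerificationTest[
  Table[derange1[n], {n, 1, 6}] == Table[derange2[n], {n, 1, 6}] ==
    Table[derange3[n], {n, 1, 6}],
  True]
```

The brute-force version is hopeless for large $n$, but that is the point: it anchors the efficient implementations in the low-dimensional regime before you trust them higher up.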


To the specific question in the header, take theoretical computer science (TCS): one could ask whether it is even a result in “Theoretical Computer Science” if Mathematica is heavily involved. This comes down to definitions and philosophy. There is a school of thought that, for example, experimentation using high-performance computation in TCS is unlikely to yield many insights. (I’m talking about traditional experimentation, not actual TCS proofs in Mathematica, which is again an even more loaded question.)


Take one luminary, Scott Aaronson, whose view about using high-performance computation in TCS research was given in a presentation in which he states (slide 4):




The hope: Examining the minimal circuits would inspire new conjectures about asymptotic behavior, which we could then try to prove


Conventional wisdom: We wouldn’t learn anything this way
- There are $\sim 2^{2^{n}}$ circuits on $n$ variables — astronomical even for tiny $n$
- Small-$n$ behavior can be notoriously misleading about asymptotics


My view: The conventional wisdom is probably right. That’s why I’m talking in this session.




This StackExchange entry indicates some successes in experimental complexity theory, although they appear pretty limited and confined to narrow domains.



I can offer a kind of counter-example in a Demonstration in which, for a type of CNF circuit, a conjecture about asymptotic (threshold) behaviour is surprisingly clear from considering only the first few dimensions $n=2,3,4$ (it had been checked statistically for larger values of $n$ and in related theoretical work).


(Note how “The hope:” part above implicitly reveals what is considered TCS or of value in TCS).


Then there are also the cultural issues worth considering.


How many (LaTeX) papers in “TCS journals” even mention runnable code? (I’d suggest <1%.)


Timelessness. It’s a pretty good bet that LaTeX-generated PDFs in TCS will be viewable in 10, 20, or 50 years, mainly because this format already houses so much scientific knowledge. Conversely, it’s almost guaranteed that a sufficiently large Mathematica package (e.g. of the sort that might be involved in systematic experimentation in TCS) will not be runnable in even 5 years. This is not a Mathematica issue per se (it perhaps handles backward compatibility better than most) but one common to all experimentation, since backward compatibility is an order of magnitude greater problem for runnable code than for static documents. One of the potentially important things about the Demonstrations site, IMO, is that this backward compatibility may end up being managed for you.


These timelessness and cultural issues become relevant to the extent that your code becomes more complex and more a part of your results — which in many ways is inevitable in any systematic experimentation (if you take the philosophical position about the potential of symbolic manipulation in TCS) that might increasingly be needed to discover something new.


So my take on using Mathematica systematically in TCS would be:




  • For “mainly theoretical results” any motivation/checking/illustrations using Mathematica could be beneficial.





  • Closer and deeper integration between Mathematica experimentation and TCS is still a relatively unexplored and potentially fruitful area, IMO (as someone not working in the field), but … most experts in the area would probably disagree, and at any rate …




  • Technically, the infrastructure for a larger-scale LaTeX/Mathematica theoretical/experimental framework is not (IMO) sufficiently developed to currently go too far down this path.




  • Demonstrations might be a step in the right direction, and perhaps provide a good litmus test for the level of complexity, in terms of symbolic manipulation, used in TCS research. If it can be put into a Demonstration, your work may have a better chance of gaining some sort of timelessness. Currently Demonstrations are a fair way behind what can be done in a notebook (e.g. integration of packages, input fields, external data sources, etc.), but IMO this situation may improve over time, particularly if WRI shares this view about the importance of imbuing computational research with timelessness (perhaps adding package support, Google Play/App Store functionality, etc.).




