I am looking for best practices and recommended workflows for using the Testing Framework. For example, let's say I am developing a Mathematica package for other users; it is hosted on GitHub, and I wish to make it easy for other developers to contribute to the repository.
The project includes a suite of (unit) tests that I would like to run as often as necessary. I would like to run them within the front end and conveniently inspect the results (timings, which tests failed and why, etc.), save them to a file, and also further automate testing in the future. I can think of a few possible workflows, but none of them ticks all the boxes.
"Testing Notebook", Pro: It gives a clear overview of tests and looks nice. It can be run in FrontEnd and also programmatically with
TestReport
. Con: Contains even more metadata than the normal notebook (all those buttons) so it is less suitable to put under version control. Somehow the evaluation of tests is slower in testing notebook than in normal notebook.".wlt file", Pro: This is the documented way to use plain text files with test and run them programmatically from another notebook or command line. Con: I can't open .wlt files in FrontEnd!? So I have to open them in other text editor without nice features of FrontEnd (autocomplete, code coloring, etc).
".wl file", Pro: I can easily edit it in FrontEnd as any other package. Con: This workflow is not documented. Should I put all tests in some specific
Context
?
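To make the programmatic route concrete, the kind of usage I have in mind for a plain-text test file is roughly this (the file path is just a placeholder):

report = TestReport["Tests/MyFunction.wlt"];
report["TestsFailedCount"]
report["TimeElapsed"]

and, from the command line, something like wolframscript -code 'Print[TestReport["Tests/MyFunction.wlt"]["TestsFailedCount"]]'.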
So the question is: what is your recommended workflow for testing? While browsing other people's GitHub repositories I have seen some custom-made approaches, but I would like to rely on something that is documented and easy to explain to other developers who will contribute to the project.
I am also curious: what is the purpose of BeginTestSection and EndTestSection in .wlt files?
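For reference, the .wlt files saved from a testing notebook look roughly like this (the test itself is a made-up illustration):

BeginTestSection["MyTests"]

VerificationTest[
    1 + 1,
    2,
    TestID -> "addition-1"
]

EndTestSection[]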
Answer
I think @Jason B.'s answer describes a good workflow. Here is an alternative that I have adopted over the years. I write most of my code in *.m / *.mt / *.wl / *.wlt files using IntelliJ IDEA with the Mathematica Plugin, so version control is straightforward.
My projects are written as paclets (see How to distribute Mathematica packages as paclets? for an introduction). Paclets allow you to specify resource files in the PacletInfo.m file. For every (major) function, I write a test file (*.mt or *.wlt) and add it to the paclet. Here is an example from my Multiplets paclet:
Paclet[
    Name -> "Multiplets",
    Version -> "0.1",
    MathematicaVersion -> "11.0+",
    Description -> "Defines multiplets and Young tableaux.",
    Thumbnail -> "multiplets.png",
    Creator -> "Johannes E. M. Mosig",
    Extensions -> {
        {"Kernel", Root -> ".", Context -> "Multiplets`"},
        {"Resource", Root -> "Testing", Resources -> {
            "MultipletDimension.mt",
            "MultipletQ.mt",
            "MultipletReduce.mt",
            "Tableau.mt",
            "TableauAppend.mt",
            "TableauClear.mt",
            "TableauDimension.mt",
            "TableauFirst.mt",
            "TableauFromMultiplet.mt",
            "TableauQ.mt",
            "TableauRest.mt",
            "TableauSimplify.mt",
            "TableauToMatrix.mt",
            "TableauToMultiplet.mt"
        }}
    }
]
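To make the paclet loadable, it can be packed and installed from the project directory. A sketch (the source path is an assumption; CreatePacletArchive requires version 12.1+, and older versions offer PacletManager`PackPaclet instead):

archive = CreatePacletArchive["~/projects/Multiplets"]; (* directory containing PacletInfo.m *)
PacletInstall[archive]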
In Mathematica, I can then load my paclet
<< Multiplets`
and check whether any given function works as expected:
TestReport@PacletResource["Multiplets", "MultipletReduce.mt"]
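The resulting TestReportObject can be queried for the details mentioned in the question, for example:

report = TestReport@PacletResource["Multiplets", "MultipletReduce.mt"];
report["AllTestsSucceeded"]
report["TestsFailedCount"]
report["TimeElapsed"]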
I can also open the test file within Mathematica, using
NotebookOpen@PacletResource["Multiplets", "MultipletReduce.mt"]
However, the front end opens *.mt and *.wlt files as plain text files, so if the notebook interface is important to you, you may want to save them as *.wl files instead.
In addition, one may add a test script that simply calls all the other test scripts, for convenience.
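A minimal sketch of such a driver, using the resource names from the PacletInfo.m above (the helper name runAllTests is my own invention):

runAllTests[] := Module[{names, reports},
    (* shortened; in practice list every test resource from PacletInfo.m *)
    names = {"MultipletDimension.mt", "MultipletQ.mt", "MultipletReduce.mt"};
    reports = AssociationMap[TestReport[PacletResource["Multiplets", #]] &, names];
    <|"AllPassed" -> AllTrue[reports, #["AllTestsSucceeded"] &], "Reports" -> reports|>
]

Extending names to the full resource list then gives a one-call check of the whole suite.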