
Granular versus terse coding


While reading Leonid's grand answers to "General strategies to write big code in Mathematica?" I came across something that goes against my own practices. I do not disagree with the principle, but the degree to which it is taken feels both alien and counterproductive to me. Quite possibly Leonid is right (he usually is), but I wish to indulge in a counterargument, even if it ultimately only proves his point.


He gives as his example of granular coding this:


ClearAll[returnedQ,randomSteps,runExperiment,allReturns,stats];


returnedQ[v_,steps_]:=MemberQ[Accumulate[v[[steps]]],{0,0}];

randomSteps[probs_,q_]:=RandomChoice[probs->Range[Length[probs]],q];

runExperiment[v_,probs_,q_]:= returnedQ[v,randomSteps[probs,q]];

allReturns[n_,q_,v_,probs_]:= Total @ Boole @ Table[runExperiment[v,probs,q],{n}]

stats[z_,n_,q_,v_,probs_]:=Table[allReturns[n,q,v,probs],{z}];
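For concreteness, here is how the top-level function might be called. The step set and probabilities below are my own illustrative choices for a symmetric walk on the square lattice, not values taken from Leonid's answer:

v = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};  (* unit steps on the 2D lattice; illustrative only *)
probs = {0.25, 0.25, 0.25, 0.25};        (* equal step probabilities *)

stats[5, 1000, 50, v, probs]
(* a list of z = 5 counts; each counts how many of n = 1000 walks of q = 50 steps
   returned to the origin, so the actual numbers vary from run to run *)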


I have expressly left out the explanatory comments. Answering questions on Stack Exchange has taught me that code often does not do what its description claims, and that it is better to read the code itself for a true understanding.


I find the level of granularity illustrated above distracting rather than illuminating.




  • There is quite a lot of abstract fluff in the form of function names that tell me what the code does rather than just showing me what it does in simple, readable steps.




  • Each subfunction has multiple parameters, and the relationship between these functions is not clear at a glance. The evaluation order ultimately proves simple, but the code itself feels convoluted.





  • To follow this code I have to read it backwards, working from the inside out, and I have to keep track of multiple arguments at each step. Leonid wisely keeps the parameters consistent throughout, but this cannot be assumed at first read, so additional mental effort must be expended.




Conversely, in my own terse paradigm, I would write the function as follows:


ClearAll[stats2]

stats2[z_, n_, q_, v_, probs_] :=
  With[{freq = probs -> Range @ Length @ probs},
    (
      v[[ freq ~RandomChoice~ q ]]
        // Accumulate
        // MemberQ[{0, 0}]
        // Boole
    ) ~Sum~ {n} ~Table~ {z}
  ];
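Assuming the same illustrative v and probs defined in the sketch above, stats2 is a drop-in replacement, called with identical arguments:

stats2[5, 1000, 50, v, probs]
(* statistically equivalent output to stats[5, 1000, 50, v, probs];
   the values differ run to run because the steps are random *)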

I find this greatly superior for personal ease of reading and comprehension.


I know that my style is unconventional and at times controversial; some readers no doubt flinch at my use of ~infix~ operators. Nevertheless, I stand by my assertion that once this style becomes familiar the code is very easy to read.
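For readers unfamiliar with these notations, the prefix, postfix, and infix forms used in stats2 are plain syntactic variants of ordinary function application:

f @ x      (* prefix:  equivalent to f[x] *)
x // f     (* postfix: equivalent to f[x] *)
a ~f~ b    (* infix:   equivalent to f[a, b] *)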




  • The entire algorithm is visible in one compact structure

  • The relationship of the parts of the code is quickly apparent

  • The code can be read in a straightforward top-to-bottom, left-to-right manner

  • This has almost no abstract fluff; the code is what it does, one comprehensible step at a time

  • There is little need to visually or mentally jump around the code in the process of following it

  • There is a minimum of arguments to keep track of at each step; each function is a built-in, and each here has only one or two arguments, most instantly apparent from the syntax itself, e.g. 1 // f or 1 ~f~ 2.

  • Each parameter (of stats2) is used only once, with the exception of probs; there is no interwoven handing-off of arguments to track or debug (e.g. accidentally passing two arguments in reverse order)

  • There is virtually no need to count brackets or commas


I feel that, as illustrated, stats2 is a sufficiently granular piece of code, and that understanding and debugging it in its entirety is faster and easier than the same process applied to Leonid's code.



So where are the questions in all of this?




  1. Who is right here? ;^) I know that my code is faster for me to read and understand, now and later. But what do others make of it? Surely some readers are already familiar with my style (perhaps grudgingly!); do they find stats2 easy to read?




  2. If, as I believe, there should be a balance of granularity and terseness, how might the optimum degree be found?




  3. Is it peculiar that I find Leonid's code comparatively slow to read and follow? What methods might I employ to improve my comprehension of that style?





  4. If my code is not easy for others to read and follow, how can I identify and address the barriers that make it so?




  5. Am I missing the point? Are ease and speed of reading and debugging not the primary goals of the coding style Leonid illustrated in this example? If not, what is the primary goal, and does my style fail to meet it in this specific example?






Reply 1



This is a reply specifically to Leonid, not because other answers are not equally welcome and valid but because I chose his statements and code as the basis for my argument.


I suspect that there is little in this that I truly disagree with and that further dialog will bring me closer to your position. I have neither the breadth (multiple languages) nor depth (large projects, production code) of your experience.


I suspect that this is the crux of the problem: "It is somewhat an art to decide for each particular case, and this cannot be decided without a bigger context / picture in mind." I think that art is what I wish to explore here.


It is somewhat unfair to pick apart your example without context, but since none was provided, I see no other option.


I am certainly guilty of crafting "write-only code" at times; sometimes I even find this amusing. However, I do not think stats2 is a case of this. To the contrary, I find it more read-friendly than your code, which is largely the foundation of this entire question.


I abhor code redundancy to the point of compulsively compacting other people's answers(1)(2), so your claim (if I read it correctly) that my style is inherently more redundant is simultaneously promising and exasperating. :^)


Surely I believe in code reusability, but I favor shorthands and abstractions that are broadly applicable rather than limited to a small class or number of problems. What experienced coder doesn't have a shorthand for Range @ Length @ x, since that comes up frequently across a broad range of problems? But when am I going to use returnedQ again, and is it worth the mental namespace to remember what it does? Am I going to be looking for the element {0,0} again, or might it be something else? Might I want Differences instead of Accumulate? Is it easier to make returnedQ sufficiently general, or to simply call // MemberQ[foo] when I need it?
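To make that contrast concrete: a broadly applicable shorthand earns its mental namespace because it recurs everywhere, whereas a helper like returnedQ answers only one narrow question. The name indices below is hypothetical, my own illustration rather than anything from Leonid's code:

indices[x_] := Range @ Length @ x   (* reusable across a broad range of problems *)

indices[probs]   (* {1, 2, 3, 4} for the four-element probs used in the earlier sketch *)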


You wrote:



My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular.




Perhaps surprisingly, this is actually rather backward from the way it seems to play out for me. It is easy to churn out verbose code with little thought for brevity and clarity; that is economical of my writing time. But spending the effort to write terse and clear code, as I attempted to do with stats2, returns economy when reading and reusing that code, because I can quickly re-parse and understand it holistically rather than getting lost in a tangle of abstractions, as I do with your code example. (Sorry, but that's how I feel in this case.) I do not want to have to run code to understand what it does; I want to be able to simply read it in the language I am acquainted with (Mathematica).


If, in the course of solving multiple related problems, I realize that there is redundancy in my code, I can still pull out those elements and refactor. The simple, visibly apparent structure makes this easy.


I think the only way I shall be able to see this from your perspective is to work on a sufficiently large example where your principles become beneficial, and where our styles would initially diverge. I wonder if we can find and use such an example without pointlessly spending time on something arbitrary.




Reply 2


Your updated answer reads:



What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me.




I welcome this epiphany! To remove redundancy from my code and make it even more terse is something I have striven toward for years. I think this can only come through a direct example (or series of examples), since in the microcosm your granularity expands the code rather than condensing it. How large a code base would we need for this level of granularity to condense code rather than expand it?


C is so verbose that I doubt I would be able to fully appreciate and internalize examples from the referenced book. Does a Mathematica-specific example come to mind?



Answer



My path to preferring granularity


This is probably more an extended comment and a complementary answer to an excellent one by Anton. What I want to say is that for a long time I had been thinking exactly along Mr.Wizard's lines. Mathematica makes it so easy to glue transformations together (and keep them readable and understandable!) that there is a great temptation to always code like that. Going to extreme granularity may seem odd and actually wrong.


What changed my mind almost a decade ago was a tiny book by Roger Sessions called Reusable Data Structures for C, in particular his treatment of linked lists, although everything else he did carried that style as well. I was amazed by the level of granularity he advocated. By then I had produced and / or studied several other implementations of the same things, and was sure one couldn't do better or more simply. Well, I was wrong.


What I did realize by that time was that once you've written some code, you can search for repeated patterns and try to factor them out, and as long as you do that reasonably well, you follow the DRY principle, avoid code duplication, and everything is fine. What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me.


Since then, I have started actively looking for smaller bricks in my code (in a number of languages: while I mostly answer Mathematica questions, I have also written reasonably large volumes of production code in Java, C, JavaScript, and Python), and more often than not I have found them. And in almost all cases, going more granular was advantageous, particularly in the long term, and particularly when the code you write is only a small part of a much larger code base.


My reasons for preferring granularity


Now, why is that? Why do I think that granular code is very often a superior approach? There are a few reasons; here are some that come to mind.



Conceptual advantages




  • It helps to conceptually divide code into pieces which for me make sense by themselves, and which I view as parts deserving their own mental image / name.


    More granular functions, when the split of a larger chunk of code is done correctly, represent inner "degrees of freedom" in your code. They expose the ideas behind the code, and the core elements which combine to give you a solution, more clearly.


    Sure, you can also see that in a single chunk of code, but less explicitly. In that case, you have to understand the entire code just to see what the input for each block is supposed to be and how the whole is supposed to work. Sometimes that's OK, but in general this is an additional mental burden. With separate functions, their signatures and names (if chosen well) help you with that.




  • It helps to separate abstraction levels. The code combined from granular pieces reads like DSL code, and allows me to grasp the semantics of what is being done more easily.


    To clarify this point, I should add that when your problem is part of a larger code base, you often don't recall it (taken separately) as clearly as when it is a stand-alone problem, simply because most such functions solve problems which only make sense given a larger context. Smaller granular functions make it easier for me to reconstruct that context locally without reading all the big code again.





  • It is often more extensible


    This is so because I can frequently add more functionality by overloading some of the granular functions. Such extension points are just not visible / not easily possible in the terse / monolithic approach.




  • It often allows one to reveal certain (hidden) inner structure, cross-cutting concerns, and new generalization points, and this leads to significant code simplifications.


    This is particularly so when we talk not about a single function, but about several functions, forming a larger block of code. It frequently happens that when you split one of the functions into pieces, you then notice that other functions may reuse those components. This sometimes allows one to discover a new cross-cutting concern in code, which was previously hidden. Once it is discovered, one can make efforts to factor it from the rest of the code and make it fully orthogonal. Again, this is not something that is observable on the level of a single function.





  • It allows you to easily create many more combinations


    This way you can get solutions to similar (perhaps somewhat different) problems without the need to dissect your entire code and rewrite it all. For example, if I had to change the specific way the random walk in that example was set up, I would only have to change one tiny function, which I can do without thinking about the rest.
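As a minimal sketch of the point above, assuming the granular definitions from the top of this question are loaded: switching the experiment to uniformly distributed steps (my own illustrative variant, not part of the original answer) means replacing only randomSteps; returnedQ, runExperiment, allReturns, and stats are reused untouched.

randomSteps[probs_, q_] := RandomInteger[{1, Length[probs]}, q]  (* ignore the weights: uniform steps *)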




Practical advantages




  • It is easier to understand / recall after a while


    Granular code, at least for me, is easier to understand when you come back to it after a while, having forgotten its details. I may not remember exactly what the idea behind the solution was (well-chosen names help here), or which data structures were involved in each transformation (signatures help here). It also helps when you read someone else's code. Again, this is particularly true for larger code bases.





  • More granular functions are easier to test in isolation.


    You can surely do that with the parts of a single function too, but it is not as straightforward. This is particularly true if your functions live in a package and are part of a larger code base. (A one-line example of such an isolated test appears in the sketch after this list.)




  • I can better protect such code from regression bugs


    Here I mean the bugs coming from changes not propagated properly through the entire code (such as changes to the types / number of arguments of some functions), since I can insert argument checks and post-conditions more easily. When some wrong / incomplete change is made, the code breaks in a controlled, predictable, and easy-to-understand fashion. In many ways this approach complements unit tests; the code basically tests itself. (The sketch after this list includes a minimal argument-check pattern.)




  • It makes debugging much simpler. This is true because:




    • Functions can throw inner exceptions with detailed information about where the error occurred (see also the previous point)


    • I can access them more easily in running code, even when they are in packages.


      This is actually often a big deal, since it is one thing to run and test a tiny function, even a private one, and another thing to deal with a large and convoluted function. When you work on running code and have no direct access to the source (so that you cannot easily reload an isolated function), the smaller the function you want to test, the easier it is.






  • It makes creating workarounds, patches, and interactions with other code much easier. I have experienced this myself a great deal.





    • Making patches and workarounds.


      It often happens that you don't have access to the source, and have to change the behavior of some block of functionality at runtime. Being able to simply overload or Block a small function is so much better than having to overload or redefine huge pieces of code, without even knowing what you may break by doing so.




    • Integrating your functionality with code that does not have a public extension API


      The other, similar issue arises when you want to interact with some code (for example, make some of its functions work with your data types and be overloaded on them). It is good if that other code has an API designed for extensions. But if not, you may, for example, use UpValues to overload some of those functions. And there, having such granular functions as hooks really saves the day. In such moments you feel truly grateful to the other person for having written their code in a granular fashion. This has happened to me more than once.
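A few minimal sketches of these practical points, each independent of the others, all assuming the granular definitions (and the illustrative v and probs) from earlier in this post are loaded; the message name badargs and the wrapper myWalk are hypothetical, purely illustrative:

(* isolated test: returnedQ is deterministic given its inputs, so it is trivial to test *)
VerificationTest[returnedQ[{{1, 0}, {-1, 0}}, {1, 2}], True]

(* argument checks: a checked definition plus a catch-all replaces the unchecked one *)
ClearAll[randomSteps];
randomSteps[probs_?VectorQ, q_Integer?Positive] :=
  RandomChoice[probs -> Range[Length[probs]], q];
randomSteps[args___] := (Message[randomSteps::badargs, {args}]; $Failed);
randomSteps::badargs = "randomSteps called with unexpected arguments: `1`.";

(* runtime patch: Block temporarily replaces the small function inside one call,
   leaving every other definition untouched *)
Block[{randomSteps = Function[{p, q}, RandomInteger[{1, Length[p]}, q]]},
 stats[5, 1000, 50, v, probs]]

(* UpValue hook: let returnedQ accept a custom wrapper without editing returnedQ itself *)
myWalk /: returnedQ[myWalk[w_List], steps_] := returnedQ[w, steps]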







Implications for larger programs


There surely isn't a single "right" way to structure code. And you may notice that, in most of the answers I post here on M SE, I do not follow the granularity principle to the extreme. One important thing to realize here is that the working mode in which one solves a very particular problem is very different from the working mode in which one constructs, extends, and / or maintains larger code bases.


The whole ability to glue things together insanely fast works against you in the long term if your code is large. This is a road to writing so-called write-only code, and for software development that is a road to hell. Perl is notorious for that, which is one reason why lots of people switched from Perl to Python despite Perl's unquestionable power. Mathematica is similar, because it shares with Perl the property that there are typically a large number of ways to solve any given problem.


Put another way, the Mathematica language is very reusable, but that doesn't mean it is very easy to create reusable code with it. It is easy to create code that solves any particular problem fast, but that's not the same thing. I view smaller granularity as an idiomatic (in Mathematica) way to improve reusability. What I wanted to stress was that reusability comes from the right separation of concerns, factoring out different pieces. This is obvious for larger volumes of code, but I think it is no less true for smaller functions.


Typically, when we solve some problem in Mathematica, we don't have reusability in mind all that much, since our context is usually confined to that particular problem. In such a case, reusability is a foreign concept and gets in the way. My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular.


My conclusions


So, this really boils down to a simple question: do you need to solve some very specific problem, or do you want to construct a set of bricks to solve many similar problems? It is somewhat an art to decide for each particular case, and this cannot be decided without a bigger context / picture in mind. If you are sure that you just need to solve a particular problem, then going to extreme granularity is probably overkill. If you anticipate many similar problems, then granularity offers advantages.


It so happens that large code bases frequently automate a lot of similar things rather than solve a single large problem. This is true even for programs like compilers, which do solve a single large problem, but in which lots of sub-problems reuse the same core set of data structures. So I was particularly advocating granularity in the context of developing large programs, and I would agree that for solving some particular, very specific problem, making the code too granular might result in too much mental overhead. Of course, that also greatly depends on personal habits; mine have been heavily influenced in recent years by dealing with larger chunks of code.


