in reply to Maintainable code is the best code
Having just worked on a data analysis that involved orthogonality, I want to point out that "orthogonal functions" does not necessarily mean that each function does only one task, etc.
A set of orthogonal items means that between any two items there is no overlap in their 'function'; for vectors in 3D Euclidean space, this means that no more than three vectors can be mutually orthogonal, and that any such set of three vectors must be at right angles to each other.
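For the mathematically inclined, the standard dot-product definition makes this precise (plain textbook notation, added here only as a refresher):

  u \perp v \iff u \cdot v = 0

and in \mathbb{R}^3 at most three nonzero vectors can be pairwise orthogonal, e.g. the basis vectors (1,0,0), (0,1,0), (0,0,1).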
In programming, this corresponds to functions whose responsibilities do not overlap. That is, if you have one function that reads, parses, and prints data, and another that just prints data, those two are not orthogonal.
Now, good programming practice, planning, and repeated refactoring are analogous to the mathematical tools that help you find the optimal orthogonal set of functions/vectors/whatever. That is, in programming, several rounds of refactoring will help you identify not only a set of orthogonal functions, but functions that each perform only a small task, such that they can be combined in some manner to do complex ones.
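To make the read/parse/print example concrete, here is a rough Perl sketch (the comma-separated record format and all the sub names are made up purely for illustration, not anyone's real code). The first pair of subs overlap in duty; the refactored set does not:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Non-orthogonal: report_from_file reads, parses, AND prints,
  # while print_record also prints -- their duties overlap.
  sub report_from_file {
      my ($file) = @_;
      open my $fh, '<', $file or die "Can't open $file: $!";
      while ( my $line = <$fh> ) {
          chomp $line;
          my ( $name, $value ) = split /,/, $line;
          print "$name => $value\n";
      }
  }

  sub print_record {
      my ( $name, $value ) = @_;
      print "$name => $value\n";    # same job as the loop above
  }

  # A more orthogonal set: each sub has one small, non-overlapping job,
  # and the complex task is just a combination of them.
  sub read_lines {
      my ($file) = @_;
      open my $fh, '<', $file or die "Can't open $file: $!";
      chomp( my @lines = <$fh> );
      return @lines;
  }

  sub parse_record { return split /,/, $_[0] }

  sub report_from_file_2 {
      my ($file) = @_;
      print_record( parse_record($_) ) for read_lines($file);
  }

  # e.g. report_from_file_2('records.csv');  # hypothetical input file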
So while the statement above is not *wrong* per se, it's not entirely accurate, and I'm just trying to clear that up :)

Dr. Michael K. Neylon  mneylonpm@masemware.com

"You've left the lens cap of your mind on again, Pinky"  The Brain
It's not what you know, but knowing how to find it if you don't know that's important
Re:{2} Maintainable code is the best code - principal components
by jeroenes (Priest) on Oct 03, 2001 at 02:15 UTC

Orthogonality... yeah, nice features. For the past few months I've been grinding away at orthogonality in the sense of principal component analysis (singular value decomposition) - mind you, I'm not a mathematician, I just apply this stuff to physiology.
When I read your post, Masem, it occurred to me that a good set of functions is just the opposite of principal components. For those of you not familiar with this (intriguing) subject: principal components are a set of orthogonal axes for the data, chosen (like rotating and scaling 3D Euclidean space) in such a way that as much of the variability as possible goes into the first axis, as much of what remains as possible into the 2nd component, etc. etc. Moreover, all these components are orthogonal to each other. Apart from the physiological applications I'm familiar with, there are also mathematical implications, like matrix rank, finding solutions, and so on.
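For the record, the usual textbook formulation (standard definitions, nothing specific to my data): for a mean-centered data matrix X, the first principal direction is

  w_1 = \arg\max_{\|w\|=1} \operatorname{Var}(Xw)

and each further direction maximizes the remaining variance subject to being orthogonal to all previous ones,

  w_k = \arg\max_{\|w\|=1,\; w \perp w_1,\dots,w_{k-1}} \operatorname{Var}(Xw).

Equivalently, the singular value decomposition X = U \Sigma V^{\mathsf{T}} hands you the principal directions as the columns of V, with the variance captured by each one ordered \sigma_1 \ge \sigma_2 \ge \cdots.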
A good set of functions is the opposite of principal components.
You don't want as much functionality as possible crammed into a single function, with all the remaining bits scattered around in a bunch of insignificant code. You want each function to have a clear and confined scope, so that each of the functions is as meaningful and concise as possible, without a lot of flags and parameters and other confusing stuff.
I don't know the opposite of singular value decomposition in a statistical sense, but it probably is an ill-posed problem. You can recombine orthogonal axes into an infinite number of other orthogonal axes (coordinate systems), but only one set forms the principal components. The reverse, dividing the information equally over all axes, is possible in many different ways, and therefore it is hard to come up with a single solution (I figure, but I don't have hard proof for this... does anyone?).
This compares nicely with writing code. There are a lot of ways to write orthogonal functions, but it is hard to divide the functional space equally over functions.
Is there a name for this? Rural components maybe?
Jeroen
"We are not alone"(FZ)  [reply] 

Actually, I was also approaching orthogonality from a principal component analysis (PCA) standpoint (though for experimental data analysis).
Now, to go over the heads of everyone else who has no idea what PCA is :), the programming equivalent is that you have M 'overall functions' that your software will want to perform. A good refactoring down to an orthogonal set in programming should result in N small functions, with N >> M. As jeroenes indicates, this is ill-defined from a PCA standpoint, since with PCA you'd want to select a small number ( < M ) to approximate the job. However, unstated in the refactoring process is the fact that you should be thinking of the future and the past: in reality you might have P projects, each with M_i (i = 1 to P) 'overall functions', such that the total of all functions over all projects past, present, and future comes to M', with M' >> N >> M.
In plain text, you should be refactoring to find an orthogonal set of functions that are reusable for other problems, including functions that might have been created already and ones that might be part of future programs. This is the same conclusion the parent thread reaches, as well as numerous other texts on programming, but for those with a mathematical bent, there's some empirical grounding to it as well.

Dr. Michael K. Neylon  mneylonpm@masemware.com

"You've left the lens cap of your mind on again, Pinky"  The Brain
It's not what you know, but knowing how to find it if you don't know that's important

No wonder your first reply reminded me of PCA.
May I ask what kind of data you use PCA for? It apparently gets more popular these days; I have seen it used for genetic chimera analysis and for DNA arrays as a prelude to clustering. When I started to use it, my mentor was very skeptical about how acceptable it would be, while the statistician who helps me told me it is a technique about a century old, so nobody should complain.
Anyway, I use it for clustering analysis as well, but then for extracellularly recorded neuronal spike waveforms. So I sample spike waveforms from an electrode placed in a brain slice and turn a PCA routine loose on them. Mostly just the first two components are enough to get your clusters.
Jeroen
"We are not alone"(FZ)


I think that principal components analysis is the wrong way to think about this problem.
First of all, the analogy does not really carry. Principal components analysis depends on having some metric for how "similar" two vectors are, which corresponds to the geometric "dot product". While many real-world situations fit this, and in many more you can fairly harmlessly just make one up, I don't think that code fits this description very well.
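(For concreteness, the similarity metric in question is just the ordinary inner product and the angle it induces - standard definitions, nothing particular to code:

  \langle u, v \rangle = \sum_i u_i v_i, \qquad \cos\theta = \frac{\langle u, v \rangle}{\|u\|\,\|v\|}

PCA's notions of variance and orthogonality are built entirely out of this, which is exactly the structure I doubt code possesses.)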
But secondly, even if the analogy did carry, the basic problem is different. Principal component analysis is about taking a complex multidimensional data structure and summarizing most of its information with a small set of numbers. The remaining information is usually considered to be "noise" or otherwise irrelevant. But a program has to remain a full description.
Instead, I think a good place to start thinking about this is Larry Wall's comment about Huffman coding in Apocalypse 3. That is an extremely important comment. As I indicated in Re (tilly) 3: Looking backwards to GO forwards, there is a connection between understanding something well and having mental models which are concise. And source code is just a perfectly detailed mental model of how the program works, laid down in text.
As observing the results of Perl golf will show you, shortness is not the only consideration for well-laid-out programs. However, it is an important one.
So if laying out a program for conciseness matters, what does that tell us? Well, basic information theory says a lot. In information theory, information is stated in terms of what could be said. The information in a signal is measured by how much it specifies the overall message, that is, how much it cuts down the problem space of what you could be saying. This is a definition that depends more on what you could be saying than on what you are saying. Anyway, information theory tells us that at perfect compression, every bit will carry just as much information about the overall message as any other bit. From a human point of view, some of those bits carry more important information. (The average color of a picture of a winter scene has more visual impact than the placement of an edge of a snowflake.) But the amount of information is evenly distributed.
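(The standard formalization, for anyone who wants it: a message x occurring with probability p(x) carries -\log_2 p(x) bits of information, and an optimal code assigns it a codeword of roughly that length, so the average codeword length approaches the source entropy

  H = -\sum_x p(x) \log_2 p(x)

and every bit of the compressed stream is, on average, equally informative. That is the sense in which perfectly compressed output spreads its information evenly.)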
And so it is with programming. Well-written source code is a textual representation of a model that is good for thinking about the problem. It will therefore be fairly efficient in its representation (although the text will be inefficient in ways that reduce the effort a human needs to understand the code). Being efficient, functions will convey a similar amount of surprise, and the total surprise per block is likely to be fairly large.
In short, there will be good compression in the following sense: a fixed amount of effort spent by a good programmer trying to master the code should result in a relatively large portion of the system being understood. This is far from a compact textual representation. For instance, the human eye finds overall shapes easy to follow, so it is worth spending huge amounts of text on allowing instant pattern recognition of the overall code structure. (What portion of your source code is taken up with whitespace whose purpose is to keep a consistent indentation/brace style?)
Of course, though, some of that code will be high-order design, and some will be minor details. In terms of how much information is passed, they may be similar. But the importance differs...

Actually, I think that principal components is a horrible way of looking at programming. Programming is, essentially, the art of instructing a being as to what to do. This being has an IQ of 0, but perfect recall, and will do the same actions over and over until told to stop. There is no analysis by the being of what it is told to do.
As for a human reader, the analysis is focused on atoms, which can be viewed as roughly analogous to principal components, though they're not.
The first principal component is meant to convey the most information about the data space/solution space. The next will convey the most of whatever the first couldn't convey, and so on.
In programming, the goal is for each atom (or component) to convey only as much information as is necessary for it to be a meaningful atom. Thus, the programmer builds up larger atoms from smaller atoms. The goal is to eventually reach the 'topmost' structure, which would be the main() function in C, for example. That function is built completely of calls to language syntax and other atoms, whose names should reflect what that syntax or atom is doing. Thus, we don't have if doing what while would do, and vice versa.
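A tiny Perl sketch of that build-up (the subs, data, and tax rate are invented purely to illustrate the layering, not anyone's real code): each atom is small and named for what it does, and the topmost routine is nothing but calls to other atoms:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Small "atoms": each conveys just enough to be meaningful on its own.
  sub fetch_orders { return ( { id => 1, total => 10 }, { id => 2, total => 25 } ) }
  sub taxed_total  { my ($order) = @_; return $order->{total} * 1.08 }
  sub format_order { my ( $order, $amount ) = @_; return sprintf "order %d: %.2f", $order->{id}, $amount }

  # A larger atom built from smaller ones; its name says what it does.
  sub print_invoices {
      for my $order ( fetch_orders() ) {
          print format_order( $order, taxed_total($order) ), "\n";
      }
  }

  # The 'topmost' structure is nothing but calls to named atoms.
  print_invoices();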
In data analysis, you want to look at the smallest number of things that give you the largest amount of knowledge of your dataset. But, you're not analyzing data. You're reading algorithms, which do not compose a dataset in the same way that observed waveforms would. To understand an algorithm, you have to understand its component parts, or atoms.
Think of it this way: when you explain a task to someone else, say a child, you break it down into smaller tasks. You keep doing so until each task is comprehensible to the recipient. At that point, you have transmitted atoms. At no point have you attempted to convey as much information as possible in one task. Each task is of similar complexity, or contains a similar amount of information.
We are the carpenters and bricklayers of the Information Age. "Don't go borrowing trouble." For programmers, this means "Worry only about what you need to implement."

I see what you mean.
I want to clarify a bit, though, as I didn't say that principal components analysis was a good analogy. I rather said that coding should accomplish the opposite, that is, spreading the information over the functions, dividing it equally among them.
However it's stated, the analogy goes wrong because with principal components we talk about orthogonal vectors in a space, while with programming we have hierarchical functions. These each create a subspace of their own, and you just cannot do a PCA across different subspaces. Is that more or less what you meant, tilly?
/me notes with a smile on his face how everyone approaches PCA from his own angle... Masem from the chemical spectra point of view, tilly from an encoding point of view, while I think more in terms of pattern deviation schemes.