So... I haven't posted in a while; here's why: I have been using the Firefox 3 nightlies for a long time now, and Blogger was crashing the last couple of times I tried posting. I was not about to give up Firefox 3 (it is definitely one of the best pieces of software ever; I am completely dependent on the awesomeBar, and the star for bookmarking and the speed aren't bad either; the next post will say more). It appears I can now post from Fx3, which means more posts again.
This is something I posted a while ago on the alt.net list; I'm reposting it here so I remember to try it out sometime in the future:
In regard to:
http://www.sei.cmu.edu/str/descriptions/mitmpm.html#78991
...
I don't believe this equation (from that article) gives very meaningful results. For one thing, it places a very odd value on the percentage of comments.
Here is a table of that term alone:
perCM | 50 * sin(sqrt(2.4 * perCM))
------+----------------------------
    0 |   0
    1 |  50
    2 |  40
    3 |  22
    4 |   2
    5 | -16
  ... | ...
   10 | -50
   15 | -15
   20 |  30
   25 |  50
So 1% comments is worth the same amount as 25%? And 10% is downright awful. (Note also that perCM is the one term the article itself marks as optional, implying that even its authors may not consider it a good variable.)
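That table is easy to reproduce; here's a quick Python sketch of the term in isolation (the function name is mine):

    import math

    def comment_term(per_cm):
        # the comment-percentage term from the article's formula
        return 50 * math.sin(math.sqrt(2.4 * per_cm))

    for p in [0, 1, 2, 3, 4, 5, 10, 15, 20, 25]:
        print(f"{p:3d}% -> {comment_term(p):6.1f}")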
For reference, the full formula from the article is:
MI = 171 - 5.2 * ln(aveV) - 0.23 * aveV(g') - 16.2 * ln(aveLOC) + 50 * sin(sqrt(2.4 * perCM))
The coefficients are derived from actual usage (see Usage Considerations). The terms are defined as follows:
aveV = average Halstead Volume V per module (see Halstead Complexity Measures)
aveV(g') = average extended cyclomatic complexity per module (see Cyclomatic Complexity)
aveLOC = the average count of lines of code (LOC) per module; and, optionally
perCM = average percent of lines of comments per module
Also, I don't know of any tools that will compute the Halstead Volume for .NET. In general, I think those three metrics would be useful in determining maintainability, and that aveV and aveLOC should probably be on a log scale, but the coefficients appear to have been arbitrarily picked to support the conclusions of the research.
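If you did want to compute the whole thing, it translates directly (a sketch; the averages would have to come from whatever metrics tool you can find):

    import math

    def maintainability_index(ave_v, ave_vg, ave_loc, per_cm=None):
        # ave_v: average Halstead Volume per module
        # ave_vg: average extended cyclomatic complexity per module
        # ave_loc: average lines of code per module
        # per_cm: average percent of comment lines per module (the optional term)
        mi = 171 - 5.2 * math.log(ave_v) - 0.23 * ave_vg - 16.2 * math.log(ave_loc)
        if per_cm is not None:
            mi += 50 * math.sin(math.sqrt(2.4 * per_cm))
        return mi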
There is a tool that can be used to get many useful metrics out of a codebase:
http://www.ndepend.com/Metrics.aspx
I think the following on that page are particularly useful for measuring code quality:
overall:
NbLinesOfCode
PercentageCoverage
averaged (per assembly):
Instability
Abstractness
Distance from main sequence (abs(I + A - 1), ideally as close to 0 as possible; see the sketch after this list)
averaged (per type):
LCOM HS (Lack of Cohesion Of Methods - Henderson-Sellers; basically it can tell you whether a type is physically disregarding Separation of Concerns by not being cohesive)
ILCC (IL-level cyclomatic complexity; AFAIK the only reason to use this one is that it can be computed for any .NET code, not just C#)
Depth of Inheritance Tree
averaged (per method):
IL Nesting Depth
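As promised, here's a sketch of the distance-from-main-sequence calculation (instability and abstractness both land in [0,1]; the function names are mine):

    def instability(efferent, afferent):
        # I = Ce / (Ce + Ca): how much this assembly depends on others
        total = efferent + afferent
        return efferent / total if total else 0.0

    def abstractness(abstract_types, total_types):
        # A = abstract types / total types
        return abstract_types / total_types if total_types else 0.0

    def distance_from_main_sequence(i, a):
        # abs(I + A - 1); 0 means the assembly sits on the main sequence
        return abs(i + a - 1)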
Additionally, some other derived numbers could be very useful:
average(PercentageCoverageMethodLevel * MethodRank) (this weights more important methods more heavily when computing code coverage)
or
average(PercentageCoverageTypeLevel * TypeRank) (which would do the same thing for types)
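A rank-weighted coverage could look something like this (a sketch; getting the per-method coverage and rank out of your tools is hand-waved here):

    def weighted_coverage(coverages, ranks):
        # literal reading of average(coverage * rank) from above
        # coverages: per-method coverage as decimals in [0, 1]
        # ranks: the corresponding per-method rank values
        # Dividing by sum(ranks) instead of the count would normalize the
        # result back onto [0, 1], which is what the Q(x) below expects.
        pairs = list(zip(coverages, ranks))
        return sum(c * r for c, r in pairs) / len(pairs)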
I'd say, if you want some sort of overall scale, come up with good individual functions for how each factor contributes to overall quality, and then either sum the functions or take some kind of weighted average.
For example, a decent function for basing quality solely on code coverage by unit tests could be (each function below has been normalized to give output on a roughly 10-point scale):
Q(x) = (arctan(10x - 5) + pi/2 - 0.1) * 11/pi
where x in [0,1] is %coverage as a decimal
or
Q(x) = (arctan(10x - 7) + pi/2 - 0.1) * 11/pi
where x in [0,1] is average(%coverage as a decimal at the method/type level * method/type rank)
LCOM HS could be:
R(x) = -12.5x^3 + 31x^2 - 28x + 10, where x is in [0,1]
ILCC could be:
S(x) = 10.4 * e^(-0.06x), where x is in [1, inf)
and LOC could be:
T(x) = -0.4 * ln(x) + 10, where x is in [1, inf)
And the final score could be the average of all of those parts.
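Putting it together (a sketch; the function shapes are straight from above, and the inputs would come out of NDepend and your coverage tool):

    import math

    def q_coverage(x):
        # coverage quality, x in [0, 1]
        return (math.atan(10 * x - 5) + math.pi / 2 - 0.1) * 11 / math.pi

    def r_lcom(x):
        # LCOM HS quality, x in [0, 1]
        return -12.5 * x**3 + 31 * x**2 - 28 * x + 10

    def s_ilcc(x):
        # cyclomatic complexity quality, x in [1, inf)
        return 10.4 * math.exp(-0.06 * x)

    def t_loc(x):
        # lines-of-code quality, x in [1, inf)
        return -0.4 * math.log(x) + 10

    def overall_score(coverage, lcom, ilcc, loc):
        parts = [q_coverage(coverage), r_lcom(lcom), s_ilcc(ilcc), t_loc(loc)]
        return sum(parts) / len(parts)

    # e.g. 80% coverage, average LCOM HS of 0.3, average ILCC of 5, 40 LOC
    print(overall_score(0.8, 0.3, 5.0, 40.0))  # about 7.5 out of 10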