I am using a new color settings file. It is based on the Vibrant Jedi scheme that Charlie Calvert linked to a little while ago (here). I'm using it with the Inconsolata font.
I modified it a little to work better with ReSharper. If you want it, you can get it here. It works whether ReSharper->Options->Code Inspection->Settings->Color Identifiers is on or off, with different results in each case. I have it off right now because I think things get a little too busy with it on, and with it on you lose the ability to give types a different color from interfaces. I'll probably tweak the settings some more, as I haven't really used the scheme outside C# at all (and I don't really like how @"" strings or comments on the current line look).
Thursday, June 5, 2008
Pex + NUnit + R# + regexes == VS blowup and R# reinstall required
So... I just thought I'd put it out there:
- Pex doesn't come with NUnit support out of the box (apparently it only supports MSTest). I can't use it until this is fixed. There is supposed to be an extension project, but I can't find it anywhere.
- The ReSharper release candidate is now available. Definitely awesome.
- If you use Pex with NUnit and pass strings that you turn into regexes as parameters to the PexMethod, Pex apparently crashes Visual Studio. Worse yet, after the crash my R# settings were reset (I had the beta installed) and the Fonts and Colors editor didn't even list the settings anymore. As I write this post, the release candidate is installing.
- After updating R#, the fonts and colors settings are listed again, and apparently my settings were never reset; VS just wasn't honoring them. Changing one of them to something else and back again appears to get VS to apply my settings.
Sunday, June 1, 2008
Firefox 3 == awesome
Firefox 3 is almost certainly one of the best things ever! In the new URL bar (aka the AwesomeBar) I can type random things that I seem to remember from the title or URL of random sites I happen to have visited, and it knows where I want to go. For example, I type "s" and it knows I want the sprint board for work; "l" or "d" brings up my local development pages (l - localhost... d - localhost/dotnetnuke); "a" gets me Oren's blog. And those are just the pages I visit enough for Firefox to figure out that I want them from a single letter. Some others: "chad" (actually goes to another page in his blog, but one where he talks about a post I made), "scot", "guid" (a page I use to generate GUIDs), "pex", and last but not least "still alive".
Also very cool is the star in the URL bar: it "bookmarks" whatever page I want. Really, all one click does is make it so that Firefox never forgets about a page you have visited (that is pretty much all I ever do; the bar is smart enough to figure out the page I want without the tag feature). But sometimes I need to give it a little more information, so I double-click the star to give the page a tag or two (that way, when I type in the tag, the bar responds with the tagged pages first).
That is really all I like about Fx3, but it is way more than enough. I've been using it since just before alpha 3 (I switched to the Minefield nightlies at that time) and I haven't used Fx2 or IE since (well, maybe IE every once in a while when I was tired of specific sites crashing in Fx or needed to test my development items).
There are a couple of things about it that I don't like:
- The skin; I can't stand it (I always liked Qute best, and luckily Qute is available for 3).
- The dropdown listing all my open tabs (I tend to have one window with several hundred tabs open; the whole tab idea just doesn't seem to scale that far. I often lose tabs and find myself with 3 or 4 instances of the same tab open, some of which I haven't viewed in days).
- The bookmark toolbar: totally pointless now (I have upwards of 2k bookmarks), except for the Places folder (which is really cool and useful). I wish I could put the Places folder in the actual toolbar, just left of the back button.
possible code quality equations
So... I haven't posted in a while; here's why: I have been using the Firefox 3 nightlies for a long time now, and Blogger was crashing the last couple of times I tried posting. I was not about to give up Firefox 3, as it is definitely one of the best pieces of software ever: I am completely dependent on the AwesomeBar, and the star bookmarking and the speed aren't bad either (the next post will talk more about it). It appears as though I can now post from Fx3. That means more posts again.
This is something I posted a while ago on the alt.net list; I am reposting it here so I remember to try it out sometime in the future:
In regards to:
http://www.sei.cmu.edu/str/descriptions/mitmpm.html#78991
...
I don't believe this equation (from that article) will give very meaningful results. For one thing, it places a very odd value on the percentage of comments.
For that portion, here is a table:
perCM | 50 * sin(sqrt(2.4 * perCM))
0 | 0
1 | 50
2 | 40
3 | 22
4 | 2
5 | -16
...
10 | -50
15 | -15
20 | 30
25 | 50
So, 1% comments is worth the same amount as 25%? And 10% is downright awful. (Note that the perCM variable is not used in the data, implying that it may not be a good variable.)
171 - 5.2 * ln(aveV) - 0.23 * aveV(g') - 16.2 * ln(aveLOC) + 50 * sin(sqrt(2.4 * perCM))
The coefficients are derived from actual usage (see Usage Considerations). The terms are defined as follows:
aveV = average Halstead Volume V per module (see Halstead Complexity Measures)
aveV(g') = average extended cyclomatic complexity per module (see Cyclomatic Complexity)
aveLOC = the average count of lines of code (LOC) per module; and, optionally
perCM = average percent of lines of comments per module
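To make the equation concrete, here is a minimal sketch of it in C# (the helper names and the sample inputs in Main are mine; the four arguments are assumed to be precomputed per-module averages):

```csharp
using System;

static class MaintainabilityIndex
{
    // the comment term from the table above: 50 * sin(sqrt(2.4 * perCM))
    static double CommentTerm(double perCM) =>
        50 * Math.Sin(Math.Sqrt(2.4 * perCM));

    // 171 - 5.2*ln(aveV) - 0.23*aveV(g') - 16.2*ln(aveLOC) + comment term
    static double Compute(double aveV, double aveVg, double aveLOC, double perCM) =>
        171 - 5.2 * Math.Log(aveV)
            - 0.23 * aveVg
            - 16.2 * Math.Log(aveLOC)
            + CommentTerm(perCM);

    static void Main()
    {
        // made-up module averages, just to show the formula in use
        Console.WriteLine(Compute(aveV: 250, aveVg: 5, aveLOC: 20, perCM: 10));

        // looping CommentTerm over perCM values reproduces the table above
        foreach (var perCM in new[] { 0.0, 1, 2, 3, 4, 5, 10, 15, 20, 25 })
            Console.WriteLine($"{perCM,3} | {CommentTerm(perCM),4:F0}");
    }
}
```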
Also, I don't know of any tools that will compute the Halstead Volume for .NET. In general I think those three metrics would be useful in determining maintainability, and aveV and aveLOC should probably be on a log scale, but the coefficients appear to have been picked arbitrarily to support the conclusions of the research.
There is a tool that can be used to get many useful metrics out of a codebase:
http://www.ndepend.com/Metrics.aspx
I think the following on that page are particularly useful for measuring code quality:
- Overall:
  - NbLinesOfCode
  - PercentageCoverage
- Averaged (per assembly):
  - Instability
  - Abstractness
  - Distance from main sequence (abs(I + A - 1), ideally as close to 0 as possible; see the sketch after this list)
- Averaged (per type):
  - LCOM HS (Lack of Cohesion Of Methods, Henderson-Sellers; basically it can tell you if your type is disregarding Separation of Concerns by not being cohesive)
  - ILCC (IL-level cyclomatic complexity; AFAIK the only reason to use this one is that it can be computed for any .NET code, not just C#)
  - Depth of Inheritance Tree
- Averaged (per method):
  - IL Nesting Depth
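For the distance from the main sequence, a quick sketch of the abs(I + A - 1) formula (the values passed in are made up; NDepend reports I and A per assembly):

```csharp
using System;

class MainSequenceDistance
{
    // D = |I + A - 1|: 0 means the assembly sits on the main sequence;
    // values near 1 mean it is deep in the "zone of pain" or "zone of uselessness"
    static double Distance(double instability, double abstractness) =>
        Math.Abs(instability + abstractness - 1);

    static void Main()
    {
        Console.WriteLine(Distance(0.8, 0.1)); // 0.1 -- concrete and unstable: fine
        Console.WriteLine(Distance(0.1, 0.1)); // 0.8 -- concrete and stable: zone of pain
    }
}
```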
Additionally, some other functions could be very useful (see the sketch after this list):
- average(PercentageCoverageMethodLevel * MethodRank), which weights more important methods more heavily when computing code coverage
- or average(PercentageCoverageTypeLevel * TypeRank), which does the same thing for types
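A sketch of that weighted-coverage idea (the method names, coverage figures, and rank values here are all made up; in practice both numbers would come out of NDepend):

```csharp
using System;
using System.Linq;

class WeightedCoverage
{
    // hypothetical per-method metrics
    record MethodMetrics(string Name, double Coverage, double Rank);

    static void Main()
    {
        var methods = new[]
        {
            new MethodMetrics("Orders.Submit",  Coverage: 0.20, Rank: 3.0), // important, barely covered
            new MethodMetrics("Logging.Format", Coverage: 0.95, Rank: 0.2), // unimportant, well covered
        };

        // average(PercentageCoverageMethodLevel * MethodRank): poor coverage on a
        // high-rank method hurts the score far more than on a low-rank one
        double score = methods.Average(m => m.Coverage * m.Rank);
        Console.WriteLine($"weighted coverage score: {score:F2}");
    }
}
```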
I'd say that if you want some sort of overall scale, come up with a good individual function for how each factor contributes to quality, and then either sum the functions or take some kind of weighted average.
For example, a decent function for basing quality solely on code coverage by unit tests could be (each function below has been normalized to give output on a 10-point scale):
Q(x) = (arctan(10x - 5) + pi/2 - 0.1) * 11/pi
where x in [0,1] is the decimal version of % coverage
or
Q(x) = (arctan(10x - 7) + pi/2 - 0.1) * 11/pi
where x in [0,1] is AVERAGE(the decimal % coverage at the method/type level * method/type rank)
LCOM HS could be:
R(x) = -12.5x^3 + 31x^2 - 28x + 10
where x is in [0,1]
ILCC could be:
S(x) = 10.4e^(-0.06x)
where x is in [1,inf)
and LOC could be:
T(x) = -0.4*ln(x) + 10
where x is in [1,inf)
And the final score could be the average of all of those parts.
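Putting it together, a sketch of that averaging scheme in C# (the measurement values in Main are made up; Q, R, S, and T are just the functions above):

```csharp
using System;
using System.Linq;

static class QualityScore
{
    // Q: unit-test coverage, x in [0,1]
    static double Q(double x) => (Math.Atan(10 * x - 5) + Math.PI / 2 - 0.1) * 11 / Math.PI;

    // R: LCOM HS, x in [0,1]
    static double R(double x) => -12.5 * x * x * x + 31 * x * x - 28 * x + 10;

    // S: IL cyclomatic complexity, x in [1,inf)
    static double S(double x) => 10.4 * Math.Exp(-0.06 * x);

    // T: lines of code, x in [1,inf)
    static double T(double x) => -0.4 * Math.Log(x) + 10;

    static void Main()
    {
        // hypothetical measurements: 85% coverage, LCOM HS of 0.3,
        // average ILCC of 4, and an average of 12 lines per method
        double[] parts = { Q(0.85), R(0.3), S(4), T(12) };
        Console.WriteLine($"quality score: {parts.Average():F1} / 10");
    }
}
```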