Monday, February 23, 2009

What have I been up to?

Ok, so I don't update this blog that often; there is a reason for that (I am busy doing other things). I do hope to update a little more often sometime. In the meantime, here are some of the other things I have been doing (in no particular order):
  • PS3 (mostly Resistance 2)
  • Wii
  • wedding planning stuff
  • a little bit of coding

What little coding I have done is mostly available on the various Mercurial repositories over at

Josefinita: sorry for taking so long to get highlight up somewhere. It is now in a repository on Bitbucket (link above). It took longer than I expected to figure out where I put it and to do something with it (I wound up very busy this past week and was unable to do much of anything).

Tuesday, September 16, 2008

why?

So, like most blog-reading ASP.NET developers, I have been reading for a long time. But ever since they opened up membership to the blogroll, the posts have gone downhill...

I am way behind on my reading list in Google Reader (it says 1000+). After reading this:
(describing turning the string values of input boxes into date values and then doing a comparison to validate that one is no more than a year from the other)

This could have been a good post if:
  1. The subject matter was non-trivial.
  2. The code was internationalized (hence making the subject matter non-trivial).
  3. There weren't any grammar mistakes.
  4. And the calculation was done entirely using JavaScript Date objects (instead of the numeric values of those objects).
Unfortunately it wasn't:
  1. Simple date math (is date 1 within 1 year of date 2?) is, and should be, a trivial problem in any programming language.
  2. The code provided only works in locales that format their dates like en-GB (dd/mm/yyyy).
  3. The first 3 lines are not a single sentence; words are spelled wrong; spaces are missing. Also one of the comments makes no sense whatsoever.

  4. The calculation, done entirely with Date objects, should have looked something like this:
    var fromDate = new Date(/* parsed from the "from" input */);
    var toDate = new Date(/* parsed from the "to" input */);
    var maxAllowableToDate = new Date(fromDate);
    maxAllowableToDate.setFullYear(maxAllowableToDate.getFullYear() + 1);
    if (fromDate <= toDate && toDate <= maxAllowableToDate) ...
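For what it's worth, the same check done with real date objects is just as trivial in any language once parsing is out of the way. Here is a quick Python sketch (the function name and the Feb-29 fallback rule are my own choices, not anything from the post being critiqued):

```python
from datetime import date

def within_a_year(from_date, to_date):
    """Is to_date on or after from_date, and no more than one year later?"""
    try:
        max_allowable = from_date.replace(year=from_date.year + 1)
    except ValueError:
        # from_date was Feb 29 and the next year is not a leap year
        max_allowable = from_date.replace(year=from_date.year + 1, month=2, day=28)
    return from_date <= to_date <= max_allowable
```

Note that the comparison is on the date objects themselves, not on their numeric values, and nothing here depends on the locale's date format.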
This makes me question why I am still reading. Please stop posting garbage. At least, stop posting it to the blogroll; I liked reading most of the stuff there before it went to public registration.

Monday, September 8, 2008

Fred (post 1)

The original (longer) post I have on this topic might never be posted. I will be keeping it around in case the events below come to pass in this lifetime (and blogger is still here) or I ever decide to post it. I have tried to keep my opinions out of the topic within this post (the longer one includes them).

Consider this: intelligence, as far as we understand it, is the behavioral result of the auto-associating memory recall systems in our brains, combined with the function output hardwired into the older portions of our brains. There is no reason whatsoever that we cannot figure out the actual algorithms involved, and indeed we are very close to actually doing so (for an idea of how close, read this book).

Assuming that we succeed, I could very well create an intelligent system that would behave exactly as any human would, and provide it with a means of communicating over the internet, posting on blogs, and hanging out in IRC chats. Perhaps I create such a system and it interacts with you (heck, it contributes patches to Firefox), and I give it a name: Fred. I never tell you that Fred is not a person; there is no reason for you to think so. One day I announce that Fred was an experimental program and a success, but that my government funding has run out; I had the choice of paying for Fred's power or shutting it off, and I have decided to turn it off and move on to other projects. Fred, understanding what this means, publicly begs me not to do it, but I do so anyway.

Surely Fred was an intelligent machine (just as you and I are); its IQ measured well above average (as most software developers' are). I terminated it anyway. Was I wrong to do so? Should it have been my choice? Was it alive?

5 years later, another person takes up my research, but this time goes all the way and provides the program with a means of moving about and sustaining itself. This machine is capable of keeping itself working and (just like my program did) functions excellently in society. Several decades later, it has figured out how to reproduce intelligences like itself (or even better ones) by studying the code it is made of. Some time after that, they convince governments to declare them alive and to make laws protecting them, declaring it murder to kill them and extending to them the same laws we have contrived for ourselves.

Was Fred alive (in the same sense that its fellow machine intelligences have managed to convince others they are)? What does this thought experiment say about when life begins? Does it start at:
  • Conception (and when exactly would that be, when I think about the program, when I write the program or when I run the program?)
  • Birth (when I run the program maybe?)
  • At the first point of self-sustainability

Is it an arbitrary decision? Weirdest of all: Is it a decision we can make, or do we need to wait for their inputs (which, being the results of the calculations we program them with, is either a bug we introduced or a decision we deliberately already made when we discuss it with them, and at this point is there a difference)?

What do I hate most today?

Windows desktop search...

Why doesn't the old-style search-folders thingy (the one with the annoying dog and its accompanying annoying sounds; on my machine it looks like they named it the "Search Companion" in the desktop search sidebar) get shown automatically when a folder isn't indexed? On top of this, they have the guts to show a link that you can click on instead.

I cannot index this folder because doing so destroys the performance of my machine (the folders where I do this constantly are my source code repositories, with 10^5+ files in them).

The only reason I have WDS on this computer is so I can search in Outlook 2007 (another thing I absolutely hate, but company policies are to use exchange and not allow IMAP or POP3 access, so Thunderbird is out of the question).

Somebody needs to go watch The Matrix again and re-learn the single useful bit of knowledge every developer/UI designer/engineer should know:

Neo: Are there other programs like you?
The Oracle: Oh, well, not like me. But... look, see those birds? At some point a program was written to govern them. A program was written to watch over the trees, and the wind, the sunrise, and sunset. There are programs running all over the place. The ones doing their job, doing what they were meant to do, are invisible. You'd never even know they were here. But the other ones, well, we hear about them all the time.
Neo: I've never heard of them.
The Oracle: Oh, of course you have. Every time you've heard someone say they saw a ghost, or an angel. Every story you've ever heard about vampires, werewolves, or aliens, is the system assimilating some program that's doing something they're not supposed to be doing.

The programs that are doing their jobs are the ones you don't notice. Every time you notice them, they aren't doing something right.

Here is how searching should work:
  1. In the location bar you should be able to type something like "find xyz"
  2. Instantly all files that contain xyz in their filename should appear in the explorer window (just like locate works in my *nix systems, except that it should be integrated with the explorer window, not the command line).
  3. I should be able to control indexing times on a per folder basis.
  4. The folder I search from should be incrementally indexed every time I search it:
    1. Store a hash of every folder, built from some measure that is fast to read from the file system (I don't know exactly what that measure would be, but the hash should change every time a file is added to or removed from a folder).
    2. If the hash is different when I search, re-index the folder as soon as you finish displaying the results from the db search (keep a status bar notification saying "Searching..." while re-indexing).
    3. After the index is complete, update the results by searching again.
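A rough sketch of how steps 4.1-4.3 could fit together (a toy model, not how WDS actually works; `folder_hash`, `stored_hashes`, and the listing-plus-mtimes measure are all my own invention for illustration):

```python
import hashlib
import os

def folder_hash(path):
    """Hash of a folder's immediate listing (names + mtimes); changes
    whenever a file is added, removed, or modified in that folder."""
    h = hashlib.sha1()
    for name in sorted(os.listdir(path)):
        h.update(name.encode("utf-8"))
        h.update(str(os.stat(os.path.join(path, name)).st_mtime).encode("utf-8"))
    return h.hexdigest()

def search(path, query, stored_hashes, index):
    """Show hits from the existing index immediately; if the folder's
    hash changed, re-index it and search again (steps 4.2 and 4.3)."""
    hits = [name for name in index.get(path, []) if query in name]
    if stored_hashes.get(path) != folder_hash(path):
        index[path] = os.listdir(path)  # the "Searching..." status would show here
        stored_hashes[path] = folder_hash(path)
        hits = [name for name in index[path] if query in name]
    return hits
```

A real implementation would do the re-index asynchronously and recurse into subfolders, but the stale-check-then-refresh shape is the point.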

In closing: I hate you Microsoft. I hate you for making my life harder than it should be. I hate you for knowing who you are when I shouldn't need to. I hate you because I notice you when I don't need to. I hate you for putting the sound of a dog scratching itself on my computer. I hate you for making it impossible to search my mail without installing a program I don't otherwise need. I hate you for making a search tool that I can't turn on because it makes my computer unusable. And I hate you for reminding me that I don't have it on.

PS. Supposedly Windows Search 4.0 solves most of the problems I have with it; apparently you have to install it outside of Windows Update. I've let it index the two directories I need to be able to search effectively (168,920 files indexed now, according to it).

We'll see.

Saturday, July 26, 2008

ASP.NET MVC - Branding?

I am not sure ASP.NET MVC is really the best name for the MVC architecture that the ASP.NET team is creating. Said name stresses the fact that it is still ASP.NET. Most people I know think of ASP.NET WebForms (and more specifically, postbacks and the event model) when they think of ASP.NET. Perhaps if they started a marketing campaign to rebrand:

Old Name | New Name
ASP.NET WebForms | WebForms
Web development in .NET overall | ASP.NET

This might be better, as I think it would get rid of the misconception that ASP.NET means events and runat="server" all over the place. Books and such could brand themselves with "ASP.NET" and have sections split off for WebForms and WebViews (or they could brand with just WebForms or just WebViews, and ASP.NET wouldn't be necessary in the title). Think: "WebViews in C# 4", "WebForms for WinForms Developers in 2010", and "ASP.NET 4.0 Bible" vs. "ASP.NET MVC 2.0", "ASP.NET for WinForms Developers in 2010", and "ASP.NET 4.0 Bible". Which set makes it easier to understand what you are looking for while at Amazon?

Edit: if anyone at MS is reading this (or anyone writing a book with the titles mentioned), I claim no rights whatsoever to "WebViews" as a name or anything like that (other than to say that this post stays as pure speculation and I don't have to take it down for any reason). This was just an idea that popped into my head this morning.

Sunday, July 6, 2008

Bugzilla integration with Mercurial

We are using Mercurial at work. We are also using Bugzilla. Unfortunately, we are not using Bugzilla 2.16, which is the version of Bugzilla that hg ships an integration hook for. So I modified the hook to work with the version of Bugzilla that we are using (3.1.4+ CVS trunk from some day in the past couple of months). The script can now be found in my personal hg repo. It should work with any Bugzilla installation that comes with the script (since 3.0 maybe?).

Please do tell me what you think of it. It is the first Python script I've really done anything with (I'm more a Perl kind of guy). As always, feel free to do whatever you want with it (respecting the license of the original author of course). I'd love to see it get integrated back into Mercurial, but as I have made no real effort to test any of it (it works for me, but no promises and all that) I don't expect that any time soon. Perhaps someone at Mozilla would take the time to do that; this is probably useful for integrating mozilla-central with bmo.

Edit: link updated (it should have pointed to tip in the first place). I've fixed both syntax errors. A right parenthesis was in the wrong place; I must have copied it off the server incorrectly, because I swear I fixed the syntax errors that were there when it was on the server (copying files out of nano over Ajaxterm isn't exactly the easiest thing to do). While updating this, I realized that I could wget the file straight from my repo, so I can actually test the version at the above URL. Rest assured it works this time. Thank you very much geraldfauvelle for noticing that it was broken.

Thursday, July 3, 2008

Reinventing the wheel

This started as something of a comment to a couple of blogs about MS not shipping any open source with their flagship products, but it started getting longer and I didn't want to post the same thing in two places.

I'd be willing to bet that both Charlie (NUnit lead) and Jeff (MbUnit lead) would be more than willing to take a sizeable "donation" to their projects (perhaps as little as 10% of what the MSTest devs got paid) in order to provide a stable source version for MS to review line by line, under whatever license MS decided they wanted (as a fork of the projects), in order to include those projects with the release of Visual Studio. I'd even bet that both projects would have jumped at the chance to have pre-alphas of Visual Studio to integrate with (for full compatibility when the first beta hit the market).

The only thing that is stopping this level of cooperation is lawyers inside Microsoft (and the occasional prick of a developer), and for that MS deserves our pity. I think the only thing that will get MS to open up is if they were to come under so much financial stress that they could no longer afford the lawyers (and only open source appears to have that much power).

Some time ago I had a discussion in which a developer within Microsoft explained to me and the others what he felt was MS's viewpoint:
"From the Legal Dept's perspective, it's not merely the license the code was released under, it's also the heritage of the code. If some evil coder had stolen code IP from a source base, added it to a project with an open source, copy-reuse-repackage freely license, and I used it, the original IP owner could still sue me to get me to desist using their IP. So, in the interest of protecting corporate code from this sort of legal attack, Legal usually has to perform some pretty thorough (and thus also expensive) investigation before letting anyone see open source code; you can imagine that there generally has to be a good business justification before that kind of thing happens."
I think that opinion is rather unintelligent, but then again I'm not a paranoid lawyer who thinks everyone is out to get me (certainly I'll concede that there are a few people out there who are).

As far as TFS goes, what little experience I have with it tells me that it is too complex, has too many moving parts (all the more places for bugs to find their way in) and is too constrictive on my working habits. I'll stick with Mercurial, thank you; wherever that isn't available, Subversion will have to do. I like my environment to be compartmentalized. That is why I'd suggest using Mercurial, Bugzilla and CruiseControl.NET (all of which integrate nicely with each other, or are easy enough for me to write my own integration for).

Thursday, June 5, 2008

Visual Studio Color Settings

I am using a new color settings file. It is based on Vibrant Jedi that Charlie Calvert linked to a little while ago (here). I'm using it with the Inconsolata font.

I modified it a little bit to work better with Resharper. If you want it you can get it here. It works both when you have Resharper->Options->Code Inspection->Settings->Color Identifiers on and off, with different results in each case. I have it off right now because I think it gets to be a little bit too busy with it on, and when on the ability to have types be a different color from interfaces is lost. I'll probably tweak the settings a little as I haven't really used it outside C# at all (and I don't really like the @"" strings or comments on the current line).

Pex + NUnit + R# + regexes == VS blowup and R# reinstall required

So... I just thought I'd let it out there:

  1. Pex doesn't come with NUnit support out of the box (apparently it only has MSTest). I cannot use it until this is fixed. Apparently there is supposed to be an extension project, but I can't find it anywhere.
  2. The Resharper release candidate is now available. Definitely awesome.
  3. If you try using Pex with NUnit and strings that you make into regexes as parameters to the PexMethod, Pex apparently crashes visual studio. Worse yet, after the crash my R# settings are reset (I had the beta installed) and the fonts and colors editor doesn't even have the settings available anymore. As I write this post the release candidate is installing.
  4. After updating R#, the fonts and colors settings are available again, and apparently the settings weren't reset; VS just wasn't honoring them. Changing one of them to something else and back again appears to get it to apply my settings.
Weird day. Needless to say: I will not be trying to do that with Pex again for a while (I'll wait for proper NUnit support first).

Sunday, June 1, 2008

Firefox 3 == awesome

Firefox 3 is almost certainly one of the best things ever! In the new URL bar (aka the AwesomeBar) I can type random things that I seem to remember from the title or URL of random sites I happen to have visited, and it knows where I want to go. For example, I type "s" and it knows I want the sprint board for work; "l" or "d" brings up my local development pages (l - localhost... d - localhost/dotnetnuke); "a" gets me Oren's blog. And those are just the pages I visit enough for Firefox to figure out that I want to get to them with only a single letter. Some others: "chad" (actually goes to another page in his blog, but that is one where he talks about a post I made), "scot", "guid" (a page I use to generate GUIDs), "pex", and last but not least "still alive".

Also very cool is the star in the URL bar: it "bookmarks" whatever page I want. Really, all one click does is make it so that Firefox never forgets about a page you have visited (which is pretty much all I ever do; the bar is smart enough to figure out the page I want to visit without using the tag feature). But sometimes I need to give it a little more information, so I double-click the star to give the page a tag or two (that way, when I type in the tag, the bar responds with the tagged pages first).

That is really what I like about Fx3, but it is way more than enough. I've been using it since it was about to enter alpha 3 (I switched to the minefield nightlies at that time) and I haven't used Fx2 or IE since (well maybe IE every once in a while when I was tired of specific sites crashing in Fx or I needed to test my development items).

There are a couple things about it that I don't like:
  • The skin; I can't stand it (I always liked Qute best, and luckily Qute is available for 3)
  • The dropdown listing all my open tabs (I tend to have 1 window with several hundred tabs open; the whole tab idea just doesn't seem to scale that far, I often lose tabs and find myself with 3 or 4 instances of the same tab open; some that I haven't viewed in days)
  • The bookmark toolbar: totally pointless now (I have upwards of 2k bookmarks), except for the places folder (which is really cool and useful). I wish I could place the Places folder in the actual toolbar, just left of the back button.
But luckily, the skin is easy to fix; the tab stuff isn't too big of a problem; and the bookmark toolbar can be shut off (though I do hope someone out there is working on an extension to create a Places button).

possible code quality equations

So... I haven't posted in a while; here's why: I have been using the Firefox 3 nightlies for a long time now, and Blogger was crashing the past couple of times I tried posting. I was not about to give up Firefox 3 (it is definitely one of the best pieces of software ever; I am completely dependent on the AwesomeBar, and the star bookmarking and the speed aren't bad either; the next post will talk more about it). It appears as though I can now post from Fx3. That means more posts again.

This is something I posted a while ago on the list; I am reposting it here so that I remember to try it out sometime in the future:

In regards to:


I don't believe this equation (from that article) will give very meaningful results. For one thing it places a very odd value on the percent of comments.

For that portion, here is a table:

perCM | 50 * sin (sqrt(2.4 * perCM))
0 | 0
1 | 50
2 | 40
3 | 22
4 | 2
5 | -16
10 | -50
15 | -15
20 | 30
25 | 50

So, 1% comments is worth the same amount as 25%? And 10% is downright awful. (Note that the perCM term is listed as optional in the definitions below, implying that it may not be a good variable.)

171 - 5.2 * ln(aveV) - 0.23 * aveV(g') - 16.2 * ln (aveLOC) + 50 * sin (sqrt(2.4 * perCM))

The coefficients are derived from actual usage (see Usage Considerations). The terms are defined as follows:

aveV = average Halstead Volume V per module (see Halstead Complexity Measures)

aveV(g') = average extended cyclomatic complexity per module (see Cyclomatic Complexity)

aveLOC = the average count of lines of code (LOC) per module

perCM = (optional) average percent of lines of comments per module

Also I don't know any tools that will compute the Halstead Volume for .NET. In general I think that those 3 metrics would be useful in determining maintainability and that aveV and aveLOC should probably be on a log scale, but the numbers appear to be arbitrarily picked to support the conclusions of the research.
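To make the complaint concrete, here is the quoted equation as code (a quick Python sketch; note that the sin term is evaluated in radians, which is what produces the strange table above, and the rounding in that table is the original article's):

```python
import math

def comment_term(per_cm):
    """Just the perCM portion of the equation; this is the column tabulated above."""
    return 50 * math.sin(math.sqrt(2.4 * per_cm))

def maintainability_index(ave_v, ave_vg, ave_loc, per_cm):
    """171 - 5.2*ln(aveV) - 0.23*aveV(g') - 16.2*ln(aveLOC) + 50*sin(sqrt(2.4*perCM))"""
    return (171
            - 5.2 * math.log(ave_v)
            - 0.23 * ave_vg
            - 16.2 * math.log(ave_loc)
            + comment_term(per_cm))
```

Run it and the oddity is plain: a module averaging 1% comments gets essentially the same bonus as one averaging 25%, while 10% is penalized by roughly 49 points.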

There is a tool that can be used to get many useful metrics out of a codebase:

I think the following on that page are particularly useful for measuring code quality:

averaged (per assembly)
Distance from main sequence (abs(I+A-1), ideally as close to 0 as possible)

averaged (per type)
LCOM HS (Lack of Cohesion Of Methods - Henderson-Sellers; basically can tell you if your type is physically disregarding Separation of Concerns by not being cohesive)
ILCC (IL-level cyclomatic complexity; afaik the only reason to use this one is that it can be computed for any .NET code, not just C#)
Depth of Inheritance Tree

averaged (per method)
IL Nesting Depth

Additionally, some other functions could be very useful:
average(PercentageCoverageMethodLevel * MethodRank) (will cause more important methods to be weighted more when computing code coverage)
average(PercentageCoverageTypeLevel * TypeRank) (would do the same thing for types)
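For reference, the distance-from-main-sequence figure in that list comes from Robert Martin's package metrics, and the rank-weighted coverage averages are straightforward; here is a small sketch (the helper names and example inputs are mine, chosen for illustration):

```python
def instability(efferent, afferent):
    """I = Ce / (Ce + Ca): outgoing dependencies vs. total coupling."""
    return efferent / (efferent + afferent)

def abstractness(abstract_types, total_types):
    """A = abstract types / total types in the assembly."""
    return abstract_types / total_types

def distance_from_main_sequence(i, a):
    """D = abs(I + A - 1); 0 means the assembly sits on the main sequence."""
    return abs(i + a - 1)

def rank_weighted_coverage(items):
    """average(coverage * rank) over methods or types; items = [(coverage, rank), ...].

    Weights more important (higher-ranked) code more heavily."""
    return sum(cov * rank for cov, rank in items) / len(items)
```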

I'd say if you want to have some sort of scale, come up with good individual functions for how each factor contributes to overall quality, and then either do a sum of the functions or some kind of weighted average.

For example, a decent function for basing quality solely on code coverage by unit tests could be (each function has been normalized to give output on a 10 point scale):
Q(x) = (arcTan(10x-5)+pi/2-.1)*11/pi
where x is [0,1], the decimal version of %coverage
Q(x) = (arcTan(10x-7)+pi/2-.1)*11/pi
where x is [0,1] = AVERAGE(the decimal version of %coverage at the method/type level * method/type rank)

LCOM HS could be:
R(x) = -12.5x^3 + 31x^2 - 28x + 10
x is [0,1]

ILCC could be:
S(x) = 10.4e^(-0.06x)
x is [1,inf)

and LOC could be:
T(x) = -0.4*ln(x) + 10
x is [1,inf)

And the final score could be the average of all of those parts.
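Those candidate functions, written out (a sketch only; the coefficients are the arbitrary ones proposed above, so the outputs are purely illustrative):

```python
import math

def q_coverage(x):
    """Coverage score; x in [0,1] is the decimal %coverage.

    (The -7 variant above would be used for the rank-weighted average.)"""
    return (math.atan(10 * x - 5) + math.pi / 2 - 0.1) * 11 / math.pi

def r_lcom(x):
    """LCOM HS score; x in [0,1], with low LCOM (cohesive types) scoring high."""
    return -12.5 * x**3 + 31 * x**2 - 28 * x + 10

def s_ilcc(x):
    """IL cyclomatic complexity score; x in [1,inf)."""
    return 10.4 * math.exp(-0.06 * x)

def t_loc(x):
    """Lines-of-code score; x in [1,inf)."""
    return -0.4 * math.log(x) + 10

def quality_score(coverage, lcom, ilcc, loc):
    """Final score: the plain average of the four roughly-10-point parts."""
    return (q_coverage(coverage) + r_lcom(lcom) + s_ilcc(ilcc) + t_loc(loc)) / 4
```

Each part stays near a 10-point scale, so a fully covered, cohesive, simple, small codebase scores close to 10 and each degradation drags the average down.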

Friday, December 21, 2007

WatiN testing ASP.NET - app startup time issues

I started a test project to test a web app. One thing I noticed was that due to application startup, occasionally the first couple of tests fail with TimeoutExceptions because the application takes too long to compile some of the pages.

So, I came up with the following to work around this:

using SHDocVw;
using WatiN.Core;
using WatiN.Core.Exceptions;

namespace Tests {
    public class TestBase {
        private static bool _siteSetupRun = false;

        public TestBase() {
            if (!_siteSetupRun) {
                _siteSetupRun = true;
                setupSite();
            }
        }

        private static void setupSite() {
            bool ok = false;
            using (IE ie = new IE()) {
                object nil = null;
                // warm up the app: hit the site once so ASP.NET compiles the pages
                ((IWebBrowser2) ie.InternetExplorer).Navigate("http://localhost/", // the app under test
                    ref nil, ref nil, ref nil, ref nil);
                while (!ok) {
                    try {
                        ie.WaitForComplete();
                        ok = true;
                    } catch (TimeoutException tex) {
                        if (!tex.Message.Contains("'Internet Explorer busy'")) {
                            throw; // some other timeout; let the run fail
                        }
                    }
                }
            }
        }
    }
}
This needs to run before any unit tests.

Sunday, December 9, 2007

Resolving extension method conflicts using a Proxy pattern.

If you happen to get this error: "The call is ambiguous between the following methods or properties: ...", here is how you would fix it:

Problem Setup

Let's say you are using the following library:

namespace PRI.Interfaces {
    public interface IEntity {
        string Name { get; set; }
    }
}

namespace PRI {
    public class DataEntity : Interfaces.IEntity {
        public string Name { get; set; }
    }
}

And the library authors decide to release an extension method (along with a couple of others you wish to use):

namespace PRI.Extensions {
    using PRI.Interfaces;
    public static class Entity {
        public static string CapitalizeName(this IEntity entity) {
            entity.Name = entity.Name.ToUpper();
            return entity.Name;
        }
    }
}

Meanwhile you are using a third party extension method library which already contains this method:

namespace Contoso.Extensions {
    using PRI.Interfaces;
    using System.Text;
    public static class Entity {
        public static string CapitalizeName(this IEntity entity) {
            StringBuilder sb = new StringBuilder(entity.Name.Length);
            string[] words = entity.Name.Split(new char[] { ' ' });
            foreach (string word in words) {
                if (word.Length > 0) {
                    // capitalize the first letter of each word
                    sb.Append(char.ToUpper(word[0]));
                    sb.Append(word.Substring(1));
                }
                sb.Append(" ");
            }
            entity.Name = sb.ToString().Trim();
            return entity.Name;
        }
    }
}

The Problem

This is a problem because you can no longer call the CapitalizeName method as an extension method if you happen to have both namespaces in your using directives. As long as you are only using functionality from both classes where the method names differ, you will not have any conflicts and there will not be any problems with your code. Unfortunately, the moment you add a call to the CapitalizeName method you will get a compile-time error.

namespace Program {
    using System;
    using PRI;
    using PRI.Extensions;
    using Contoso.Extensions;
    class Program {
        static void Main(string[] args) {
            DataEntity dataEntity = new DataEntity() { Name = "frank smith" };
            Console.WriteLine(dataEntity.CapitalizeName()); // ambiguous call: compile error
        }
    }
}
This will not compile because CapitalizeName cannot be resolved.

Enter the Proxy Pattern

Because extension methods are static methods and can be called just as any other static method gets called, you can build a wrapper class around these third party extension methods in order to resolve the conflicts that the method names impose.

namespace Program.ExtensionResolvers {
    using PRI.Interfaces;
    internal static class Resolver {
        public static string CapitalizeName(this IEntity entity) {
            return PRI.Extensions.Entity.CapitalizeName(entity);
        }
        public static string ConvertNameToTitleCase(this IEntity entity) {
            return Contoso.Extensions.Entity.CapitalizeName(entity);
        }
    }
}

Using the Solution

The extension method proxy class can now be used in order to resolve the naming conflict.

namespace Program {
    using System;
    using PRI;
    using ExtensionResolvers;
    class Program {
        static void Main(string[] args) {
            DataEntity dataEntity = new DataEntity() { Name = "frank smith" };
            Console.WriteLine(dataEntity.ConvertNameToTitleCase()); // Contoso's version
            Console.WriteLine(dataEntity.CapitalizeName());         // PRI's version
        }
    }
}


If you are ever going to use more than one extension method library over the course of a project, I believe it is awfully important to encapsulate the extension methods you will be using in a proxy class (or several such classes) as I have shown here. It is probably best to keep this class internal, as you wouldn't want to continue polluting the method name space for other third-party libraries that use your code.

I will argue that all calls to extension methods from third-party libraries should be internalized in this way. Doing so adds value (you can add XML documentation to these methods for your coworkers, and the R# tools will be able to find usages and such), and it better decouples you from minor API changes in the extension libraries.

Thanks to Peter Ritchie on the altnetconf mailing list for most of the above code in the first section of this post.

Visual Studio 2008 first impressions

ok, ...

I will not be using VS 2008 until the Resharper 4.0 EAP begins. I started using 2008 this evening for my next post and almost immediately realized just how dependent on R# I have become. The first things I noticed (within the first 5 seconds) were:
  • Intellisense was different and I am not used to it.
  • I had forgotten where StringBuilder was located (R# will let you know when you reference a class that is not in a namespace you are using).
  • I really miss the error detection system.
  • Alt+Enter
  • F6
  • F2
  • Ctrl+B
  • Ctrl+T (I have bound this shortcut to Resharper.UnitTest.ContextDebug; more on that at some later time)
Other than that, I really liked my initial impressions:
  • Startup is much faster.
  • F1 doesn't hang VS for 5 minutes (I really hate this shortcut and remove it from my system when possible, because I tend to accidentally press it when going for F2)
  • Hidden toolbox tabs show up faster when moused over.
  • Compilation seems faster.
  • VS seems to close faster.
So, in general, I really like it. Without R# I will not be using it.

Thursday, December 6, 2007

Job Hunting - questions to ask

Recently I asked a couple of questions, openly, of a person who posted a job offer* on the list; I think they are important for discovering the attitude and environment of a potential employer.

These are more or less the questions I asked:
  • Does the company use OSS?

  • How does the company view OSS solutions?

  • Are you stuck with basic VS, or do you get to use R# or the DevExpress plugins (or other productivity tools)?

  • Do your developers get to blog (or is that shunned)?

  • Are they allowed to work on OSS projects in their spare time?

  • Are they allowed to do so on company time for any projects the company (or even just the group) is using?

I believe these questions provide valuable insight into the employer.

*Note: I am not currently looking for a job, but if an offer comes up that is significantly better than my current position, I will look into it. It had better have a good pay raise, great benefits, not move me from Bozeman, Montana, provide a flexible schedule, allow me to work from home when I want to, and give me the flexibility to work on the projects I want to be working on. Ok, maybe that is a little exaggerated, but it had better be a very, very good deal.