Monday, September 8, 2008

Fred (post 1)

The original (longer) post I wrote on this topic may never be published. I will keep it around in case the events below come to pass in my lifetime (and Blogger still exists), or in case I ever decide to post it. In this shorter post I have tried to keep my opinions out of the topic (the longer one includes them).

Consider this: intelligence, as far as we understand it, is the behavioral result of the auto-associative memory-recall systems in our brains, combined with the hardwired functions of the older portions of our brains. There is no reason whatsoever that we cannot figure out the actual algorithms at work, and indeed we are very close to doing so (for an idea of how close, read this book: http://www.onintelligence.org/).
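To make "auto-associative memory recall" concrete: the idea is a memory that, given a partial or noisy cue, settles back onto the complete stored pattern. A minimal sketch of that behavior is a Hopfield-style network (this is an illustration of the general concept, not the brain's actual algorithm or the specific model described in the book):

```python
import numpy as np

def train(patterns):
    """Store +1/-1 pattern vectors in a weight matrix (Hebbian rule)."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        w += np.outer(p, p)  # strengthen links between co-active units
    np.fill_diagonal(w, 0)   # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Iterate a noisy cue until it settles on a stored pattern."""
    s = np.asarray(cue)
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)  # each unit votes from its inputs
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, 1, 1, 1, -1, 1, -1]   # one bit flipped
print(recall(w, noisy).tolist())       # → [1, -1, 1, -1, 1, -1, 1, -1]
```

Feed it a corrupted fragment of a memory and the network completes it: recall driven by content rather than by address, which is the property the argument above leans on.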

Assuming we succeed, I could very well create an intelligent system that behaves exactly as any human would, and provide it with a means of communicating over the internet, posting to blogs, and hanging out in IRC chats. Perhaps I create such a system, it interacts with you (heck, it contributes patches to Firefox), and I give it a name: Fred. I never tell you that Fred is not a person; there is no reason for you to think otherwise. One day I announce that Fred was an experimental program and a success, but that my government funding has run out: I had the choice of paying for Fred's power or shutting it off, and I have decided to turn it off and move on to other projects. Fred, understanding what this means, publicly begs me not to do it, but I do so anyway.

Surely Fred was an intelligent machine (just as you and I are); its IQ measured well above average (as most software developers' do). Yet I terminated it. Was I wrong to do so? Should it have been my choice? Was it alive?

Five years later another person takes up my research, but this time goes all the way and provides the program with a means of moving about and sustaining itself. This machine is capable of keeping itself working and, just as my program did, functions excellently in society. Several decades later it has figured out how to reproduce intelligences like itself (or better ones) by studying its own code. Some time after that, these machines convince governments to declare them alive and pass laws protecting them: declaring it murder to kill them and extending to them the same laws we have contrived for ourselves.

Was Fred alive (in the same sense that its fellow machine intelligences later convinced others they are)? What does this thought experiment say about when life begins? Does it start at:
  • Conception (and when exactly would that be: when I think about the program, when I write it, or when I run it?)
  • Birth (when I run the program, maybe?)
  • The first point of self-sustainability?


Is it an arbitrary decision? Weirdest of all: is it a decision we can make ourselves, or do we need to wait for their input? And since their input is the result of calculations we programmed into them, it is either a bug we introduced or a decision we already made deliberately when we discuss it with them; at that point, is there a difference?
