Principle of Extension

Posted 4 years, 7 months ago | Originally written on 18 Oct 2006

I know that I will need to put this into more precise words, but I think it is an interesting idea.

Most computational devices run on an interesting principle, which I think is the one reason we shall not be able to build artificially intelligent systems that are as intelligent as humans are. The computational model they employ is not one that could have been devised artificially. This is because of the Principle of Extension.

The principle holds that for a computational model to be devised, the conceptual framework - the algorithm - has to be developed beforehand. This essentially means that the solution is pre-thought by a human. The computational device merely extends the effect of that algorithm, both in scale and in speed.

Take, for example, an image processing application that has to alter an image. The application is built on a certain algorithm - every possible situation the application is meant to handle is solved beforehand on a small scale. The solution works independently of scale. The computer is only applied to an extended version of the pre-conceived problem - the only difference in the final application is that it works on a bigger version of the same problem.
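As a minimal sketch of what I mean (the function names and the brightness-adjustment rule are just illustrative, not taken from any particular application): the "thinking" lives entirely in a per-pixel rule that a human works out once on a tiny example, and the machine merely extends it in scale and speed by repeating it over every pixel of an arbitrarily large image.

```python
def brighten_pixel(value, amount):
    """The pre-conceived solution: worked out once, for a single pixel."""
    return min(255, value + amount)

def brighten_image(image, amount):
    """The machine's contribution: the same rule, applied at scale."""
    return [[brighten_pixel(p, amount) for p in row] for row in image]

# The rule is devised and checked on a 2x2 image...
small = [[10, 250], [100, 200]]
print(brighten_image(small, 20))   # [[30, 255], [120, 220]]

# ...and works unchanged on an image a million times larger; only the
# scale and the speed of application have changed, not the idea.
large = [[128] * 1000 for _ in range(1000)]
assert brighten_image(large, 20)[0][0] == 148
```

Nothing in the larger run involves any new thought on the machine's part; the conceptual work was finished the moment the per-pixel rule was written.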

The task of developing artificially intelligent systems thus consists of relieving the developer of this pre-conception and sharing that load with the computational device. If this can be achieved in any form, then true artificial intelligence will have been achieved.