Monthly Archives: October 2012

Could AI Limit the Effect of 100% of ‘Bugs’?

The problems that we heard about in the Plenary were mostly caused by software, whether the fault was introduced by the programmer or by whoever ran the program.  A human brain with the capacity to think many times faster and never make mistakes would theoretically be able to spot these errors, or the bad values being passed around within the software, and stop them.  Is it possible to replicate this function in software?

What we would require is an intermediate stage, run at the end of every method or whenever a variable is passed, that used human-like intelligence to establish whether something is behaving as expected.
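A crude version of that intermediate stage can already be written by hand today. The sketch below (all names are hypothetical, and the check is hand-written rather than intelligent) wraps a function so that every return value is inspected before it is passed on:

```python
# A minimal sketch of an "intermediate stage" guard: the decorator checks
# each return value against a predicate before it is allowed onward, and
# raises instead of letting a bad value propagate. Names are illustrative.
def guarded(check):
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not check(result):
                raise ValueError(
                    f"{fn.__name__} produced unexpected value: {result!r}")
            return result
        return inner
    return wrap

@guarded(lambda gear: gear in {"P", "R", "N", "D"})
def select_gear(code):
    # Imagine a buggy mapping that can leak a raw code straight through.
    return {1: "P", 2: "R", 3: "N", 4: "D"}.get(code, code)

print(select_gear(4))  # "D" — passes the check; select_gear(7) would raise
```

The hard part, of course, is that a human has to write the `check` predicate; the idea in this post is that an intelligent machine could supply it instead.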

A simpler form of computer intelligence is measured by the Turing test.  Either by the end of this year or early next year, it looks as if we will have computers that can pass the Turing test, suggesting we will be able to build intelligent computers (assuming we agree with the principles of the test).

So if this intelligence were to become more advanced, surely we could teach the computer what to expect as the throughput and output of our programs?  If so, we would have a device (effectively a ‘supercharged’ human brain) capable of detecting a program that is going to cause problems.
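"Teaching" a monitor what output to expect can be sketched in a very simple form: let it observe a program behaving normally, then flag anything outside what it has seen. This is a toy illustration, not a real anomaly-detection library, and every name in it is made up:

```python
# Toy sketch of a learned output monitor: it records the range of values
# seen during normal runs, then flags anything outside that range.
class OutputMonitor:
    def __init__(self):
        self.lo = None  # smallest value seen so far
        self.hi = None  # largest value seen so far

    def learn(self, value):
        # Widen the learned "normal" range to include this observation.
        self.lo = value if self.lo is None else min(self.lo, value)
        self.hi = value if self.hi is None else max(self.hi, value)

    def looks_normal(self, value):
        return self.lo is not None and self.lo <= value <= self.hi

monitor = OutputMonitor()
for sample in [10, 12, 11, 13]:   # outputs from runs known to be correct
    monitor.learn(sample)

print(monitor.looks_normal(12))   # True  — within the learned range
print(monitor.looks_normal(500))  # False — flagged as suspicious
```

A genuinely intelligent monitor would learn far richer expectations than a numeric range, but the shape of the idea is the same: observe normal behaviour, then intercept deviations.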

There are limits, however: this idea would only prevent unexpected values being passed to the wrong devices (such as the CD/gear incident with Jaguar cars).

In 1936, Turing proved that no such program could ever exist with the functionality of deciding, for every program, whether it will run forever (loop) or eventually halt, so it could never reliably stop a program that is about to hang (this is known as the ‘Halting Problem’).
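The core of Turing's argument can be sketched directly in code. Suppose someone hands us a supposed perfect halt-checker `halts(f)`; we can build a program that does the opposite of whatever the checker predicts about it, so the checker must be wrong one way or the other (names here are illustrative):

```python
# Sketch of the Halting Problem's diagonal argument: given any claimed
# halt-checker, build a program that contradicts its own prediction.
def make_troublemaker(halts):
    def troublemaker():
        if halts(troublemaker):
            while True:        # checker said "halts" -> loop forever
                pass
        return "halted"        # checker said "loops" -> halt at once
    return troublemaker

# A checker that answers "loops" is immediately contradicted:
t = make_troublemaker(lambda f: False)
print(t())  # prints "halted", so the checker was wrong
```

If the checker instead answered "halts", `troublemaker` would loop forever, contradicting it the other way. Since every possible checker fails on some program, no perfect one can exist.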

So, science has limits in how far it can improve the reliability of computing – it is never going to solve all our problems (programs will still crash), but as technology progresses, intelligent machines should be able to prevent an increasing amount of erroneous input.