What exactly do Artificial Neural Networks learn?


There is a famous (probably apocryphal) story that in the 1980s the US Pentagon funded the development of an artificial neural network that would recognise photographs of tanks. To do this they took 100 photographs of tanks, then took 100 photographs of fields with no tanks, and trained the artificial neural network on these photographs. When testing the system, it was discovered that the system did not appear to be recognising tanks at all. There was puzzlement until someone figured out that all of the images with tanks had been taken on a cloudy day, while all the images without tanks had been taken on a sunny day. Thus the network had actually learned to recognise clouds ;-)

Even if this story isn't true, it is a perfect illustration of the biggest problem with neural networks: it is virtually impossible to analyze and understand what they are learning. One can't tell if a net has memorised its inputs, or is 'cheating' in some other way. This project proposes to test the story by creating several training sets, each consisting of 100 images of a tank (well, maybe a toy tank) on a cloudy day and 100 no-tank images on a sunny day, while varying how much of each image is sky: the first set will be 10% sky and 90% ground, the next 20% sky and 80% ground, the next 30% sky and 70% ground, the next 40% sky and 60% ground, and the final set 50% sky and 50% ground.
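The confound in the story can be sketched with synthetic data before any real photographs are taken. The following is a minimal, hypothetical illustration (all function names, image sizes, and brightness values are assumptions, not part of the proposal): each "photograph" is a small greyscale array whose top rows are sky, where cloudy sky is darker than sunny sky. Because every tank image is cloudy and every no-tank image is sunny, mean brightness alone separates the classes, so a model need never look at the tank.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(sky_fraction, cloudy, tank, size=32):
    """Synthetic stand-in for a photograph: top rows are sky, the rest ground.
    Values and layout are illustrative assumptions only."""
    img = np.full((size, size), 0.45)              # ground: mid grey
    sky_rows = int(size * sky_fraction)
    sky_val = 0.55 if cloudy else 0.95             # cloudy sky darker than sunny
    img[:sky_rows, :] = sky_val + rng.normal(0, 0.02, (sky_rows, size))
    if tank:                                        # small dark blob on the ground
        r = sky_rows + 2
        img[r:r + 6, 12:20] = 0.1
    return img

def build_set(sky_fraction, n=100):
    """As in the story: 100 cloudy tank images, 100 sunny no-tank images."""
    tanks = np.stack([make_image(sky_fraction, cloudy=True, tank=True)
                      for _ in range(n)])
    empty = np.stack([make_image(sky_fraction, cloudy=False, tank=False)
                      for _ in range(n)])
    return tanks, empty

# A trivial "classifier" that only thresholds mean brightness gets every
# image right, because the label is perfectly confounded with the weather.
for frac in (0.1, 0.2, 0.3, 0.4, 0.5):
    tanks, empty = build_set(frac)
    threshold = (tanks.mean() + empty.mean()) / 2
    acc = ((tanks.mean(axis=(1, 2)) < threshold).mean()
           + (empty.mean(axis=(1, 2)) >= threshold).mean()) / 2
    print(f"sky={frac:.0%}  brightness-only accuracy={acc:.2f}")
```

The point of the sketch is the control the proposal describes: if a real network's accuracy tracks the sky fraction rather than the presence of the tank, it is probably learning the weather, not the vehicle.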
