Last week I was at the successful doctoral defence of Wouter on the role of the cerebellum while walking. During the questions, the topic of compensation came up: if a part of your neural system gets damaged, can other parts (over time) take over some of its functionality? The answer is most probably yes, but only to some extent.

Following this reasoning, I was interested in how an artificial neural network would respond to damage to its internal (hidden) neurons. Intuitively I would say that it can compensate as long as the number of hidden nodes is large enough.

I started off with the classic Python script from Neil Schemenauer, used the included XOR example, and adjusted it a bit to my needs. To be clear, this specific backpropagation neural network gets trained to correctly map two input values to the correct output value. The XOR dataset is shown below.

v1  v2  result
0   0   0
1   0   1
0   1   1
1   1   0

The network that we are training looks as follows. There are two input nodes, four hidden nodes and one output node.
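To make the setup concrete, here is a minimal sketch of such a 2-4-1 backpropagation network trained on the XOR data, using NumPy rather than Schemenauer's original class-based script; all variable names (`W1`, `W2`, etc.) and the learning rate are my own assumptions, not the original code.

```python
import numpy as np

# Hypothetical minimal stand-in for the adjusted script:
# a 2-4-1 sigmoid network trained with plain gradient descent on XOR.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# error of the untrained network, for comparison
init_out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
init_mse = np.mean((init_out - y) ** 2)

lr = 0.5
for it in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss, sigmoid derivative s*(1-s))
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_mse = np.mean((out - y) ** 2)
print(f"MSE before: {init_mse:.3f}, after: {final_mse:.3f}")
```

How quickly the error drops depends on the random initialisation and the learning rate, so the 200-250 iterations mentioned below are specific to the original script's settings.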

The plot below shows the initial training results. The model needs about 200-250 iterations to reach its best performance.

What we’re going to test next is: what if, after 1000 iterations, we kill node h1? Will there be a drop in performance? And, if so, will the network be able to reach its former best performance again? During the test we’ll kill a hidden node after every 1000 iterations. So after 1000 iterations we’ll kill node h1, after 2000 iterations node h2, and so on until we don’t have any hidden nodes left.

The above graph shows the results of our test. While we see a strong drop in performance after killing nodes h1 and h2, the model is able to recuperate from this, and rather quickly at that. But after killing node h3, the performance doesn’t return to its previous levels. After killing the final node, h4, the input signal no longer reaches the output node and the model stops functioning altogether.

This shows that the network can recover from damage to its hidden layer. I could also imagine that, depending on the modeled problem, a neural network can to some extent recover from damage to its input nodes as well.

While this is a fun test, it could also help determine the minimum required size of the hidden layer. Or, for a distributed neural network, estimate how large the model’s buffer should be when you have an idea of the clients’ down-time probabilities.

The adjusted script can be found here.