What are spurious states in Hopfield networks?

Spurious attractors in Hopfield networks


I think your intuition that the lower "energy ratio" of spurious states explains their greater susceptibility to unlearning may be correct.

In a Hopfield network, spurious states are activity patterns that are not explicitly embedded in the synaptic matrix but are nonetheless stable. In other words, they are "undesirable" attractor states that arise as local minima of the energy function due to a finite overlap with the "desired" attractor states. The unlearning rule of Hopfield et al. (1983) consists of modifying the synaptic matrix so as to raise the energy of the stable states into which the network dynamics settle, whether these are spurious or embedded states. Since spurious states have a higher energy than the embedded states, they are more strongly affected by unlearning.
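As a concrete illustration, here is a minimal numpy sketch of Hebbian storage plus one unlearning step. The function names (`run_to_fixed_point`, `unlearn`) and the unlearning strength `eps` are my own illustrative choices, not from the original paper; the rule itself (subtract a small Hebbian term for whatever attractor the free dynamics settle into) follows the scheme described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 5

# Store p random +/-1 patterns with the Hebbian rule (zero diagonal).
patterns = rng.choice([-1, 1], size=(p, N))
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def run_to_fixed_point(W, s, max_iter=100):
    """Asynchronous updates until the state stops changing."""
    s = s.copy()
    for _ in range(max_iter):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

def unlearn(W, s, eps=0.01):
    """Unlearning sketch: subtract a small Hebbian term for the
    attractor (embedded or spurious) that the dynamics reached,
    which raises that state's energy.  eps is an illustrative value."""
    dW = eps * np.outer(s, s) / len(s)
    np.fill_diagonal(dW, 0.0)
    return W - dW

# One unlearning step, starting the dynamics from a random state.
attractor = run_to_fixed_point(W, rng.choice([-1, 1], size=N))
W = unlearn(W, attractor)
```

Because `eps` is small, a single unlearning step barely perturbs the embedded patterns, while repeated steps preferentially flatten the shallow spurious minima that random initial states tend to fall into.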

Why do spurious states have a higher energy than the embedded attractor states? In general this is not the case, but it holds in the regime where the Hopfield network does not exceed its loading capacity, i.e. when the ratio of stored patterns to units, α = p/N, is below the critical capacity αc ≈ 0.138. In this regime it is possible to estimate the overlaps of the spurious states with the learned patterns and to show that they are generally smaller than 1 (the overlap of a learned pattern with itself). Due to the Hebbian construction of the synaptic matrix in the Hopfield model, these overlaps appear directly in the energy function: the energy of a pattern is proportional to minus the sum of its squared overlaps with the learned patterns. This means that the spurious patterns have a higher energy than the learned ones.
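This energy/overlap relation is easy to check numerically. The sketch below (my own, with the diagonal of the Hebbian matrix kept so that the identity E(s) = −(N/2) Σ_μ m_μ² holds exactly) compares a stored pattern with a classic spurious state, the symmetric 3-mixture, whose overlaps with its parent patterns are about 0.5 each:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 500, 3
patterns = rng.choice([-1, 1], size=(p, N))
# Diagonal kept on purpose, so that E(s) = -(N/2) * sum of squared overlaps.
W = (patterns.T @ patterns) / N

def energy(s):
    return -0.5 * s @ W @ s

def overlaps(s):
    """Overlap m_mu = (1/N) * xi_mu . s with each stored pattern."""
    return patterns @ s / N

# A stored pattern has overlap 1 with itself (plus tiny crosstalk terms).
E_stored = energy(patterns[0])

# Symmetric 3-mixture: sign of the sum of the three patterns.  Its
# overlaps with each parent are ~0.5, so sum of squares ~0.75 < 1.
mixture = np.sign(patterns.sum(axis=0))
E_mixture = energy(mixture)

print(E_stored, E_mixture)  # the mixture's energy is higher (less negative)
```

Since the mixture's squared overlaps sum to roughly 0.75 while the stored pattern's sum is slightly above 1, the spurious minimum sits higher in the energy landscape, exactly as the argument above requires.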

In general, this kind of naive reasoning needs to be supported by more rigorous arguments from probability theory. These show, for example, that even in the regime below αc the retrieved patterns are actually spurious states as soon as the number of embedded patterns p exceeds N/(2 ln N). Such spurious states, however, have such a high overlap with the learned patterns (about 0.97) that they essentially coincide with them.
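This effect can also be seen in a small simulation (my own sketch; the pass-based update loop `settle` and the parameter choices are illustrative). For N = 500 the error-free bound N/(2 ln N) is about 40, so storing p = 60 patterns puts us above that bound while keeping α = 0.12 < αc. Starting the dynamics exactly at a stored pattern, the network typically drifts to a nearby spurious fixed point whose overlap with the pattern is slightly below 1:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
p_perfect = N / (2 * np.log(N))   # ~40: bound for error-free recall
p = 60                            # above N/(2 ln N), but alpha = 0.12 < 0.138

patterns = rng.choice([-1, 1], size=(p, N))
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def settle(s, max_iter=50):
    """Repeat sequential update passes until the state is stable."""
    s = s.copy()
    for _ in range(max_iter):
        prev = s.copy()
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if (s == prev).all():
            break
    return s

# Overlap between each stored pattern and the fixed point reached from it.
final_overlaps = [(settle(xi) * xi).mean() for xi in patterns[:10]]
print(np.mean(final_overlaps))  # close to, but generally below, 1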

This result, and its generalizations to non-zero temperatures (i.e. noise in the dynamics) and beyond the critical capacity, has been worked out in the following very technical paper:

and in the book:

Artem Kaznatcheev

nice answer and references. Welcome to CogSci.SE, I am happy to have you here!

Chuck Sherrington

Welcome! I agree with @ArtemKaznatcheev's sentiment.

Peter helper

Thank you for this interesting answer. I will follow up on these references.