mgng.mgng module

Implementation of the Merge Growing Neural Gas (MGNG) for temporal data.

class mgng.mgng.MergeGNG(n_neurons=100, n_dim=3, connection_decay=0.1, temporal_influence=0.5, memory_weight=0.5, life_span=10, learn_rate=0.2, learn_rate_neighbors=0.2, decrease_activity=0.8, delta=0.8, max_activity=2.0, allow_removal=True, creation_frequency=5, debug=False)[source]

Bases: object

Class that represents a Merge Growing Neural Gas.

Differences from the default implementation

  • All neurons are kept in memory to allow block-wise numpy operations

  • Introduce a half-life and a threshold for connections (planned). For now, connections only decrease.

  • The adaptation rate should depend on connection strength

  • Introduce a method (half-life? decay on all synapses) to remove very old movements (the original implementation most likely allows for orphans)

  • Compare with a regular neural gas that has a refractory time

  • Add an activity threshold that triggers a new neuron (e.g. via a FIFO). This should really be enforced: if a neuron gets activated 3 times in a row, it is time for a new neuron!

  • Strongly consider removing the diagonal elements! Implement the neighbor learn rate, possibly weighted by synapse strength

  • Activity is never 0 unless the neuron has never been used or was removed because it had no connections

  • TODO: do we remove neurons without connections?

Parameters
  • n_neurons (int) – Maximum number of neurons.

  • n_dim (int) – Output dimension (of the feature space).

  • connection_decay (float) – Hyperparameter influencing the decay of neuron connections. Currently unused.

  • temporal_influence (float) – The influence of the temporal memory on finding the winning neuron (illustrated in the sketch after this parameter list).

  • memory_weight (float) – Determines the influence of past samples in the sequence (roughly, how far it looks back into the past).

  • life_span (int) – Number of iterations until a synapse is deleted. Note: only the synapses of the winning neuron are decayed (it forgets “wrong” neighbors).

  • max_activity (float) – Maximal activity allowed for a neuron (cf. refractory period). If a neuron is more active than this threshold, a new neuron is inserted between it and the second most active neuron. Each time a neuron is the winning neuron, its activity level is increased by 1.0 and then decreases continuously in each iteration (cf. decrease_activity). This differs from the reference paper, where the network grows at regular intervals; our “on demand” approach prevents the network from growing unnecessarily.

  • decrease_activity (float) – Factor by which activity decays exponentially in each iteration. Mainly relevant when only a few iterations pass between recurring sequences.

  • learn_rate (float) – Learning rate for adapting the winning neuron toward the current sample.

  • learn_rate_neighbors (float) – Learning rate for adapting the winning neuron’s neighbors toward the current sample.

  • delta (float) – Determines the activity assigned to the neurons involved when a new neuron is inserted. (The parameter still needs a better name.)

  • allow_removal (bool) – Whether neurons (e.g. neurons left without connections) may be removed.
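
To make temporal_influence and memory_weight concrete, the following is a minimal sketch of the winner search typically used in merge growing neural gases; alpha, beta and all array names are illustrative assumptions, not this module’s internals:

    import numpy as np

    # Illustrative sketch only: how a merge growing neural gas combines the
    # current sample with a temporal context to pick the winner.
    # alpha ~ temporal_influence, beta ~ memory_weight.
    rng = np.random.default_rng(0)
    n_neurons, n_dim = 10, 3
    weights = rng.normal(size=(n_neurons, n_dim))   # feature-space positions
    contexts = rng.normal(size=(n_neurons, n_dim))  # per-neuron temporal context
    global_context = np.zeros(n_dim)
    alpha, beta = 0.5, 0.5

    def find_winners(sample):
        # the distance mixes sample similarity and context similarity
        d = ((1 - alpha) * np.sum((sample - weights) ** 2, axis=1)
             + alpha * np.sum((global_context - contexts) ** 2, axis=1))
        first, second = np.argsort(d)[:2]
        return int(first), int(second)

    sample = rng.normal(size=n_dim)
    first, _ = find_winners(sample)

    # the global context blends the winner's weight and context, so past
    # samples linger; a larger beta looks further back into the past
    global_context = (1 - beta) * weights[first] + beta * contexts[first]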

_weights

The number of neurons is constant in this implementation, for simplicity and speed (block operations).

Type

np.ndarray, \(n_{\text{neurons}} \times n_{\text{dim}}\)

_default_weights()[source]
_default_context()[source]
_default_global_context()[source]
_default_connections()[source]
_default_counter()[source]
_decay(first: int)[source]

Decrease all synapses of a neuron, but don’t allow negative synapses.

Parameters

first (int) – Index of the neuron
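
As a rough illustration, this clamped decay can be written as a single numpy operation; the dense symmetric connection matrix and the fixed decay amount below are assumptions, not this module’s code:

    import numpy as np

    # Sketch: decay all synapses of neuron `first` without going negative.
    connections = np.array([[0.0, 0.3, 0.0],
                            [0.3, 0.0, 0.8],
                            [0.0, 0.8, 0.0]])
    first, decay = 1, 0.1
    connections[first, :] = np.maximum(connections[first, :] - decay, 0.0)
    connections[:, first] = np.maximum(connections[:, first] - decay, 0.0)  # keep symmetry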

kill_orphans()[source]

Remove neurons that no longer have any connections.
adapt(sample: numpy.ndarray) → Tuple[int, int][source]

Single adaptation step

Parameters

sample (np.ndarray, shape: \((n_{\text{dim}},)\)) – A single sample.

Returns

Optionally returns the indices of the first and second winning neurons, used for Hebbian learning.

Return type

Tuple[int,int]
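
For orientation, here is a hedged sketch of what such an adaptation step usually comprises (winner search, weight and context adaptation, Hebbian edge update); only the parameter names are taken from the constructor, the rest is assumed:

    import numpy as np

    # Sketch of one adaptation step in this family of algorithms; the dense
    # connection matrix and the activity bookkeeping are assumptions, not the
    # actual implementation.
    def adapt(sample, weights, contexts, global_context, connections, activity,
              learn_rate=0.2, learn_rate_neighbors=0.2, alpha=0.5):
        # winner search, mixing sample and context distance as sketched above
        d = ((1 - alpha) * np.sum((sample - weights) ** 2, axis=1)
             + alpha * np.sum((global_context - contexts) ** 2, axis=1))
        first, second = (int(i) for i in np.argsort(d)[:2])

        # move the winner (and, more gently, its neighbors) toward the sample;
        # note that a nonzero diagonal would make the winner its own neighbor
        # (cf. the diagonal note in the list above)
        weights[first] += learn_rate * (sample - weights[first])
        neighbors = connections[first] > 0
        weights[neighbors] += learn_rate_neighbors * (sample - weights[neighbors])
        contexts[first] += learn_rate * (global_context - contexts[first])

        # Hebbian part: (re)create the edge between the two winning neurons
        connections[first, second] = connections[second, first] = 1.0
        activity[first] += 1.0
        return first, second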

grow()[source]

Entropy maximization by adding neurons in regions of high activity.

Note: this picks the weakest neuron (to reuse its slot). TODO: this still needs to be implemented!
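
A sketch of the on-demand insertion described under max_activity above; the recycling of the weakest slot and the exact role of delta are assumptions:

    import numpy as np

    # Sketch: when the most active neuron exceeds max_activity, insert a new
    # neuron between it and the second most active one, recycling the weakest
    # slot (the arrays are fixed-size).
    def grow(weights, activity, connections, max_activity=2.0, delta=0.8):
        order = np.argsort(activity)
        first, second = int(order[-1]), int(order[-2])
        if activity[first] <= max_activity:
            return None
        new = int(order[0])                       # weakest slot gets reused
        weights[new] = 0.5 * (weights[first] + weights[second])
        activity[first] *= delta                  # cool the hot neuron down
        activity[new] = delta * activity[second]  # delta's role is a guess
        connections[new, first] = connections[first, new] = 1.0
        connections[new, second] = connections[second, new] = 1.0
        return new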

kill_weakest() → numpy.signedinteger[source]

Finds the weakest neuron (or the first with zero activity in the list) and returns its index.

Returns

Index of the neuron

Return type

int
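
A sketch matching this description, assuming activity is a plain numpy array:

    import numpy as np

    # Sketch: the first neuron with zero activity wins; otherwise the least
    # active one.
    def kill_weakest(activity):
        zero = np.flatnonzero(activity == 0)
        return int(zero[0]) if zero.size else int(np.argmin(activity))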

learn(samples: numpy.ndarray, epochs: int)[source]

Batch learning

Parameters
  • samples (np.ndarray) – Row array of points. Shape \(n_{\text{samples}} \times n_{\text{dim}}\).

  • epochs (int) – Number of repetitions.
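
A usage example based on the documented constructor and this signature; the toy data is illustrative only:

    import numpy as np
    from mgng.mgng import MergeGNG

    # Toy temporal data: a noisy 3-D curve, one row per time step.
    t = np.linspace(0, 4 * np.pi, 500)
    samples = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
    samples += np.random.default_rng(0).normal(scale=0.05, size=samples.shape)

    gng = MergeGNG(n_neurons=50, n_dim=3, temporal_influence=0.5,
                   memory_weight=0.5, learn_rate=0.2)
    gng.learn(samples, epochs=10)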

get_active_weights()[source]
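
Judging by the name and the activity bookkeeping above, this presumably returns the weights of neurons with nonzero activity; a sketch of that reading:

    import numpy as np

    # Assumed semantics: return only the rows of _weights whose neuron has
    # nonzero activity, i.e. the neurons that were actually used.
    def get_active_weights(weights, activity):
        return weights[activity > 0]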