Merged
61 commits
c292cbc
implemented raw classical instantaneous stdp
ago109 Jul 7, 2024
2dce368
mod to classical stdp
ago109 Jul 7, 2024
d889e8a
mod to classical stdp syn
ago109 Jul 7, 2024
0935487
mod to stdp syn
ago109 Jul 7, 2024
d973183
mod to stdp syn
ago109 Jul 7, 2024
a0f6b19
mod to stdp syn
ago109 Jul 7, 2024
acae17a
mod to stdp syn
ago109 Jul 7, 2024
72476bc
mod to stdp syn
ago109 Jul 7, 2024
db15423
minor mod of syn
ago109 Jul 7, 2024
0e254a4
cleaned up stdp syn
Jul 7, 2024
73e9cde
cleaned up stdp syn
Jul 7, 2024
b300492
Sync up of main with release (#131)
ago109 Dec 8, 2025
63a4e98
added pointer/stub for ei-rnn song-et-al in museum doc
Dec 8, 2025
1d70611
update to ei-rnn doc
Dec 8, 2025
0e0d324
Merge branch 'release' into main
ago109 Dec 8, 2025
4ed1d8f
update to ei-rnn arch fig
Dec 9, 2025
2b418c1
Merge branch 'main' of github.com:NACLab/ngc-learn
Dec 9, 2025
009ab50
added log-gaussian initializer to distribution_generator
Dec 9, 2025
b0a4193
bug-fix to log-gaussian func
Dec 9, 2025
2e2dd5e
Refactor patch utility functions and add doc strings (#136)
Faezehabibi Dec 13, 2025
3f848a8
Rao1999 hpc (#135)
Faezehabibi Dec 13, 2025
16f4293
fixed minor errors in pc-rao doc
Dec 13, 2025
d21e1a5
made revisions to pc-rao doc
Dec 13, 2025
40ac018
mod to pc-rao doc
Dec 13, 2025
07b36a4
update to docs
Dec 23, 2025
01fec02
minor revision to h-h doc-string
Dec 29, 2025
1ecadf8
added lkwta utility
Jan 9, 2026
eb89be5
Add retinal ganglion cell input encoder (#137)
Faezehabibi Jan 24, 2026
2fd70fc
Refactor patch synapse (#138)
Faezehabibi Jan 24, 2026
4f434f1
feat: Integrate MPSSynapse Component (#140)
antonvice Mar 16, 2026
79d2aab
integrated working som-synapse into competitive sub-package for synapses
Mar 21, 2026
970cb74
cleaned up som-syn
Mar 21, 2026
59afc24
update test code for hebbian patch synapse
rxng8 Mar 22, 2026
e9e6c75
fix SOM Synapse bug
rxng8 Mar 27, 2026
a9cf886
Flexible batch size (#142)
Faezehabibi Mar 31, 2026
4c355a8
cleaned up graded/patched comps with inner batched_reset formulation
Mar 31, 2026
969bb1b
minor clean-up of som-syn
Mar 31, 2026
64eb27d
cleaned up ganglion-cell, added batched_reset
Mar 31, 2026
1569e31
minor cleanup
Apr 1, 2026
dbd1029
added working hopfield-syn/modern-hopfield-syn
Apr 4, 2026
d6b1ecf
update SOM synapse to batchified version
rxng8 Apr 5, 2026
1362b11
integrated prototype for vector-quantize memory model/synapse
Apr 5, 2026
49ce5b9
wrote/integrated an ART2A synapse model, batch-generalized
Apr 5, 2026
be47ab0
updates to art2a, cleanup of probes
Apr 6, 2026
e1773db
updates to art2a, cleanup of probes
Apr 6, 2026
5097184
added in knn-probe for utils.analysis
Apr 6, 2026
d21fc65
cleaned up vq-synapse
Apr 7, 2026
ef8627a
cleaned up vq-synapse
Apr 7, 2026
6b78909
tweaked/cleaned-up gaussian-error-cell
Apr 15, 2026
5eb6180
Update JaxProcessesMixin.py
willgebhardt Apr 26, 2026
46fd684
minor patch fixes, including making .mask a compartment in key syn
Apr 28, 2026
3660cba
patch to bernoulli/latency and wtas cells
Apr 29, 2026
6386e71
update reset function of the ganglion cell
rxng8 May 1, 2026
c89c245
minor mod to model_utils
May 1, 2026
8a7b3f5
docs now with a few more mods
May 1, 2026
c802473
Merge branch 'dev' into main
ago109 May 1, 2026
a8de5ab
Nudge to release of v3.1.0 (#146)
ago109 May 1, 2026
70a49dd
patched pkg_resources and versioning items to prep for v3.1.0
May 1, 2026
66d5e19
cleaned up stdp-syn error in nudge
May 1, 2026
46ca16f
Merge branch 'dev' into main
ago109 May 1, 2026
ebf9a11
Merge branch 'release' into main
ago109 May 1, 2026
1 change: 1 addition & 0 deletions AUTHORS
@@ -15,3 +15,4 @@ Contributors
Maxbeth2 (Ohas)
pagrawal-psu
pulinagrawal
antonvice
11 changes: 4 additions & 7 deletions README.md
@@ -15,15 +15,12 @@ which implements several historical models, can be found
<a href="https://github.com/NACLab/ngc-museum">here</a>.

The official blog-post related to the source paper behind this software library
can be found
<a href="https://go.nature.com/3rgl1K8">here</a>.<br>
You can find the related paper <a href="https://www.nature.com/articles/s41467-022-29632-7">right here</a>, which
was selected to appear in the Nature <i>Neuromorphic Hardware and Computing Collection</i> in 2023 and was
chosen as one of the <i>Editors' Highlights for Applied Physics and Mathematics</i> in 2022.

<!--The technical report going over the theoretical underpinnings of the
NGC framework can be found here. TO BE RELEASED SOON. -->

## Installation

### Dependencies
@@ -42,7 +39,7 @@ ngc-learn requires:
-->

---
-ngc-learn 3.0.0 and later require Python 3.10 or newer as well as ngcsimlib >=3.0.0.
+ngc-learn 3.1.0 and later require Python 3.10 or newer as well as ngcsimlib >=3.0.0.
ngc-learn's plotting capabilities (routines within `ngclearn.utils.viz`) require
Matplotlib (>=3.8.0) and imageio (>=2.31.5) and both plotting and density estimation
tools (routines within ``ngclearn.utils.density``) will require Scikit-learn (>=0.24.2).
@@ -75,7 +72,7 @@ Python 3.11.4 (main, MONTH DAY YEAR, TIME) [GCC XX.X.X] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ngclearn
>>> ngclearn.__version__
-'3.0.0'
+'3.1.0'
```

<i>Note:</i> For access to the previous Tensorflow-2 version of ngc-learn (of
@@ -122,7 +119,7 @@ $ python install -e .
</pre>

**Version:**<br>
-3.0.1 <!--1.2.3-Beta--> <!-- -Alpha -->
+3.1.0 <!--1.2.3-Beta--> <!-- -Alpha -->

Author:
Alexander G. Ororbia II<br>
Binary file added docs/images/museum/hgpc/GEC.png
Binary file added docs/images/museum/hgpc/patch_input.png
5 changes: 2 additions & 3 deletions docs/museum/harmonium.md
@@ -8,9 +8,8 @@ In this walkthrough, we will design a simple Harmonium, also known as the restri
specifically focus on learning its synaptic connections with an algorithmic recipe known as contrastive divergence (CD),
which can be considered to be a stochastic form of CHL. After going through this exhibit, you will:

-1. Learn how to construct an `NGCGraph` that emulates the structure of an RBM and adapt the NGC settling process to
-calculate approximate synaptic weight gradients in accordance to contrastive divergence.
-2. Simulate fantasized image samples using the block Gibbs sampler implicitly defined by the negative phase graph.
+1. Learn how to construct a model context that emulates the structure of an RBM and simulate its inference/reconstruction process to calculate approximate synaptic weight gradients in accordance with contrastive divergence (including an extension of it called persistent contrastive divergence).
+2. Simulate fantasized image samples using a block Gibbs sampler that is defined by (re-)using a portion of the model's message-passing structure.
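As a point of reference for what CD computes, a minimal CD-1 weight update for a Bernoulli-Bernoulli RBM can be sketched as follows (in plain NumPy; the names `W`, `b`, `c` and the helper itself are illustrative stand-ins, not the exhibit's actual API). Persistent CD differs only in carrying the negative-phase sample across updates instead of restarting it from the data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, eta=0.01, rng=np.random.default_rng(0)):
    ## positive phase: hidden probabilities/samples given the clamped data v0
    h0_prob = sigmoid(v0 @ W + c)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(v0.dtype)
    ## negative phase: one step of block Gibbs sampling (the "fantasy")
    v1_prob = sigmoid(h0 @ W.T + b)
    h1_prob = sigmoid(v1_prob @ W + c)
    ## CD-1 gradient estimate: positive statistics minus negative statistics
    dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    return W + eta * dW
```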

Note that the folders of interest to this walkthrough are:
+ `ngc-museum/exhibits/harmonium/`: this contains the necessary simulation scripts (which can be found
325 changes: 323 additions & 2 deletions docs/museum/pc_rao_ballard1999.md
@@ -1,12 +1,333 @@
-# Hierarchical Predictive Coding (Rao & Ballard; 1999)
+# Hierarchical Predictive Coding for Reconstruction (Rao & Ballard; 1999)

In this exhibit, we create, simulate, and visualize the internally acquired receptive fields of the predictive coding
model originally proposed in (Rao &amp; Ballard, 1999) [1].

The model code for this exhibit can be found
[here](https://github.com/NACLab/ngc-museum/tree/main/exhibits/pc_recon).

## Setting Up Hierarchical Predictive Coding (HPC) with NGC-Learn

### The HPC Model for Reconstruction Tasks

To build an HPC model, you will first need to define all of the components inside of the model.
After doing this, you will wire those components together under a specific configuration, depending
on the task.
This setup process involves the following steps:
1. **Create the neural components**: instantiate the neuronal unit (with dynamics) components.
2. **Create the synaptic components**: instantiate the synaptic connection components.
3. **Wire the components together**: define how the components connect and interact with each other.

<!-- ################################################################################ -->

### 1: Create the Neural Component(s):

<!-- ################################################################################ -->


**Representation (Response) Neuronal Layers**
<br>

If we want to build an HPC model, which is a hierarchical neural network, we will need to set up a few neural layers. For predictive coding with real-valued (graded) dynamics, we will want to use the library's in-built `RateCell` components ([RateCell tutorial](https://ngc-learn.readthedocs.io/en/latest/tutorials/neurocog/rate_cell.html)).
Since we want a 3-layer network (i.e., an HPC model with three hidden, or "representation", layers), we need to define three components, each with an `n_units` size for their respective hidden representations. This is done as follows:

```python
with Context("Circuit") as circuit: ## set up a (simulation) context for HPC model w/ 3 hidden layers
z3 = RateCell("z3", n_units=h3_dim, tau_m=tau_m, act_fx=act_fx, prior=(prior_type, lmbda))
z2 = RateCell("z2", n_units=h2_dim, tau_m=tau_m, act_fx=act_fx, prior=(prior_type, lmbda))
z1 = RateCell("z1", n_units=h1_dim, tau_m=tau_m, act_fx=act_fx, prior=(prior_type, lmbda))
```

<!-- ################################################################################ -->

<br>
<br>

<img src="../images/museum/hgpc/GEC.png" width="120" align="right" />

**Error Neuronal Layers**
<br>


For each (`RateCell`) layer's activation, we will also want to set up an additional set of neuronal
layers -- with the same size as the representation layers -- to measure the prediction error(s)
for the individual `RateCell` components. The error values that these layers emit will later
be used to calculate the (free) **energy** of each layer as well as of the whole model. This is
specified like so:

```python
e2 = GaussianErrorCell("e2", n_units=h2_dim) ## e2_size == z2_size
e1 = GaussianErrorCell("e1", n_units=h1_dim) ## e1_size == z1_size
e0 = GaussianErrorCell("e0", n_units=in_dim) ## e0_size == z0_size (x size) (stimulus layer)
```

<br>
<br>

<!-- ################################################################################ -->

### 2: Create the Synaptic Component(s):

<!-- ################################################################################ -->

<br>
<br>

<!-- <img src="images/GEC.png" width="120" align="right"/> -->

**Forward Synaptic Connections**
<br>

To connect the layers of our model to each other, we will need to create synaptic components
(which will project/propagate information across the layers); ultimately, this means we need
to construct the message-passing scheme of our HPC model. In order to send information in a
"forward pass" (from the stimulus/input layer into deeper hidden layers, in a bottom-up stream),
we make use of `ForwardSynapse` components. Please check out
[Brain's Information Flow](https://github.com/Faezehabibi/pc_tutorial/blob/main/information_flow.md#---information-flow-in-the-brain--) for a more detailed explanation of the flow of information that we use in the context
of brain modeling.
Setting up the forward projections/pathway is done like so:

```python
E3 = ForwardSynapse("E3", shape=(h2_dim, h3_dim)) ## pre-layer size (h2) => (h3) post-layer size
E2 = ForwardSynapse("E2", shape=(h1_dim, h2_dim)) ## pre-layer size (h1) => (h2) post-layer size
E1 = ForwardSynapse("E1", shape=(in_dim, h1_dim)) ## pre-layer size (x) => (h1) post-layer size
```

<!-- ################################################################################ -->

<br>
<br>

<!-- <img src="images/GEC.png" width="120" align="right"/> -->

**Backward Synaptic Connections**
<br>

For each `ForwardSynapse` component that sends information upward (i.e., the "bottom-up" stream),
there exists a `BackwardSynapse` component that reverses the flow of information by sending
signals back downward (i.e., the "top-down" stream -- from the top layer to the bottom/input ones).
Again, we refer you to this resource [Information Flow](https://github.com/Faezehabibi/pc_tutorial/blob/19b0692fa307f2b06676ca93b9b93ba3ba854766/information_flow.md) for more information.
To set up the backward message-passing connections, you write the following:

```python
W3 = BackwardSynapse("W3",
                     shape=(h3_dim, h2_dim), ## pre-layer size (h3) => (h2) post-layer size
                     optim_type=opt_type,    ## optimization method (sgd, adam, ...)
                     weight_init=w3_init,    ## W3[t0]: initial values before training, at time t0
                     w_bound=w_bound,        ## -1 deactivates the bounding of synaptic values
                     sign_value=-1.,         ## -1 means the M-step solves a minimization problem
                     eta=eta)                ## learning rate (lr)
W2 = BackwardSynapse("W2",
                     shape=(h2_dim, h1_dim), ## pre-layer size (h2) => (h1) post-layer size
                     optim_type=opt_type,    ## optimizer
                     weight_init=w2_init,    ## W2[t0]
                     w_bound=w_bound,        ## -1: deactivate bounding
                     sign_value=-1.,         ## minimization
                     eta=eta)                ## lr
W1 = BackwardSynapse("W1",
                     shape=(h1_dim, in_dim), ## pre-layer size (h1) => (x) post-layer size
                     optim_type=opt_type,    ## optimizer
                     weight_init=w1_init,    ## W1[t0]
                     w_bound=w_bound,        ## -1: deactivate bounding
                     sign_value=-1.,         ## minimization
                     eta=eta)                ## lr
```

<br>
<br>
<!-- ----------------------------------------------------------------------------------------------------- -->

### Wiring the Component(s) Together:


The signaling pathway that we will create is in accordance with <b>[1]</b> (Rao and Ballard's classical model).
Error (mismatch) signals flow from the bottom (layer) of the model to its top (layer) in the forward
pass(es), while corrected prediction information flows back from the top (layer) to the bottom (layer)
in the backward pass(es).

The following code block will set up the top-down projection message-passing pathway:

```python
######### Feedback pathways (Top-down) #########
### Actual neural activations
z2.z >> e2.target ## Layer 2's target is z2's rate-value `z`
z1.z >> e1.target ## Layer 1's target is z1's rate-value `z`
## Note: e0.target will be clamped to input data `x`

### Top-down predictions
z3.zF >> W3.inputs ## pass phi(z3) down W3
W3.outputs >> e2.mu ## prediction `mu` for (layer 2) z2's `z`
z2.zF >> W2.inputs ## pass phi(z2) down W2
W2.outputs >> e1.mu ## prediction `mu` for (layer 1) z1's `z`
z1.zF >> W1.inputs ## pass phi(z1) down W1
W1.outputs >> e0.mu ## prediction `mu` for (input layer) z0=x

### Top-down prediction errors
e1.dtarget >> z1.j_td
e2.dtarget >> z2.j_td
```

The following code-block will set up the error-feedback, bottom-up message-passing pathway:

```python
######### Forward propagation (Bottom-up) #########
## feedforward the errors via synapses
e2.dmu >> E3.inputs
e1.dmu >> E2.inputs
e0.dmu >> E1.inputs

## Bottom-up modulated errors
E3.outputs >> z3.j
E2.outputs >> z2.j
E1.outputs >> z1.j
```

Finally, to enable learning, we will need to set up simple 2-term/factor Hebbian rules like so:

```python
########### Hebbian learning ############
### Set up terms for 2-term Hebbian rules
## Pre-synaptic activation (terms)
z3.zF >> W3.pre
z2.zF >> W2.pre
z1.zF >> W1.pre

## Post-synaptic residual error (terms)
e2.dmu >> W3.post
e1.dmu >> W2.post
e0.dmu >> W1.post
```
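In equation form, each rule above is a simple two-factor (outer-product) update; up to the learning rate `eta` and the `sign_value` convention configured earlier, each synapse's change is proportional to its pre-synaptic rate paired with its post-synaptic residual error:

$$
\Delta \mathbf{W}_\ell \propto \phi(\mathbf{z}_\ell)^\top \, \mathbf{e}_{\ell-1}
$$

where $\phi(\mathbf{z}_\ell)$ is the (post-activation) rate of layer $\ell$ and $\mathbf{e}_{\ell-1}$ is the prediction error at the layer below.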

<br>
<br>
<!-- ----------------------------------------------------------------------------------------------------- -->

#### Specifying the HPC Model's Process Dynamics:

The only remaining thing to do for the above model is to specify its core simulation functions
(known in NGC-Learn as `MethodProcess` mechanisms). For an HPC model, we want to make sure
we define how its full message-passing is carried out as well as how learning (synaptic plasticity)
occurs. Ultimately, this will follow the (dynamic) expectation-maximization (E-M) scheme we have
discussed in other model exhibits, e.g., the [sparse coding and dictionary learning exhibit](sparse_coding.md).

The method-processes for inference (expectation) and adaptation (maximization) can be written out under
your model context as follows:

```python
reset_process = (MethodProcess(name="reset_process") ## reset-to-baseline
                 >> z3.reset
                 >> z2.reset
                 >> z1.reset
                 >> e2.reset
                 >> e1.reset
                 >> e0.reset
                 >> W3.reset
                 >> W2.reset
                 >> W1.reset
                 >> E3.reset
                 >> E2.reset
                 >> E1.reset)
advance_process = (MethodProcess(name="advance_process") ## E-step
                   >> E1.advance_state
                   >> E2.advance_state
                   >> E3.advance_state
                   >> z3.advance_state
                   >> z2.advance_state
                   >> z1.advance_state
                   >> W3.advance_state
                   >> W2.advance_state
                   >> W1.advance_state
                   >> e2.advance_state
                   >> e1.advance_state
                   >> e0.advance_state)
evolve_process = (MethodProcess(name="evolve_process") ## M-step
                  >> W1.evolve
                  >> W2.evolve
                  >> W3.evolve)
```

Below, we show a code snippet depicting how the HPC model processes a stimulus input
(or batch of inputs) `obs` -- an observation -- in practice:

```python
######### Process #########

#### reset/set all neuronal components to their resting values / initial conditions
circuit.reset.run()

#### clamp the observation/signal obs to the lowest layer activation
e0.target.set(obs) ## e0 contains the place where our stimulus target goes

#### pin/tie feedback synapses to transpose of forward ones
E1.weights.set(jnp.transpose(W1.weights.value))
E2.weights.set(jnp.transpose(W2.weights.value))
E3.weights.set(jnp.transpose(W3.weights.value))

#### apply the dynamic E-M algorithm on the HPC model given obs
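## (`self` below refers to the enclosing model object that owns these processes)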
inputs = jnp.array(self.advance_proc.pack_rows(T, t=lambda x: x, dt=dt))
stateManager.state, outputs = self.process.scan(inputs) ## Perform several (T) E-steps
circuit.evolve.run(t=T, dt=1.) ## Perform M-step (scheduled synaptic updates)

#### extract some statistics for downstream analysis
obs_mu = e0.mu.value ## get reconstructed signal
L0 = e0.L.value ## calculate reconstruction loss
free_energy = e0.L.value + e1.L.value + e2.L.value ## F = Sum_l Sum_j [e^l_j]^2
```
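For reference, each `GaussianErrorCell`'s scalar loss `L` is (up to sign and scaling conventions) the squared prediction error of its layer, so the free energy tallied above corresponds to:

$$
\mathcal{F} = \sum_{\ell=0}^{2} \mathcal{L}_\ell = \sum_{\ell=0}^{2} \sum_j \big( \text{target}^\ell_j - \mu^\ell_j \big)^2
$$

where the target of the bottom layer ($\ell = 0$) is the clamped observation $\mathbf{x}$.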

Note that we make use of NGC-Learn's backend state-manager (`ngcsimlib.global_state.StateManager`) to
efficiently roll out the `T` E-steps carried out above (leveraging JAX's scan utilities; see the
NGC-Learn configuration documents, such as the one related to the
[global state](../tutorials/configuration/global_state.md), for more information).
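For intuition on why this helps: rather than tracing `T` separate Python-level calls, a scan compiles the whole roll-out once. Below is a minimal sketch of this idea using plain `jax.lax.scan`; the toy `step_fn` is purely illustrative and is not ngcsimlib's internal step function:

```python
import jax
import jax.numpy as jnp

def step_fn(carry, t):
    ## one toy "E-step": leaky integration of the state toward a fixed target
    new_carry = carry + 0.1 * (1.0 - carry)
    return new_carry, new_carry ## (next state, per-step output)

T = 20 ## number of E-steps to roll out
init_state = jnp.zeros(5)
## the T-step loop is compiled once, instead of being traced T separate times
final_state, trajectory = jax.lax.scan(step_fn, init_state, jnp.arange(T))
```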

<br>
<br>
<br>
<br>
<!-- ----------------------------------------------------------------------------------------------------- -->
<!-- ----------------------------------------------------------------------------------------------------- -->


### Train the PC model for Reconstructing Image Patches

<img src="../images/museum/hgpc/patch_input.png" width="300" align="right"/>

<br>

In this scenario, the input image is not the full scene (i.e., the complete set of pixels that describes an image);
instead, the input is locally "patched", which means that it has been broken down into smaller $K \times K$
blocks/grids. This input patch extraction scheme changes the information processing of the neuronal
units within the network, i.e., local features now become important. The original model(s) of Rao and Ballard's
1999 work <b>[1]</b> also operate on patched input, modeling how retinal processing units are localized in nature.
Setting up the input stimulus in this manner also results in models that acquire filters (or receptive fields)
similar to those acquired by convolutional neural networks (CNNs).

<br>

```python
for nb in range(n_batches):
    Xb = X[nb * images_per_batch: (nb + 1) * images_per_batch, :] ## shape: (images_per_batch, 784)
    Xb = generate_patch_set(Xb, patch_shape, center=True) ## shape: (num_patches, K * K)

    Xmu, Lb = model.process(Xb) ## returns patch reconstructions and loss
```
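To make the patching step concrete, a minimal non-overlapping $K \times K$ patch extractor might look like the sketch below. This is only an illustrative stand-in (assuming $28 \times 28$ inputs); ngc-learn's actual `generate_patch_set` may differ in its signature and behavior (e.g., the exact meaning of its `center` flag):

```python
import numpy as np

def extract_patches(X, patch_shape, center=True):
    """Split flattened 28x28 images into non-overlapping K x K patches."""
    K = patch_shape[0]
    imgs = X.reshape(-1, 28, 28)
    patches = []
    for img in imgs:
        for r in range(0, 28 - K + 1, K):
            for c in range(0, 28 - K + 1, K):
                patch = img[r:r + K, c:c + K].reshape(-1)
                if center: ## zero-center each patch (remove its mean)
                    patch = patch - patch.mean()
                patches.append(patch)
    return np.stack(patches) ## shape: (num_patches, K * K)
```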


<!-- -------------------------------------------------------------------------------------
### Train PC model for reconstructing the full image

```python
for nb in range(n_batches):
Xb = X[nb * mb_size: (nb + 1) * mb_size, :] ## shape: (mb_size, 784)
Xmu, Lb = model.process(Xb)
```
------------------------------------------------------------------------------------- -->


<!-- references -->
## References
<b>[1]</b> Rao, Rajesh PN, and Dana H. Ballard. "Predictive coding in the visual cortex: a functional interpretation of
some extra-classical receptive-field effects." Nature neuroscience 2.1 (1999): 79-87.