
Fit Head-direction population

Learning objectives

  • Learn how to add history-related predictors to a nemos GLM
  • Learn about nemos Basis objects
  • Learn how to use Basis objects with convolution
import jax
import matplotlib.pyplot as plt
import nemos as nmo
import numpy as np
import pynapple as nap

import workshop_utils

# Set the default precision to float64, which is generally a good idea for
# optimization purposes.
jax.config.update("jax_enable_x64", True)
# configure plot style
plt.style.use(workshop_utils.STYLE_FILE)

Data Streaming

  • Stream the head-direction neurons data
path = workshop_utils.data.download_data("Mouse32-140822.nwb", "https://osf.io/jb2gd/download",
                                         '../data')

Pynapple

  • load_file: open the NWB file and give a preview.
data = nap.load_file(path)

data
  • Load the units
spikes = data["units"]

spikes
  • Load the epochs and take only wakefulness
epochs = data["epochs"]
wake_ep = data["epochs"]["wake"]
  • Load the angular head-direction of the animal (in radians)
angle = data["ry"]
  • Select only those units that are in ADn
spikes = spikes.getby_category("location")["adn"]
  • Restrict the activity to wakefulness (both the spiking activity and the angle)
spikes = spikes.restrict(wake_ep).getby_threshold("rate", 1.0)
angle = angle.restrict(wake_ep)
  • Compute tuning curves as a function of head-direction
tuning_curves = nap.compute_1d_tuning_curves(
    group=spikes, feature=angle, nb_bins=61, minmax=(0, 2 * np.pi)
)
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
ax[0].plot(tuning_curves.iloc[:, 0])
ax[0].set_xlabel("Angle (rad)")
ax[0].set_ylabel("Firing rate (Hz)")
ax[1].plot(tuning_curves.iloc[:, 1])
ax[1].set_xlabel("Angle (rad)")
plt.tight_layout()
  • Let's visualize the data at the population level.
fig = workshop_utils.plotting.plot_head_direction_tuning(
    tuning_curves, spikes, angle, threshold_hz=1, start=8910, end=8960
)
  • Take the first 3 minutes of wakefulness to speed up optimization
wake_ep = nap.IntervalSet(
    start=wake_ep.loc[0, "start"], end=wake_ep.loc[0, "start"] + 3 * 60
)
  • Bin the spike trains in 10 ms bins.
bin_size = 0.01
count = spikes.count(bin_size, ep=wake_ep)
  • Sort the neurons by their preferred direction using pandas.
pref_ang = tuning_curves.idxmax()

count = nap.TsdFrame(
    t=count.t,
    d=count.values[:, np.argsort(pref_ang.values)],
)

Nemos

Self-Connected Single Neuron

  • Start with modeling a self-connected single neuron
  • Select a neuron and visualize the spike count time course
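A minimal sketch of these two steps, assuming the binned counts `count` from above; the neuron index (0) and the 1.2 s plotting window are arbitrary choices.

# Pick one neuron (column 0 is arbitrary) and wrap it as a Tsd.
neuron_count = nap.Tsd(t=count.t, d=count.values[:, 0])

# Plot a short stretch of its spike-count time course.
epoch_one_spk = nap.IntervalSet(
    start=wake_ep.loc[0, "start"], end=wake_ep.loc[0, "start"] + 1.2
)
plt.figure(figsize=(8, 3.5))
plt.step(
    neuron_count.restrict(epoch_one_spk).t,
    neuron_count.restrict(epoch_one_spk).values,
    where="post",
)
plt.xlabel("Time (s)")
plt.ylabel("Spike count (10 ms bins)")
plt.tight_layout()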

Features Construction

  • Use the past counts over a fixed window to predict the current sample
  • Roll your window one bin at a time to predict the subsequent samples.
  • Form a predictor matrix by vertically stacking all the windows (you can use a convolution).
  • Check the shape of the counts and features.
  • Match time axis.
  • Plot the convolution output.
  • Convert the features back to a pynapple TsdFrame.
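One way to sketch these steps, assuming `neuron_count` from the previous sketch. The 100-bin window (1 s of spike history) is an illustrative choice, and the rolled windows are built here by explicit shifting in numpy rather than a convolution call.

window_size = 100  # number of past bins used as predictors (1 s at 10 ms bins)

counts_1d = neuron_count.values
n_t = counts_1d.shape[0]

# Stack shifted copies of the counts: column (lag - 1) holds the count
# `lag` bins in the past, so each row is one rolled window of history.
features = np.zeros((n_t - window_size, window_size))
for lag in range(1, window_size + 1):
    features[:, lag - 1] = counts_1d[window_size - lag: n_t - lag]

# Check the shapes: the features lose the first `window_size` samples.
print(counts_1d.shape, features.shape)

# Match the time axis and convert back to a pynapple TsdFrame.
input_feature = nap.TsdFrame(t=neuron_count.t[window_size:], d=features)

# Plot the convolution output for a short stretch of time.
plt.figure(figsize=(8, 3.5))
plt.imshow(input_feature.values[:500].T, aspect="auto", origin="lower")
plt.xlabel("Sample")
plt.ylabel("Lag (bins in the past)")
plt.tight_layout()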

Fitting the model

  • Split your epochs in two for validation purposes.
  • Fit a GLM to the first half.
  • Plot the weights.
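A possible sketch of the fit, reusing `input_feature`, `neuron_count`, `window_size`, and `bin_size` from above. It assumes a nemos version whose `GLM.fit` accepts a 2-d design matrix (n_samples, n_features) and a 1-d array of counts; older releases used an extra neuron dimension.

# Align the target counts with the feature time axis.
target_count = nap.Tsd(
    t=neuron_count.t[window_size:], d=neuron_count.values[window_size:]
)

# Split the (single-interval) epoch in two halves for validation.
t0 = input_feature.time_support.loc[0, "start"]
t1 = input_feature.time_support.loc[0, "end"]
first_half = nap.IntervalSet(start=t0, end=t0 + (t1 - t0) / 2)
second_half = nap.IntervalSet(start=t0 + (t1 - t0) / 2, end=t1)

# Fit a GLM (Poisson observations by default) on the first half.
model = nmo.glm.GLM()
model.fit(
    input_feature.restrict(first_half).values,
    target_count.restrict(first_half).values,
)

# Plot the learned history filter: one weight per lag.
plt.figure()
plt.plot(np.arange(1, window_size + 1) * bin_size, model.coef_)
plt.xlabel("Time lag (s)")
plt.ylabel("Weight")
plt.tight_layout()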

Inspecting the results

  • Fit on the other half and compare results.
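Continuing the sketch: refit on the held-out half and overlay the two filters. If the model generalizes well, the two estimates should agree; with raw-count features they are typically noisy, which motivates the next section.

model_second = nmo.glm.GLM()
model_second.fit(
    input_feature.restrict(second_half).values,
    target_count.restrict(second_half).values,
)

lags = np.arange(1, window_size + 1) * bin_size
plt.figure()
plt.plot(lags, model.coef_, label="first half")
plt.plot(lags, model_second.coef_, label="second half")
plt.xlabel("Time lag (s)")
plt.ylabel("Weight")
plt.legend()
plt.tight_layout()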

Reducing feature dimensionality

  • Visualize the raised cosine basis.
  • Define the raised cosine basis through the "nemos.basis" module.
  • Create the basis kernel matrix (window_size, n_basis_funcs) with the "evaluate_on_grid" method.
  • Check that we can approximate the "decay" in the history filter with the basis. Use least-squares to choose appropriate weights.
  • Convolve the counts with the basis functions.
  • Visualize the output.
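A sketch of the basis steps. The class name `RaisedCosineBasisLog` and the choice of 8 basis functions reflect the nemos version this workshop was written against; newer releases may expose the raised cosine basis under a different name.

# Log-spaced raised cosine basis: dense near lag 0, coarse at long lags.
basis = nmo.basis.RaisedCosineBasisLog(n_basis_funcs=8)

# Kernel matrix of shape (window_size, n_basis_funcs).
time, basis_kernels = basis.evaluate_on_grid(window_size)
print(basis_kernels.shape)

plt.figure()
plt.plot(time, basis_kernels)
plt.xlabel("Time within window (a.u.)")
plt.ylabel("Basis value")

# Check that the raw history filter is well approximated by the basis:
# least-squares projection of the learned weights onto the kernels.
lsq_coef, *_ = np.linalg.lstsq(basis_kernels, model.coef_, rcond=None)

plt.figure()
plt.plot(model.coef_, label="raw history filter")
plt.plot(basis_kernels @ lsq_coef, "--", label="basis approximation")
plt.xlabel("Lag (bins)")
plt.legend()

# Convolving the counts with each basis kernel is equivalent to projecting
# the (n_samples, window_size) history matrix onto the basis.
compressed_features = nap.TsdFrame(
    t=input_feature.t, d=input_feature.values @ basis_kernels
)
print(compressed_features.shape)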

Fit and compare the models

  • Fit the model using the compressed features.
  • Reconstruct the history filter.
  • Compare with the raw count history model.
  • Fit the other half of the data.
  • Plot and compare the results.
  • Use the score function to evaluate the GLM predictions.
  • Predict the rates and plot the results.
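A sketch of the comparison, reusing `compressed_features`, `basis_kernels`, `target_count`, and the raw-count model from above; the metric returned by `score` depends on its `score_type` argument and the installed nemos version.

model_basis = nmo.glm.GLM()
model_basis.fit(
    compressed_features.restrict(first_half).values,
    target_count.restrict(first_half).values,
)

# Reconstruct the history filter in lag space and compare with the raw model.
filter_basis = basis_kernels @ model_basis.coef_
plt.figure()
plt.plot(model.coef_, label="raw count history")
plt.plot(filter_basis, "--", label="basis compressed")
plt.xlabel("Lag (bins)")
plt.ylabel("Weight")
plt.legend()

# Fit the second half as well for a stability check.
model_basis_second = nmo.glm.GLM()
model_basis_second.fit(
    compressed_features.restrict(second_half).values,
    target_count.restrict(second_half).values,
)

# Score both models on the held-out half.
print(
    "raw:", model.score(input_feature.restrict(second_half).values,
                        target_count.restrict(second_half).values),
    "basis:", model_basis.score(compressed_features.restrict(second_half).values,
                                target_count.restrict(second_half).values),
)

# Predict the rate (mean count per bin divided by the bin size, i.e. Hz).
rate_basis = nap.Tsd(
    t=compressed_features.t,
    d=model_basis.predict(compressed_features.values) / bin_size,
)
plt.figure(figsize=(8, 3.5))
plt.plot(rate_basis.t[:1000], rate_basis.values[:1000])
plt.xlabel("Time (s)")
plt.ylabel("Predicted rate (Hz)")
plt.tight_layout()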

All-to-all Connectivity

Preparing the features

  • Convolve all counts.
  • Print the output shape.
  • Reshape the convolved counts to define the feature matrix.
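A plain-numpy sketch of the all-to-all features, reusing `window_size` and `basis_kernels`, with the same explicit shifting used earlier in place of a convolution call. The flattened feature axis is ordered presynaptic-neuron-major; that ordering is a choice made in this sketch and must match the reshape used later when reconstructing the filters.

# Build the history tensor for every neuron:
# (n_samples, window_size, n_neurons), where n_samples = n_t - window_size.
counts_all = count.values
n_t, n_neurons = counts_all.shape
history = np.stack(
    [counts_all[window_size - lag: n_t - lag] for lag in range(1, window_size + 1)],
    axis=1,
)

# Convolve with the basis: project each history window onto the kernels,
# giving (n_samples, n_neurons, n_basis_funcs).
conv_counts = np.einsum("twn,wb->tnb", history, basis_kernels)
print(conv_counts.shape)

# Flatten (neuron, basis) into one feature axis for the design matrix.
all_features = nap.TsdFrame(
    t=count.t[window_size:],
    d=conv_counts.reshape(conv_counts.shape[0], -1),
)
print(all_features.shape)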

Fitting the Model

  • Loop over the neurons.
  • Fit each neuron.
  • Store the result in a list.
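A sketch of the fitting loop, assuming `all_features`, `counts_all`, `n_neurons`, and `window_size` from the previous sketch; each neuron gets its own GLM fit against the shared all-to-all features.

models = []
for n in range(n_neurons):
    glm = nmo.glm.GLM()
    # Target: this neuron's counts, aligned to the feature time axis.
    glm.fit(all_features.values, counts_all[window_size:, n])
    models.append(glm)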

Comparing model predictions

  • Predict the firing rate of each neuron, store it in an array of shape (num_sample_points - window_size, num_neurons).
  • Convert the array to a pynapple TsdFrame.
  • Visualize the predicted rate and tuning function.
  • Visually compare all the models.
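A sketch of the prediction step, assuming the fitted `models` and `all_features` from above; predicted mean counts per bin are divided by `bin_size` so the values read as Hz.

# Stack predictions into (n_samples, n_neurons) and wrap them in a TsdFrame.
predicted_rates = np.column_stack(
    [glm.predict(all_features.values) for glm in models]
) / bin_size
predicted_rates = nap.TsdFrame(t=all_features.t, d=predicted_rates)
print(predicted_rates.shape)

# Quick visual check for one neuron: predicted rate vs. observed counts.
plt.figure(figsize=(8, 3.5))
plt.plot(predicted_rates.t[:1000], predicted_rates.values[:1000, 0],
         label="predicted rate (Hz)")
plt.step(count.t[window_size:][:1000], counts_all[window_size:, 0][:1000] / bin_size,
         where="post", alpha=0.4, label="observed rate (Hz)")
plt.xlabel("Time (s)")
plt.legend()
plt.tight_layout()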

Visualizing the connectivity

  • Compute tuning curves from the predicted rates using pynapple.
  • Multiply the weights by the basis, to get the history filters.
  • Plot the connectivity map.
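A sketch of the connectivity summary, assuming `predicted_rates`, `models`, `n_neurons`, and `basis_kernels` from above; summing each filter over lags is just one crude way to draw the map.

# Tuning curves of the predicted rates, as a function of head direction.
glm_tuning = nap.compute_1d_tuning_curves_continuous(
    predicted_rates, feature=angle, nb_bins=61, minmax=(0, 2 * np.pi)
)

# Each model's coefficients are (n_neurons * n_basis_funcs,), with the
# presynaptic neuron varying slowest (matching the feature ordering above);
# reshape and project back through the basis to get one filter per
# (postsynaptic, presynaptic) pair.
weights = np.stack([glm.coef_.reshape(n_neurons, -1) for glm in models])
filters = np.einsum("jnb,wb->jnw", weights, basis_kernels)  # (post, pre, lag)

# Connectivity map: summed filter weight, neurons sorted by preferred direction.
plt.figure(figsize=(5, 4))
plt.imshow(filters.sum(axis=2), cmap="bwr", origin="lower")
plt.xlabel("Presynaptic neuron (sorted by pref. direction)")
plt.ylabel("Postsynaptic neuron (sorted by pref. direction)")
plt.colorbar(label="Summed filter weight")
plt.tight_layout()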

Exercise

# 1. What would happen if we explicitly regressed on the head direction?
# 2. What would happen to the connectivity if we fit on the sleep epochs?
# 3. How would we sparsify the connectivity?

