Behavioural Projects Overview

SWC/GCNU Neuroinformatics Unit

Niko Sirmpilatze

RSEs working on behaviour

Neuroinformatics Unit (NIU)

  • Niko Sirmpilatze
  • Chang Huan Lo
  • Adam Tyson

UCL Advanced Research Computing (UCL ARC)

  • Sofia Miñano
  • Sam Cunliffe
  • Nik K.N. Aznan

Behavioural experiments at SWC

Diversity of projects

  • large collaborative projects
    • Aeon
    • Crabs
    • Zoo
  • lab-specific projects

Aeon

  • Mice foraging in a large arena
  • 24-hour recordings
  • Multiple cameras
  • Multi-animal tracking (SLEAP)
  • Plans to combine with Neuropixels

Crabs

  • Fiddler crabs
    • both in the wild
    • and in a naturalistic lab setting
  • Detection and tracking with custom CNNs

Zoo

  • Videos acquired at the London Zoo
  • Multiple species:
    • from 🐝 to 🐘
  • Diverse environments
  • Pose estimation and tracking (DeepLabCut)

Lab-specific projects

Diversity of needs

  • species
  • environments
  • number of animals
  • pose estimation and tracking tools
  • performance

Our priority

Standardised video analysis pipeline, but:

  • Flexible
  • Modular
  • Extensible
flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

How to achieve this?

  • Make existing tools easier to use for researchers
    • documentation and teaching
    • build easy-to-use interfaces/wrappers
  • Develop new tools to fill gaps in the ecosystem
    • movement Python package
  • Make existing and new tools interoperable
    • The PoseInterface project?

Working with existing tools

Pose estimation libraries

Several mature libraries exist (e.g. DeepLabCut, SLEAP).

But:

  • Know-how required
  • GPU-intensive
flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

    linkStyle 1 stroke:#03A062, color:;
    class video2 emphasis
    class tracks emphasis

Pose estimation libraries

  • Centrally installed modules on HPC cluster
module load deeplabcut
module load SLEAP

Accessible interfaces

WAZP web app

Developing new tools

What happens after pose estimation and tracking?

flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

    linkStyle 2 stroke:#03A062, color:;
    class tracks emphasis
    class kinematics emphasis

Some common metrics

flowchart TB
    classDef emphasis fill:#03A062;

    tracks[keypoint tracks] --> cleaning

    subgraph cleaning
    direction TB

    drop["drop bad values"]
    interp["interpolate"]
    smooth["smooth"]
    transform
    end

    subgraph fit["fit shape"]
    direction TB

    ellipse
    bbox["bounding box"]
    center["center-of-mass"]
    end

    subgraph kinematics
    direction TB

    size
    angle["joint angle"]
    orientation
    velocity["(angular) velocity"]
    acceleration["(angular) acceleration"]
    end


    fit -.-> kinematics
    cleaning ---> kinematics
    cleaning -.-> fit

    class tracks emphasis
    class velocity emphasis
    class acceleration emphasis
    class orientation emphasis
    class size emphasis
    class angle emphasis
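The cleaning steps in the diagram (drop bad values, interpolate, smooth) can be sketched with pandas. This is only an illustration of the idea, not the movement API; the confidence threshold and window size are arbitrary choices.

```python
import numpy as np
import pandas as pd

# Illustrative cleaning of a single keypoint's x-coordinate track:
# drop low-confidence values, interpolate the gaps, then smooth.
x = pd.Series([10.0, 10.5, 200.0, 11.2, 11.5, 11.9])  # 200.0 is a tracking glitch
confidence = pd.Series([0.95, 0.92, 0.10, 0.90, 0.93, 0.94])

x_clean = x.mask(confidence < 0.5)                    # drop bad values -> NaN
x_interp = x_clean.interpolate(method="linear")       # fill gaps linearly
x_smooth = x_interp.rolling(window=3, center=True, min_periods=1).median()  # smooth
```

The glitch at index 2 is masked out, then linearly interpolated to the midpoint of its neighbours; a rolling median then suppresses any remaining spikes.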

Some common metrics

flowchart TB
    classDef emphasis fill:#03A062;
    classDef aside fill:#EB7AC9;

    tracks[keypoint tracks] --> cleaning
    ROIpre([ROI]) -.-> |transform| ROI([ROI])
    ROI --> ROI_metrics


    subgraph cleaning
    direction TB

    drop["drop bad values"]
    interp["interpolate"]
    smooth["smooth"]
    transform
    end

    subgraph fit["fit shape"]
    direction TB

    ellipse
    bbox["bounding box"]
    center["center-of-mass"]
    end

    subgraph ROI_metrics["ROI metrics"]
    direction TB

    distance[distance to]
    time[time spent in]
    entries[entries/exits]
    end

    subgraph kinematics
    direction TB

    size
    angle["joint angle"]
    orientation
    velocity["(angular) velocity"]
    acceleration["(angular) acceleration"]
    end

    cleaning--> ROI_metrics
    ROI -.-> fit
    fit -.-> kinematics
    fit -.-> ROI_metrics
    cleaning ---> kinematics
    cleaning -.-> fit

    class ROIpre aside
    class ROI aside
    class distance aside
    class time aside
    class entries aside

    class tracks emphasis
    class velocity emphasis
    class acceleration emphasis
    class orientation emphasis
    class size emphasis
    class angle emphasis
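The ROI metrics in the diagram (time spent in, entries/exits) reduce to simple array operations once you have per-frame positions. A minimal sketch with NumPy, assuming a rectangular ROI and a made-up frame rate:

```python
import numpy as np

# Illustrative ROI metrics: given per-frame centroid positions,
# compute time spent inside a rectangular ROI and the number of
# entries (outside -> inside transitions).
positions = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0],
                      [5.0, 5.0], [2.5, 2.5], [1.0, 1.0]])
roi_min, roi_max = np.array([0.5, 0.5]), np.array([3.0, 3.0])
fps = 2.0  # hypothetical frame rate

inside = np.all((positions >= roi_min) & (positions <= roi_max), axis=1)
time_in_roi = inside.sum() / fps             # seconds inside the ROI
entries = np.sum(~inside[:-1] & inside[1:])  # outside -> inside transitions
```

Exits follow symmetrically (`inside[:-1] & ~inside[1:]`), and a polygonal ROI would only change the point-in-region test.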

Existing tools

Tool            Stars   Last commit
movingpandas    ~1k     Aug 2023
AmadeusGPT      117     Sep 2023
DLC2Kinematics  109     Jun 2023
traja           79      Oct 2022
PyRat           30      Feb 2023

Enter movement

Kinematic analysis of animal movements for neuroscience and ethology research.

movement features

  • I/O: (see example)
    • ✅ import pose tracks from DeepLabCut and SLEAP
    • ✅ represent pose tracks in common data structure
    • 🤔 export data for downstream analysis (e.g. for classifiers)
  • Plotting: 🏗️ plot pose tracks, ROIs, etc.
  • Preproc: 🤔 pose track interpolation, smoothing, resampling, etc.
  • Kinematics: 🤔 velocity, acceleration, orientation, etc.
  • Arena: 🤔 ROI support and coordinate transformations
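The planned kinematics features (velocity, acceleration) amount to numerical differentiation of the pose tracks. A sketch of the idea with NumPy central differences, not the movement API:

```python
import numpy as np

# Sketch: velocity and acceleration from a 1D position track via
# central differences. Here x = 100 * t**2, so the true velocity is
# 200 * t and the true acceleration is a constant 200.
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # time in seconds
x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # position (e.g. cm)

velocity = np.gradient(x, t)             # dx/dt
acceleration = np.gradient(velocity, t)  # d²x/dt²
```

For real 2D tracks the same call is applied per spatial axis, and speed is the norm of the velocity vector.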

movement GUI

Why napari?

It already comes with built-in layers for:

  • Image: video frames?
  • Points: keypoints?
  • Shapes: ROIs?
  • Tracks: pose tracks?
  • Vectors: velocity, acceleration, etc.?

Interoperability

Video compression

flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

    linkStyle 0 stroke:#03A062, color:;
    class video emphasis
    class video2 emphasis
  • What’s the best format/codec for saving videos?
  • Trade-off between file size and quality
  • Compressed videos must be readable by all major pose estimation libraries
  • Ideally, videos are compressed during (or directly after) acquisition
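As a concrete starting point for the size/quality trade-off, one widely compatible choice is H.264 via libx264. The settings below are illustrative defaults, not a tested recommendation, and the filenames are placeholders:

```shell
# Re-encode with H.264 (libx264). CRF trades size vs quality
# (lower = better quality, larger files); yuv420p maximises
# compatibility with players and pose estimation libraries.
ffmpeg -i raw_video.avi -c:v libx264 -crf 23 -preset fast -pix_fmt yuv420p compressed_video.mp4
```

Whether a given CRF is visually lossless enough for pose estimation should be validated empirically on the target videos.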

Pose Interface?

flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

    linkStyle 1 stroke:#03A062, color:;
    class video2 emphasis
    class tracks emphasis

Similar to SpikeInterface

  • Common input format (label data once)
  • Run any of several supported pose estimation libraries
  • Common output format
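The proposal above could be sketched as a thin dispatch layer with a shared output type. Every name in this snippet is invented for illustration; a real implementation would call into DeepLabCut, SLEAP, etc.:

```python
from dataclasses import dataclass

@dataclass
class PoseTracks:
    """Hypothetical common output format: tracks plus metadata."""
    data: list            # nested lists standing in for a frames x keypoints array
    keypoints: list
    source_software: str

def run_backend(name: str, video: str) -> PoseTracks:
    # In a real implementation this would dispatch to the named
    # pose estimation library; here we just return a stub result.
    return PoseTracks(data=[], keypoints=["snout", "tail_base"],
                      source_software=name)

# Label once, run several backends, get comparable outputs.
tracks = [run_backend(b, "session1.mp4") for b in ("DeepLabCut", "SLEAP")]
```

The value of the SpikeInterface analogy is exactly this: downstream code only ever sees the common format, so backends become interchangeable.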

From behaviour to actions

flowchart TB
    classDef emphasis fill:#03A062;

    video -->|compression/re-encoding | video2["compressed video"]
    video2 -->|pose estimation + tracking| tracks["pose tracks"]
    tracks --> |calculations| kinematics
    tracks -->|classifiers| actions["actions / behav syllables"]
    video2 --> |comp vision| actions

    linkStyle 3 stroke:#03A062, color:;
    linkStyle 4 stroke:#03A062, color:;
    class tracks emphasis
    class video2 emphasis
    class actions emphasis

Several tools already target this step, operating either on pose tracks (classifiers) or directly on video (computer vision).

Appendix

xarray data structures

DataArray is an N-dimensional generalisation of pandas Series.

  • values: a numpy.ndarray holding the array’s values
  • dims: names for each axis (e.g., ["time", "individuals", "keypoints"])
  • coords: e.g., list of animal names, bodypart names
  • attrs: a dict-like container for arbitrary metadata (attributes)

Dataset is a collection of aligned DataArray objects.
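A minimal sketch of how pose tracks could map onto a DataArray (the dimension layout and attribute names are illustrative, not movement's exact schema; assumes xarray is installed):

```python
import numpy as np
import xarray as xr

# Pose tracks as a DataArray with named dimensions and labelled
# coordinates: time x individuals x keypoints x space.
rng = np.random.default_rng(0)
da = xr.DataArray(
    rng.random((100, 2, 3, 2)),
    dims=["time", "individuals", "keypoints", "space"],
    coords={
        "individuals": ["mouse_0", "mouse_1"],
        "keypoints": ["snout", "centre", "tail_base"],
        "space": ["x", "y"],
    },
    attrs={"fps": 30, "source_software": "SLEAP"},
)

# Label-based indexing: no need to remember axis positions.
snout_x = da.sel(individuals="mouse_0", keypoints="snout", space="x")
```

This is the pay-off of the structure above: selections read like the experiment ("mouse_0's snout, x coordinate") rather than like array arithmetic.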

xarray pros and cons

Pros:

  • label-based indexing
  • numpy-like vectorisation and broadcasting
  • pandas-like aggregation + groupby
  • dask + zarr integration for parallel computing

Cons:

  • not as widely known as numpy/pandas
  • learning curve

movement design aspirations

  • Modularity: small, reusable, composable functions
  • Flexibility: future-proof, extensible, configurable
  • Accessibility: API + GUI, multi-platform, docs, tutorials
  • Maintainability: Tests, CI/CD
  • Performance: parallelization