DeepLabCut Workflow Cont.

Create a Python Script#

  • Create a Python script for your DLC project.
$ nano <project_name>.py
  • Copy and paste the following script, then update cfg and videofile_path with the appropriate paths.
import tensorflow as tf
import os
# Run DeepLabCut in headless (DLClight) mode; this must be set before importing deeplabcut
os.environ["DLClight"] = "True"
import deeplabcut

# Path to the project's config.yaml
cfg = '<project_path>/config.yaml' # update

# Create the training dataset and train the network
deeplabcut.create_training_dataset(cfg, Shuffles=[1])
deeplabcut.train_network(cfg,
                         shuffle=1,
                         saveiters=1000,
                         displayiters=1000,
                         maxiters=200000)

# Evaluate the trained network
deeplabcut.evaluate_network(cfg, Shuffles=[1], plotting=True)

# Analyze the video(s) and create unfiltered labeled output
videofile_path = ['<video_file_path>'] # update
deeplabcut.analyze_videos(cfg, videofile_path, videotype='mp4')
deeplabcut.create_labeled_video(cfg, videofile_path, filtered=False)
deeplabcut.plot_trajectories(cfg,
                             videofile_path,
                             videotype='.mp4',
                             filtered=False)

# Rename the unfiltered trajectory plots so the filtered run does not overwrite them
prj_path = cfg.replace('/config.yaml', '/videos/')
os.rename(prj_path + 'plot-poses', prj_path + 'plot-poses-unfiltered')

# Filter the predictions (ARIMA filter) and create filtered labeled output
deeplabcut.filterpredictions(cfg,
                             videofile_path,
                             shuffle=1,
                             trainingsetindex=0,
                             filtertype='arima',
                             p_bound=0.01,
                             ARdegree=3,
                             MAdegree=1,
                             alpha=0.01)
deeplabcut.create_labeled_video(cfg,
                                videofile_path,
                                videotype='.mp4',
                                filtered=True)
deeplabcut.plot_trajectories(cfg,
                             videofile_path,
                             videotype='.mp4',
                             filtered=True)
os.rename(prj_path + 'plot-poses', prj_path + 'plot-poses-filtered')
  • When you are done, hit CTRL+O, ENTER, CTRL+X to save and exit.
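
Before wiring this script into a batch job, you can optionally confirm that the DLC-GPU environment can import DeepLabCut. A quick interactive check, assuming the DLC-GPU environment from the earlier setup steps and the headless DLClight mode used in the script above:
$ module load anaconda
$ conda activate DLC-GPU
$ python -c "import os; os.environ['DLClight'] = 'True'; import deeplabcut; print(deeplabcut.__version__)"
If this prints a version number, the script's imports should also work inside the batch job.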

Create a GPU Job#

  • Next, we will submit the Python script you created above as a GPU job. To do this, create a new SLURM submit script, <project_name>.slurm.
$ nano <project_name>.slurm
  • Copy and paste the following script, then replace <username>, <project_name>, and <email_address> with your username, project folder name, and email address.
#!/bin/bash
# Job resources: one V100 GPU node, 21 tasks, 16 GB memory, 8-hour wall time
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=1024
#SBATCH --job-name=TOXO-DLC
#SBATCH --partition=gpu
#SBATCH --gres=gpu
#SBATCH --constraint='gpu_v100'
#SBATCH --mem=16gb
#SBATCH --ntasks-per-node=21
#SBATCH --nodes=1
# Email notifications and job logs (%J is replaced with the job ID)
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<email_address>
#SBATCH --error=/work/chaselab/<username>/dlc-chaselab/<project_name>/%J.err
#SBATCH --output=/work/chaselab/<username>/dlc-chaselab/<project_name>/%J.out

# Point HOME and WORK at the /work project space and run the job from there
cd /home/chaselab/<username>
export HOME=/work/chaselab/<username>
export WORK=/work/chaselab/<username>
cd $WORK

# Load the required modules and activate the DLC-GPU conda environment
module load rclone
module load cuda
module load anaconda
conda activate DLC-GPU

# Run the analysis script, piping "y" to any interactive prompts
yes | python $WORK/dlc-chaselab/<project_name>/<project_name>.py

# Copy the project results to Google Drive with rclone
rclone copy $WORK/dlc-chaselab/<project_name> GoogleDrive:/dlc-chaselab/<project_name>
  • When you are done, hit CTRL+O, ENTER, CTRL+X to save and exit.
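
The batch script assumes a conda environment named DLC-GPU and an rclone remote named GoogleDrive (the names used above; yours may differ). Before submitting, you can optionally confirm that both exist under your account, for example:
$ module load anaconda rclone
$ conda env list | grep DLC-GPU
$ rclone listremotes
If either is missing, the job will fail at the conda activate or rclone copy step.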

Submit Your GPU Job#

  • To submit the job, run:
$ sbatch <project_name>.slurm
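
If the submission succeeds, sbatch prints the assigned job ID, for example:
Submitted batch job 1234567
(the number shown is only a placeholder). Keep the ID handy; it is the value substituted for %J in the .err and .out file names defined in the SLURM script.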

Check Job Status#

  • To check the status of your submitted job, run:
$ squeue -u <username>

Job state (ST) codes: R - Running: the job is currently executing; PD - Pending: the job is awaiting resource allocation.

If you want to monitor the job, you can add watch before the squeue command (i.e., $ watch squeue -u <username>).
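
Once the job is running, you can also follow its progress by tailing the output file defined in the SLURM script (substitute your own username, project name, and job ID):
$ tail -f /work/chaselab/<username>/dlc-chaselab/<project_name>/<job_id>.out
After the job finishes, sacct reports its final state and elapsed time (assuming job accounting is enabled on the cluster):
$ sacct -j <job_id>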