
New software facilitates research on animal movement

DeepLabCut uses Artificial Intelligence to track the movements of animals

Image: Ella Maru Studio

New software makes it easy and accurate to analyze videos of animal movements: DeepLabCut. Developed by a research team from the Universities of Tübingen and Harvard, the software is based on machine learning and can estimate the pose of animals in videos after a short training session. It is open-source software that can be used without programming knowledge: https://mousemotorlab.org/deeplabcut. The research results achieved with DeepLabCut have now been published in Nature Neuroscience.

"Until recently, it has been very time-consuming to track the movements of body parts in videos," explains Dr. Alexander Mathis, neuroscientist at the Bernstein Center for Computational Neuroscience at the University of Tübingen and Harvard University "DeepLabCut only needs few example images to learn how to determine the exact position of body parts in every frame of a video. Mathis led the project together with Dr. Mackenzie Mathis from the Rowland Institute at Harvard and Dr. Matthias Bethge, Professor for Computational Neuroscience and Machine Learning at the University of Tübingen.

It started with a double challenge: Mackenzie Mathis wanted to investigate how the brain controls the hand movements of mice and had recorded their grasping movements on video. Alexander Mathis had filmed a mouse under laboratory conditions as it followed an olfactory trail printed on a treadmill. He wanted to find out how mice piece together the individual scent fragments that arise with each inhalation in order to skilfully follow the trail. For both projects, the researchers had to measure the movement sequences in detail, but conventional methods in neuroscience did not provide satisfactory results.

Their colleague Matthias Bethge suggested retraining an existing network, a technique known as transfer learning. Impressive methods for the markerless recognition of human limbs had been developed in computer vision, and Bethge believed they could serve as a basis for the animal movement research. For technical reasons, they chose the DeeperCut algorithm as their starting point, as it was state of the art for analyzing human poses.
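To give a rough idea of what transfer learning means here, the following Python sketch takes a network pretrained on a large image dataset, keeps its feature extractor, and attaches a small new head that predicts one score map per body part. It is a generic illustration under assumed placeholder names and sizes, not the actual DeeperCut or DeepLabCut code.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a backbone pretrained on ImageNet (generic visual features).
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    # Drop the classification head; keep only the convolutional feature extractor.
    feature_extractor = nn.Sequential(*list(backbone.children())[:-2])

    # Add a small task-specific head: one score map per tracked body part.
    num_bodyparts = 4  # placeholder, e.g. snout, left ear, right ear, tail base
    keypoint_head = nn.ConvTranspose2d(2048, num_bodyparts, kernel_size=4, stride=2, padding=1)

    model = nn.Sequential(feature_extractor, keypoint_head)

    # Because the backbone already "knows" generic image features, only a few
    # labeled frames are needed to fine-tune it for the new pose-estimation task.
    frame = torch.randn(1, 3, 256, 256)   # one RGB frame (batch, channels, H, W)
    score_maps = model(frame)              # -> (1, num_bodyparts, 16, 16) heatmaps
    print(score_maps.shape)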

Accordingly, the researchers trained their own networks on images taken from videos of mice and fruit flies, manually marking the body parts to be tracked. After this training, the computer should be able to recognize and automatically mark the corresponding body parts in all other frames of the videos. The attempt was extremely successful.
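In practice, the steps just described roughly correspond to a handful of Python calls from the DeepLabCut package. The project name, experimenter, and video paths below are placeholders, and exact arguments may differ between software versions, so the project documentation remains the authoritative reference.

    import deeplabcut

    # Create a project around the videos to be analyzed (names and paths are placeholders).
    config_path = deeplabcut.create_new_project(
        "reaching-task", "researcher", ["videos/mouse_reach.avi"], copy_videos=True
    )

    # Extract a small set of example frames and label the body parts of interest by hand.
    deeplabcut.extract_frames(config_path)
    deeplabcut.label_frames(config_path)

    # Build a training set from the labeled frames, then train and evaluate the network.
    deeplabcut.create_training_dataset(config_path)
    deeplabcut.train_network(config_path)
    deeplabcut.evaluate_network(config_path)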

With a little training, DeepLabCut can extract the pose of almost any animal without requiring a model of the animal. The program is modular and very user-friendly; programming skills are helpful but not required. "The software is freely available," Mackenzie Mathis emphasizes. "We would like as many scientists as possible to benefit from it in their research." More than fifty laboratories are already using the program, for example to measure the gait of horses, to investigate octopuses, or to record the movements of surgical robots.
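Once a network has been trained, applying it to new recordings is similarly compact. The sketch below assumes the same placeholder project and video paths as in the workflow above.

    # Apply the trained network to new videos and save the estimated poses to disk.
    deeplabcut.analyze_videos(config_path, ["videos/new_session.avi"])

    # Optionally render a video with the detected body parts overlaid, for visual inspection.
    deeplabcut.create_labeled_video(config_path, ["videos/new_session.avi"])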

Without the earlier work on the DeeperCut algorithm, the researchers would not have been able to develop DeepLabCut so quickly. As a tribute to their colleagues' work, the name DeepLabCut is deliberately close to DeeperCut. "Sharing results, data and algorithms is essential for scientific progress," Bethge emphasizes. "We developed DeepLabCut as open-source software and are very happy about the positive feedback from our research colleagues in the neurosciences."

Translated and modified from a press release of the University of Tübingen (German original)

>> original press release (German text)

>> English press release published in The Harvard Gazette

Publication: 

Mathis et al., DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 20 August 2018. DOI: 10.1038/s41593-018-0209-y