
FIMTrack: An open source tracking and locomotion analysis software for small animals.

Benjamin Risse1, Dimitri Berh2, Nils Otto3, Christian Klämbt3, Xiaoyi Jiang2.   

Abstract

Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system capable of acquiring high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.


Year:  2017        PMID: 28493862      PMCID: PMC5444858          DOI: 10.1371/journal.pcbi.1005530

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.475


This is a PLOS Computational Biology Software paper.

Introduction

For most animals, the ability to move is essential for survival. A complex nervous system, built up from neurons and glial cells, allows sophisticated locomotion control. The analysis of locomotion in freely moving animals is crucial to gain insights into the functionality of the nervous system. In particular, Drosophila melanogaster larvae and Caenorhabditis elegans worms are popular model organisms in neuro- and behavioral biology, since sophisticated genetic tools and a well-established knowledge base provide advantages such as cell-specific manipulation and ease behavioral inference [1, 2]. Different tracking and locomotion analysis tools have been proposed, including commercially available (e.g. EthoVision [3]) and custom solutions (e.g. MWT [4], MAGAT [5], SOS [6]). In the past we have introduced a novel imaging technique called FIM [7] to gather high-contrast recordings of the aforementioned model organisms. The associated open-source tracking software FIMTrack has already been used in a variety of studies [7-11], and a video tutorial demonstrating its biological usability has been published in [12]. For example, FIMTrack has successfully been used to identify a central neural pathway for odor tracking in Drosophila [9] and to study the behavioral changes of uba-5 knockout C. elegans worms [13]. Here we elaborate on the technical aspects and algorithms implemented in FIMTrack for a better understanding of the resultant quantities. Additionally, we provide an accuracy quantification using manually labeled data. FIMTrack offers several advantages compared to state-of-the-art tracking tools: the assignment of animals across frames is implemented in a modular fashion, offering different combinations of assignment strategies and cost functions, which makes FIMTrack flexible enough for a wide range of model organisms, locomotion types, and camera properties.
FIMTrack extracts a large variety of posture- and motion-related features with a very high tracking accuracy, which is evaluated using labeled data. Our tracking program has an intuitive graphical user interface allowing the inspection of most of the calculated features, an option for manual tracking, and an easy integration of stimulus regions. FIMTrack does not rely on commercial packages and is available in source code and as pre-compiled binaries for Windows and Mac. The software is implemented in an object-oriented fashion to improve re-usability and enable extensibility. The main purposes of this paper are to:
- elaborate the algorithmic details of the widely used FIMTrack software to enable easier usage and extensibility;
- provide a ground-truth-based evaluation of the tracking performance;
- give an update on the current state of the program, featuring a variety of novel functionality compared to its first use in 2013 [7];
- introduce FIMTrack as a tool for communities dealing with other model organisms.

Design and implementation

FIMTrack is written in C++ and is easily extendable since the object-oriented programming paradigm is used. We utilize the OpenCV library and the Qt framework in combination with QCustomPlot (http://qcustomplot.com/) for image processing and the graphical user interface. Generally, FIMTrack consists of three main modules, namely the tracker, the results viewer, and the input-output (IO) module.

Tracker module

The main flow of the tracking module is given in Fig 1 and can be separated into image processing, model extraction, and tracking.
Fig 1

Flow chart of the tracking module.

Image processing

Let $I^t$ be the gray-scale image at time t and assume that N animals in total need to be tracked. Prior to further image analysis we compute a static background image $I^{bg}$ which includes almost all immovable artifacts. Since images produced by FIM have a black background with bright foreground pixels, and since we assume that an animal moves more than its own body length during the recording, the background image can be computed as the minimal pixel intensity value over time, resulting in

$I^{bg}(r, c) = \min_t I^t(r, c),$

where $I^t(r, c)$ is the pixel intensity at row r and column c at time t. Subsequently, the foreground image $I_f^t$, containing almost all objects of interest without the artifacts present in the background image, is obtained by background subtraction followed by thresholding,

$I_f^t(r, c) = I^t(r, c) - I^{bg}(r, c)$ if $I^t(r, c) - I^{bg}(r, c) > \tau$, and $I_f^t(r, c) = 0$ otherwise,

where τ is a user-set gray-value threshold. Given $I_f^t$, the contours of the animals are calculated using the algorithm proposed in [14], resulting in a set of contours $C = \{c_1, \dots, c_{\hat N}\}$. $\hat N$ might differ from N since animals can be in contact with each other (leading to merged contours), or impurities on the substrate which are not included in the background image lead to artifacts. However, the contours in C can be filtered to identify single animals by assuming that all imaged animals cover approximately the same area. The filtered set of contours is given by

$\hat C = \{c_i \in C \mid \lambda_{min} \le A(c_i) \le \lambda_{max}\},$

where $\lambda_{min} < \lambda_{max}$ are two user-defined thresholds and $A(c_i)$ is the contour area, given by the number of pixels enclosed by $c_i$. Both the contours with $A(c_i) > \lambda_{max}$, which are assumed to represent colliding animals, and contours with $A(c_i) < \lambda_{min}$, which are assumed to be artifacts, are ignored in further calculations.
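The background model, foreground thresholding, and area filter described above can be sketched in a few lines. FIMTrack itself is written in C++ with OpenCV; the pure-Python sketch below works on toy nested-list "images" and precomputed contour areas, so it only illustrates the logic:

```python
# Illustrative sketch of the background/foreground computation; the real
# implementation operates on OpenCV matrices.

def background(frames):
    """Per-pixel minimum over time: static artifacts survive, animals that
    move more than their own body length do not."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[min(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def foreground(frame, bg, tau):
    """Background subtraction followed by the gray-value threshold tau."""
    return [[v - b if v - b > tau else 0 for v, b in zip(frow, brow)]
            for frow, brow in zip(frame, bg)]

def filter_by_area(areas, lam_min, lam_max):
    """Keep contour areas in [lam_min, lam_max]; larger blobs are assumed
    to be colliding animals, smaller ones artifacts."""
    return [a for a in areas if lam_min <= a <= lam_max]

frames = [
    [[0, 90, 0], [10, 10, 10]],  # bright animal at (0,1), dim static row
    [[0, 0, 95], [10, 10, 10]],  # the animal has moved to (0,2)
]
bg = background(frames)
fg = foreground(frames[0], bg, tau=50)
```

The dim but static second row survives into the background image and is therefore removed from every foreground frame, while the moving bright pixel is kept.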

Model extraction

For each contour we compute a model representation of the associated animal. First, the spine is calculated based on curvature values, which are obtained for each point p on the contour using the first pass of the IPAN algorithm [15]. Given all curvature angles, the two regions with the sharpest acute angles are located using a sliding-window approach as illustrated in Fig 2a. The point with the sharpest overall mean angle is identified as the head h, and the point with the second sharpest mean angle (with an appropriate distance δ to h) is identified as the tail t. This assignment is done during the model extraction for each frame individually. In order to avoid head and tail switches and to ensure a reliable identification of these points for different organisms like larvae or C. elegans, the positions of the head and tail are refined in a post-processing step using posture and motion features over time (see Post processing section).
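A minimal sketch of the head/tail search, assuming the per-point curvature angles have already been computed (e.g. by the IPAN pass mentioned above). The window size and minimum head/tail separation are illustrative parameters, and smaller angles mean sharper contour points:

```python
def sharpest_points(angles, win=3, min_sep=3):
    """Return the indices of the two sharpest contour regions (head, tail)
    using a circular sliding-window mean over the curvature angles."""
    n, half = len(angles), win // 2
    def mean_at(i):
        return sum(angles[(i + k) % n] for k in range(-half, half + 1)) / win
    order = sorted(range(n), key=mean_at)
    head = order[0]
    # second sharpest window with an appropriate distance to the head
    tail = next(i for i in order[1:]
                if min(abs(i - head), n - abs(i - head)) >= min_sep)
    return head, tail

angles = [170] * 12            # mostly flat contour
angles[1:4] = [60, 40, 60]     # sharp tip: head candidate
angles[7:10] = [70, 50, 70]    # second sharp tip: tail candidate
```

The separation check prevents two adjacent points of the same sharp tip from being reported as both head and tail.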
Fig 2

Calculation of the animal representation.

(A) Example of the sliding window algorithm with a window size of 5. Contour points with sharp angles are given in red. (B) Animal representation including the notation given in the text. (C) Body bending is calculated based on Eq (3). An animal is not bent if γ = 180°, bent to the left if γ > 180°, and bent to the right if γ < 180°.

The initial h and t identification is used to split the contour into two halves $F_1$ and $F_2$. Without loss of generality, let $|F_1| \le |F_2|$. Then $F_2$ is re-sampled so that $|F_1| = |F_2|$ by utilizing linear interpolation between points in $F_2$. Now each point $f_k \in F_1$ corresponds to a unique point $g_k \in F_2$. The spine points $s_1, \dots, s_L$ are calculated by determining L − 2 equidistant pairs $(f_k, g_k)$ along $F_1$ and $F_2$ and by setting $s_k = (f_k + g_k)/2$. The radii $r_k$ are calculated similarly by $r_k = \|f_k - g_k\|/2$. In addition, the center of mass m is calculated based on the contour. As a result, each animal i at time t is defined by a model

$M_i^t = (c_i, s_1, \dots, s_L, r_1, \dots, r_L, m, h, t)$   (1)

which is depicted in Fig 2b.

Tracking

Considering a sufficient spatio-temporal resolution, tracking can be done by assigning animals between consecutive frames: all active animals at time t need to be associated with either one detected animal at time t + 1 or removed from the set of active animals. Mathematically, this assignment is known as a bipartite matching between the animals at time t and the detections at time t + 1. Let the cost for an assignment between an animal i at time t and an animal j at time t + 1 be $\kappa_{i,j}$; these costs form the cost matrix $K = (\kappa_{i,j})$. Three cost measurements are implemented in FIMTrack, the first being the Euclidean distance between the centers of mass (Fig 3a).
Fig 3

Different cost measurements.

(A) Center of mass-based cost. (B) Spine-based cost. (C) Contour-based cost.

The second is the Euclidean distance between the mid spine points (Fig 3b), and the third is the intersecting area in pixels of two consecutive contours (Fig 3c). Since the center of mass is extracted from the contour, its translation between consecutive frames is very smooth. In contrast, the middle spine point contains more jitter, but is forced to be inside the contour. Costs derived from the overlap of two consecutive contours fulfill both criteria, as they are smooth and based on pixels within the contours.

To solve the aforementioned assignment problem, two algorithms are implemented in FIMTrack. The first is the Hungarian algorithm [16, 17], which has one drawback: given distance-based costs, the algorithm will find assignments for all animals. For example, if animal i disappears at time t while another animal j appears at time t + 1, the algorithm will assign these two animals even if the Euclidean distance between them is very large. Thus, we check for each assignment whether at least one point of the model of animal i is inside the contour of animal j. Otherwise, animal i is considered inactive, the associated trajectory is terminated, and animal j is initialized as a new animal. The second algorithm follows a greedy pattern: it sequentially determines for each animal the best matching animal under one cost measurement (note that each match has to be unique). To exclude irrational assignments, this algorithm requires an additional threshold τgreedy specifying the maximal distance between two consecutive points if distance-based costs are used, or the minimal amount of overlap in the case of contour-based costs. The possibility to choose between multiple costs and optimization algorithms extends the range of organisms which can be analyzed, even at various spatial and temporal resolutions. For example, during peristaltic forward locomotion of Drosophila larvae, the Hungarian algorithm in combination with overlap-based costs is sufficient to associate the larvae over time (Fig 4a). In contrast, rolling larvae recorded with an unsuitable temporal resolution lead to false assignments using the Hungarian algorithm with overlapping contour costs: due to strong changes in the body bending and the relatively fast lateral locomotion, no overlaps can be detected within contours of consecutive frames (Fig 4b). Similarly, C. elegans moves in a snake-like motion, so that contour-overlap-based assignments may fail (Fig 4c).
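One possible greedy variant can be sketched as follows. Unlike FIMTrack's sequential per-animal matching, this sketch always takes the globally cheapest remaining pair, but it illustrates the role of the τgreedy threshold in rejecting irrational assignments:

```python
def greedy_assign(cost, tau_greedy):
    """cost[i][j]: cost of matching animal i at t to detection j at t+1.
    Returns {i: j}; unmatched animals terminate their trajectories and
    unmatched detections start new ones."""
    pairs = sorted((cost[i][j], i, j)
                   for i in range(len(cost)) for j in range(len(cost[0])))
    used_i, used_j, match = set(), set(), {}
    for c, i, j in pairs:
        # reject pairs above the threshold so a disappearing animal is not
        # force-matched to a far-away newly appearing one
        if c <= tau_greedy and i not in used_i and j not in used_j:
            match[i] = j
            used_i.add(i)
            used_j.add(j)
    return match
```

With distance-based costs, τgreedy is the maximal allowed displacement; with overlap-based costs one would instead require a minimal overlap, as described above.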
Fig 4

Examples of different assignment strategies.

Overlapping regions are given in red and locomotion is indicated by arrows. (A) During forward locomotion of Drosophila larvae the association can be done using overlap-based costs and the Hungarian algorithm. (B) Given rolling behavior of Drosophila larvae, an assignment using contour overlaps as costs for the Hungarian algorithm might fail due to inappropriate frame rates. (C) For tracking the snake-like motion of C. elegans, contour-based assignments might be insufficient.

After processing all frames, the overall path of an animal i is given by $P_i = (M_i^{t_1}, \dots, M_i^{t_2})$, where t1 specifies the first frame in which the animal appears and t2 indicates the last valid measurement (1 ≤ t1 ≤ t2 ≤ T; T represents the total number of frames).

Post processing

The initial definition of the larval orientation is based on the two regions with the sharpest acute angles. Due to the non-rigid body wall or low per-animal resolution, this assumption does not hold in some frames, leading to alternating head/tail assignments. Furthermore, other model organisms like C. elegans can exhibit curvature characteristics different from larvae, which makes it necessary to correct the initial h and t calculation. To adjust the assignments in (Eq 2), first distinct sequences in which the animal is not coiled are determined. Afterwards, a probability indicating whether the head/tail assignment is correct is calculated for all (h, t) tuples based on the following constraints:
- Locomotion conformity (i.e. the mid-point/head vector points in the direction of locomotion)
- Bending conformity (i.e. larvae move the head but not the tail during reorientations)
If the head and tail are swapped, the spine points s and the radii r are reversed, too, and all spine-point-derived features are recalculated. Note that, although these constraints are derived from larval locomotion, the resultant probability is still valid if a C. elegans worm moves forward in more than 50% of the frames. It is worth mentioning that, after identifying h and t, the position of these points along the spine is fixed by assigning each subsequent head/tail point based on the respective predecessor with the smallest Euclidean distance. Furthermore, even if head and tail are swapped, one click in the results viewer module is sufficient to correct the model throughout the entire recording (see Results viewer module).
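The locomotion-conformity constraint can be approximated by a simple majority vote over a non-coiled sequence. This is a hedged sketch; the actual probability computed by FIMTrack combines both constraints:

```python
def heads_probably_correct(mids, heads):
    """Vote per frame: does the midpoint-to-head vector point in the
    direction of the midpoint's displacement? If fewer than half the
    frames agree, head and tail are assumed swapped for the sequence."""
    votes = 0
    for k in range(1, len(mids)):
        move = (mids[k][0] - mids[k - 1][0], mids[k][1] - mids[k - 1][1])
        to_head = (heads[k][0] - mids[k][0], heads[k][1] - mids[k][1])
        if move[0] * to_head[0] + move[1] * to_head[1] > 0:
            votes += 1
    return votes / (len(mids) - 1) >= 0.5
```

As noted in the text, such a vote stays valid for C. elegans as long as the worm moves forward in more than half of the frames.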

Feature calculation

To quantify the locomotion in more detail, several primary, secondary, and motion-related features are calculated by FIMTrack.

Primary features. In addition to the representation of an animal (Eq 1), the area A and the perimeter P are calculated.

Secondary features. The main body bending angle γ is calculated based on the head h, the mid spine point $s_{mid}$, and the tail t. Given the vectors $v_1 = h - s_{mid}$ and $v_2 = t - s_{mid}$, Eq (3) is used to calculate the bending in degrees. As a consequence, an animal is not bent if γ = 180°, bent to the left if γ > 180°, and bent to the right if γ < 180° (Fig 2c). Since a user-specified number of spine points can be extracted, these points can be used to compute further bendings along the spine by applying Eq (3) with appropriate vectors (e.g. to quantify the stereotypical S-shape of C. elegans). Given a threshold τbend, an animal sweeps to the left if γ ≥ 180° + τbend and sweeps to the right if γ ≤ 180° − τbend. Furthermore, the spine length S is calculated by summing up the Euclidean distances between the head, all spine points, and the tail. In the case of coiled animals (Fig 5a), the spine calculation fails to extract the posture correctly, so that all spine-related features become unreliable. To mark these ambiguous situations, a binary indicator c? is introduced to determine coiled states. The coiled indicator is true if one of the following constraints is satisfied:
Fig 5

Two constraints are used to determine if an animal is in a coiled state.

(A) Larva in a coiled state. (B) Perimeter P to spine length L fraction. (C) Mid spine point radius rmid to perimeter P ratio.

- The perimeter-to-spine-length ratio converges to π ($P/S \to \pi$; Fig 5b)
- The circumference of the circle given by the mid spine point radius divided by the perimeter converges to 1 ($2\pi r_{mid}/P \to 1$; Fig 5c)
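The bending angle of Eq (3) and the two coiled-state constraints can be sketched as follows; the sign convention of the angle and the tolerance eps are illustrative assumptions:

```python
import math

def bending_angle(head, mid, tail):
    """Angle gamma at the mid spine point in degrees, in [0, 360);
    180 means a straight body."""
    a1 = math.atan2(head[1] - mid[1], head[0] - mid[0])
    a2 = math.atan2(tail[1] - mid[1], tail[0] - mid[0])
    return math.degrees(a1 - a2) % 360

def is_coiled(perimeter, spine_len, r_mid, eps=0.15):
    """True if P/S approaches pi or the circumference of the mid-radius
    circle approaches the perimeter."""
    return (abs(perimeter / spine_len - math.pi) < eps
            or abs(2 * math.pi * r_mid / perimeter - 1) < eps)
```

For a straight animal the perimeter is roughly twice the spine length, so neither coil constraint fires; for a tightly coiled animal the contour approaches a circle and both ratios approach their limits.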

Motion-related features. Most of the motion-related features are calculated based on the animal's center of mass m, since this point is calculated directly from the contour and does not depend on the spine calculation. The accumulated distance for an animal at time t is calculated by

$d_{acc}^t = \sum_{k=2}^{t} \| m^k - m^{k-1} \|_2$

and the distance to origin is given by

$d_{orig}^t = \| m^t - m^{t_1} \|_2.$

Furthermore, the velocity at time t is calculated by

$v^t = \| m^t - m^{t-1} \|_2 \cdot fps,$

where fps is the given frame rate. In a similar fashion, the acceleration is obtained by using consecutive velocities:

$a^t = (v^t - v^{t-1}) \cdot fps.$

To identify whether an animal is in a go phase, a binary indicator g? is used. The following constraints must hold for a go phase:
- The velocity v is above a certain threshold
- There is no strong bending γ of the animal's body
To avoid alternating go-phase measurements, a user-specified minimal go-phase length (τgo) is used to extract continuous phases. This implies that the number of consecutive g? = true measurements has to be ≥ τgo to classify a sequence as a go phase. An animal is in a reorientation phase if g? is false.

Stimulus-related features. In order to extend the capabilities of FIMTrack, it is possible to place different stimulus markers on the raw image in the results viewer. The following markers are supported:
- Point: a (x, y) position
- Line: a straight line segment
- Rectangle: an arbitrarily sized axis-aligned 2D rectangle
- Ellipsoid: an arbitrarily sized axis-aligned 2D ellipsoid
For all markers, the additional features distance to stimulus, bearing angle to stimulus, and is in stimulus region are calculated for each time point and animal. The distance to a stimulus is given by the Euclidean distance between the center of mass m of the animal and the point p representing the nearest point on the stimulus.
The bearing angle β is obtained by using Eq (3) with $v_1 = m - t$ and $v_2 = p - t$ (note that the tail t is used since it is not affected by head casts). Given a point stimulus, the necessary computations are straightforward. The nearest point between the animal and a line stimulus is obtained by performing an orthogonal projection of m onto the line defined by the line segment. If the projected point is not located on the line segment, the endpoint of the segment with the minimal Euclidean distance to the projection is taken as the nearest point. In the case of a rectangular stimulus, the nearest point is calculated as $p = \arg\min_k \| m - p_k \|_2$, where the $p_k$ are the orthogonal projections of m onto each of the four boundaries of the stimulus. Since the exact nearest point to m on an ellipsoid stimulus cannot easily be computed analytically, we use an approximation. Given an axis-aligned ellipse centered at $c = (c_x, c_y)$, first a line l going through m and c is determined. Next, the intersection points $p_1$ and $p_2$ between l and the ellipse are obtained, and the intersection point with the minimum distance to m is taken as the approximation of p.
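The segment projection and the ellipse approximation can be sketched as follows, assuming 2D points and m ≠ c for the ellipse case:

```python
def nearest_on_segment(m, a, b):
    """Orthogonal projection of m onto segment ab, clamped to the
    endpoints when the projection falls outside the segment."""
    ax, ay = a
    dx, dy = b[0] - ax, b[1] - ay
    t = ((m[0] - ax) * dx + (m[1] - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def nearest_on_ellipse_approx(m, c, rx, ry):
    """Approximate nearest point on an axis-aligned ellipse: intersect the
    line through m and the center c with the ellipse and keep the closer
    of the two intersection points (assumes m != c)."""
    dx, dy = m[0] - c[0], m[1] - c[1]
    s = ((dx / rx) ** 2 + (dy / ry) ** 2) ** -0.5  # scale onto the ellipse
    p1 = (c[0] + s * dx, c[1] + s * dy)
    p2 = (c[0] - s * dx, c[1] - s * dy)
    d1 = (m[0] - p1[0]) ** 2 + (m[1] - p1[1]) ** 2
    d2 = (m[0] - p2[0]) ** 2 + (m[1] - p2[1]) ** 2
    return p1 if d1 <= d2 else p2
```

The ellipse result is only an approximation of the true nearest point, matching the approximation described in the text; the error is largest for very eccentric ellipses.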

Results viewer module

The results viewer module offers the possibility to review the calculated features. The experimenter can load, display, and manually correct the posture and motion-related features or even manually track some animals if they could not be recognized automatically. If an animal model is adjusted manually all features are updated accordingly. The results viewer module itself is divided into three main parts, namely the image view (Fig 6a), the table view (Fig 6b) and the animal view (Fig 6c).
Fig 6

Results viewer module.

(A) Image view with the raw image, an overlay of the color coded features, and two stimuli marker with the notations given in the text. (B) Table view. (C) Animal view with both a cropped region of a single larva and plots of some features.

The image view provides a qualitative impression of the tracking results. Most of the calculated features are plotted color-coded for each animal as an overlay onto the raw images. Moreover, the user can manually change the calculated model or merge/remove trajectories. This is particularly helpful to resolve ambiguous situations like coiled or colliding animals. In the table view, all calculated features can be inspected in a table showing animals in columns and the associated features in rows. The animal view can be used to inspect the results for a single animal in more detail: both a cropped region of a single animal and plots of relevant features can be displayed simultaneously.

IO module

This module is responsible for reading and writing files. Currently, the image file formats TIFF and PNG are supported. After tracking, a CSV file containing all calculated features, a YML file including the same measurements as the CSV file along with additional information such as the processed images, and an image with color-coded trajectories are generated. It should be mentioned that the CSV format is standardized and can directly be imported into a variety of analysis programs (e.g. MATLAB, Excel, R).
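A sketch of the animals-in-columns CSV layout described for the results viewer, using only the standard library (the field names are illustrative, not the exact FIMTrack header strings):

```python
import csv
import io

def write_features(features):
    """features: {animal_id: {feature_name: value}} -> CSV text with one
    column per animal and one row per feature."""
    animals = sorted(features)
    names = sorted({n for f in features.values() for n in f})
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["feature"] + animals)
    for name in names:
        # leave a cell empty if an animal lacks a measurement
        w.writerow([name] + [features[a].get(name, "") for a in animals])
    return buf.getvalue()
```

Because the output is plain CSV, it can be read back with `pandas.read_csv`, MATLAB's `readtable`, or R's `read.csv` without any custom parser.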

Results

Most of the calculated features rely on a precise calculation of the center of mass (e.g. accumulated distance, velocity, bearing, etc.). Furthermore, good candidates to assess the quality of the calculated model are the mid spine point and the body bending (most of the secondary and motion-related features are derived from the underlying model). Here, we evaluate the accuracy of the software regarding these features.

Ground truth data

An image sequence with a resolution of 2040 × 2048 pixels acquired with a Basler acA2040-25gm camera equipped with a 16mm objective (KOWA LM16HC) containing 15 larvae over 211 frames was used to generate ground truth data (Fig 7a and 7b). All animals were associated with a larval model consisting of the head, tail, and 5 equidistant spine points associated with appropriate radii. Furthermore, the center of mass was calculated based on these models.
Fig 7

An image from the ground truth dataset and the manually generated model.

(A) Exemplary image used for ground truth generation. (B) Close-up of the dashed box from Fig 7a.

For the subsequent analysis we considered the 10 trajectories of larvae which could be tracked by FIMTrack over all 211 frames (i.e. larvae which crawled at least their own body length and did not collide).

Measured deviations

Deviations from the ground truth are determined by calculating the Euclidean distances between the tracking results and ground truth data for both the center of mass and the central spine point. For the body bending, absolute differences are used to determine the accuracy. Table 1 illustrates the deviations.
Table 1

Deviations for the examined parameters.

Body bending is given in degree, all other parameters are given in pixels. Max⋆ represents the values obtained by including outliers whereas OL gives the number of outliers.

| Deviations          | Mean (±Std)  | Median | Min  | Max   | Max⋆   | OL         |
| Center of mass      | 1.86 (±0.23) | 1.85   | 1.21 | 2.55  | 2.84   | 17 (0.80%) |
| Central spine point | 1.84 (±1.10) | 1.57   | 0.50 | 4.58  | 16.84  | 61 (2.89%) |
| Body bending        | 3.54 (±7.51) | 2.55   | 0.00 | 13.16 | 171.00 | 46 (2.18%) |

The deviations of the center of mass and of the central spine point are below 2 pixels in both mean and median. It should be noted that no sub-pixel accuracy is used during tracking, and thus the minimum possible error is 1 pixel if the displacement happens in either the x or y direction; for a diagonal displacement the minimum possible error is $\sqrt{2} \approx 1.41$ pixels. In combination with the area of the animals, which ranges from 232.50 to 454.50 square pixels, this leads us to the assertion that deviations below 2 pixels are caused by noise.

Center of mass

A detailed overview of the center of mass deviations is given in Fig 8a. Each boxplot represents the center of mass deviation for the respective larva. None of the measurements has a median deviation above 3 pixels. This suggests that the divergence is less the result of an inaccuracy of the tracking algorithm than of the previously performed image processing, and it is certainly also influenced by the non-contour-based center of mass extraction in the ground truth data.
Fig 8

Measured deviations.

(A) Center of mass deviations. (B) Central spine point deviations. (C) Body bending angle deviations. The mean divergence of the body bending is sketched by the light yellow area in the larva image at the top left corner. (D) The coiled structure of larva 6 (at t = 2, 3) causes outliers in the measurements (compare to Table 1). The head is given in red and tail is given in blue.


Central spine point

The central spine point location contains more outliers than the center of mass measurements. The maximal deviation (including outliers) between a measurement and the ground truth is 16.84 pixels (Table 1). As illustrated in Fig 8b, the measurements for larva 6 include the most outliers. These inaccuracies originate from several frames in which the animal is coiled, resulting in an erroneous spine calculation (Fig 8d). The median spine point deviation is below 2 pixels, and after removing the outliers the maximum distance decreases to 4.58 pixels (Table 1). To further study the accuracy, an overlay of the ground truth and calculated central spine point trajectories is given in Fig 9. Since no sub-pixel accuracy is used for tracking, the calculated path contains more straight lines interrupted by edges. However, the deviation from the ground truth path is rarely more than one pixel.
Fig 9

Resultant center of mass trajectories compared to the ground truth paths.

(A) Center of mass point of the ground truth and tracked larvae. (B) Close-up of the dashed box from Fig 9a.


Body bending

A high tracking precision can also be observed for the body bending quantification: the mean deviation is below 4° (Table 1), which is depicted in the top left corner of Fig 8c, where the head of the larva is given in red, the tail in blue, and the central spine point in black. The mean deviation is indicated by the light yellow area visible on both sides of the spine segment connecting the head and the tail. Taking a closer look at the plots in Fig 8c, it can be seen that again only larva 6 includes several frames with a very strong deviation. Since the deviations go up to 171°, head and tail are swapped in these frames, which can be observed in Fig 8d.

Availability and future directions

FIMTrack is freely available as a pre-built binary package for Windows and Mac at http://fim.uni-muenster.de. Further documentation and exemplary FIM images for testing purposes are available at the same website. An open-access video tutorial for experimental biologists illustrating the usage of our system with and without stimulation can be found in [12]. The source code of FIMTrack is licensed under the GNU GPLv3 and can be obtained from https://github.com/i-git/FIMTrack. Users implementing new features or extensions are encouraged to submit their work via GitHub's pull request mechanism for inclusion in a coming release. In the past, several other groups successfully used FIMTrack to differentiate between behavioral phenotypes (examples for Drosophila larvae can be found in [9, 18, 19] and for C. elegans in [13]). Furthermore, the software has been used as the basis for extensions addressing more specific biological questions [8, 10, 20]. It should be noted that FIMTrack was initially developed for FIM images and Drosophila larvae [7]. For example, the algorithms described above only segment the animals if the background is darker than the foreground (i.e. the animals). However, we successfully adapted the algorithm to track images recorded with transmitted-light illumination by inverting the images before passing them to FIMTrack. Furthermore, some of the extracted features, such as the stop-and-go classification, are only valid for larval behavior. Nevertheless, since the complete model of the animal (Eq 1) obtained after tracking, post-processing, and possibly some user adjustments is saved in a standardized file format (i.e. CSV), higher-level features for other model organisms can be derived easily. Finally, FIMTrack does not include a module to resolve colliding animals, so the identities of animals participating in a collision are lost and their trajectories terminate. After the collision ends, the associated animals receive new identities and are treated as newly appeared. In the future, we are going to extend FIMTrack by optimizing the tracking for other model organisms like flatworms. In order to overcome the problem of losing identities and behavioral quantities during animal-animal contact, we are working on a statistical approach capable of resolving colliding animals.

In order to quantify the accuracy of FIMTrack we have manually tracked 15 larvae over 211 frames.

The resultant quantities and the used evaluation script are provided in order to guarantee reproducibility of our results. Note that the images can be downloaded at http://fim.uni-muenster.de. (ZIP)

FIMTrack manual describing the work flow.

(PDF)

References

1. Sokolowski MB. Drosophila: genetics meets behaviour. Nat Rev Genet. 2001.

2. Risse B, Otto N, Berh D, Kiel M, Klämbt C. FIM2c: Multicolor, Multipurpose Imaging System to Manipulate and Analyze Animal Behavior. IEEE Trans Biomed Eng. 2016.

3. Gershow M, Berck M, Mathew D, Luo L, Kane EA, Carlson JR, Samuel ADT. Controlling airborne cues to study small animal navigation. Nat Methods. 2012.

4. Spink AJ, Tegelenbosch RA, Buma MO, Noldus LP. The EthoVision video tracking system--a tool for behavioral phenotyping of transgenic mice. Physiol Behav. 2001.

5. Colin E, Daniel J, Ziegler A, et al. Biallelic Variants in UBA5 Reveal that Disruption of the UFM1 Cascade Can Result in Early-Onset Encephalopathy. Am J Hum Genet. 2016.

6. Slater G, Levy P, Chan KLA, Larsen C. A central neural pathway controlling odor tracking in Drosophila. J Neurosci. 2015.

7. Swierczek NA, Giles AC, Rankin CH, Kerr RA. High-throughput behavioral analysis in C. elegans. Nat Methods. 2011.

8. Hirono K, Kohwi M, Clark MQ, Heckscher ES, Doe CQ. The Hunchback temporal transcription factor establishes, but is not required to maintain, early-born neuronal identity. Neural Dev. 2017.

9. Automated tracking of animal posture and movement during exploration and sensory orientation behaviors.

Authors:  Alex Gomez-Marin; Nicolas Partoune; Greg J Stephens; Matthieu Louis
Journal:  PLoS One       Date:  2012-08-09       Impact factor: 3.240

10.  Interactions among Drosophila larvae before and during collision.

Authors:  Nils Otto; Benjamin Risse; Dimitri Berh; Jonas Bittern; Xiaoyi Jiang; Christian Klämbt
Journal:  Sci Rep       Date:  2016-08-11       Impact factor: 4.379

Cited by (19 in total)

1.  (Review) A review of 28 free animal-tracking software applications: current features and limitations.

Authors:  Veronica Panadeiro; Alvaro Rodriguez; Jason Henry; Donald Wlodkowic; Magnus Andersson
Journal:  Lab Anim (NY)       Date:  2021-07-29       Impact factor: 12.625

2.  Drosophila βHeavy-Spectrin is required in polarized ensheathing glia that form a diffusion-barrier around the neuropil.

Authors:  Nicole Pogodalla; Holger Kranenburg; Simone Rey; Silke Rodrigues; Albert Cardona; Christian Klämbt
Journal:  Nat Commun       Date:  2021-11-04       Impact factor: 14.919

3.  TRex, a fast multi-animal tracking system with markerless identification, and 2D estimation of posture and visual fields.

Authors:  Tristan Walter; Iain D Couzin
Journal:  Elife       Date:  2021-02-26       Impact factor: 8.140

4.  Mechanosensory input during circuit formation shapes Drosophila motor behavior through patterned spontaneous network activity.

Authors:  Arnaldo Carreira-Rosario; Ryan A York; Minseung Choi; Chris Q Doe; Thomas R Clandinin
Journal:  Curr Biol       Date:  2021-09-02       Impact factor: 10.900

5.  The sulfite oxidase Shopper controls neuronal activity by regulating glutamate homeostasis in Drosophila ensheathing glia.

Authors:  Nils Otto; Zvonimir Marelja; Andreas Schoofs; Holger Kranenburg; Jonas Bittern; Kerem Yildirim; Dimitri Berh; Maria Bethke; Silke Thomas; Sandra Rode; Benjamin Risse; Xiaoyi Jiang; Michael Pankratz; Silke Leimkühler; Christian Klämbt
Journal:  Nat Commun       Date:  2018-08-29       Impact factor: 14.919

6.  Prepulse inhibition in Drosophila melanogaster larvae.

Authors:  Yutaro Matsumoto; Kazuya Shimizu; Kota Arahata; Miku Suzuki; Akira Shimizu; Koki Takei; Junji Yamauchi; Satoko Hakeda-Suzuki; Takashi Suzuki; Takako Morimoto
Journal:  Biol Open       Date:  2018-09-27       Impact factor: 2.422

7.  Intrinsic control of muscle attachment sites matching.

Authors:  Alexandre Carayon; Laetitia Bataillé; Gaëlle Lebreton; Laurence Dubois; Aurore Pelletier; Yannick Carrier; Antoine Wystrach; Alain Vincent; Jean-Louis Frendo
Journal:  Elife       Date:  2020-07-24       Impact factor: 8.140

8.  Regulation of subcellular dendritic synapse specificity by axon guidance cues.

Authors:  Emily C Sales; Emily L Heckman; Timothy L Warren; Chris Q Doe
Journal:  Elife       Date:  2019-04-23       Impact factor: 8.140

9.  Xrp1 genetically interacts with the ALS-associated FUS orthologue caz and mediates its toxicity.

Authors:  Moushami Mallik; Marica Catinozzi; Clemens B Hug; Li Zhang; Marina Wagner; Julia Bussmann; Jonas Bittern; Sina Mersmann; Christian Klämbt; Hannes C A Drexler; Martijn A Huynen; Juan M Vaquerizas; Erik Storkebaum
Journal:  J Cell Biol       Date:  2018-09-12       Impact factor: 10.539

10.  Synthetic Light-Activated Ion Channels for Optogenetic Activation and Inhibition.

Authors:  Sebastian Beck; Jing Yu-Strzelczyk; Dennis Pauls; Oana M Constantin; Christine E Gee; Nadine Ehmann; Robert J Kittel; Georg Nagel; Shiqiang Gao
Journal:  Front Neurosci       Date:  2018-10-02       Impact factor: 4.677

