Abstract
BACKGROUND: Accurate determination of mouse positions from video data is crucial for various types of behavioral analyses. While detection of body positions is straightforward, the correct identification of nose positions, usually more informative, is far more challenging. The difficulty is largely due to variability in mouse postures across frames.
Keywords: Position analysis; Preference tests; Rodent behavior; Video data
Year: 2017 PMID: 28506280 PMCID: PMC5433172 DOI: 10.1186/s12915-017-0377-3
Source DB: PubMed Journal: BMC Biol ISSN: 1741-7007 Impact factor: 7.431
Fig. 1. The main modules in OptiMouse. The left side shows a workflow of the main analysis stages. The right image shows the main OptiMouse interface. Each of the four buttons opens a GUI for the corresponding stage
Key stages in OptiMouse
| Stage | Goal | Comments |
|---|---|---|
| Preparation | Define arena boundaries | The user must define the region of interest for analysis and provide actual arena dimensions. |
| Preparation | Conversion of raw video data to MATLAB files | Once arenas are defined, the user can initiate conversion. Conversion can be run in batch mode, allowing definition of multiple arenas and then running conversion for all of them in a single operation. |
| Detection | Determine optimal detection settings | The user may simply accept the default algorithm and initiate detection; however, in most cases, detection of body and nose positions can be significantly improved by small adjustments of the detection algorithm parameters and, often, by definition of multiple algorithms. In this stage, the movie is browsed to find detection settings that minimize errors in the detection of body and nose positions. The goal is to define a minimum number of algorithms so that at least one is appropriate for each frame of the movie. If multiple settings are defined, one of them must be specified as the default. |
| Detection | Running detection | During detection, OptiMouse finds mouse positions in each frame according to the settings defined by the user. Like conversion, detection can be time consuming, and thus a batch mode is provided: detection settings can be defined for multiple sessions and then run in a single batch operation. Note that the output of the detection stage, the position files, can be used for analysis immediately, either manually or via the OptiMouse analysis interface. |
| Reviewing (optional) | Testing and, if needed, correcting the detection; annotation | If this stage is bypassed, the default setting is applied to all frames. The reviewing stage allows overriding of the positions produced by the default algorithm: individual frames or groups of frames can be assigned any of the non-default settings defined by the user, or set manually. OptiMouse provides multiple tools to browse the video and to locate individual frames with specific attributes. The impact of reviewing on the final position data depends on the number of errors associated with the default detection setting. A major effort has been made in OptiMouse to provide tools for efficiently locating problematic frames; nevertheless, this stage may be time consuming. During reviewing, it is also possible to add annotations to the video. |
| Analysis | Derive meaningful behavioral parameters from the position data | Several analysis options are provided at this stage via the graphical user interface. Additionally, task-specific analyses can be implemented by analyzing the Results file. In this stage, the user can define zones (within the arena) and analyze position data with respect to these zones. |
Frequently asked questions concerning the key properties of OptiMouse
| Question | Answer |
|---|---|
| What software is required to run OptiMouse? | MATLAB with the Image Processing Toolbox. OptiMouse has been tested on MATLAB releases 2015b, 2016a, and 2016b. OptiMouse is not compatible with some older versions of MATLAB, which use different coding conventions for graphical interfaces. |
| Which operating systems does OptiMouse run on? | OptiMouse was developed under Windows 8. However, it should be compatible with other operating systems that run MATLAB. |
| Is familiarity with MATLAB required? | Running OptiMouse requires only very basic familiarity with MATLAB. At a minimum, the user must set the path and call the program from the command line. More advanced data analyses, as well as modifications and extensions of the code, naturally require MATLAB programming skills. |
| Which video formats are supported? | OptiMouse was tested with the mp4, mpg, and wmv formats. However, any format supported by the MATLAB VideoReader object should work. The list of supported formats is obtained by typing "VideoReader.getFileFormats" at the MATLAB command line (see the first sketch after this table). On MATLAB 2016b running on Windows, this yields: .asx (ASX File), .avi (AVI File), .m4v (MPEG-4 Video), .mj2 (Motion JPEG2000), .mov (QuickTime movie), .mp4 (MPEG-4), .mpg (MPEG-1), and .wmv (Windows Media Video). |
| Can OptiMouse analyze behavior in real time? | No. OptiMouse is offline analysis software. |
| Can OptiMouse track more than one mouse at a time? | No. OptiMouse is designed for analyzing the behavior of a single mouse. |
| Which behavioral tests is OptiMouse suitable for? | OptiMouse is suitable for any test that involves a single mouse in a stationary arena. Standard tests that fall into this category include place preference tests, open field behavior, plus mazes, and three-chamber tests. |
| Does OptiMouse detect body parts other than the body center? | Yes. A key aspect of OptiMouse is the detection of body-center and nose positions. Some of the detection algorithms also detect the tail end and tail base as intermediate stages of nose detection; however, the coordinates of these positions are not used in other analyses. |
| Does OptiMouse detect body postures? | No. OptiMouse does not provide automatic detection of body postures, although manual annotation on a frame-by-frame basis is possible. |
| How does OptiMouse detect mouse positions? | OptiMouse includes several detection algorithms, each with several parameters that can be modified by the user. All of them rely on contrast (after the movie has been transformed to grayscale) between the mouse and the arena. Most built-in algorithms employ "peeling" of the mouse image perimeter, which allows detection of the tail and then assists detection of the nose (see the peeling sketch after this table). A detailed explanation is provided in the user manual. In addition, OptiMouse allows incorporation of custom-written detection functions. |
| Are there requirements on the mouse's coat color? | Yes. The coat color must be distinct from the arena's background. Ideally, the entire mouse should be darker or lighter than the arena; a black mouse in a white arena, or vice versa, is the ideal scenario. Small patches of a different color on the body should not significantly impair detection, but the current detection algorithms will not perform well with a two-colored black-and-white animal on a gray background. |
| Can custom detection algorithms be added? | Yes. OptiMouse is designed to incorporate other detection algorithms. Custom algorithms are "declared" in one of the OptiMouse folders and are then essentially incorporated into the user interface. Custom functions accept image data and other optional parameters, including user-defined parameters; it is even possible to set user-defined parameters (which are not part of the current detection algorithm) graphically via the OptiMouse GUI. Custom-written algorithms must, at a minimum, return body and nose positions (see the detector skeleton after this table). See the user manual for a detailed explanation of how to write and incorporate custom algorithms. |
| How long does processing a video take? | The answer obviously depends on many factors. Under ideal conditions, which require minimal user intervention, the entire procedure for a 10-minute video may take about 20–30 minutes; the actual values also depend on video frame rate, resolution, and computer processing speed. Most of the time is spent on automated processing that requires no user input. Such processing can be performed in batch mode, so the actual user time (with minimal intervention) is a few minutes. |
| Which stages require the most user time? | The two most time-consuming stages are setting detection parameters and reviewing the video after detection has been performed. In both cases, the required time depends on video quality: videos in which the mouse is always easily separated from the arena facilitate both stages, whereas videos with variable conditions and poor image signal-to-noise ratios require more tweaking of the detection settings. Setting detection parameters involves browsing the video and deciding on a number of detection algorithms; this typically requires a few minutes per video (settings can be saved and applied to other movies with similar attributes, reducing the time required by the user). Reviewing the movie can be a lengthy process and depends on the performance of the detection algorithms and the desired accuracy. The reviewing stage includes multiple tools to easily identify frames with erroneous detection, as well as frames in which the mouse is in particular parts of the arena, allowing the user to focus reviewing efforts on the frames that matter most. |
| Can OptiMouse data be synchronized with other data streams? | OptiMouse does not include a built-in synchronizing signal. However, the Results file contains a frame-by-frame account of various parameters, such as body and nose position, body angle, speed, presence in a given zone, and occurrence of annotated events. The Results file also contains a time stamp for every frame, so if the first video frame is synchronized with other non-video data, all other OptiMouse values can be aligned as well (see the alignment sketch after this table). |
| Can the OptiMouse code be modified and extended? | Yes. OptiMouse is hosted on GitHub, with the hope that this will facilitate community-based development of the code. OptiMouse is written in MATLAB, and individual code files (m-files) are annotated. The user manual provides a description of file formats, algorithms, and data conversions. The graphical user interface was designed with the MATLAB GUIDE tool, which can be used to modify the existing interfaces and the code associated with the various controls. |
| What types of analysis does OptiMouse provide? | The analysis stage distinguishes between zone-independent and zone-dependent analyses. Without defining zones, OptiMouse can generate graphical displays of positions (as tracks or heatmaps), speeds, and body angles as a function of time. If events have been annotated, their total occurrence during the session or their distribution in time can also be plotted. In addition, the Analysis GUI allows the definition of zones (of arbitrary number and shape within the arena). Once zones are defined, positions and events can also be analyzed as a function of zone entries (see the zone sketch after this table). |
| Can position data be analyzed outside of OptiMouse? | Yes. The Position file contains frame-by-frame mouse positional data as well as user-annotated events, and the Results file additionally contains zone-related information. Both are MATLAB data files (*.mat) that can be used for more elaborate analyses. A detailed description of the Results file format is provided in the user manual, which also includes example code for the analysis of freezing episodes (a similar sketch appears after this table). |
| Does OptiMouse compare data across sessions? | No. However, OptiMouse contains an option for adding tags to individual files. These tags allow files to be grouped for statistical comparisons between groups. See the manual for more details on the use of "experiment tags". |
| Where can users obtain help? | OptiMouse includes a detailed user manual. For issues that are not covered in the manual, contact the corresponding author of this manuscript by email. |
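As noted in the formats entry above, the set of readable formats is platform dependent and can be queried directly in MATLAB; VideoReader.getFileFormats is a documented MATLAB call:

```matlab
% List the video formats VideoReader supports on this platform
fmts = VideoReader.getFileFormats();   % array of audiovideo.FileFormatInfo objects
disp(fmts)                             % prints extensions and descriptions
```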
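The "peeling" mentioned in the detection entry can be illustrated with plain morphological erosion. This is a minimal sketch of the principle only, not the actual OptiMouse implementation; grayFrame, threshold, and nPeelCycles are assumed variables.

```matlab
% Sketch of perimeter "peeling": repeatedly strip one pixel layer from the
% thresholded mouse image. Thin structures such as the tail vanish first,
% leaving the body blob. Illustrative only; see the user manual for the
% algorithms actually used by OptiMouse.
bw = grayFrame < threshold;              % binary mask of a dark mouse on a light arena
for k = 1:nPeelCycles                    % number of peeling cycles (a user parameter)
    bw = imerode(bw, strel('disk', 1));  % remove the outermost pixel layer
end
```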
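A custom detection function might follow the skeleton below. The interface OptiMouse actually expects is specified in the user manual; the function name, parameter structure, and detection logic here are illustrative assumptions.

```matlab
function [body, nose] = myDetector(grayFrame, params)
% Hypothetical custom detector: threshold the frame, keep the largest
% object, take its centroid as the body center, and use the pixel farthest
% from the centroid as a crude nose estimate.
bw = grayFrame < params.threshold;             % dark mouse on a light arena
bw = bwareafilt(bw, 1);                        % keep the largest connected component
s  = regionprops(bw, 'Centroid', 'PixelList');
body = s.Centroid;                             % [x y] body center
d = sqrt(sum(bsxfun(@minus, s.PixelList, body).^2, 2));
[~, i] = max(d);                               % farthest pixel from the centroid
nose = s.PixelList(i, :);                      % crude nose proxy
end
```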
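Aligning OptiMouse output with external data reduces to shifting the per-frame time stamps. In this sketch, the file name and the frameTimes field are assumptions; the actual Results file format is described in the user manual.

```matlab
% Hypothetical sketch: place OptiMouse frame times on an external clock
R      = load('session1_results.mat');             % Results file (assumed name)
tFirst = 12.35;                                    % external-clock time (s) of the first frame
tExt   = R.frameTimes - R.frameTimes(1) + tFirst;  % frame times on the external clock
% every per-frame value (positions, angles, zones, events) now aligns to tExt
```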
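Zone-dependent measures boil down to point-in-polygon tests. Below is a minimal sketch using MATLAB's inpolygon; the position fields and zone vertices are assumptions.

```matlab
% Fraction of frames in which the nose falls inside a polygonal zone
in = inpolygon(R.noseX, R.noseY, zoneVertX, zoneVertY);   % logical value per frame
fprintf('Nose inside zone: %.1f%% of frames\n', 100 * mean(in));
```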
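In the spirit of the freezing-episode example mentioned above, a custom analysis of the Results file might look as follows; all field names and the speed threshold are assumptions.

```matlab
% Hypothetical sketch: detect immobility from body-center positions
R     = load('session1_results.mat');
speed = hypot(diff(R.bodyX), diff(R.bodyY)) ./ diff(R.frameTimes);
still = speed < 0.5;                 % assumed immobility threshold (position units/s)
fprintf('Immobile in %.1f%% of frame intervals\n', 100 * mean(still));
```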
Fig. 2. Schematic description of the session preparation process. Preparation involves spatial definition of one or more arenas and size calibration, as well as optional removal of irrelevant video sections. Video data for each of the sessions is converted to grayscale images
Fig. 3. The Prepare GUI. The Prepare GUI is shown after definition of three arenas (named left, center, and right). The GUI for arena definition is accessed via the Define button (see the manual for details)
Fig. 4. Schematic of the detection stage. In very broad terms, one or more detection settings (up to six) are applied to each of the frames of the video. Each setting involves several user-defined parameters and potentially also user-specified algorithms. The selection among the various settings is applied in the review stage
Fig. 5. The Detect GUI. The Detect GUI is shown with one setting defined
Fig. 6. The detection process. a The key stages of nose and body detection. b Examples of detection of various frames in a single session. c Effects of changing the detection threshold. d Effects of changing the number of peeling cycles
Fig. 7. Examples of incorrect detection (left image in each panel) and their correction (right images). Some detection failures can be fixed by adjusting the detection threshold (a–c), but others require more extensive adjustments. In (d), the mouse is grooming its tail, with the nose positioned close to the tail base. Such cases are difficult to detect consistently in static images but are apparent when viewed in the context of a movie. Although it is easy to modify the parameters to achieve correct detection in this frame, it is challenging to generate an algorithm that will reliably identify the nose in such cases. In some cases, application of another algorithm is required; for example, algorithm 7 (Additional file 1) is suitable when the tail is not included in the thresholded image. This is indeed the remedy for the examples in (d–g), sometimes combined with a modified threshold. In (f), the left image shows an obvious failure, with the tail detected as the nose. Detection is improved when the algorithm is changed, yet is still not perfect, since the shadow cast by the nose is detected as the nose. This problem is beyond the scope of the built-in algorithms, as the shadow is darker than the nose and just as sharp
Fig. 8. A schematic overview of the reviewing stage. The graphic on top illustrates the operations that can be applied to each frame. The bottom panels show that such operations can be applied to individual frames, to a continuous segment of frames, and to frames sharing common attributes
Fig. 9. The Review GUI. The Review GUI is shown after four settings have been defined during the detection stage
Fig. 10. Examples illustrating the application of detection settings. a Application of different (predefined) settings to a single frame. The active setting is indicated by a larger circle denoting the nose position, a square denoting the body center, and a line connecting them. In the leftmost frame, the active setting is the default (first) setting; in each of the other frames, a different setting is active. b A sequence of frames with incorrect detection. In this example, the default method fails for the entire segment of frames. c The solution involves three stages. First, a manual position, indicated in yellow, is defined for the first frame in the sequence (4840). Next, setting 3 (pink) is applied to the last frame in the sequence (4845). Finally, the set of frames is defined as a segment (Additional file 1), and the positions within it are interpolated. The interpolated positions are shown in ochre (frames 4841–4844). See the manual for a detailed description of the two available interpolation algorithms
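The segment interpolation in (c) amounts to weighting the two trusted anchor positions. Below is a minimal sketch of the linear variant only (OptiMouse provides two interpolation algorithms, described in the manual); nose0 and nose1 are assumed [x y] positions at the anchor frames.

```matlab
% Linear interpolation of nose positions between two trusted anchor frames
f0 = 4840; f1 = 4845;                    % anchor frames, as in the example above
w  = ((f0+1:f1-1)' - f0) / (f1 - f0);    % weights for the in-between frames
noseI = (1 - w) * nose0 + w * nose1;     % 4x2 matrix: one [x y] row per frame 4841-4844
```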
Fig. 11. Examples of some parameter views. In all cases, the current frame is indicated by a diamond and is shown to the right of each view. a Views associated with position. b Length versus mean intensity of the detected object. c Comparison of angles detected by each of two settings. The settings that apply to each axis are indicated by the label colors (blue and green denoting the first and second settings, respectively). d View showing body angle change as a function of frame number. Extreme change values, as shown in this example, often reveal erroneous detections. e View showing the setting associated with each frame
Fig. 12. Procedure for marking frames with particular attributes. In this example, frames associated with particular nose positions are marked and then examined for abrupt changes in direction. a One frame showing the positions of odor plates in the arena. b View showing nose positions before marking. c The same view during the process of marking a subset of frames near the upper odor plate. d In this view, the marked frames are highlighted. e After switching to a different view, the marked frames remain highlighted. In the present example, this allows identification of frames that are both associated with particular positions and with high values of body angle change. f Selection of one such dot reveals a frame in which the mouse is near the upper plate and the tail is mistaken for the nose (g)
Fig. 13. Applying a setting to a set of frames. a Three frames that show a similar failure of the first setting cluster together in this view (b). Applying a different setting to all these frames using a polygon (c) also corrects detection in other frames in the cluster (d)
Fig. 14. Automatic correction of position detection errors. a Sequence of frames with a transient detection failure. b Schematic of angle changes in the sequence of frames; the magnitude of the angle changes is shown qualitatively. c Actual angle changes before correction. d Angle changes after correction. e Interpolated positions in the two frames that were initially associated with errors
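Flagging candidate frames for such corrections can be as simple as thresholding the frame-to-frame body-angle change; this sketch assumes an R.bodyAngle field in degrees, which is not guaranteed by the actual file format.

```matlab
% Sketch: flag frames with abrupt body-angle changes, which often mark
% detection errors (cf. Figs. 11d and 14)
dA      = abs(diff(unwrap(R.bodyAngle * pi/180))) * 180/pi;  % per-frame change (deg)
suspect = find(dA > 90) + 1;          % frames following a jump larger than 90 degrees
```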
Fig. 15. Schematic of possible types of analysis. The flow chart provides a very general description of possible analyses. Results can be shown as figures, saved as MATLAB data files, and, for some analyses, displayed at the command prompt
Fig. 16. The Analysis GUI after eight zones have been defined
Fig. 17. Examples of some graphical analyses. a The arena with eight zones defined (the same zones shown in Fig. 16). b Tracks defined by the body center. Colored zones indicate zone entries; dots representing positions assume the colors of the corresponding zones. c Tracks made by the nose. d Heatmap of zone positions. e Zone occupancy as a function of time. Each row corresponds to one of the zones. When the nose is inside a specific zone at a particular frame, this is indicated by a dot; dots in this display are so dense that they appear as lines. f Enrichment score (of the nose) as a function of time. g Total nose time in each zone. h Enrichment score at the end of the session
Fig. 18. Comparison of positional analysis with and without reviewing for three different videos. Each video is shown in one column, and each row represents one type of analysis. The first row from the top shows body position coordinates. The second row shows enrichment scores of body positions in each of five different zones, whose coordinates relative to the arena are shown at the bottom of the figure. The third and fourth rows from the top are similar to the upper rows, except that nose, rather than body, positions are shown. Each panel contains two plots: the plots on the left (in black) show the results using the default setting, while the plots on the right (blue) show the same analyses after the application of non-default settings, including manual settings