
A bio-mimetic miniature drone for real-time audio based short-range tracking.

Roei Zigelman1, Ofri Eitan2, Omer Mazar3, Anthony Weiss1, Yossi Yovel2,3,4.   

Abstract

One of the most difficult sensorimotor behaviors exhibited by flying animals is the ability to track another flying animal based on its sound emissions. From insects to mammals, animals display this ability in order to localize and track conspecifics, mates or prey. The pursuing individual must overcome multiple non-trivial challenges, including detecting the sounds emitted by the target, matching the input received by its (mostly) two sensors, localizing the direction of the sound target in real time, and then pursuing it. All this has to be done rapidly, as the target is constantly moving. In this project, we set out to mimic this ability using a physical bio-mimetic autonomous drone. We equipped a miniature commercial drone with our in-house 2D sound-localization electronic circuit, which uses two microphones (mimicking biological ears) to localize sound signals in real time and steer the drone in the horizontal plane accordingly. We focus on bat signals because bats are known to eavesdrop on conspecifics and follow them, but our approach could be generalized to other biological signals and other man-made signals. Using two different experiments, we show that our fully autonomous aviator can track the position of a moving sound-emitting target and pursue it in real time. Building an actual robotic agent forced us to deal with real-life difficulties that also challenge animals. We thus discuss the similarities and differences between our approach and the biological one.


Year:  2022        PMID: 35259156      PMCID: PMC8932603          DOI: 10.1371/journal.pcbi.1009936

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.475


Introduction

Many animals are hypothesized to use real-time audio processing to localize and track conspecifics, mates, or prey [1-4]. Some organisms have even evolved special, highly accurate ears for this purpose [3]. Especially noteworthy are animals that track sounds emitted by other aviators that are themselves in flight. These animals must apply rapid sensorimotor algorithms in real time. Mosquitoes, for example, rely on sounds generated by their conspecifics' wingbeats to maintain a cohesive flying group [4,5]. Some bird species recruit conspecifics for collective hunting using vocalizations [6,7], while others migrate at night, supposedly using vocalizations to remain in a group [8]. Bats are hypothesized to move in groups in search of prey by eavesdropping on each other's echolocation signals [9-12].

In all these cases, the aviators must detect the desired sounds, localize them, and adapt their own flight trajectories within tens of milliseconds. Such animals thus require especially fast sensorimotor algorithms, which are both challenging to reveal and potentially beneficial to mimic. Some of the specific challenges include detecting the correct sound signals within background noise, matching the sound signals arriving at the two ears to estimate source direction, and applying relevant algorithms to control the necessary flight maneuver.

Bio-mimetic robotics [13] has become a popular approach to study biological systems. Two of its main goals are (1) improving current man-made technologies by mimicking animals' abilities to sense and move, and (2) understanding and testing models of animal behavior with technology that mimics animals' performance while adhering to biological constraints. This second goal has an advantage over theoretical (computer) models, as building an actual device requires solving real-life problems, such as overcoming natural external noise, which can be ignored or mitigated in a computer model. Various previous studies have applied a bio-mimetic approach to study sonar-based movement and localization [14-16]. Others have used this approach to produce bio-inspired sound localization [17,18].

In this study, we aimed to replicate the ability of certain flying animals to detect, localize and track sounds emitted by another flying aviator in real time. We equipped a miniature ~30 g drone (similar in weight to many bat species) with a pair of synchronized ultrasonic microphones and a simple microcontroller, and we developed a fast sensorimotor approach to guide the drone in the direction of a moving platform emitting bat sound signals. In order to keep our approach bio-mimetic, we restricted ourselves to using only two microphones positioned in the same plane with a very short distance between them. Using more microphones, or spacing them farther apart, would improve localization but would be less biological. We specifically focused on biological bat echolocation signals, but our approach could be generalized to other signals as well. In brief, our approach included: (1) an analog high-pass filter circuit that removed most of the low-frequency noise (including the rotor noise) and amplified the bat signals relative to the background noise; (2) detection of the signals by a simple threshold-crossing algorithm; (3) measurement of the time difference of arrival (TDoA) between the two microphones (simulating the ears), using cross-correlation to assess the direction of the sound source; and (4) steering the drone towards the sound source by turning towards it while maintaining a constant flight speed, controlled by a microcontroller (see Methods for more details). After developing the system, we performed two experiments to examine its abilities. We show that a rather simple approach can pursue a moving sound source in real time, and we discuss how our approach differs from that of echolocating bats and other mammals, and how it could be further generalized for both mimetic and technological aims.
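To make this four-step pipeline concrete, the following is a minimal sketch in C (the language of the STM32 firmware described in Methods) of how such a sense-and-steer loop could be organized. All function names and constants here are illustrative assumptions, not the authors' actual firmware:

```c
/* Illustrative sketch of the sense-and-steer loop described above.
 * All names and helpers are hypothetical; see Methods for the actual
 * hardware parameters (144 kHz sampling, ~28 ms windows, 0.65 V threshold). */
#include <stdint.h>

#define WINDOW_SAMPLES     4032   /* ~28 ms at 144 kHz */
#define DETECT_THRESHOLD_V 0.65f  /* volts above microphone DC offset */

extern int  threshold_crossed(float volts);                        /* step 2 */
extern void capture_window(int16_t *left, int16_t *right, int n);  /* ADC+DMA */
extern float estimate_aoa(const int16_t *left, const int16_t *right, int n);
extern void send_turn_command(float angle_rad);                    /* UART to drone */

void sensorimotor_loop(void)
{
    static int16_t left[WINDOW_SAMPLES], right[WINDOW_SAMPLES];

    for (;;) {
        /* Steps 1-2: the analog HPF removes rotor noise in hardware;
         * a threshold crossing on the filtered signal triggers recording. */
        if (!threshold_crossed(DETECT_THRESHOLD_V))
            continue;
        capture_window(left, right, WINDOW_SAMPLES);

        /* Step 3: cross-correlate the two channels to estimate the TDoA,
         * then convert it to an angle of arrival. */
        float theta = estimate_aoa(left, right, WINDOW_SAMPLES);

        /* Step 4: turn toward the source while keeping forward speed. */
        send_turn_command(theta);
    }
}
```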

Results

The experiments were conducted in a large anechoic room (5 × 4.5 × 2.5 m). In the first experiment, we tested the system's ability to measure the direction of an in-plane (2D) moving sound source and to turn towards it in real time without pursuing it (Fig 1A). We played echolocation signals of a Pipistrellus kuhlii bat using an Avisoft playback system (UltraSoundGate 116Hm D/A converter) connected to a Vifa speaker. The speaker was held manually at a distance of 0.4 m from the drone (which was hovering in place). The speaker circled the drone from left to right and vice versa along a 180° arc at an angular speed of 15–20 deg/s. We played three bat calls per second at an intensity of 105 dB SPL (measured at 1 m), mimicking a bat during the search phase, when the intervals between calls are relatively long [9,19,20].

The system filtered out the low-frequency noise, including the noise generated by the rotors (which has substantial energy at frequencies of up to 25 kHz), and detected the bat signals at a very high rate from a distance of up to 0.8 m (Fig 1B). We used video tracking to monitor the position and heading of the speaker and the drone in all experiments (see Methods). The drone successfully tracked the direction of the sound source, maintaining an angle of 34° ± 27° (mean ± SD) relative to the sound source, and an angle below 20° during 35% of the time (Fig 1C and 1D and S1 Movie).
Fig 1

The drone tracked the direction of a sound source in real time.

(a) Schematic of the experiment. (b) The probability of detecting our played-back sound signals. A hundred signals were emitted for each distance and their detection rate was computed. (c) The heading angle of the drone (black) and the sound source (grey) relative to north for one example trial. (d) Overall orientation error histogram.

The second experiment tested the ultimate goal of the system: to pursue a moving target emitting ultrasonic signals. Here, too, the playback speaker was hand-held (while playing bat signals as above) and was moved around at an average speed of up to 0.85 m/s. The experimenter moved the speaker in various trajectories, including circular, figure-eight and random movements. In all cases, the speed and the exact trajectory were inherently noisy due to the difficulty of maintaining an accurate speed when moving the target manually. In all trials, the drone successfully tracked the target, managing to remain in close proximity to it (mean distance 37 ± 8 cm (mean ± SD); see Fig 2 and S2 Movie). The drone's tracking error, i.e., the distance between the robot and the source of transmission, was independent of the velocity of the source in the range of velocities tested (0.18–0.28 m/s; there was no correlation between error and speed, Pearson correlation, R = -0.372, P = 0.412, N = 7).
Fig 2

Our mimetic-drone was able to track a sound-emitting moving target.

(a) Four examples of tracking trials, in a 2-Dimensional plane. Black line depicts the drone’s movement and grey is the target (i.e., the speaker). The z-axis shows time. Examples from top left to bottom right include–(a1) circular movement, (a2-3) two figure eight trajectories, (a4) and one random trajectory respectively. (a5-6) Two examples of the error (i.e., the distance between drone and source) over time during circular movement (trial a1) and figure eight (trial a2) respectively. (b) The overall, for all trials, pursuit distance histogram. (c) The drone’s tracking performance was independent of speed for the range of tested speeds.


Discussion

This work demonstrates that our approach, which relied on real-time processing using rather simple algorithms, was sufficient for tracking a slowly moving sound source. Our sensory processing included a high-pass filter, a threshold detector, and finding the peak of the cross-correlation between the two channels. The target in our experiments was moving rather slowly (more similar to an insect than to a vertebrate), but it is likely that our system could be upgraded to track faster targets without any dramatic changes to the algorithm. First, the microcontroller performed a cross-correlation between the signals arriving at the two microphones in windows of 28 ms. This imposed a rather long integration time and dictated a slow response time; we could increase the system's response speed by using shorter integration windows. Notably, our target was also rather slow in its emission rate, emitting only 3 calls per second, fewer than a typical bat would emit when flying near foraging conspecifics. The emission rate of the target determines the maximum potential sensory update rate and thus affects the response time of the tracker.

Our approach was oversimplified in several ways that could be improved in the future. To simplify the analysis, and due to the lack of a pinna-like model, we restricted the tracking to two dimensions. Notably, although bats move in 3D [21], their movement in the third dimension is much smaller than that in the horizontal plane when foraging in a restricted space. Our approach could be generalized to 3D by adding pinna-like structures on top of the microphones. As is well documented in mammals [22,23], such pinnae provide elevation-specific filtering that complements the time-of-arrival differences and provides 3D information. As observed in real bats, such pinnae can be small and lightweight, and could thus be carried even by our miniature drone.

Moreover, the approach presented in this study only dealt with a single sound source, and thus avoided the problem that often occurs in reality, when more than a single sound source has to be tracked or when multiple sources have to be separated. A future algorithm could deal with such situations by localizing more than a single source and either selecting one of them or following some weighted average of their directions. In previous work, our group showed how multiple sources can be localized within a single sound beam [24].

In our current trials, the acoustic target was hand-held and moved by a human experimenter, who might have biased the movement to ease tracking (but who also added stochasticity to the movement; see Fig 2). In future experiments, we aim to extend our approach to several drones autonomously moving while tracking each other according to a commonly simulated agent-based model of collective behavior [25].

We did not use active sonar, as we have done previously in a terrestrial system [24], but only mimicked passive hearing localization abilities. In this sense, our model is reduced in comparison to actual bats, but a future drone could hopefully integrate our passive tracking algorithm with active sonar. All of these simplifications made the task easier for the system, and it would be interesting (and difficult) to test it under more realistic multiple-bat situations. Finally, in this work we focused on tracking bat signals, but our approach could easily be generalized to other signals as well.

Methods

Overall, the system consisted of an in-house-developed PCB that included two microphones (Knowles, SPU0410LR5H), a high-pass filter (HPF) circuit, two analog-to-digital converters (ADCs) with direct memory access (DMA), an STM32F446 (ST) microcontroller, and a UART Rx/Tx chip for communication with the drone. This system was installed on a Crazyflie 2.0 platform (by Bitcraze AB). The entire system can be seen in Fig 3A. The PCB measures 28 × 20 mm and weighs 1.4 g, making the total weight of the drone 34 g (similar to many bat species). The drone itself, with no additions, weighs 27 g and has a maximal takeoff weight of 42 g.
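As an illustration of how such a front end might be wired in firmware, the sketch below uses the standard STM32 HAL calls for DMA-driven ADC capture and UART transmission. The handle names, buffer sizes, and message format are assumptions for illustration only, not taken from the authors' firmware:

```c
/* Hedged sketch: dual-channel microphone capture on an STM32F4 using
 * the vendor HAL's ADC-with-DMA API, plus a UART link to the drone.
 * Handle names (hadc1, hadc2, huart1) and sizes are illustrative. */
#include "stm32f4xx_hal.h"

#define WINDOW_SAMPLES 4032  /* ~28 ms at 144 kHz per channel */

extern ADC_HandleTypeDef hadc1, hadc2;   /* one ADC per microphone */
extern UART_HandleTypeDef huart1;        /* link to the drone */

static uint16_t mic_left[WINDOW_SAMPLES];
static uint16_t mic_right[WINDOW_SAMPLES];

/* Start filling both buffers in the background via DMA. */
void start_capture(void)
{
    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)mic_left,  WINDOW_SAMPLES);
    HAL_ADC_Start_DMA(&hadc2, (uint32_t *)mic_right, WINDOW_SAMPLES);
}

/* Send the estimated turn angle (here packed as centidegrees) to the drone. */
void send_angle(int16_t centideg)
{
    uint8_t msg[2] = { (uint8_t)(centideg >> 8), (uint8_t)centideg };
    HAL_UART_Transmit(&huart1, msg, sizeof msg, 10 /* ms timeout */);
}
```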
Fig 3

The drone found the angle to the source of transmission.

(a) Image of the entire system: the drone with the embedded electronics. (b) Recording of the bat chirp, as received by the system's microphones. (c) Cross-correlation between the recordings from both microphones, peak detection, and angle-of-arrival calculation, as performed by the PCB-integrated microcontroller.

The two synchronized microphones, acting as the system's ears, were mounted on the drone spaced 5 cm apart. The Knowles microphones are almost omnidirectional and are sensitive in the range between 10 Hz and 100 kHz, but their frequency response is highly nonlinear. The microphone input was therefore filtered by a dedicated amplification circuit that helped compensate for this frequency response. The receiving circuit included a second-order HPF with a cutoff frequency of 48 kHz, a damping factor of 0.5, and a gain of 6 dB. The system thus allowed recording of the bat signals after removing much of the noise at the lower frequencies.

The signal was recorded by the STM32F446 microcontroller at a sampling rate of 144 kHz in windows of ~28 ms. The microcontroller cross-correlated the two recordings and performed a peak-detection algorithm on the resulting signal (Fig 3B and 3C) to estimate the time difference between the two according to:

$$\tau = \frac{1}{f_s}\,\underset{n \in [-a,\,a]}{\arg\max}\ \sum_{m} f[m]\,g[m+n]$$

where τ is the time difference of arrival between the two microphones, f and g are the sets of samples from each microphone, each with a length of 2a samples, [−a,a] is the closed interval of lags over which the cross-correlation between f and g is evaluated, n is the independent discrete (lag) variable, and f_s is the sampling frequency. Assuming that the distance from the source of transmission is much greater than the distance between the microphones, a planar-wave assumption can be made regarding the propagation of sound in air. This assumption allows the calculation of the angle of arrival (AoA) θ:

$$\theta = \arcsin\!\left(\frac{c\,\tau}{d}\right)$$

where c is the speed of sound in air and d is the distance between the microphones (5 cm here).

Controlling the drone: once a threshold of 0.65 V above the microphones' DC (direct current) offset was crossed at the ADC GPIOs (general-purpose inputs/outputs) (Fig 3), the microcontroller started recording the microphones, estimated the azimuth of the sound source, and steered the drone accordingly. The drone is equipped with its own microcontroller running embedded software tailored for its real-time operating system (RTOS). Our sensory microcontroller thus transmitted the estimated turning angle to the drone, which in response rotated by θ at an angular velocity of up to 3 rad/s, while remaining stationary in experiment 1 or while moving forward at a constant speed of 0.3 m/s in experiment 2. After reaching the required angle, the drone kept its direction of flight at constant velocity until the next angle command from the microcontroller. The drone maintained a constant height above ground using an optic sensor.
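The two formulas above translate directly into code. Below is a C sketch of the TDoA-by-cross-correlation and AoA steps, assuming the 5 cm microphone spacing and 144 kHz sampling rate stated in this section; it is an illustrative transcription of the equations, not the authors' firmware (a brute-force correlation is used for clarity; an FFT-based version would be faster):

```c
/* Sketch of the TDoA / AoA computation from the equations above.
 * Direct O(n * max_lag) cross-correlation, kept close to the formula. */
#include <math.h>
#include <stdint.h>

#define FS    144000.0f   /* sampling frequency f_s [Hz] */
#define MIC_D 0.05f       /* microphone spacing d [m]    */
#define C_AIR 343.0f      /* speed of sound c [m/s], assumed nominal value */

/* Time difference of arrival: tau = (1/f_s) * argmax_n sum_m f[m] g[m+n],
 * with lags n restricted to [-max_lag, max_lag] (the role of [-a, a]). */
static float tdoa(const int16_t *f, const int16_t *g, int n, int max_lag)
{
    int best_lag = 0;
    float best = -INFINITY;
    for (int lag = -max_lag; lag <= max_lag; lag++) {
        float acc = 0.0f;
        for (int m = 0; m < n; m++) {
            int k = m + lag;
            if (k >= 0 && k < n)
                acc += (float)f[m] * (float)g[k];
        }
        if (acc > best) { best = acc; best_lag = lag; }
    }
    return (float)best_lag / FS;
}

/* Angle of arrival under the planar-wave assumption: theta = asin(c*tau/d). */
float angle_of_arrival(const int16_t *left, const int16_t *right, int n)
{
    /* The physically possible lag is bounded by d/c seconds (~21 samples). */
    int max_lag = (int)(MIC_D / C_AIR * FS) + 1;
    float tau = tdoa(left, right, n, max_lag);
    float s = C_AIR * tau / MIC_D;
    if (s >  1.0f) s =  1.0f;   /* clamp numerical overshoot */
    if (s < -1.0f) s = -1.0f;
    return asinf(s);            /* radians; sign gives left/right */
}
```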

Tracking the experiments

The sound source (the playback speaker) and the drone were tracked using a motion-analysis tracking system composed of 20 tracking cameras (16 Raptor E 1280 × 1024 pixel cameras and four Raptor-12 4096 × 3072 pixel cameras, Motion-Analysis Corp.). Motion was tracked at 200 fps and with a spatial resolution of less than 1 mm (see full details regarding the tracking accuracy in supplementary Fig 1 of ref [26]). To enable tracking, 6 mm spherical reflective facial markers (3X3 Designs Corp.) were glued to the drone and the speaker using double-sided tape. Of the 20 trials we ran, only eight contained usable data; in the other trials there was no data for various technical reasons (e.g., the tracking system did not start, the drone's battery ran out, or the drone exited the field of view covered by the system). Seven of these eight trials were fully analyzed.

During the first experiment (in which the drone did not fly towards the target), the detection probability of the sensory system was measured. The source of transmission was aimed at the microphones, and 100 chirps were transmitted at each distance. The motors operated at full thrust in order to inject maximal noise into the microphone recordings. The detection probability was calculated by:

$$P_{\text{detection}} = \frac{N_{\text{detected}}}{N_{\text{transmitted}}} = \frac{N_{\text{detected}}}{100}$$

where N_detected is the number of chirps detected by the system at a given distance.

Movie presenting the first experiment where the drone turns towards the direction of the sound source (a speaker held by the experimenter).

(MP4)

Movie presenting the second experiment where the drone autonomously tracks the sound source (a speaker held by the experimenter).

(MP4)
References (21 in total; 10 shown)

1.  Hyperacute directional hearing in a microscale auditory system.

Authors:  A C Mason; M L Oshinsky; R R Hoy
Journal:  Nature       Date:  2001-04-05       Impact factor: 49.962

2.  Collective memory and spatial sorting in animal groups.

Authors:  Iain D Couzin; Jens Krause; Richard James; Graeme D Ruxton; Nigel R Franks
Journal:  J Theor Biol       Date:  2002-09-07       Impact factor: 2.691

3.  Understanding signal design during the pursuit of aerial insects by echolocating bats: tools and applications.

Authors:  Marc W Holderied; Chris J Baker; Michele Vespe; Gareth Jones
Journal:  Integr Comp Biol       Date:  2008-05-14       Impact factor: 3.326

4.  Resource Ephemerality Drives Social Foraging in Bats.

Authors:  Katya Egert-Berg; Edward R Hurme; Stefan Greif; Aya Goldstein; Lee Harten; Luis Gerardo Herrera M; José Juan Flores-Martínez; Andrea T Valdés; Dave S Johnston; Ofri Eitan; Ivo Borissov; Jeremy Ryan Shipley; Rodrigo A Medellin; Gerald S Wilkinson; Holger R Goerlitz; Yossi Yovel
Journal:  Curr Biol       Date:  2018-11-01       Impact factor: 10.834

5.  Environmental perturbations induce correlations in midge swarms.

Authors:  Kasper van der Vaart; Michael Sinhuber; Andrew M Reynolds; Nicholas T Ouellette
Journal:  J R Soc Interface       Date:  2020-03-25       Impact factor: 4.118

6.  Sensory gaze stabilization in echolocating bats.

Authors:  O Eitan; G Kosa; Y Yovel
Journal:  Proc Biol Sci       Date:  2019-10-16       Impact factor: 5.349

7.  Hyperacute directional hearing and phonotactic steering in the cricket (Gryllus bimaculatus deGeer).

Authors:  Stefan Schöneich; Berthold Hedwig
Journal:  PLoS One       Date:  2010-12-08       Impact factor: 3.240

8.  Echolocation call structure and intensity in five species of insectivorous bats.

Authors:  D A Waters; G Jones
Journal:  J Exp Biol       Date:  1995-02       Impact factor: 3.312

9.  What ears do for bats: a comparative study of pinna sound pressure transformation in chiroptera.

Authors:  M K Obrist; M B Fenton; J L Eger; P A Schlegel
Journal:  J Exp Biol       Date:  1993-07       Impact factor: 3.312

10.  Sensorimotor Model of Obstacle Avoidance in Echolocating Bats.

Authors:  Dieter Vanderelst; Marc W Holderied; Herbert Peremans
Journal:  PLoS Comput Biol       Date:  2015-10-26       Impact factor: 4.475

Reviews (1 in total)

1.  Blueprints for measuring natural behavior.

Authors:  Alicja Puścian; Ewelina Knapska
Journal:  iScience       Date:  2022-06-18
