Electronic Journal «Technical Acoustics» http://webcenter.ru/~eeaa/ejta/
2004, 20
Susan M. Bertram*, Luke A. Johnson, Jerome Clark, Carmenlita Chief
School of Life Sciences, Arizona State University, Tempe, Arizona, USA 85287-4501
An electronic acoustic recorder for quantifying total signaling time, duration, rate and magnitude in acoustically signaling insects
Received 05.11.2004, published 03.12.2004
Recent mate choice investigations reveal that females often prefer to mate with males that produce loud, long, and/or leading acoustic signals. However, only a limited number of studies have examined within-population variation in these temporal components. Even fewer studies have estimated their heritabilities. Work has been hindered by the time and personnel required to quantify the variation. A design for building an efficient and inexpensive electronic acoustic recorder (EAR) that enables hypothesis testing of temporal signaling behavior in most acoustically signaling insects is described. The EAR is attached to a personal computer and samples the acoustic environment of up to 128 individuals, 10 times per second, for unlimited time periods. It compares microphone sound pressure level to a pre-set level and stores signaling/non-signaling data on the computer's hard drive. The EAR monitors when individuals signal, how much time they spend signaling, how loud they signal, the duration of their signaling bouts and breaks, and when they produce their signaling bouts in relation to their neighbors. The capabilities of the recorder are illustrated with the Texas field cricket, Gryllus texensis.
INTRODUCTION
Recent mate choice investigations indicate that the temporal aspects of acoustic mate signaling play an important role in explaining female mating preference. Specifically, research indicates that females preferentially orient to conspecific males that produce leading signals [1-6], have the highest chorus tenure [4, 7-10], produce the loudest signals [3, 6, 11, 12], signal with the highest signaling times [13, 15], produce the longest signaling bouts [3, 6, 15], and/or signal at the highest rates [6, 15]. However, the extent of within-population variation in the temporal aspects of acoustic signaling has only been characterized in a handful of investigations [2, 15-20]. Even fewer studies have discerned the heritable components of this variation [13, 14, 20, 21]. This research has been hampered by the time and personnel required to quantify variation in when, how much, and how loud males signal.
Here we describe a design for building an efficient and inexpensive electronic acoustic recorder (EAR) that monitors when, how much, how loud, and how continuously individuals signal through time. This description improves upon an earlier technical paper [22], which described an electronic recording system that collected data on total signaling time, signaling bout duration, break duration, bout rate, and temporal signaling pattern (when through the course of an evening individuals signal). The design presented here incorporates all data collection capabilities of the earlier version, and also enables data collection on signaling sound pressure level. Further, the expensive DaqBook 120 data acquisition unit required by the previously published design [22] has been replaced by a substantially cheaper digital interface board (PCI-DIO24DB/CT).
*corresponding author, e-mail: [email protected]
Our EAR differs from previously published descriptions of electronic acoustic recorders in that it monitors the loudness of the acoustic signals and allows 128 individuals to be monitored simultaneously. Kidder and Sakaluk [23] describe a simple and inexpensive electronic acoustic device for monitoring the timing and duration of acoustic signals, but their system can only monitor three individuals simultaneously. Hedrick and Mulloney [24] describe an electronic acoustic device that is capable of detecting every chirp in a train (signaling bout) by the field cricket Gryllus integer. Their electronic acoustic recorder can only monitor up to 16 crickets simultaneously, and is not capable of tracking the loudness of the acoustic signal.
Our electronic acoustic recorder (EAR) can monitor the acoustic signals of 128 individuals simultaneously, for unlimited periods of time. Two EARs can be attached to each digital interface board, enabling simultaneous monitoring of the acoustic signals of 256 individuals. Computers are capable of running multiple digital interface boards, enabling numerous individuals to be monitored simultaneously.
The EAR has two limitations. First, individuals must be housed separately, as the EAR cannot discriminate among signaling individuals housed together. Second, the EAR does not digitize acoustic signals, so oscillograms, spectrograms, and power spectra require the use of a device like that described by Hedrick and Mulloney [24].
Below we describe how to build the EAR, the building costs, and the type of data our EAR collects. We then document the EAR’s utility and effectiveness by presenting the results of three different experiments. First, we reveal each microphone’s ability to only record the individual it is monitoring, without recording sound leakage from neighboring containers. Second, we determine the relationship between sound pressure level and microphone score. Third, we document the type of individuals and population level data that can be collected when several signaling individuals are monitored simultaneously.
EAR DESIGN
The EAR monitors the acoustic signaling behavior of 128 individuals simultaneously by continually determining whether each individual is signaling or not, and recording the information to a data file on the computer’s hard-drive. The EAR is capable of monitoring any organism that signals in the frequency range of 20-16,000 Hz (range set by our Electret condenser microphones), for periods longer than 168 ms.
Each EAR consists of 8 printed circuit boards (PC boards; Fig. 1). Sixteen wires lead from each PC board to Electret condenser microphones. Signal degradation occurs if the wires to the microphones are longer than 4 meters. The 128 microphones (16 microphones × 8 PC boards) are distributed among containers housing individual insects. The EAR is controlled and monitored by a computer equipped with a PCI-DIO24DB/CT digital interface board. Each digital interface board can run two EARs simultaneously (256 microphones). Digital interface boards can be permanently installed into empty slots in the computer. If there are no empty slots available, if the user has a laptop, or if greater versatility is required than a permanently installed digital interface board allows, boards are available which connect to the USB port. One computer can typically drive at least two digital interface boards, enabling 512 individuals to be monitored simultaneously.
Fig. 1.
Schematic diagram for one of the eight printed circuit boards of the electronic acoustic
recorder (EAR)
The EAR works in the following manner. The data acquisition software (Appendix I) sends a signal via the digital interface board to all associated PC boards, resulting in the selection of one microphone per board. The selected microphone is turned on and the sound pressure level of the container is read. This signal is amplified, converted from AC to DC using a peak detect circuit, filtered, and then compared to a pre-set level.
We empirically determined the level against which the amplified signal should be compared. During our initial setup we measured the voltage level of the circuit output during a typical signal (broadcast at a sound pressure level and distance representative of our crickets signaling in their containers). We then set our trip point to approximately half of that. Users can alter the pre-set level by changing the resistors on the EAR's PC board after running a similar trial with their acoustic signals broadcast at a sound pressure level and distance from the microphone that is typical for their captive taxa.
If the conditioned signal’s sound pressure level is greater than the pre-set level, a one (1) representing signaling is stored in RAM. Otherwise, a zero (0) representing non-signaling is stored. Each board then turns off the microphone and selects the next microphone in the series by updating the selection signal. Only one microphone per PC board is turned on at a time. Since there are 16 microphones per PC board, each microphone can be examined ten times a second. Every second the number of zeros and ones stored in the computer’s RAM is summarized, and a number between zero and ten is stored on the computer’s hard drive for each microphone.
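The polling scheme described above can be sketched in software terms. The fragment below is our own minimal Python illustration, not the published data acquisition code; `read_mic_bit` is a hypothetical stand-in for the hardware comparator read over the digital interface board:

```python
MICS_PER_BOARD = 16           # one microphone active per board at a time
SAMPLES_PER_SEC_PER_MIC = 10  # cycling 16 mics fast enough for 10 reads/s each

def read_mic_bit(board, mic):
    """Hypothetical stand-in for the comparator read over the digital
    interface board: returns 1 if the conditioned signal exceeds the
    pre-set threshold, else 0. Real behavior is hardware-dependent."""
    return 0

def sample_one_second(boards):
    """Return a per-microphone score (0-10) for one second of monitoring."""
    scores = {(b, m): 0 for b in range(boards) for m in range(MICS_PER_BOARD)}
    for _ in range(SAMPLES_PER_SEC_PER_MIC):
        for m in range(MICS_PER_BOARD):
            # every board selects microphone m simultaneously
            for b in range(boards):
                scores[(b, m)] += read_mic_bit(b, m)
    return scores
```

With 8 boards this yields one 0-10 score per second for each of the 128 microphones, matching the data written to the hard drive.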
What results is a second-by-second description of individual signaling behavior, which the user has full access to. Our summary software (Appendix II) takes the very large data matrix and summarizes it into the following summary statistics (defined below): total signaling time, average microphone score ± SD (Standard Deviation), average signaling bout duration ± SD, number of signaling bouts, average break duration ± SD, number of signaling breaks, start time, mean time, and stop time. Our MatLab software saves these summary statistics to disk along with a file summarizing the number of seconds each individual signaled each minute throughout the entire monitoring period.
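As a sketch of how such summaries can be derived from the per-second scores, the following Python fragment (our own illustration, not the published MatLab code) computes total signaling time, bouts, and breaks from one microphone's score series, treating any silent gap longer than one minute as a break between bouts:

```python
def summarize(scores, break_gap=60):
    """Summarize a per-second score series (0-10 per second) for one
    microphone. A bout ends when silence exceeds break_gap seconds."""
    on = [i for i, s in enumerate(scores) if s > 0]  # seconds with signaling
    if not on:
        return {"total_s": 0, "bouts": [], "breaks": []}
    bouts = [[on[0], on[0]]]
    for t in on[1:]:
        if t - bouts[-1][1] > break_gap:
            bouts.append([t, t])       # gap too long: start a new bout
        else:
            bouts[-1][1] = t           # extend the current bout
    breaks = [b[0] - a[1] for a, b in zip(bouts, bouts[1:])]
    return {"total_s": len(on), "bouts": bouts, "breaks": breaks}
```

For example, 10 s of signaling, 120 s of silence, then 5 s of signaling yields a total of 15 s in two bouts separated by one break.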
Summary statistics are defined in the following manner. Time spent signaling is the total number of minutes the individual signaled throughout the monitoring period. Average signaling bout duration is the average number of minutes the individual signaled without taking a break over a minute in duration. Average break duration is the average number of minutes the individual did not signal between signaling bouts. Temporal signaling pattern is when through the night individuals signal. We describe each individual's temporal signaling pattern by quantifying signaling onset (start time), signaling cessation (stop time), and mean time. Start time is the time each male is first observed to signal and is calculated as the number of hours after monitoring started that the individual first signaled (monitoring start = 0). Stop time is the last time each male was observed to signal, calculated as the number of hours after monitoring started that the individual last signaled. Mean time is the normalized mean of the indicator function of time relative to the time that monitoring started. Assuming total monitoring time is from 6:00 PM to 10:00 AM, the equation for mean time is:
$$\text{Mean time} = \frac{\int_{6:00\,\text{PM}}^{10:00\,\text{AM}} I(t)\, t \, dt}{\int_{6:00\,\text{PM}}^{10:00\,\text{AM}} I(t)\, dt}$$
where I is the indicator function, which takes the value of zero or one, depending on whether the male is signaling (1=signal, 0=silent).
Consider, for example, the temporal signaling patterns of three males monitored from 6 PM to 10 AM. Assume these males have identical total signaling times (3 hr), all initiated signaling at 10 PM (start time = 4), and all signaled for the last time at 10:00 AM (stop time = 16). Male A could call for one hour from 10:00-11:00 PM and then for two hours from 8:00-10:00 AM. He would have a mean time of 5:30 AM (mean time = 11.5). Male B could call for two hours from 10:00 PM - midnight and then for one hour from 9:00-10:00 AM. His mean time would be 2:30 AM (8.5). If male C called for one hour from 10:00-11:00 PM, for another hour from 3:30-4:30 AM, and then again from 9:00-10:00 AM, his mean time would be 4:00 AM (10.0).
We quantify signaling sound pressure level using a user-defined equation which incorporates microphone scores. The equation is built following a series of calibration tests run by the user (an explanation of how this is done is presented in the Estimating Sound Pressure Level section, below). In our case, Texas field cricket signaling sound pressure level was calculated from microphone score using the coefficients of the regression between microphone score and sound pressure level, measured at a distance of 5 cm:
$$\mathrm{SPL\ (dB)} = 147.5 - \sqrt{8718.75 - 625 \times \text{microphone score}}$$
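The quadratic regression between microphone score and sound pressure level (reported in the Estimating Sound Pressure Level section below) can be inverted in closed form. The sketch below is our own re-derivation from those published coefficients, not the authors' code; the lower quadratic root is taken because all measured levels lie well below the parabola's vertex near 147.5 dB:

```python
import math

def score_from_spl(spl):
    """Microphone score predicted from SPL (dB), using the published
    regression coefficients (Fig. 2)."""
    return -9.3 + 0.2 * spl - 0.0016 * (spl - 85) ** 2

def spl_from_score(score):
    """Closed-form inverse of the regression above (lower root),
    re-derived by us from the same coefficients."""
    return 147.5 - math.sqrt(8718.75 - 625.0 * score)
```

Round-tripping a level through both functions recovers it exactly, which is a quick sanity check on the algebra.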
BUILDING COSTS
Each EAR costs $2000 (US funds) to build. We supplied the design in Figure 1 to an electronics and machine shop associated with Arizona State University, and they built four units for us. The total cost of $8000 included all parts and labor. This amount also included outsourcing the printed circuit board fabrication ($600 for 40 boards: $200 for production plus $10/board, giving us the 32 boards needed for the 4 units plus 8 spares). Our electronics shop purchased the housing for each EAR, loaded all the parts onto each of the PC boards, put each of the units together, attached the microphones and plugs to each of the 4 × 128 = 512 wires, and helped trouble-shoot each unit. The machine shop drilled holes for each of the microphones in the casing, and labeled each hole so the user could distinguish microphones.
Please note that $1500 of the $2000 was labor charges (at an inexpensive university rate of $10/hour). The most labor-intensive part was making the 128 microphones for each unit. Microphones had to be soldered to pre-cut wire, heat-shrink tubing attached and shrunk, and then the male plug attached to the other end by solder and covered with heat-shrink tubing. This is a simple albeit repetitive task, and students could easily be trained to build these components, reducing the overall cost.
Running the EAR requires a personal computer and digital interface board. The digital interface board connects the computer to the EAR and ensures all results are automatically recorded on the computer's hard drive. These components are not included in the above estimated building costs. Computer costs vary widely. A fast processor is not necessary, although we recommend using a computer with substantial hard drive space, as 30 megabytes of hard drive space is needed to run one EAR for 12 hours. The digital interface board costs $125. This is substantially cheaper than the data acquisition unit ($1250) used in the original design [22].
EFFECTIVENESS TEST
Purpose: To test microphone effectiveness at recording only the individual it is monitoring and not other nearby individuals.
Methodology: We recorded male Texas field crickets (Gryllus texensis) housed in containers, along with empty containers that surrounded each cricket. We monitored 185 crickets and 327 empty containers over the course of two nights, on the evening of November 25, 2002 and on the evening of March 18, 2003.
The EAR and computer were secured to a wire-shelving unit. The 16 wires and microphones from each PC board were attached, in numeric order, to the underside of a shelf. Containers (wax-coated paper ice-cream bowls with a clear plastic lid) used to house individual crickets were placed on the shelf below. Microphones hung down 15 cm from the shelf, and were inserted through a hole cut into the clear plastic lid into the individual's container, ensuring a 5 cm distance from the microphone to the bottom of the container. On each shelf, the sixteen containers (each with a lid and microphone) were placed into a cardboard box (l × h × d = 1.01 m × 0.10 m × 0.51 m) sub-divided into compartments. Only a subset of these containers housed a male cricket. The rest were empty, enabling us to check for sound leakage during monitoring. The bottom and sides of every compartment were lined with 2.5 cm acoustic foam to reduce sound leakage across containers. During monitoring, male crickets were spaced within and across shelves so that empty compartments surrounded each cricket on all sides (including top and bottom). This design enabled us to determine whether empty containers recorded the signaling behavior of nearby signaling males.
Results and Discussion: Our EAR records only the individuals it is monitoring, and does not falsely record the signals of nearby neighbors. Cricket presence/absence significantly influenced whether the EAR detected acoustic stimuli above the preset threshold (combined days: Pearson Chi-square = 356, P < 0.001; November 25th 2002: Pearson Chi-square = 203, P < 0.001; March 18th 2003: Pearson Chi-square = 150, P < 0.001).
Sound leakage between containers does not occur at a high enough decibel level to be picked up by our EAR microphones. Only 3/327 empty containers exhibited acoustic stimuli loud enough to be recorded by our EAR; the microphones in these empty containers recorded ‘signaling’ for only one second throughout the 16-hour monitoring period. Conversely, 80% (148/185) of the containers that held crickets exhibited acoustic stimuli loud enough to be recorded as cricket signaling. Males in these containers signaled on average for 127 min (range: 5 s to 12.63 hr) throughout the 16 hr monitoring period.
EAR effectiveness did not change through time. The proportion of empty containers scored as having produced acoustic signals did not change over four months of continual use
(November 25th 2002: 0/153, March 18th 2003: 3/174; Pearson Chi-square = 2.662, df=1, P=0.103).
Overall, the EAR appears to be highly effective at monitoring the acoustic signaling behavior of only the selected individuals, without inadvertently monitoring signals leaking from neighboring containers. Our leakage test was conducted in a space-limited laboratory, so we placed containers close together and insulated them with acoustic foam to reduce sound transmission across containers. One inch (2.54 cm) of acoustic foam ensured that we did not inadvertently monitor the neighbors' acoustic signals. This insulation enabled us to monitor 256 containers simultaneously in a small space (3 m³: l × h × d = 3 m × 2 m × 0.5 m). Overall, acoustic foam enables space-efficient monitoring of the acoustic signaling patterns of large numbers of individuals.
We have a few words of caution for potential users. We encourage users to determine how the signaling behavior of individuals is influenced by their neighbors. Although the EAR microphones do not detect signals from adjacent containers in these artificially close conditions, the signals of multiple individuals are audible to human observers, and are likely to be audible to neighboring individuals. Detection of neighboring signals may alter the signaling behavior of the individuals being monitored, and this should be carefully examined in the species being monitored.
We also limit the scope of this design to insects, because container size is an important issue. The containers used for housing individuals and monitoring their acoustic signals need to be small enough to ensure the individual being monitored cannot stray too far from the microphone. If the organism being monitored is allowed to roam freely in a large cage, it may signal from locations which cannot be detected by the EAR. This would result in lower total signaling times, and inaccurate estimations of the temporal signaling patterns. Even if an organism's signals can be detected, if the individual is further from the microphone than anticipated when it signals, the relationship between microphone score and sound pressure level will be uncoupled (see below). This would result in erroneous sound pressure level estimates.
ESTIMATING SOUND PRESSURE LEVEL
Purpose: To document the EAR’s capability of discerning the sound pressure level of acoustic signals produced by insects.
Methodology: We monitored pre-recorded acoustic signals played from a speaker at a variety of known sound pressure levels and determined the microphone scores. We then fit a second-degree polynomial to quantify the relationship between microphone score and sound pressure level.
We used a pre-recorded mate attraction signal of the Texas field cricket, Gryllus texensis, to determine if microphone score corresponds to signaling sound pressure level. The signals of the Texas field cricket were digitally recorded and stored on a compact disc. The signals were broadcast from speakers attached to a Sony Walkman which played the disc. The speakers were placed 5 cm away from the decibel level meter and microphones. Eleven Electret condenser microphone heads were taped together so that they faced the same direction and were exactly the same distance from the sound source.
Signals were broadcast continuously for 55 to 110 seconds at each sound pressure level, and the microphone scores from the eleven microphones were monitored electronically using the EAR. Ten sound pressure levels ranging from 54 to 96 dB were used. This range brackets all possible sound pressure levels the Texas field cricket might use while signaling, from no signaling to signals that are much louder than anything we have ever heard in nature or in the laboratory. We visually monitored the sound pressure level every second, ensuring that it did not change within each treatment. Each microphone was sampled by the EAR ten times each second and a zero or one was recorded in RAM; every second these microphone scores were summed and a number between 0 and 10 was recorded to the computer's hard drive.
Results and Discussion: Changes in the sound pressure level (dB) of the signal resulted in corresponding changes in average microphone score. The average score of each of the eleven microphones increased with increasing sound pressure level (Regression: R²adj = 0.96, F = 1455, P < 0.0001; Regression Equation: Microphone Score = −9.3 + 0.2 × sound pressure level (dB) − 0.0016 × [sound pressure level (dB) − 85]²; Fig. 2). Microphone score changes with changes in sound pressure level because of the EAR's physical properties. Specifically, second-by-second microphone score is dependent on the signal (in volts) that is received by the microphone and the EAR's pre-set threshold (in volts). If the signal received by the microphone is greater than the pre-set threshold, then MicScore / Possible MicScore = 1 − (2/π) × arcsin(s/A), where s is the pre-set threshold (in volts) and A is the signal's amplitude (in volts) read by the microphone (Fig. 3). If the voltage of the signal is less than the voltage of the pre-set threshold, then MicScore = 0. It is important to note that because microphone score is dependent on sound pressure level, and sound pressure level decreases with increasing distance, the distance between the individuals and the microphones must remain constant. Microphone score can therefore be accurately used to quantify the sound pressure level of acoustic signals provided the distance between the insect and the microphone can be approximated during the entire monitoring session. This distance can be easily discerned if the individuals being monitored are placed in relatively small containers.
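The threshold relation above is straightforward to evaluate. The fragment below is our own illustration of the fraction MicScore / Possible MicScore as a function of threshold and amplitude, as given in the text:

```python
import math

def expected_score_fraction(threshold, amplitude):
    """Expected MicScore / Possible MicScore for a signal of peak amplitude
    `amplitude` (volts) against a fixed comparator `threshold` (volts),
    following the arcsin relation in the text: 1 - (2/pi) * arcsin(s/A)
    when A > s, and 0 otherwise."""
    if amplitude <= threshold:
        return 0.0
    return 1.0 - (2.0 / math.pi) * math.asin(threshold / amplitude)
```

The fraction is 0 for sub-threshold signals and rises toward 1 as amplitude grows large relative to the threshold, which is why louder signals yield higher microphone scores (Fig. 3).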
Fig. 2.
The relationship between microphone score (counts per second) and amplitude (dB) for the signal of the Texas field cricket, Gryllus texensis. Amplitude is presented using a log scale. Each data point represents an average score for each microphone at specific sound pressure levels. The solid line shows the relationship between sound pressure level and microphone score
Fig. 3.
Relationship between amplitude and microphone score for the mate attraction signals of the Texas field cricket, Gryllus texensis. Two waveforms of the mate attraction signal are shown, differing only in amplitude: the quieter signal is shown in A, the louder signal is shown in B. The horizontal line in both A and B represents the preset threshold above which the EAR records signaling and below which the EAR records non-signaling. This pre-set threshold does not change, and so the horizontal line lies at the same dB level in both A and B. The ten black bars at the top of each waveform indicate the ten EAR samples that occur during the second-long time period; a solid black bar indicates that the signal would be above the pre-set threshold during the sampling period, a dotted bar indicates that the signal would lie below the pre-set threshold during the sampling period. In A, 7 of 10 of the sampling periods took place when the pulses of sound were louder than the threshold line, resulting in a microphone score of 7. In B, all 10 of these samples took place when pulses of sound were louder than the threshold line, resulting in a microphone score of 10. The EAR would thus show a higher microphone score for the louder signal of B than it would for the quieter signal of A
DATA COLLECTION CAPABILITIES
Purpose: To document the type of population-level data that can be collected using the EAR.
Methodology: Sixty-four Texas field cricket males were collected from lights at a driving range in Austin, TX. They were brought to a greenhouse at the Brackenridge Field Laboratory of the University of Texas at Austin, in which the lighting and temperature conditions emulate natural conditions.
Male crickets were placed in individual containers with food, water, and a microphone, and arranged in four 4 × 4 grids. Neighbors were spaced 60 cm apart within each grid. Grids were separated by 75 cm, forming a combined grid of 8 × 8 individuals. The 64 crickets were monitored acoustically using our EAR from 6 PM to 10 AM on the evening of October 2nd, 2001. We obtained total signaling time, signaling bout duration, break duration, hourly signaling bout number, microphone score, and start, mean, and stop times for each of these crickets. The EAR unit also records a second-by-second summary of the microphone scores for each individual throughout the night. We used these summary data to provide an overall picture of how this population of crickets signaled through time and in relation to each other.
Results and Discussion: Males exhibited extensive variation in their acoustic mate
attraction behavior (Table 1; Fig. 4). Less than half (29/64) of the field-captured males signaled during the 16 hour monitoring period. Of the males that signaled, some initiated signaling early in the evening (7:46 PM) while others did not start signaling until well after dawn (9:30 AM). Some signaled continuously for only a few seconds on average while others signaled for 76 min on average without a break. A few males signaled only once while others produced 14 different signaling bouts. Throughout the night, several males signaled for only a few seconds or a few minutes, while a few males signaled for several hours (Table 1). Some males took very short breaks from signaling (5 min on average) while others took 3 hours off between signaling bouts. Males also varied in how loud they signaled (68-90 dB at 4 cm). The average start time was 3:45 AM (Table 1), and peak population chorusing activity occurred between 5:00 and 6:00 AM (Fig. 5). A small percentage of the males (7%) completed their entire signaling early in the night (between 6:00 PM and 2:00 AM), 28% called throughout the night (between 6:00 PM and 10:00 AM), and 65% completed their entire signaling in the morning (between 2:00 and 10:00 AM; Fig. 4).
Table 1. Acoustic signaling behavior of 64 males monitored for one night
N Mean SE Range
Population Total Signaling Time 64 23 min 7 min 0 sec - 305 min
Callers Total Signaling Time 29 51 min 15 min 1 sec - 305 min
Average Sound Pressure Level 29 80 dB 1.21 dB 68 - 90 dB
Total Number of Bouts 29 5 0.8 1 - 14
Average Bout Duration 29 10 min 3 min 5 sec - 76 min
Total Number of Breaks 29 4 0.83 0 - 13
Average Break Duration 29 64 min 10 min 5 min - 174 min
Start Time 29 3:45 AM 49 min 7:46 PM - 9:31 AM
Mean Time 29 5:58 AM 41 min 7:54 PM - 9:47 AM
Stop Time 29 7:36 AM 42 min 8:01 PM - 10:00 AM
Number of Leading Signals (Nearest Neighbors) 29 2.52 0.6 0 - 12
Number of Leading Signals (Population) 29 0.62 0.18 0 - 3
Fig. 4.
A spatial and temporal representation of when and how much each individual signaled through the course of the night. One 4 × 4 grid is shown, with a total of 16 individuals. Each individual's signaling behavior is shown in its own graph. Within each graph, the x-axis is the time of night (6:00 PM - 10:00 AM), and the y-axis is the individual's signaling time (# s/min). When each individual signaled is indicated by the black bars, and the height of each bar represents the number of seconds the individual sang per minute. Graphs are arranged in the same way individuals were spaced and monitored through the night, allowing the viewer to look at the signaling behavior of one quadrant of the population simultaneously. Overall, the figure suggests the potential for signaling clusters, and that spatial relationships may be important, although more data collection and further analysis is necessary
Fig. 5.
Chorusing dynamics of the 64 monitored males, showing how the number of individuals signaling acoustically changed through the course of the night. Although chorusing initiates early in the evening, peak signaling does not occur until just after dawn
As a whole, male crickets signaled for 10 min, took a 64 min break, and then repeated the process (Table 1). They averaged five of these signaling bouts per night for a total signaling time of 51 min. Throughout the night, signaling males produced attraction signals with an average sound pressure level of 80 dB. Half of the signaling bouts (2.5/5) were produced when nearest neighbors were silent, whereas only 12% (0.62/5) of signaling bouts were produced when all of the other 63 males were silent. Mean signaling time for the entire group occurred at 5:58 AM. Stop time for the entire group averaged 7:36 AM.
The EAR simultaneously collects acoustic signaling data at several spatial scales, allowing researchers to conduct hypothesis testing at the population, nearest-neighbor, and individual level. More specifically, hypotheses regarding individual-, nearest neighbor-, or population-level differences in temporal signaling pattern, time spent signaling, bout length and number, break length and number, and loudness can be thoroughly tested, providing a holistic view of the temporal aspects of acoustic signaling behavior.
SUMMARY
Our electronic acoustic recorder is an inexpensive and efficient way to simultaneously estimate the temporal aspects of acoustic signaling of 128 individuals for an indefinite period of time. Several EARs can be attached to a computer, allowing monitoring of numerous individuals simultaneously. Conceivably, the microphones could also be arranged in a spatial grid within a single animal's territory to understand spatial use in relation to signaling (i.e., does the animal signal most from the edge of the territory, or from the center?).
Because the EAR can simultaneously monitor the acoustic signaling behavior of hundreds of individuals over unlimited time periods, it should prove exceedingly helpful in estimating the genetic and environmental bases of observable variation in acoustic mating signals. Obtaining heritability estimates of a trait usually requires that several hundred individuals be sampled. This requirement has greatly limited the number of heritability studies completed on acoustically signaling species. It is our hope that the EAR will enable more researchers to investigate the heritable and environmental contributions to signaling behavior. Quantifying the variation in the temporal aspects of acoustic signals, and the environmental and genetic bases of this variation, should help us comprehend the evolutionary implications of selection on the temporal aspects of acoustic signaling behavior.
ACKNOWLEDGEMENTS
We gratefully acknowledge William Coleman in the electronics shop at Arizona State University for turning our design into reality. J. Crutchfield and L. Gilbert of the Brackenridge Field Laboratory of the University of Texas at Austin provided greenhouse space for our field based experiments. We also wish to thank R. Gorelick, J. S. Johnson, S. X. Orozco, S. Williams, M. Begay, A. C. Bostic. This research was funded by a National Science Foundation (IBN 0131728) grant to S.M.B.
REFERENCES
1. Dyson M., Henzi S., Passmore N. The effect of changes in relative timing of signals during female phonotaxis in the reed frog, Hyperolius marmoratus. Animal Behaviour, 1994, 48(3), 679-685.
2. Schwartz J. J. Male calling behavior, female discrimination and acoustic interference in the neotropical treefrog Hyla microcephala under realistic acoustic conditions. Behavioral Ecology and Sociobiology, 1993, 32(6), 401-414.
3. Galliart P. L., Shaw K. C. The effect of variation in parameters of the male calling song of the katydid, Amblycorypha parvipennis (Orthoptera: Tettigoniidae) on female phonotaxis and phonoresponse. Journal of Insect Behavior, 1996, 9(6), 841-855.
4. Snedden W. A., Greenfield M. D. Females prefer leading males: relative call timing and sexual selection in katydid choruses. Animal Behaviour, 1998, 56, 1091-1098.
5. Greenfield M. D., Rand A. S. Frogs have rules: Selective attention algorithms regulate chorusing in Physalaemus pustulosus (Leptodactylidae). Ethology, 2000, 106(4), 331-347.
6. Tarano Z., Herrera E. A. Female preferences for call traits and male mating success in the neotropical frog Physalaemus enesefae. Ethology, 2003, 109(2), 121-134.
7. Murphy C. G. Chorus tenure of male barking treefrogs, Hyla gratiosa. Animal Behaviour, 1994, 48(4), 763-777.
8. Bertram S. M., Berrill M., Nol E. Male mating success and variation in chorus attendance within and among breeding seasons in the gray treefrog (Hyla versicolor). Copeia, 1996(3), 729-734.
9. Morrison C., Hero J. M., Smith W. P. Mate selection in Litoria chloris and Litoria xanthomera: females prefer smaller males. Austral Ecology, 2001, 26(3), 223-232.
10. Given M. Interrelationships among calling effort, growth rate, and chorus tenure in Bufo fowleri. Copeia, 2002(4), 979-987.
11. Klappert K., Reinhold K. Acoustic preference functions and sexual selection on the male calling song in the grasshopper Chorthippus biguttulus. Animal Behaviour, 2003, 65, 225-233.
12. Castellano S., et al. Call intensity and female preferences in the European green toad. Ethology, 2000, 106(12), 1129-1141.
13. Cade W. H., Cade E. S. Male mating success, calling and searching behavior at high and low-densities in the field cricket, Gryllus integer. Animal Behaviour, 1992, 43(1), 49-56.
14. Crnokrak P., Roff D. A. The genetic basis of the trade-off between calling and wing morph in males of the cricket Gryllus firmus. Evolution, 1998, 52(4), 1111-1118.
15. Prohl H. Variation in male calling behavior and relation to male mating success in the strawberry poison frog (Dendrobates pumilio). Ethology, 2003, 109(4), 273-290.
16. Hedrick A. Female preference for male calling bout duration in a field cricket. Behavioral Ecology and Sociobiology, 1986, 19, 73-77.
17. Allen G. R. Diel calling activity and field survival of the bushcricket, Sciarasaga quadrata (Orthoptera: Tettigoniidae): A role for sound-locating parasitic flies? Ethology, 1998, 104(8), 645-660.
18. Kolluru G. R. Variation and repeatability of calling behavior in crickets subject to a phonotactic parasitoid fly. Journal of Insect Behavior, 1999, 12(5), 611-626.
19. Bertram S. M. Temporally fluctuating selection of sex-limited signaling traits in the Texas field cricket, Gryllus texensis. Evolution, 2002, 56(9), 1831-1839.
20. Webb K. L., Roff D. A. The quantitative genetics of sound production in Gryllus firmus. Animal Behaviour, 1992, 44(5), 823-832.
21. Hedrick A. Female choice and the heritability of attractive male traits: an empirical study. American Naturalist, 1988, 132, 267-276.
22. Bertram S. M., Johnson L. An electronic technique for monitoring the temporal aspects of acoustic signals of captive organisms. BioAcoustics, 1998, 9, 107-118.
23. Kidder G. W., Sakaluk S. K. Simple and inexpensive electronic device for automatic recording and analysis of insect acoustical activity. Florida Entomologist, 1989, 72(4), 642-649.
24. Hedrick A. V., Mulloney B. A multichannel electronic monitor of acoustic behaviors, and software to parse individual channels. Journal of Neuroscience Methods, 2004, 133, 201-210.
APPENDIX I - DATA ACQUISITION SOFTWARE
Borland C software for driving one or two EARs. The software communicates between the EAR(s) and the computer via a single DIO24db/ct digital interface board.
/* ears.c */
/* EARs driver software*/
/* by Luke A. Johnson */
/* Requires: */
/* DIO24db/ct */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <conio.h>
#include <dos.h>
#include <bios.h>
#define IRQ 5
unsigned char config, CurMic=15, Sample, status;
unsigned long LineCount = 0;
unsigned int SumA[16][8];
unsigned int SumB[16][8];
unsigned int Xlat[16]={7,15,6,14,5,13,4,12,3,11,2,10,1,9,0,8};
unsigned int SumCnt=7;
unsigned int i=0;
float NextUpdate=0;
unsigned short address = 0xEC40;
typedef enum {FALSE,TRUE} boolean;
void main()
{
char filename[20];
FILE *outfile;
struct time t;
struct date d;
gettime(&t);
getdate(&d);
/* Initialize Digital Ports */
/* Port A: Output - Microphone select */
/* Port B: Input  - Microphone sample */
/* Port C: Input  - Microphone sample */
outportb(address+0x03,0x8b);
/* Initialize timer to 64Hz Square Clock */
outportb(address+0x13,0x37);
outportb(address+0x10,0x00); /* 0000 bcd */
outportb(address+0x10,0x00);
outportb(address+0x13,0x77);
outportb(address+0x11,0x05); /* 0005 bcd */
outportb(address+0x11,0x00);
outportb(address+0x13,0xb7);
outportb(address+0x12,0x25); /* 3125 bcd */
outportb(address+0x12,0x31);
sprintf(filename,"S%02d%02d.ccc",d.da_mon,d.da_day);
outfile = fopen(filename, "w");
fprintf(outfile, "%2d:%02d:%02d %d/%d/%d\n\n",
        t.ti_hour, t.ti_min, t.ti_sec, d.da_mon, d.da_day, d.da_year);
/* File column headers for the 128 channels:
   16 microphone groups (letters A-H) x 16 hex group ids (0-f) */
fprintf(outfile, "\nTime"
        " A B C D E F G H A B C D E F G H A B C D E F G H A B C D E F G H"
        " A B C D E F G H A B C D E F G H A B C D E F G H A B C D E F G H"
        " A B C D E F G H A B C D E F G H A B C D E F G H A B C D E F G H"
        " A B C D E F G H A B C D E F G H A B C D E F G H A B C D E F G H ");
fprintf(outfile, "\n    "
        " A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A"
        " A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A"
        " A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A"
        " A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A ");
fprintf(outfile, "\n    "
        " 0 1 2 3 4 5 6 7 8 9 a b c d e f 0 1 2 3 4 5 6 7 8 9 a b c d e f"
        " 0 1 2 3 4 5 6 7 8 9 a b c d e f 0 1 2 3 4 5 6 7 8 9 a b c d e f"
        " 0 1 2 3 4 5 6 7 8 9 a b c d e f 0 1 2 3 4 5 6 7 8 9 a b c d e f"
        " 0 1 2 3 4 5 6 7 8 9 a b c d e f 0 1 2 3 4 5 6 7 8 9 a b c d e f ");
fprintf(outfile, "\n%2d:%02d:%02d ", t.ti_hour, t.ti_min, t.ti_sec);
clrscr();
printf("Electronic Acoustic Recorder data file (%s)\nStarted %2d:%02d:%02d %d/%d/%d\n\n",
       filename, t.ti_hour, t.ti_min, t.ti_sec, d.da_mon, d.da_day, d.da_year);
gotoxy(21,5); printf("Box A");
gotoxy(57,5); printf("Box B");
gotoxy(1,7);
printf(" 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1\n");
printf(" 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6\n");
printf("\n A A\n");
printf(" B B\n");
printf(" C C\n");
printf(" D D\n");
printf(" E E\n");
printf(" F F\n");
printf(" G G\n");
printf(" H H\n");
printf("\n 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1\n");
printf(" - 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6\n");
do {
/* Select next microphone */
if (++CurMic > 15) {
CurMic = 0;
/* Select next mic */
outportb(address+0x00,Xlat[CurMic]);
if (++SumCnt>=8) {
SumCnt=0;
gotoxy(2,22);
gettime(&t);
printf("%2d:%02d:%02d ", t.ti_hour, t.ti_min, t.ti_sec);
}
gotoxy(8, SumCnt+10);
printf("%x %x %x %x %x %x %x %x %x %x %x %x %x %x %x %x "
       "%x %x %x %x %x %x %x %x %x %x %x %x %x %x %x %x",
       SumA[0][SumCnt],SumA[1][SumCnt],SumA[2][SumCnt],SumA[3][SumCnt],
       SumA[4][SumCnt],SumA[5][SumCnt],SumA[6][SumCnt],SumA[7][SumCnt],
       SumA[8][SumCnt],SumA[9][SumCnt],SumA[10][SumCnt],SumA[11][SumCnt],
       SumA[12][SumCnt],SumA[13][SumCnt],SumA[14][SumCnt],SumA[15][SumCnt],
       SumB[0][SumCnt],SumB[1][SumCnt],SumB[2][SumCnt],SumB[3][SumCnt],
       SumB[4][SumCnt],SumB[5][SumCnt],SumB[6][SumCnt],SumB[7][SumCnt],
       SumB[8][SumCnt],SumB[9][SumCnt],SumB[10][SumCnt],SumB[11][SumCnt],
       SumB[12][SumCnt],SumB[13][SumCnt],SumB[14][SumCnt],SumB[15][SumCnt]);
/* Clear this row of the signaling counters before the next pass */
for (i=0;i<16;i++) {
    SumA[i][SumCnt]=0;
    SumB[i][SumCnt]=0;
}
gettime(&t);
fprintf(outfile,"\n%2d:%02d:%02d ", t.ti_hour, t.ti_min, t.ti_sec);
/* Every 30 minutes, close and reopen the file so data are flushed to disk */
if (t.ti_min>=NextUpdate){
    NextUpdate=t.ti_min+30;
    if (NextUpdate>=60){
        NextUpdate-=60;
    }
    fclose(outfile);
    outfile = fopen(filename, "a");
}
} else
/* Select next mic */
outportb(address+0x00,Xlat[CurMic]);
/* Wait for next clock edge */
outportb(address+0x13,0xe8); /* get status of counter A2 */
Sample = inportb(address+0x12) & 0x80;
do {
    outportb(address+0x13,0xe8); /* get status of counter A2 */
    status = inportb(address+0x12) & 0x80;
} while (status==Sample);
/* Read Microphone */
Sample = inportb(address+0x01);
if (Sample&0x80) SumA[CurMic][1] += 1;
if (Sample&0x40) SumB[CurMic][1] += 1;
if (Sample&0x20) SumA[CurMic][3] += 1;
if (Sample&0x10) SumB[CurMic][3] += 1;
if (Sample&0x08) SumA[CurMic][5] += 1;
if (Sample&0x04) SumB[CurMic][5] += 1;
if (Sample&0x02) SumA[CurMic][7] += 1;
if (Sample&0x01) SumB[CurMic][7] += 1;
Sample = inportb(address+0x02);
if (Sample&0x80) SumA[CurMic][0] += 1;
if (Sample&0x40) SumB[CurMic][0] += 1;
if (Sample&0x20) SumA[CurMic][2] += 1;
if (Sample&0x10) SumB[CurMic][2] += 1;
if (Sample&0x08) SumA[CurMic][4] += 1;
if (Sample&0x04) SumB[CurMic][4] += 1;
if (Sample&0x02) SumA[CurMic][6] += 1;
if (Sample&0x01) SumB[CurMic][6] += 1;
} while (kbhit()==0 || CurMic<15);
gettime(&t);
getdate(&d);
fprintf(outfile, "\n\nEnded %2d:%02d:%02d %d/%d/%d\n\n",
t.ti_hour, t.ti_min, t.ti_sec,d.da_mon,d.da_day,d.da_year);
printf("\n\nEnded %2d:%02d:%02d %d/%d/%d\n\n",
t.ti_hour, t.ti_min, t.ti_sec,d.da_mon,d.da_day,d.da_year);
fclose(outfile);
} /* End main program */
APPENDIX II
Summary software, written to run in MATLAB 6.1. This software takes the large data matrix produced by the EAR and summarizes it into two distinct files. The SS.mat file contains all the summary statistics; the min.mat file contains the overall summary of the number of seconds each individual signaled each minute.
% Cricket_Summary_Statistics.m
% To be used with files that are from 6pm to 10am (18:00 to 10:00)
% File should have no column headings or time headings.
% Before running, replace all a's (representing a 10) with 9's and save as a .csv file
clear; % Clears out all previously used variables
load EAR_Data_Matrix.txt;
Smtrx= EAR_Data_Matrix;
% DELETE TIME COLUMN AND GET THE SIZE OF THE REST OF THE FILE
Smtrx(:,1) = []; % [] deletes the time column
Time_removed=clock
SizeSmtrx=size(Smtrx);
LgthSmtrx=SizeSmtrx(:,1); % Determines number of rows in Smtrx
WdthSmtrx=SizeSmtrx(:,2); % Determines number of columns in Smtrx
Sized=clock
i = find(Smtrx == 1); % Replaces all values of one with zero
Smtrx(i) = 0;
% Calculate Total Signaling Time (TSCmin)
for i=1:WdthSmtrx;
calltime(:,i)=nnz(Smtrx(:,i));
end;
TSCsec=calltime';
CMPfctr=round(LgthSmtrx/(60*16)); % Avg number of samples in a minute of data
TSCmin=TSCsec/CMPfctr;
% Calculate Median Values
medSmtrx=Smtrx;
WndwSize=5;
for frlp=1:WdthSmtrx
    if TSCmin(frlp,:) > 1
        for i=1:WdthSmtrx; % Sets up a loop
            medSmtrx(:,frlp) = medfilt1(Smtrx(:,frlp),WndwSize,LgthSmtrx); % Runs a sliding window taking median values
        end; % Ends the loop
    end
end
% Calculate Microphone Score (MicScore) and Sound Pressure Level (SPL)
SUMSarray = sum(medSmtrx); % Sum of all microphone scores for each individual
SUMSarray = SUMSarray';
MicScore=SUMSarray./TSCsec; % Average microphone score for each individual
SPL=(MicScore+12.118)/0.232; % Equation for Gryllus texensis SPL at 4 cm distance
% Calculate Start time (TCPstart), Mean time (TCPmean), Stop time (TCPstop)
for i=1:WdthSmtrx;
if sum(medSmtrx(:,i))==0;
TCPstart(i)=0;
TCPmean(i)=0;
TCPstop(i)=0;
else
[R,C,VA] = find(medSmtrx(:,i) > 0);
TCPstart(i)=min(R);
TCPmean(i)=mean(R);
TCPstop(i)=max(R);
end
end
numtime=16/LgthSmtrx;
TCPstart=TCPstart'*numtime;
TCPmean=TCPmean'*numtime;
TCPstop=TCPstop'*numtime;
% Compress the second x second file to a minute x minute file
OneSmtrx=medSmtrx;
i = find(OneSmtrx > 0);
OneSmtrx(i)=1; % OneSmtrx is a matrix with 0 for no calling and 1 for calling
CUMSmtrx=cumsum(OneSmtrx); % Cumulative sum of medSmtrx
CMPfctr=round(LgthSmtrx/(60*16)); % Avg number of samples in a minute of data
Z=fix(LgthSmtrx/CMPfctr);
for i=1:Z;
MinSmtrx(i,:)=CUMSmtrx(CMPfctr*i,:);
end
MinSmtrx=diff(MinSmtrx); % Compresses the data from second data to minute data
% Calculate the number of bouts, breaks, avg break length, avg bout length
SizeMinSmtrx=size(MinSmtrx); % Calculates the size of MinSmtrx
LgthMinSmtrx=SizeMinSmtrx(:,1); % Determines length of MinSmtrx
WdthMinSmtrx=SizeMinSmtrx(:,2); % Determines width of MinSmtrx
NumBouts=zeros(1,WdthMinSmtrx);
for i=1:WdthMinSmtrx;
    for j=2:LgthMinSmtrx;
        if MinSmtrx(j-1,i)==0 && MinSmtrx(j,i)>0
            NumBouts(1,i)=NumBouts(1,i)+1; % Counting the number of bouts
        end
    end
end
NumBreaks=NumBouts;
for i=1:WdthMinSmtrx;
    if NumBreaks(1,i)>0
        NumBreaks(1,i)=NumBreaks(1,i)-1; % Number of breaks is one fewer than bouts
    end
end
NumBouts=NumBouts';
NumBreaks=NumBreaks';
AvgBoutLength=TSCmin./NumBouts; % Calculating average bout length
AvgBreakLength=((TCPstop-TCPstart)*60-TSCmin)./NumBreaks; % Calculating average break length
% SAVE THE RESULTS (TSC, MicScore, START, MEAN, STOP)
RSLTS=[TSCmin MicScore SPL NumBouts AvgBoutLength NumBreaks AvgBreakLength TCPstart TCPmean TCPstop];
savefile = 'SS.mat';
save(savefile,'RSLTS','-ASCII')
savefile = 'min.mat';
save(savefile,'MinSmtrx','-ASCII')