Description
Summary: This dataset contains only the acoustic events that were detected and analysed during the voyage. During periods when few sounds were present, all of the sounds were likely to be included in the acoustic event log. During periods when many sounds were detected, not all sounds could be included in the acoustic event log because of the limited time and attention available for processing files.

During the 2013 Antarctic Blue Whale Voyage, acousticians noted all whale calls and other acoustic events detected during real-time monitoring in a Sonobuoy Event Log. The acoustic tracking software, difarBSM, stored processed bearings from acoustic events and cross bearings in tab-delimited text files. Each event was assigned a classification by the acoustician, and events for each classification were stored in separate text files. The first row in each file contains the column headers, and the content of each column is as follows:

buoyID: The number of the sonobuoy on which this event was detected. This can be used as a foreign key to link to the sonobuoy deployment log.
timeStamp_matlabDatenum: Date and time (UTC) at the start of the event, represented as a Matlab datenum (i.e. the number of days since Jan 0, 0000).
Latitude: Latitude of the sonobuoy deployment in decimal degrees. Southern hemisphere latitudes should be negative.
Longitude: Longitude of the sonobuoy deployment in decimal degrees. Western hemisphere longitudes should be negative.
Altitude: Depth of the sonobuoy deployment in metres; for DIFAR sonobuoys this is 30, 120 or 300.
magneticVariation_degrees: The estimated magnetic variation at the sonobuoy, in degrees, at the time of the event. Positive declination is East, negative is West. At the start of a recording this is entered from a chart; as the recording progresses, it should be updated by measuring the bearing to the vessel.
bearing_degreesMagnetic: Magnetic bearing in degrees from the sonobuoy to the acoustic event. Magnetic bearings were selected by the acoustician by choosing a single point on the bearing-frequency surface (also known as the DIFARGram) produced by the analysis software difarBSM.
frequency_Hz: The frequency in Hz of the magnetic bearing that the acoustician selected from the bearing-frequency surface (DIFARGram).
logDifarPower: The base-10 logarithm of the height of the selected point on the DIFARGram.
receiveLevel_dB: An estimate of the RMS receive level (dB SPL re 1 micro Pa) of the event. Received levels were estimated by applying a correction for the shaped sonobuoy frequency response and the receiver's frequency response, and were calculated only over the frequency band specified for each classification (see below).
soundType: The classification assigned to the event by the acoustician. Analysis parameters for each classification are included in the csv file classificationParameters.txt.
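As an illustration only (the filename below is hypothetical; actual file names are given in the outFile column of classificationParameters.txt), a tab-delimited event file could be read and its Matlab datenum timestamps converted to conventional UTC date-times with a short Python sketch such as the following. The conversion relies on the fact that Matlab datenum 719529 corresponds to 1970-01-01 (the Unix epoch):

    import pandas as pd

    # Hypothetical filename for one classification's event file.
    events = pd.read_csv("AntarcticBlueWhale_Zcall.txt", sep="\t")

    # Convert Matlab datenum (days since Jan 0, 0000) to a UTC timestamp.
    # 719529 is the Matlab datenum of the Unix epoch (1970-01-01).
    events["timestamp_utc"] = pd.to_datetime(
        events["timeStamp_matlabDatenum"] - 719529, unit="D", utc=True
    )

    print(events[["buoyID", "timestamp_utc", "bearing_degreesMagnetic"]].head())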
The columns of this file are as follows:

outFile: The name of the tab-separated text file that contains events for this classification.
analysisType: A super-class describing the broad category of analysis parameters.
soundType: The name of the classification.
sampleRate: When events are processed, they are downsampled to this sample rate (in Hz) to make directional processing more efficient and precise.
FFTLength: The duration (in seconds) used to determine the size of the FFT during DIFAR beamforming (i.e. creation of the DIFARGram).
numFreqs: Not used during this voyage.
targetFreq: The midpoint of the frequency axis (in Hz) displayed in the DIFARGram.
bandwidth: The half-bandwidth (Hz) of the frequency axis of the DIFARGram. The frequency axis of the DIFARGram starts at targetFreq - bandwidth and ends at targetFreq + bandwidth.
frequencyBands_1: The lower frequency (Hz) used for determining RMS received level.
frequencyBands_2: The upper frequency (Hz) used for determining RMS received level.
preDetect: Duration of audio (in seconds) loaded before the start of the event. The processed audio includes the time bounds of the event marked by the acoustician plus preDetect seconds before the start of the event.
postDetect: Duration of audio (in seconds) loaded after the end of the event. The processed audio includes the time bounds of the event marked by the acoustician plus postDetect seconds after the end of the event.

Passive acoustics involves the use of underwater listening devices to detect and locate calling animals. Antarctic blue whales frequently make extremely loud, repeated calls that can be heard over a greater range than the whales can be seen by a visual observer on a ship. The main purposes of passive acoustics on this voyage were to:
- detect and locate calling blue whales
- provide baseline information about Antarctic blue whale vocalisations

After calibration of the sonobuoy, the acoustician on duty monitored and analysed incoming vocalisations to obtain bearings from the sonobuoy to all detected whale vocalisations.
- Bearings from multiple sonobuoys were used to triangulate the location of the whales (latitude and longitude, WGS84); see the sketch below.
- Acousticians noted all whale calls in an Acoustic Event Log, including any additional information that might be useful for tracking and 'targeting' (e.g. sightings of whales, sonobuoys or other vessels; acoustic, electrical, or radio noise; gear failure; etc.).
The acoustic tracking software stored all acquired bearings and cross bearings in text files.
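The cross-bearing triangulation can be illustrated with a simplified sketch. The Python code below is not the difarBSM algorithm; it is a minimal flat-earth approximation that takes two sonobuoy positions and two true bearings (magnetic bearing plus the easterly magnetic variation from magneticVariation_degrees) and intersects the two bearing lines in a local east/north plane. The function name and example values are purely illustrative:

    import math

    def cross_bearing_fix(lat1, lon1, true_brg1, lat2, lon2, true_brg2):
        # True bearings are in degrees clockwise from north.
        # Returns (latitude, longitude) of the fix, or None if the
        # bearings are nearly parallel. Flat-earth approximation only.
        ref_lat = math.radians((lat1 + lat2) / 2.0)
        m_per_deg_lat = 111320.0                      # approximate metres per degree of latitude
        m_per_deg_lon = 111320.0 * math.cos(ref_lat)  # shrinks towards the poles

        # Sonobuoy positions in local metres (x = east, y = north).
        x1, y1 = lon1 * m_per_deg_lon, lat1 * m_per_deg_lat
        x2, y2 = lon2 * m_per_deg_lon, lat2 * m_per_deg_lat

        # Unit direction vectors (east, north) of each bearing line.
        e1, n1 = math.sin(math.radians(true_brg1)), math.cos(math.radians(true_brg1))
        e2, n2 = math.sin(math.radians(true_brg2)), math.cos(math.radians(true_brg2))

        denom = e1 * n2 - n1 * e2
        if abs(denom) < 1e-9:
            return None
        t = ((x2 - x1) * n2 - (y2 - y1) * e2) / denom
        x, y = x1 + t * e1, y1 + t * n1
        return y / m_per_deg_lat, x / m_per_deg_lon

    # Example with made-up positions: two buoys about 10 km apart whose
    # bearings converge to the north of the line between them.
    print(cross_bearing_fix(-65.00, 140.00, 45.0, -65.00, 140.20, 315.0))

In practice difarBSM handled the geodesy and bearing weighting itself; this sketch only shows the geometric idea behind a cross-bearing fix.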