WO1997010586A1 - System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions - Google Patents

System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions

Info

Publication number
WO1997010586A1
WO1997010586A1 (PCT/US1996/014665)
Authority
WO
WIPO (PCT)
Prior art keywords
noise
speech
filter circuit
estimate
energy
Prior art date
Application number
PCT/US1996/014665
Other languages
French (fr)
Inventor
Torbjörn W. SÖLVE
Original Assignee
Ericsson Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ericsson Inc. filed Critical Ericsson Inc.
Priority to EP96931552A priority Critical patent/EP0852052B1/en
Priority to EE9800068A priority patent/EE03456B1/en
Priority to BR9610290A priority patent/BR9610290A/en
Priority to DE69613380T priority patent/DE69613380D1/en
Priority to AU70784/96A priority patent/AU724111B2/en
Priority to JP9512112A priority patent/JPH11514453A/en
Priority to PL96325532A priority patent/PL185513B1/en
Publication of WO1997010586A1 publication Critical patent/WO1997010586A1/en
Priority to NO981074A priority patent/NO981074L/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L2021/02168 Noise filtering characterised by the method used for estimating noise, the estimation exclusively taking place during speech pauses
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L2025/783 Detection of presence or absence of voice signals based on threshold decision
    • G10L2025/786 Adaptive threshold
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain

Definitions

  • the present invention is related to U.S. Patent Application Serial No. 08/128,639, entitled “Adaptive Noise Reduction for Speech Signals” filed on September 29, 1993; and to U.S. Patent Application No. 07/967,027 entitled “Multi-Mode Signal Processing” filed on October 27, 1992, both of which are herein incorporated by reference.
  • the present invention relates to noise reduction systems, and in particular, to an adaptive speech intelligibility enhancement system for use in portable digital radio telephones.
  • PCNs personal communication networks
  • Digital communication systems take advantage of powerful digital signal processing techniques.
  • Digital signal processing refers generally to mathematical and other manipulation of digitized signals. For example, after converting (digitizing) an analog signal into digital form, that digital signal may be filtered, amplified, and attenuated using simple mathematical routines in a digital signal processor (DSP) .
  • DSPs are manufactured as high speed integrated circuits so that data processing operations can be performed essentially in real time. DSPs may also be used to reduce the bit transmission rate of digitized speech which translates into reduced spectral occupancy of the transmitted radio signals and increased system capacity.
  • a serial bit rate of 112 Kbits/sec is produced.
  • voice coding techniques can be used to compress the serial bit rate from 112 Kbits/sec to 7.95 Kbits/sec to achieve a 14:1 reduction in bit transmission rate. Reduced transmission rates translate into more available bandwidth.
  • VSELP vector sum excited linear predictive coding
  • the distortion is caused in large part by the environment in which the mobile telephones are used.
  • Mobile telephones are typically used in a vehicle's interior where there is often ambient noise produced by the vehicle's engine and surrounding vehicular traffic.
  • This ambient noise in the vehicle's interior is typically concentrated in the low audible frequency range and the magnitude of the noise can vary due to such factors as the speed and acceleration of the vehicle and the extent of the surrounding vehicular traffic.
  • This type of low frequency noise also has the tendency of significantly decreasing the intelligibility of the speech coming from the speaking person in the car environment.
  • the decrease in speech intelligibility caused by low frequency noise can be particularly significant in communication systems deploying a VSELP vocoder, but can also occur in communication systems that do not include a VSELP vocoder.
  • the influence of the ambient noise on the mobile telephone can also be affected by the manner in which the mobile telephone is used.
  • the mobile telephone may be used in a hands-free mode where the telephone user talks on the telephone while the mobile telephone is in a cradle. This frees the telephone user's hands to drive but also increases the distance that the telephone user's audible words must travel before reaching the microphone input of the mobile telephone. This increased distance between the user and the mobile telephone, along with the varying ambient noise, can result in noise being a significant portion of the total power spectral energy of the audio signal inputted into the mobile telephone.
  • the present invention provides an adaptive noise reduction system that reduces the undesirable contributions of encoded background noise while both minimizing any negative impact on the quality of the encoded speech and minimizing any increased drain on digital signal processor resources.
  • the method and system of the present invention increases the intelligibility of the speech in a digitized audio signal by passing frames of the digitized audio signal through a filter circuit.
  • the filter circuit functions as an adjustable, high-pass filter which filters a portion of the digitized signal in a low audible frequency range and passes the portion of the digitized signal falling in higher frequency ranges.
  • the filter circuit filters a large segment of the noise in the digitized audio signal while only filtering less important segments of the speech. This results in a relatively larger portion of the noise energy being removed compared to the portion of the speech energy removed.
  • a filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise estimate and/or a spectral profile result corresponding to the noise in the audio signal.
  • the noise estimate and/or the spectral profile result are adjusted on a frame-by-frame basis for the digital signal and as a function of speech detection. If speech is not detected, the noise estimate and/or spectral profile result is updated for the current frame. If speech is detected, the noise estimate and/or spectral profile result is left unadjusted.
  • the filter control circuit calculates noise estimates for the frames of the digitized audio signals. The noise estimates correspond to the amount of background noise in the frames of the digitized audio signals.
  • the filter control circuit uses the noise estimates to adjust the filter circuit to filter larger portions of the low frequency range of speech as the relative amount of background noise to speech in a low frequency range of speech increases.
  • when no background noise is present, no portion of the speech signal is filtered.
  • Larger portions of noise and speech information are extracted when there is a higher level of background noise. Because noise tends to be concentrated in a low frequency range and only a relatively small portion of the intelligibility content of speech falls within this low frequency range, the overall intelligibility of the audio signal can be increased by increasing the portion of low frequency energy being filtered as the noise estimates increase.
  • a modified filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise profile of the noise estimate over a selected frequency range in the audio signal.
  • the filter control circuit includes a spectral analyzer for determining a noise profile estimate as a function of the detection of speech. A noise profile estimate is determined for a current frame and compared to a reference noise profile. Based on this comparison, the filter circuit is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
  • the adaptive noise reduction system may be advantageously applied to telecommunication systems in which portable/mobile radio transceivers communicate over RF channels with each other or with fixed telephone line subscribers.
  • Each transceiver includes an antenna, a receiver for converting radio signals received over an RF channel via the antenna into analog audio signals, and a transmitter.
  • the transmitter includes a coder-decoder (codec) for digitizing analog audio signals to be transmitted into frames of digitized speech information, the speech information including both speech and background noise.
  • codec coder-decoder
  • a digital signal processor processes a current frame based on an estimate of the background noise and the detection of speech in the current frame to minimize background noise.
  • a modulator modulates an RF carrier with the processed frame of digitized speech information for subsequent transmission via the antenna.
  • FIGURE 1 is a general functional block diagram of the present invention
  • FIGURE 2 illustrates the frame and slot structure of the U.S. digital standard IS-54 for cellular radio communications;
  • FIGURE 3 is a block diagram of a first preferred embodiment of the present invention implemented using a digital signal processor
  • FIGURE 4 is a functional block diagram of an exemplary embodiment of the present invention in one of plural portable radio transceivers in a telecommunication system
  • FIGURES 5A and 5B are a flow chart which illustrates functions/operations performed by the digital signal processor in implementing the first preferred embodiment of the present invention
  • FIGURE 6A is a graph illustrating a first example of an attenuation vs. frequency characteristic of a filter circuit according to the first preferred embodiment of the present invention
  • FIGURE 6B is a graph illustrating a second example of an attenuation vs. frequency characteristic of a filter circuit according to the first preferred embodiment of the present invention.
  • FIGURE 7 is an example look-up table accessible by the filter control circuit of the first preferred embodiment of the present invention
  • FIGURES 8A and 8B are graphs illustrating the amplitude vs. frequency characteristics of example input audio signals
  • FIGURES 9A and 9B are graphs illustrating the amplitude vs. frequency characteristics of the input audio signals of Figures 8A and 8B, respectively, after having been filtered by the filter circuit of the present invention;
  • FIGURE 10 is a block diagram of a second preferred embodiment of the present invention implemented using a digital signal processor
  • FIGURE 11 is a flow chart, corresponding to the flow chart of Figure 5B, which illustrates functions/operations performed by the digital signal processor in implementing the second preferred embodiment of the present invention.
  • FIGURE 12 is an example look-up table accessible by the filter control circuit of the second preferred embodiment of the present invention.
  • FIG. 1 is a general block diagram of the adaptive noise reduction system 100 according to the present invention.
  • Adaptive noise reduction system 100 includes a filter control circuit 105 connected to a filter circuit 115.
  • Filter control circuit 105 generates a filter control signal for a current frame of a digitized audio signal.
  • the filter control signal is outputted to the filter circuit 115, and the filter circuit 115 adjusts in response to the filter control signal to exhibit a high-pass frequency response curve selected based on the filter control signal.
  • the adjusted filter circuit 115 filters the current frame of the digitized audio signal.
  • the filtered signal is processed by a voice coder 120 to produce a coded signal representing the digitized audio signal.
  • Figure 2 illustrates the time division multiple access (TDMA) frame structure employed by the IS-54 standard for digital cellular telecommunications.
  • a "frame” is a twenty millisecond time period which includes one transmit block TX, one receive block RX, and a signal strength measurement block used for mobile-assisted hand-off (MAHO) .
  • the two consecutive frames shown in Figure 2 are transmitted in a forty millisecond time period. Digitized speech and background noise information is processed and filtered on a frame-by- frame basis as further described below.
  • the functions of the filter control circuit 105, filter circuit 115, and voice coder 120 shown in Figure 1 are implemented with a high speed digital signal processor.
  • One suitable digital signal processor is the TMS320C53 DSP available from Texas Instruments.
  • the TMS320C53 DSP includes on a single integrated chip a sixteen-bit microprocessor, on-chip RAM for storing data such as speech frames to be processed, ROM for storing various data processing algorithms including the VSELP speech compression algorithm, and other algorithms to be described below for implementing the functions performed by the filter control circuit 105 and the filter circuit 115.
  • a first embodiment of the present invention is shown in Figure 3.
  • the filter circuit 115 is adjusted as a function of background noise estimates determined by the filter control circuit.
  • Frames of pulse code modulated (PCM) audio information are sequentially stored in the DSP's on- chip RAM. The audio information could be digitized using other digitization techniques.
  • PCM pulse code modulated
  • Each PCM frame is retrieved from a DSP on-chip RAM and processed by frame energy estimator 210, and stored temporarily in temporary frame store 220.
  • the energy of the current frame determined by frame energy estimator 210 is provided to noise estimator 230 and speech detector 240 function blocks.
  • Speech detector 240 indicates that speech is present in the current frame when the frame energy estimate exceeds the sum of the previous noise estimate and a speech threshold. If the speech detector 240 determines that no speech is present, the digital signal processor 200 calculates an updated noise estimate as a function of the previous noise estimate and the current frame energy (block 230) .
  • the updated noise estimate is outputted to a filter selector 235.
  • Filter selector 235 generates a filter control signal based on the noise estimate.
  • the filter selector 235 accesses a look-up table in generating the filter control signal.
  • the look-up table includes a series of filter control values that are each matched with a noise estimate or range of noise estimates.
  • a filter control value from a look-up table is selected based on the updated noise estimate and this filter control value is represented by a filter control signal outputted to a filter bank 265 for the filter circuit 115.
  • a hangover time of N frames is set upon the selection of a new filter.
  • a new filter can only be selected every N frames, where N is an integer greater than one and preferably greater than 10.
  • the filter circuit 115 is adjusted in response to the filter control signal to exhibit a high-pass frequency response curve that corresponds with the inputted filter control signal and noise estimate.
  • Various different types of filter circuits well known in the prior art can be utilized to exhibit selected frequency response curves in response to the filter control signal.
  • These prior art filters include IIR filters such as Butterworth, Chebyshev (Tschebyscheff) or elliptic filters. IIR filters are preferable to FIR filters, which also can be used, due to lower processing requirements.
  • the filtered signal is processed by a voice coder 120 which is used to compress the bit rate of the filtered signal.
  • the voice coder 120 uses vector sum excited linear predictive coding (VSELP) to code the audio signal.
  • VSELP vector sum excited linear predictive coding
  • CELP code excited linear predictive
  • RPE-LTP residual pulse excited linear predictive
  • IMBE improved multiband excited
  • the digital signal processor 200 described in conjunction with Figure 3 can be used, for example, in the transceiver of a digital portable/mobile radiotelephone used in a radio telecommunications system.
  • Figure 4 illustrates one such digital radio transceiver which may be used in a cellular telecommunications network. Although Figure 4 generally describes the basic function blocks included in the radio transceiver, a more detailed description of this transceiver may be obtained from the previously referenced U.S. Patent Application Serial No. 07/967,027 entitled "Multi-Mode Signal Processing" which is incorporated herein by reference.
  • Audio signals including speech and background noise are input through a microphone 400 to a coder-decoder (codec) 402 which preferably is an application specific integrated circuit (ASIC).
  • codec coder-decoder
  • ASIC application specific integrated circuit
  • the band limited audio signals detected at microphone 400 are sampled by the codec 402 at a rate of 8,000 samples per second and blocked into frames. Accordingly, each twenty millisecond frame includes 160 speech samples. These samples are quantized and converted into a coded digital format such as 14-bit linear PCM.
  • the transmit DSP 200 performs channel encoding functions, the frame energy estimation, noise estimation, speech detection, FFT, filter functions and digital speech coding/compression in accordance with the VSELP algorithm, as described above in conjunction with Figure 3.
  • a supervisory microprocessor 432 controls the overall operation of all of the components in the transceiver shown in Figure 4.
  • the filtered PCM data stream generated by transmit DSP 200 is provided for quadrature modulation and transmission.
  • an ASIC gate array 404 generates in-phase (I) and quadrature (Q) channels of information based upon the filtered PCM data stream from DSP 200.
  • the I and Q bit streams are processed by matched, low pass filters 406 and 408 and passed onto IQ mixers in balanced modulator 410.
  • a reference oscillator 412 and a multiplier 414 provide a transmit intermediate frequency (IF) .
  • the I signal is mixed with in-phase IF, and the Q signal is mixed with quadrature IF (i.e., the in-phase IF delayed by 90 degrees by phase shifter 416) .
  • the mixed I and Q signals are summed, converted "up" to an RF channel frequency selected by channel synthesizer 430, and transmitted via duplexer 420 and antenna 422 over the selected radio frequency channel.
  • signals received via antenna 422 and duplexer 420 are down converted from the selected receive channel frequency in a mixer 424 to a first IF frequency using a local oscillator signal synthesized by channel synthesizer 430 based on the output of reference oscillator 428.
  • the output of the first IF mixer 424 is filtered and down converted in frequency to a second IF frequency based on another output from channel synthesizer 430 and demodulator
  • a receive gate array 434 then converts the second IF signal into a series of phase samples and a series of frequency samples.
  • the receive DSP 436 performs demodulation, filtering, gain/attenuation, channel decoding, and speech expansion on the received signals.
  • the processed speech data are then sent to codec 402 and converted to baseband audio signals for driving loudspeaker 438.
  • Frame energy estimator 210 determines the energy in each frame of audio signals.
  • Frame energy estimator 210 determines the energy of the current frame by calculating the sum of the squared values of each PCM sample in the frame (step 505). Since there are 160 samples per twenty millisecond frame for an 8000 samples per second sampling rate, 160 squared PCM samples are summed. Expressed mathematically, the frame energy estimate is determined according to equation 1 below: frame energy estimate = x(1)² + x(2)² + ... + x(160)² (equation 1), where x(i) is the i-th PCM sample of the frame.
  • the frame energy value calculated for the current frame is stored in the on-chip RAM 202 of DSP 200 (step 510) .
  • the functions of speech detector 240 include fetching a noise estimate previously determined by noise estimator 230 from the on-chip RAM of DSP 200 (step 515) .
  • Decision block 520 anticipates the start-up situation in which no previous noise estimate has yet been stored and assigns a noise estimate in step 525.
  • an arbitrarily high value, e.g., 20 dB above normal speech levels, is assigned as the noise estimate in order to force an update of the noise estimate value as will be described below.
  • the frame energy determined by frame energy estimator 210 is retrieved from the on-chip RAM 202 of DSP 200 (block 530) .
  • a decision is made in block 535 as to whether the frame energy estimate exceeds the sum of the retrieved noise estimate plus a predetermined speech threshold value, as shown in equation 2 below: frame energy estimate > (noise estimate + speech threshold) (equation 2)
  • the speech threshold value may be a fixed value determined empirically to be larger than short term energy variations of typical background noise and may, for example, be set to 9 dB. In addition, the speech threshold value may be adaptively modified to reflect changing speech conditions such as when the speaker enters a noisier or quieter environment. If the frame energy estimate exceeds the sum in equation 2, a flag is set in block 570 that speech exists. If speech detector 240 detects that speech exists, then noise estimator 230 is bypassed and the noise estimate calculated for the previous frame in the digitized audio is retrieved and used as the current noise estimate. Conversely, if the frame energy estimate is less than the sum in equation 2, the speech flag is reset in block 540. Other systems for detecting speech in a current frame can also be used.
  • the European Telecommunications Standards Institute has developed a standard for voice activity detection (VAD) in the Global System for Mobile communications (GSM) system, which is described in ETSI Reference RE/SMG-020632P and is incorporated by reference.
  • VAD voice activity detection
  • GSM Global System for Mobile communications
  • RE/SMG-020632P which is incorporated by reference.
  • This standard could be used for speech detection in the present invention and is incorporated by reference.
  • the noise estimation update routine of noise estimator 230 is executed.
  • the noise estimate is a running average of the frame energy during periods of no speech. As described above, if the initial start-up noise estimate is chosen sufficiently high, speech is not detected, and the speech flag will be reset thereby forcing an update of the noise estimate.
  • a difference/error delta (Δ) is determined in block 545 between the frame noise energy generated by frame energy estimator 210 and a noise estimate previously calculated by noise estimator 230 in accordance with the following equation:
  • Δ = current frame energy - previous noise estimate (equation 3)
  • a determination is made in decision block 550 whether Δ exceeds zero. If Δ is negative, as occurs for high values of the noise estimate, then the noise estimate is recalculated in block 560 in accordance with the following equation: noise estimate = previous noise estimate + Δ/2 (equation 4). Since Δ is negative, this results in a downward correction of the noise estimate.
  • the relatively large step size of ⁇ /2 is chosen to rapidly correct for decreasing noise levels.
  • if Δ is positive, the noise estimate is recalculated in accordance with the following equation: noise estimate = previous noise estimate + Δ/256 (equation 5). Since Δ is positive, the noise estimate must be increased. However, a smaller step size of Δ/256 (as compared to Δ/2) is chosen to gradually increase the noise estimate and provide substantial immunity to transient noise.
  • the noise estimate calculated for the current frame is outputted to the filter selector 235.
  • filter selector 235 accesses a look-up table and uses the current noise estimate to select a filter control value (Step 572) .
  • the filter circuit 115 (in Step 574) is then adjusted as a function of the selected filter control value to exhibit a frequency response curve intended to increase the amount of noise filtered as the noise estimate and background noise increases.
  • the PCM samples stored in DSP RAM are then passed through the adjusted filter circuit 265 to filter the PCM samples in order to remove noise (Step 576) .
  • the filtered PCM samples are then processed by voice coder 120 (step 578) , and the coded samples are then outputted to RF transmit circuits (Step 580) .
  • Figures 6A and 6B show examples of how the filter circuit 115 adjusts to exhibit different frequency response curves F1-F4 for different filter control signals inputted to the filter circuit 115.
  • the filter circuit 115 can be selected to exhibit a series of different frequency response curves with the frequency response curves F1-F4 having cut-off frequencies F1c-F4c, respectively.
  • the cut-off frequencies of filter circuit 115 may range in the preferred embodiment from 300 Hz to 800 Hz.
  • as the noise estimates increase, the filter circuit 115 is designed to exhibit frequency response curves having higher cut-off frequencies. The higher cut-off frequencies result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
  • the filter circuit 115 can be selected to exhibit a series of different frequency response curves F1-F4 with each frequency response curve having a different slope and the same cut-off frequencies.
  • the cut-off frequency for frequency response curves F1-F4 is in the above- mentioned range.
  • as the noise estimates increase, the filter circuit 115 is adjusted to exhibit frequency response curves having steeper slopes. The steeper slopes result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
  • the filter circuit 115 filters the current frames as a function of the noise estimate calculated for the current frame.
  • the current frame is filtered so that the noise is reduced and a major portion of the speech is passed.
  • the major portion of speech which is passed unfiltered provides for recognizable speech output with only a minimal reduction in the quality of the speech signal.
  • a combination of different cutoff frequencies and different slopes could be used for adaptively extracting selected portions of frame energy falling within a low frequency range of speech.
  • Figure 7 depicts an example look-up table accessed by filter selector 235 in order to select one of the filter response curves F1-F4 for filter circuit 115.
  • the look-up table includes a series of potential noise estimates N1-Nn and filter control values F1-Fn that correspond with potential response curves that are exhibitable by the filter circuit 115.
  • Noise estimates N1-Nn can each represent a range of noise estimates and are each matched with a particular filter control value F1-F4.
  • the filter control circuit 105 generates a filter control signal by calculating a noise estimate and retrieving from the look-up table the filter control value associated therewith.
  • Figures 8A & B and 9A & B show how the audio signal for two frames are each adaptively filtered to provide an improved audio signal outputted to the RF transmitter.
  • Figures 8A and 8B show a first frame and a second frame of an audio signal containing speech components si and s2 and noise components nl and n2, respectively. As shown, the noise energy nl and n2 in both frames is concentrated in a low audible frequency range, while the speech energy si and s2 is concentrated in a higher audible frequency range.
  • Figure 9A shows the noise signal nl and speech signal ⁇ l for the first frame after filtering.
  • Figure 9B shows the noise signal n2 and speech signal s2 for the second frame after filtering.
  • the adaptive audio noise reduction system 100 is designed to account for the difference in noise level between the first frame and the second frame by adjusting the filter control circuit 105 based on a calculated noise estimate for the current frame. For example, a noise estimate N1 and a spectral profile S1 are calculated by filter control circuit 105 and a filter control value of F1 is selected for the first frame.
  • the filter circuit 115 is adjusted based on filter control value F1 and exhibits a frequency response curve F1 having a cut-off frequency F1c, as shown in Figure 6A. The first frame is passed through this adjusted filter circuit 115.
  • the filter circuit 115 is selected so that a large portion of the noise n1 and only a small portion of speech s1 falls below the cut-off frequency F1c of the frequency response curve F1. This results in noise n1 being effectively filtered and only a relatively insignificant portion of speech s1 being filtered.
  • the filtered audio signal of the first frame is shown in Figure 9A.
  • in the second frame, a higher background noise is present, and assuming speech is not detected, a higher noise estimate N2 is calculated by filter control circuit 105.
  • a higher corresponding filter control value F2 is determined for the second frame based on the higher noise estimate.
  • the filter circuit 115 is adjusted in response to the higher filter control value F2 to exhibit a frequency response curve having a higher cut-off frequency F2c, as shown in Figure 6A.
  • the subsequent frame of audio signal is passed through the adjusted filter circuit 115. Because the cut-off frequency F2c of the frequency response curve F2 is higher for the subsequent frame, a larger portion of both the noise n2 and speech s2 is filtered.
  • the portion of speech s2 filtered is still relatively insignificant to the intelligibility information contained in the frame, so that there is only a minimal effect on the speech.
  • the disadvantage of filtering a larger portion of the speech s2 is offset by the advantage of the increased removal of noise n2 from the second frame.
  • the filtered spectral portion of the speech does not significantly contribute to the intelligibility of the speech.
  • the filtered audio signal of the second frame is shown in Figure 9B.
  • a second preferred embodiment of adaptive noise reduction system 100 is shown in Figures 10-12.
  • the filter control circuit 105 adjusts the filter circuit 115 as a function of noise profile estimates. A noise profile estimate is calculated for each frame and is compared to a reference noise profile. Based on this comparison, the filter circuit 115 is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
  • the filter control circuit 105 includes a spectral analyzer 270, in addition to frame energy estimator 210, noise estimator 230, speech detector 240, and filter selector 235 which are described with respect to the first preferred embodiment.
  • the filter control circuit 105 determines noise estimates and detects speech for the received frames as described for the first embodiment and shown in flow charts 5A and 5B.
  • the spectral analyzer 270 updates the noise profile estimate and uses the noise profile estimate in adjusting the filter circuit 115.
  • Figure 11 shows the steps performed by spectral analyzer 270 incorporated into the overall process previously described in the flow charts of Figures 5A and 5B for the first preferred embodiment.
  • the spectral analyzer 270 first determines a noise profile for the current frame (step 600) .
  • the noise profile determined for the current frame includes energy calculations for different frequencies (i.e., frequency bins) within a selected low frequency range of speech for the current frame. In the preferred embodiment, the selected frequency range is approximately 300 to 800 hertz.
  • the noise profile of the current frame can be determined by processing the current frame using a Fast Fourier Transform (FFT) having N frequency bins. Processing digital signals using an FFT is well-known in the prior art and is advantageous in that very little processing power is required where the FFT is limited to a relatively small number of frequency bins such as 32. An FFT having N frequency bins produces energy calculations at N different frequencies.
  • FFT Fast Fourier Transform
  • the energy calculations for the frequency bins falling within the selected frequency range form the noise profile for the current frame.
  • the noise profile for the current frame is averaged with a noise profile estimate determined for the previous frame of the audio signal. Where no previous noise profile estimate is available, such as after initialization, a stored, initial noise profile estimate can be used.
  • each noise energy estimate ei corresponds to an average of the energy calculations at a particular frequency in the selected frequency range over a plurality of successive frames in which no speech was detected.
  • the filter circuit 115 is adjusted on a more gradual basis.
  • the noise profile estimate can be equated to the noise profile of the current frame.
  • the energy estimates ei of the noise profile estimate are then compared with a reference noise profile (step 604).
  • the reference energy thresholds eri can be determined empirically.
  • the noise energy estimates ei are successively compared to corresponding reference energy thresholds eri, from the highest frequency energy estimate e1 to the lowest frequency energy estimate en.
  • noise energy estimate e1 is first compared to reference noise threshold er1. If e1 is greater than reference noise threshold er1, then a comparison value c1 is selected and inputted into filter selector 235. If noise energy estimate e1 is less than reference noise threshold er1, then noise energy estimate e2 (which is a noise energy estimate taken at a lower frequency than e1) is compared to reference noise threshold er2. If noise energy estimate e2 is greater than reference noise threshold er2, then a comparison value c2 is selected and inputted to filter selector 235.
  • the filter selector 235 uses the determined comparison value ci to determine a filter control value.
  • the filter control value is selected from a look-up table such as that shown in Figure 12.
  • the look-up table includes a series of comparison values ci and corresponding filter control values Fi.
  • the filter circuit 115 is adjusted as a function of the selected filter control value.
  • the filter circuit 115 is adjusted to exhibit a frequency response curve for extracting low frequency energy from the current frame.
  • the filter circuit 115 is adjusted to extract increasing amounts of low frequency energy as noise energy estimates at successively higher frequencies surpass their corresponding reference energy thresholds.
  • Figure 6A and 6B show example frequency response curves for selected filter control values.
  • the use of noise profile estimates helps improve the ability to adaptively adjust the filter circuit to extract low frequency energy in a manner that improves the overall quality of speech. Since the car environment is not the only environment where a mobile telecommunications device is used, and the noise profile in certain situations could therefore be tilted more towards higher frequencies, the spectral analyzer 270 can be selectively disabled when noise energy in the low frequencies is small. Also, when a significant portion of the noise frequency spectrum resides in lower frequencies, a steeper filtering slope could be applied even though some processing power may be sacrificed. This extra processing requirement is still fairly small.
  • the adaptive noise filter system of the present invention is implemented simply and without a significant increase in DSP calculations. More complex methods of reducing noise, such as "spectral subtraction," require several MIPS of calculation and a large amount of memory for data and program code storage. By comparison, the present invention may be implemented using only a fraction of the MIPS and memory required for the spectral subtraction algorithm, which also introduces more speech distortion.
  • Reduced memory reduces the size of the DSP integrated circuits; decreased MIPS decreases power consumption. Both of these attributes are desirable for battery-powered portable/mobile radiotelephones.

Abstract

A method and system are provided for adaptively reducing noise in frames of digitized audio signals that include both speech and background noise. Frames of digitized audio signals are passed through an adjustable, high-pass filter circuit to filter a portion of background noise located in a low frequency range of the digitized signal. The filter circuit is adjusted by a filter control circuit adapted for a current frame to exhibit a selected frequency response curve. The filter control circuit includes a speech detector for detecting the presence or absence of speech in the frames of digitized audio signals. The filter circuit is adjusted when no speech is detected in the current frame. In a first preferred embodiment, the filter control circuit controls the filter circuit by calculating a noise estimate corresponding to the background noise, and adjusting the filter circuit based on the noise estimate. As the noise estimates increase, the filter circuit is adjusted to extract increasing amounts of energy falling in low frequency ranges of speech. In a second preferred embodiment, the filter circuit is adjusted as a function of a noise profile estimate. A noise profile estimate for a current frame is determined as a function of speech detection and is compared to a reference noise profile. Based on this comparison, the filter circuit is adaptively adjusted.

Description

SYSTEM FOR ADAPTIVELY FILTERING AUDIO SIGNALS TO ENHANCE SPEECH INTELLIGIBILITY IN NOISY ENVIRONMENTAL CONDITIONS
RELATED APPLICATIONS
The present invention is related to U.S. Patent Application Serial No. 08/128,639, entitled "Adaptive Noise Reduction for Speech Signals" filed on September 29, 1993; and to U.S. Patent Application No. 07/967,027 entitled "Multi-Mode Signal Processing" filed on
October 27, 1992, which are both herein incorporated by reference. U.S. Patent Application Serial No. 08/128,639 is currently pending and is assigned to the parent company of the present assignee.
FIELD OF THE INVENTION The present invention relates to noise reduction systems, and in particular, to an adaptive speech intelligibility enhancement system for use in portable digital radio telephones.
BACKGROUND OF THE INVENTION
The cellular telephone industry has made phenomenal strides in commercial operations in the United States as well as the rest of the world. Demand for cellular services in major metropolitan areas is outstripping current system capacity. Assuming this trend continues, cellular telecommunications will reach even the smallest rural markets. Consequently, cellular capacity must be increased while maintaining high quality service at a reasonable cost. One important step towards increasing capacity is the conversion of cellular systems from analog to digital transmission. This conversion is also important because the first generation of personal communication networks (PCNs), employing low cost, pocket-size, cordless telephones that can be easily carried and used to make or receive calls in the home, office, street, car, etc., will likely be provided by cellular carriers using the next generation digital cellular infrastructure.
Digital communication systems take advantage of powerful digital signal processing techniques. Digital signal processing refers generally to mathematical and other manipulation of digitized signals. For example, after converting (digitizing) an analog signal into digital form, that digital signal may be filtered, amplified, and attenuated using simple mathematical routines in a digital signal processor (DSP). Typically, DSPs are manufactured as high speed integrated circuits so that data processing operations can be performed essentially in real time. DSPs may also be used to reduce the bit transmission rate of digitized speech, which translates into reduced spectral occupancy of the transmitted radio signals and increased system capacity. For example, if speech signals are digitized using 14-bit linear Pulse Code Modulation (PCM) and sampled at an 8 kHz rate, a serial bit rate of 112 Kbits/sec is produced. Moreover, by taking mathematical advantage of redundancies and other predictable characteristics of human speech, voice coding techniques can be used to compress the serial bit rate from 112 Kbits/sec to 7.95 Kbits/sec to achieve a 14:1 reduction in bit transmission rate. Reduced transmission rates translate into more available bandwidth.
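As a quick check on those figures: 14 bits per sample at 8,000 samples per second gives 14 × 8,000 = 112,000 bits/sec, and 112/7.95 ≈ 14.1, consistent with the quoted 14:1 reduction in bit transmission rate.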
One popular speech compression technique adopted in the United States by the TIA for use as the digital standard for the second generation of cellular telephone systems (i.e., IS-54) is vector sum excited linear predictive coding (VSELP). Unfortunately, when audio signals including speech, mixed with high levels of ambient noise (particularly "colored noise"), are coded/compressed using VSELP, undesirable audio signal characteristics may be part of the result. For example, if a digital mobile telephone is used in a noisy environment (e.g. inside a moving automobile), both ambient noise and desired speech are compressed using the VSELP encoding algorithm and transmitted to a base station where the compressed signal is decoded and reconstituted into audible speech. When the background noise is reconstituted into an analog format, undesirable, audible distortion of the noise, and occasionally in the speech, is introduced. This distortion is very annoying to the average listener.
The distortion is caused in large part by the environment in which the mobile telephones are used. Mobile telephones are typically used in a vehicle's interior where there is often ambient noise produced by the vehicle's engine and surrounding vehicular traffic. This ambient noise in the vehicle's interior is typically concentrated in the low audible frequency range and the magnitude of the noise can vary due to such factors as the speed and acceleration of the vehicle and the extent of the surrounding vehicular traffic. This type of low frequency noise also has the tendency of significantly decreasing the intelligibility of the speech coming from the speaking person in the car environment. The decrease in speech intelligibility caused by low frequency noise can be particularly significant in communication systems deploying a VSELP vocoder, but can also occur in communication systems that do not include a VSELP vocoder.
The influence of the ambient noise on the mobile telephone can also be affected by the manner in which the mobile telephone is used. In particular, the mobile telephone may be used in a hands-free mode where the telephone user talks on the telephone while the mobile telephone is in a cradle. This frees the telephone user's hands to drive but also increases the distance that the telephone user's audible words must travel before reaching the microphone input of the mobile telephone. This increased distance between the user and the mobile telephone, along with the varying ambient noise, can result in noise being a significant portion of the total power spectral energy of the audio signal inputted into the mobile telephone.
In theory, various signal processing algorithms could be implemented using digital signal processors to filter the VSELP encoded background noise. These solutions, however, often require significant digital signal processing overhead, measured in terms of millions of instructions executed per second (MIPS), which consumes valuable processing time, memory space, and power. Each of these signal processing resources is limited in portable radiotelephones. Hence, simply increasing the processing burden of the DSP is not an optimal solution for minimizing VSELP encoded and other types of background noise.
SUMMARY OF THE INVENTION
The present invention provides an adaptive noise reduction system that reduces the undesirable contributions of encoded background noise while both minimizing any negative impact on the quality of the encoded speech and minimizing any increased drain on digital signal processor resources. The method and system of the present invention increases the intelligibility of the speech in a digitized audio signal by passing frames of the digitized audio signal through a filter circuit. The filter circuit functions as an adjustable, high-pass filter which filters a portion of the digitized signal in a low audible frequency range and passes the portion of the digitized signal falling in higher frequency ranges. Because the noise in a vehicle tends to be concentrated in a low audible frequency range and only a relatively small portion of the intelligibility content of speech falls within this low frequency range, the filter circuit filters a large segment of the noise in the digitized audio signal while only filtering less important segments of the speech. This results in a relatively larger portion of the noise energy being removed compared to the portion of the speech energy removed. By adaptively adjusting and selecting the frequency response curve of the filter circuit, the amount of speech filtered is limited and has a minimal effect on the intelligibility of the speech outputted by the radio.
A filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise estimate and/or a spectral profile result corresponding to the noise in the audio signal. The noise estimate and/or the spectral profile result are adjusted on a frame-by-frame basis for the digital signal and as a function of speech detection. If speech is not detected, the noise estimate and/or spectral profile result is updated for the current frame. If speech is detected, the noise estimate and/or spectral profile result is left unadjusted. In a first embodiment, the filter control circuit calculates noise estimates for the frames of the digitized audio signals. The noise estimates correspond to the amount of background noise in the frames of the digitized audio signals. As the relative amount of background noise to speech in a low frequency range of speech increases, the noise estimates increase. The filter control circuit uses the noise estimates to adjust the filter circuit to filter larger portions of the low frequency range of speech as the relative amount of background noise to speech in a low frequency range of speech increases. When no background noise is present, no portion of the speech signal is filtered. Larger portions of noise and speech information are extracted when there is a higher level of background noise. Because noise tends to be concentrated in a low frequency range and only a relatively small portion of the intelligibility content of speech falls within this low frequency range, the overall intelligibility of the audio signal can be increased by increasing the portion of low frequency energy being filtered as the noise estimates increase.
In a second embodiment, a modified filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise profile of the noise estimate over a selected frequency range in the audio signal. The filter control circuit includes a spectral analyzer for determining a noise profile estimate as a function of the detection of speech. A noise profile estimate is determined for a current frame and compared to a reference noise profile. Based on this comparison, the filter circuit is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
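To make the second embodiment's control flow concrete, the sketch below shows one way the per-bin bookkeeping could be organized. It is an illustrative sketch only, not the patent's implementation: the number of bins covering the roughly 300-800 Hz range, the averaging weight, and the function names are assumptions, and the FFT (or small DFT) that produces the per-frame bin energies is omitted.

```c
#include <stddef.h>

#define NUM_PROFILE_BINS 8   /* FFT bins falling in the ~300-800 Hz range; the count is an assumption */

/* Running noise profile estimate: per-bin energy averages updated only during speech
 * pauses.  The averaging weight (1/8) is a placeholder; the patent simply averages the
 * current frame's profile with the previous estimate. */
static void update_noise_profile(double profile_est[NUM_PROFILE_BINS],
                                 const double frame_profile[NUM_PROFILE_BINS])
{
    for (size_t i = 0; i < NUM_PROFILE_BINS; i++)
        profile_est[i] += (frame_profile[i] - profile_est[i]) / 8.0;
}

/* Compare the noise energy estimates e1..en against the reference thresholds er1..ern,
 * working from the highest-frequency bin (index 0 here) down to the lowest.  The first
 * bin whose estimate exceeds its threshold yields the comparison value ci, which indexes
 * the filter look-up table (Figure 12).  Returns NUM_PROFILE_BINS when no threshold is
 * exceeded, i.e. little low-frequency noise and therefore minimal filtering. */
static size_t compare_noise_profile(const double profile_est[NUM_PROFILE_BINS],
                                    const double reference[NUM_PROFILE_BINS])
{
    for (size_t i = 0; i < NUM_PROFILE_BINS; i++)
        if (profile_est[i] > reference[i])
            return i;   /* corresponds to comparison value c(i+1) in the patent's notation */
    return NUM_PROFILE_BINS;
}
```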
The adaptive noise reduction system according to the present invention may be advantageously applied to telecommunication systems in which portable/mobile radio transceivers communicate over RF channels with each other or with fixed telephone line subscribers. Each transceiver includes an antenna, a receiver for converting radio signals received over an RF channel via the antenna into analog audio signals, and a transmitter. The transmitter includes a coder-decoder (codec) for digitizing analog audio signals to be transmitted into frames of digitized speech information, the speech information including both speech and background noise. A digital signal processor processes a current frame based on an estimate of the background noise and the detection of speech in the current frame to minimize background noise. A modulator modulates an RF carrier with the processed frame of digitized speech information for subsequent transmission via the antenna.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features and advantages of the present invention will be readily apparent to one of ordinary skill in the art from the following written description, read in conjunction with the drawings, in which:
FIGURE 1 is a general functional block diagram of the present invention; FIGURE 2 illustrates the frame and slot structure of the U.S. digital standard IS-54 for cellular radio communications;
FIGURE 3 is a block diagram of a first preferred embodiment of the present invention implemented using a digital signal processor;
FIGURE 4 is a functional block diagram of an exemplary embodiment of the present invention in one of plural portable radio transceivers in a telecommunication system; FIGURES 5A and 5B are a flow chart which illustrates functions/operations performed by the digital signal processor in implementing the first preferred embodiment of the present invention;
FIGURE 6A is a graph illustrating a first example of an attenuation vs. frequency characteristic of a filter circuit according to the first preferred embodiment of the present invention;
FIGURE 6B is a graph illustrating a second example of an attenuation vs. frequency characteristic of a filter circuit according to the first preferred embodiment of the present invention;
FIGURE 7 is an example look-up table accessible by the filter control circuit of the first preferred embodiment of the present invention; FIGURES 8A and 8B are graphs illustrating the amplitude vs. frequency characteristics of example input audio signals;
FIGURES 9A and 9B are graphs illustrating the amplitude vs. frequency characteristics of the input audio signals of Figures 8A and 8B, respectively, after having been filtered by the filter circuit of the present invention;
FIGURE 10 is a block diagram of a second preferred embodiment of the present invention implemented using a digital signal processor;
FIGURE 11 is a flow chart, corresponding to the flow chart of Figure 5B, which illustrates functions/operations performed by the digital signal processor in implementing the second preferred embodiment of the present invention; and
FIGURE 12 is an example look-up table accessible by the filter control circuit of the second preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular circuits, circuit components, techniques, flow charts, etc. in order to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well known methods, devices, and circuits are omitted so as not to obscure the description of the present invention with unnecessary details.
Figure 1 is a general block diagram of the adaptive noise reduction system 100 according to the present invention. Adaptive noise reduction system 100 includes a filter control circuit 105 connected to a filter circuit 115. Filter control circuit 105 generates a filter control signal for a current frame of a digitized audio signal. The filter control signal is outputted to the filter circuit 115, and the filter circuit 115 adjusts in response to the filter control signal to exhibit a high-pass frequency response curve selected based on the filter control signal. The adjusted filter circuit 115 filters the current frame of the digitized audio signal. The filtered signal is processed by a voice coder 120 to produce a coded signal representing the digitized audio signal.
In an exemplary embodiment of the invention applied to portable/mobile radio telephone transceivers in a cellular telecommunications system, Figure 2 illustrates the time division multiple access (TDMA) frame structure employed by the IS-54 standard for digital cellular telecommunications. A "frame" is a twenty millisecond time period which includes one transmit block TX, one receive block RX, and a signal strength measurement block used for mobile-assisted hand-off (MAHO). The two consecutive frames shown in Figure 2 are transmitted in a forty millisecond time period. Digitized speech and background noise information is processed and filtered on a frame-by-frame basis as further described below.
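At the 8,000 samples-per-second codec rate described later in the text, each twenty millisecond frame therefore carries 160 PCM samples. The short C sketch that follows only illustrates that bookkeeping; the function name and buffer layout are illustrative assumptions, not part of the patent.

```c
#include <stddef.h>
#include <stdint.h>

#define SAMPLE_RATE_HZ 8000                                 /* codec sampling rate                    */
#define FRAME_MS       20                                   /* one IS-54 frame is twenty milliseconds */
#define FRAME_SAMPLES  (SAMPLE_RATE_HZ * FRAME_MS / 1000)   /* = 160 PCM samples per frame            */

/* Copy the PCM samples of frame number frame_index out of a longer capture buffer.
 * Returns the number of samples copied (FRAME_SAMPLES, or fewer at the end of the buffer). */
static size_t get_frame(const int16_t *pcm, size_t total_samples, size_t frame_index,
                        int16_t frame[FRAME_SAMPLES])
{
    size_t start = frame_index * FRAME_SAMPLES;
    size_t n = 0;
    while (n < FRAME_SAMPLES && start + n < total_samples) {
        frame[n] = pcm[start + n];
        n++;
    }
    return n;
}
```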
Preferably, the functions of the filter control circuit 105, filter circuit 115, and voice coder 120 shown in Figure 1 are implemented with a high speed digital signal processor. One suitable digital signal processor is the TMS320C53 DSP available from Texas Instruments. The TMS320C53 DSP includes on a single integrated chip a sixteen-bit microprocessor, on-chip RAM for storing data such as speech frames to be processed, ROM for storing various data processing algorithms including the VSELP speech compression algorithm, and other algorithms to be described below for implementing the functions performed by the filter control circuit 105 and the filter circuit 115.
A first embodiment of the present invention is shown in Figure 3. In the first embodiment, the filter circuit 115 is adjusted as a function of background noise estimates determined by the filter control circuit. Frames of pulse code modulated (PCM) audio information are sequentially stored in the DSP's on-chip RAM. The audio information could be digitized using other digitization techniques. Each PCM frame is retrieved from the DSP on-chip RAM and processed by frame energy estimator 210, and stored temporarily in temporary frame store 220. The energy of the current frame determined by frame energy estimator 210 is provided to noise estimator 230 and speech detector 240 function blocks. Speech detector 240 indicates that speech is present in the current frame when the frame energy estimate exceeds the sum of the previous noise estimate and a speech threshold. If the speech detector 240 determines that no speech is present, the digital signal processor 200 calculates an updated noise estimate as a function of the previous noise estimate and the current frame energy (block 230).
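The per-frame arithmetic just described, spelled out as equations 1 through 5 in the flow-chart discussion of Figures 5A and 5B, reduces to a few operations per frame. The C sketch below is a minimal illustration rather than the patent's implementation: it works in floating point and in the log (dB) domain so that the example 9 dB speech threshold can be added directly, whereas a production routine on the DSP would use the processor's fixed-point formats.

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FRAME_SAMPLES 160   /* 20 ms of audio at 8,000 samples per second */

/* Equation 1: the frame energy is the sum of the squared PCM samples in the frame.
 * The result is returned in dB so that the speech threshold of equation 2 can be
 * added directly; working in the log domain is an assumption of this sketch. */
static double frame_energy_db(const int16_t pcm[FRAME_SAMPLES])
{
    double e = 0.0;
    for (size_t i = 0; i < FRAME_SAMPLES; i++)
        e += (double)pcm[i] * (double)pcm[i];
    return 10.0 * log10(e + 1.0);   /* +1 avoids log10(0) on an all-zero frame */
}

/* Equation 2: speech is declared present when the frame energy exceeds the previous
 * noise estimate plus a speech threshold (e.g. 9 dB). */
static bool speech_present(double frame_db, double noise_db, double threshold_db)
{
    return frame_db > noise_db + threshold_db;
}

/* Equations 3-5: during speech pauses the noise estimate tracks the frame energy,
 * falling quickly (step delta/2) and rising slowly (step delta/256) so that brief
 * transients do not inflate the estimate. */
static double update_noise_estimate(double frame_db, double prev_noise_db)
{
    double delta = frame_db - prev_noise_db;    /* equation 3 */
    if (delta < 0.0)
        return prev_noise_db + delta / 2.0;     /* equation 4: fast downward correction */
    return prev_noise_db + delta / 256.0;       /* equation 5: slow upward correction   */
}
```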
The updated noise estimate is outputted to a filter selector 235. Filter selector 235 generates a filter control signal based on the noise estimate. In the preferred embodiment, the filter selector 235 accesses a look-up table in generating the filter control signal. The look-up table includes a series of filter control values that are each matched with a noise estimate or range of noise estimates. A filter control value from the look-up table is selected based on the updated noise estimate, and this filter control value is represented by a filter control signal outputted to a filter bank 265 for the filter circuit 115. To stabilize the process and avoid excessive switching between different filters, a hangover time of N frames is set upon the selection of a new filter. A new filter can only be selected every N frames, where N is an integer greater than one and preferably greater than 10. The filter circuit 115 is adjusted in response to the filter control signal to exhibit a high-pass frequency response curve that corresponds with the inputted filter control signal and noise estimate. Various types of filter circuits well known in the prior art can be utilized to exhibit selected frequency response curves in response to the filter control signal. These prior art filters include IIR filters such as Butterworth, Chebyshev (Tschebyscheff) or elliptic filters. IIR filters are preferable to FIR filters, which also can be used, due to lower processing requirements.
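A minimal C sketch of this selection logic with the hangover mechanism is shown below. The noise-estimate upper bounds in the table, the hangover length of 12 frames, and all identifiers are illustrative placeholders rather than values taken from the disclosure.

```c
#include <stddef.h>

/* Illustrative look-up table: each row maps an upper bound on the noise
 * estimate to a filter control value (an index into the filter bank).
 * The thresholds are placeholders, not values from the disclosure. */
struct filter_map { double noise_upper_bound; int filter_control; };

static const struct filter_map filter_table[] = {
    { 1.0e6,  1 },  /* low noise  -> F1 (lowest cut-off / gentlest slope)  */
    { 1.0e7,  2 },
    { 1.0e8,  3 },
    { 1.0e30, 4 },  /* high noise -> F4 (highest cut-off / steepest slope) */
};

#define HANGOVER_FRAMES 12  /* N > 10 frames between filter changes */

/* Returns the filter control value for the current frame.  A newly
 * selected value starts a hangover period so that the filter cannot be
 * switched again for HANGOVER_FRAMES frames. */
int select_filter(double noise_estimate, int current_filter, int *hangover)
{
    if (*hangover > 0) {
        (*hangover)--;
        return current_filter;
    }
    for (size_t i = 0; i < sizeof(filter_table) / sizeof(filter_table[0]); i++) {
        if (noise_estimate <= filter_table[i].noise_upper_bound) {
            if (filter_table[i].filter_control != current_filter)
                *hangover = HANGOVER_FRAMES;
            return filter_table[i].filter_control;
        }
    }
    return current_filter;
}
```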
The filtered signal is processed by a voice coder 120 which is used to compress the bit rate of the filtered signal. In the preferred embodiments, the voice coder 120 uses vector sum excited linear predictive coding (VSELP) to code the audio signal. Other voice coding techniques and algorithms, such as code excited linear predictive (CELP) coding, regular pulse excited long-term prediction (RPE-LTP) coding, or improved multiband excited (IMBE) coding, can be used. By filtering the frames of audio signals in accordance with the present invention before voice coding, background noise is minimized, which substantially reduces any undesired noise effects in the speech when it is reconstituted. It also prevents the speech from being "drowned" in low-frequency noise.
The digital signal processor 200 described in conjunction with Figure 3 can be used, for example, in the transceiver of a digital portable/mobile radiotelephone used in a radio telecommunications system. Figure 4 illustrates one such digital radio transceiver which may be used in a cellular telecommunications network. Although Figure 4 generally describes the basic function blocks included in the radio transceiver, a more detailed description of this transceiver may be obtained from the previously referenced U.S. Patent Application Serial No. 07/967,027 entitled "Multi-Mode Signal Processing" which is incorporated herein by reference. Audio signals including speech and background noise are input via a microphone 400 to a coder-decoder (codec) 402 which preferably is an application specific integrated circuit (ASIC). The band limited audio signals detected at microphone 400 are sampled by the codec 402 at a rate of 8,000 samples per second and blocked into frames. Accordingly, each twenty millisecond frame includes 160 speech samples. These samples are quantized and converted into a coded digital format such as 14-bit linear PCM. Once 160 samples of digitized speech for a current frame are stored in the on-chip RAM 202 of a transmit DSP 200, the transmit DSP 200 performs channel encoding functions, the frame energy estimation, noise estimation, speech detection, FFT, filter functions and digital speech coding/compression in accordance with the VSELP algorithm, as described above in conjunction with Figure 3.
A supervisory microprocessor 432 controls the overall operation of all of the components in the transceiver shown in Figure 4. The filtered PCM data stream generated by transmit DSP 200 is provided for quadrature modulation and transmission. To this end, an ASIC gate array 404 generates in-phase (I) and quadrature (Q) channels of information based upon the filtered PCM data stream from DSP 200. The I and Q bit streams are processed by matched, low pass filters 406 and 408 and passed onto IQ mixers in balanced modulator 410. A reference oscillator 412 and a multiplier 414 provide a transmit intermediate frequency (IF) . The I signal is mixed with in-phase IF, and the Q signal is mixed with quadrature IF (i.e., the in-phase IF delayed by 90 degrees by phase shifter 416) . The mixed I and Q signals are summed, converted "up" to an RF channel frequency selected by channel synthesizer 430, and transmitted via duplexer 420 and antenna 422 over the selected radio frequency channel.
On the receive side, signals received via antenna 422 and duplexer 420 are down converted from the selected receive channel frequency in a mixer 424 to a first IF frequency using a local oscillator signal synthesized by channel synthesizer 430 based on the output of reference oscillator 428. The output of the first IF mixer 424 is filtered and down converted in frequency to a second IF frequency based on another output from channel synthesizer 430 and demodulator
426. A receive gate array 434 then converts the second IF signal into a series of phase samples and a series of frequency samples. The receive DSP 436 performs demodulation, filtering, gain/attenuation, channel decoding, and speech expansion on the received signals. The processed speech data are then sent to codec 402 and converted to baseband audio signals for driving loudspeaker 438.
The operations performed by the digital signal processor 200 for implementing the functions of filter control circuit 105, filter circuit 115, and voice coder 120 will now be described in conjunction with the flow chart illustrated in Figures 5A and 5B. Frame energy estimator 210 determines the energy in each frame of audio signals. Frame energy estimator 210 determines the energy of the current frame by calculating the sum of the squared values of each PCM sample in the frame (step 505) . Since there are 160 samples per twenty millisecond frame for an 8000 samples per second sampling rate, 160 squared PCM samples are summed. Expressed mathematically, the frame energy estimate is determined according to equation 1 below:
Frame energy = Σ (i = 1 to 160) [Samp(i)]²   (equation 1)
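A minimal C sketch of equation 1, assuming 16-bit linear PCM samples and the 160-sample frame length stated above; the 64-bit accumulator and the identifiers are choices made for this illustration only.

```c
#include <stdint.h>

#define FRAME_SAMPLES 160   /* 20 ms at 8,000 samples per second */

/* Equation 1: frame energy is the sum of the squared PCM samples of the
 * current frame.  The accumulator is widened to 64 bits so that summing
 * 160 squared 16-bit samples cannot overflow. */
uint64_t frame_energy(const int16_t samp[FRAME_SAMPLES])
{
    uint64_t energy = 0;
    for (int i = 0; i < FRAME_SAMPLES; i++) {
        int32_t s = samp[i];              /* widen before squaring */
        energy += (uint64_t)(s * s);      /* each square fits in 31 bits */
    }
    return energy;
}
```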
The frame energy value calculated for the current frame is stored in the on-chip RAM 202 of DSP 200 (step 510) . The functions of speech detector 240 include fetching a noise estimate previously determined by noise estimator 230 from the on-chip RAM of DSP 200 (step 515) . Of course, when the transceiver is initially powered up, no noise estimate will exist. Decision block 520 anticipates this situation and assigns a noise estimate in step 525. Preferably, an arbitrarily high value, e.g. 20 dB above normal speech levels, is assigned as the noise estimate in order to force an update of the noise estimate value as will be described below. The frame energy determined by frame energy estimator 210 is retrieved from the on-chip RAM 202 of DSP 200 (block 530) . A decision is made in block 535 as to whether the frame energy estimate exceeds the sum of the retrieved noise estimate plus a predetermined speech threshold value, as shown in equation 2 below: frame energy estimate > (noise estimate + speech threshold) (equation 2)
The speech threshold value may be a fixed value determined empirically to be larger than short-term energy variations of typical background noise and may, for example, be set to 9 dB. In addition, the speech threshold value may be adaptively modified to reflect changing speech conditions such as when the speaker enters a noisier or quieter environment. If the frame energy estimate exceeds the sum in equation 2, a flag is set in block 570 that speech exists. If speech detector 240 detects that speech exists, then noise estimator 230 is bypassed and the noise estimate calculated for the previous frame in the digitized audio is retrieved and used as the current noise estimate. Conversely, if the frame energy estimate is less than the sum in equation 2, the speech flag is reset in block 540. Other systems for detecting speech in a current frame can also be used. For example, the European Telecommunications Standards Institute (ETSI) has developed a standard for voice activity detection (VAD) in the Global System for Mobile communications (GSM) system, described in ETSI Reference RE/SMG-020632P, which is incorporated herein by reference. This standard could be used for speech detection in the present invention. If speech does not exist, the noise estimation update routine of noise estimator 230 is executed. In essence, the noise estimate is a running average of the frame energy during periods of no speech. As described above, if the initial start-up noise estimate is chosen sufficiently high, speech is not detected, and the speech flag will be reset thereby forcing an update of the noise estimate.
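A minimal C sketch of the decision in equation 2, evaluated in the dB domain; working in dB (so that the 9 dB threshold is simply added) and the helper for converting a linear energy to dB are assumptions of this sketch, not requirements of the disclosure.

```c
#include <math.h>
#include <stdbool.h>

/* Convert a linear frame energy to dB; the small offset guards against
 * log10(0) for an all-zero frame. */
double energy_to_db(double linear_energy)
{
    return 10.0 * log10(linear_energy + 1e-12);
}

/* Equation 2: speech is flagged when the frame energy exceeds the noise
 * estimate by more than the speech threshold (e.g. 9 dB). */
bool detect_speech(double frame_energy_db, double noise_estimate_db,
                   double speech_threshold_db)
{
    return frame_energy_db > noise_estimate_db + speech_threshold_db;
}
```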
In the noise estimation routine followed by noise estimator 230, a difference/error delta (Δ) is determined in block 545 between the frame noise energy generated by frame energy estimator 210 and a noise estimate previously calculated by noise estimator 230 in accordance with the following equation:
Δ = current frame energy − previous noise estimate   (equation 3)

A determination is made in decision block 550 whether Δ exceeds zero. If Δ is negative, as occurs for high values of the noise estimate, then the noise estimate is recalculated in block 560 in accordance with the following equation:

noise estimate = previous noise estimate + Δ/2   (equation 4)

Since Δ is negative, this results in a downward correction of the noise estimate. The relatively large step size of Δ/2 is chosen to rapidly correct for decreasing noise levels. However, if the frame energy exceeds the noise estimate, providing a Δ greater than zero, the noise estimate is updated in block 555 in accordance with the following equation:

noise estimate = previous noise estimate + Δ/256   (equation 5)

Since Δ is positive, the noise estimate must be increased. However, a smaller step size of Δ/256 (as compared to Δ/2) is chosen to gradually increase the noise estimate and provide substantial immunity to transient noise.
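A minimal C sketch of the update rule in equations 3 through 5, called only for frames in which no speech was detected; the use of floating point is an assumption made for readability, whereas a DSP implementation would typically work in fixed point.

```c
/* Equations 3-5: running noise estimate update.  A negative delta
 * (noise dropped) is tracked quickly with a step of delta/2; a positive
 * delta (noise rose) is tracked slowly with a step of delta/256 to give
 * immunity against short transients. */
double update_noise_estimate(double previous_noise_estimate,
                             double current_frame_energy)
{
    double delta = current_frame_energy - previous_noise_estimate; /* eq. 3 */
    if (delta < 0.0)
        return previous_noise_estimate + delta / 2.0;    /* eq. 4 */
    return previous_noise_estimate + delta / 256.0;      /* eq. 5 */
}
```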
The noise estimate calculated for the current frame is outputted to the filter selector 235. In the first preferred embodiment, filter selector 235 accesses a look-up table and uses the current noise estimate to select a filter control value (step 572). The filter circuit 115 (in step 574) is then adjusted as a function of the selected filter control value to exhibit a frequency response curve intended to increase the amount of noise filtered as the noise estimate and background noise increase. The PCM samples stored in DSP RAM are then passed through the adjusted filter circuit 265 to filter the PCM samples in order to remove noise (step 576). The filtered PCM samples are then processed by voice coder 120 (step 578), and the coded samples are then outputted to the RF transmit circuits (step 580).
Figures 6A and 6B show examples of how the filter circuit 115 adjusts to exhibit different frequency response curves F1-F4 for different filter control signals inputted to the filter circuit 115. As shown in Figure 6A, the filter circuit 115 can be selected to exhibit a series of different frequency response curves, with the frequency response curves F1-F4 having cut-off frequencies F1c-F4c, respectively. The cut-off frequencies of filter circuit 115 may range in the preferred embodiment from 300 Hz to 800 Hz. As the noise estimates increase, the filter circuit 115 is designed to exhibit frequency response curves having higher cut-off frequencies. The higher cut-off frequencies result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
Likewise, as shown in Figure 6B, the filter circuit 115 can be selected to exhibit a series of different frequency response curves F1-F4 with each frequency response curve having a different slope and the same cut-off frequency. The cut-off frequency for frequency response curves F1-F4 is in the above-mentioned range. As the noise estimate increases, the filter circuit 115 is adjusted to exhibit frequency response curves having steeper slopes. The steeper slopes result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
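The disclosure leaves the filter realization open (Butterworth, Chebyshev or elliptic IIR filters are named above as candidates). As a minimal illustration of a cut-off frequency that follows the filter control value, the C sketch below uses a simple first-order high-pass section retuned per frame; its gentle 6 dB/octave slope is an assumption of the sketch and not the response curves of Figures 6A and 6B.

```c
#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE_HZ 8000.0

/* State of a first-order high-pass section. */
struct highpass {
    double alpha;   /* coefficient derived from the selected cut-off */
    double prev_x;  /* previous input sample  */
    double prev_y;  /* previous output sample */
};

/* Retune the section for a new cut-off frequency, e.g. one of the
 * 300-800 Hz values chosen from the filter control value. */
void highpass_set_cutoff(struct highpass *hp, double cutoff_hz)
{
    hp->alpha = 1.0 / (1.0 + 2.0 * M_PI * cutoff_hz / SAMPLE_RATE_HZ);
}

/* Filter one frame of PCM in place:
 * y[n] = alpha * (y[n-1] + x[n] - x[n-1]). */
void highpass_run(struct highpass *hp, int16_t *pcm, int n)
{
    for (int i = 0; i < n; i++) {
        double x = (double)pcm[i];
        double y = hp->alpha * (hp->prev_y + x - hp->prev_x);
        hp->prev_x = x;
        hp->prev_y = y;
        pcm[i] = (int16_t)y;
    }
}
```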
The filter circuit 115 filters the current frames as a function of the noise estimate calculated for the current frame. The current frame is filtered so that the noise is reduced and a major portion of the speech is passed. The major portion of speech which is passed unfiltered provides for recognizable speech output with only a minimal reduction in the quality of the speech signal. A combination of different cutoff frequencies and different slopes could be used for adaptively extracting selected portions of frame energy falling within a low frequency range of speech.
Figure 7 depicts an example look-up table accessed by filter selector 235 in order to select one of the filter response curves F1-F4 for filter circuit 115. The look-up table includes a series of potential noise estimates N1-Nn and filter control values F1-Fn that correspond with potential response curves that are exhibitable by the filter circuit 115. Noise estimates N1-Nn can each represent a range of noise estimates and are each matched with a particular filter control value F1-F4. The filter control circuit 105 generates a filter control signal by calculating a noise estimate and retrieving from the look-up table the filter control value associated therewith. Figures 8A-8B and 9A-9B show how the audio signals for two frames are each adaptively filtered to provide an improved audio signal outputted to the RF transmitter. Figures 8A and 8B show a first frame and a second frame of an audio signal containing speech components s1 and s2 and noise components n1 and n2, respectively. As shown, the noise energy n1 and n2 in both frames is concentrated in a low audible frequency range, while the speech energy s1 and s2 is concentrated in a higher audible frequency range. Figure 9A shows the noise signal n1 and speech signal s1 for the first frame after filtering. Figure 9B shows the noise signal n2 and speech signal s2 for the second frame after filtering.
The adaptive audio noise reduction system 100, as discussed, is designed to account for the difference in noise level between the first frame and the second frame by adjusting the filter control circuit 105 based on a calculated noise estimate for the current frame. For example, a noise estimate N1 and a spectral profile S1 are calculated by filter control circuit 105 and a filter control value of F1 is selected for the first frame. In the preferred embodiment, the filter circuit 115 is adjusted based on filter control value F1 and exhibits a frequency response curve F1 having a cut-off frequency F1c, as shown in Figure 6A. The first frame is passed through this adjusted filter circuit 115. The filter circuit 115 is selected so that a large portion of the noise n1 and only a small portion of speech s1 falls below the cut-off frequency F1c of the frequency response curve F1. This results in noise n1 being effectively filtered and only a relatively insignificant portion of speech s1 being filtered. The filtered audio signal of the first frame is shown in Figure 9A.
In the second frame shown in Figure 8B, a higher background noise is present, and assuming speech is not detected, a higher noise estimate N2 is calculated by filter control circuit 105. A higher corresponding filter control value F2 is determined for the second frame based on the higher noise estimate. In the first preferred embodiment, the filter circuit 115 is adjusted in response to the higher filter control value F2 to exhibit a frequency response curve having a higher cut-off frequency F2c, as shown in Figure 6A. The subsequent frame of audio signal is passed through the adjusted filter circuit 115. Because the cut-off frequency F2c of the frequency response curve F2 is higher for the subsequent frame, a larger portion of both the noise n2 and speech s2 is filtered. The portion of speech s2 filtered is still relatively insignificant to the intelligibility information contained by the frame so that there is only a minimal effect on the speech. The disadvantage of filtering a larger portion of the speech s2 is offset by the advantage of the increased removal of noise n2 from the second frame. The filtered spectral portion of the speech does not significantly contribute to the intelligibility of the speech. The filtered audio signal of the second frame is shown in Figure 9B.

A second preferred embodiment of adaptive noise reduction system 100 is shown in Figures 10-12. In the second preferred embodiment, the filter control circuit 105 adjusts the filter circuit 115 as a function of noise profile estimates. A noise profile estimate is calculated for each frame and is compared to a reference noise profile. Based on this comparison, the filter circuit 115 is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
Referring to Figure 10, a DSP 200 configured according to the second preferred embodiment is shown. As shown, the filter control circuit 105 includes a spectral analyzer 270, in addition to the frame energy estimator 210, noise estimator 230, speech detector 240, and filter selector 235 which are described with respect to the first preferred embodiment. The filter control circuit 105 determines noise estimates and detects speech for the received frames as described for the first embodiment and shown in the flow charts of Figures 5A and 5B. After the speech detection decision for a current frame, the spectral analyzer 270 updates the noise profile estimate and uses the noise profile estimate in adjusting the filter circuit 115.
Referring to Figure 11, the steps of updating the noise profile estimate and adjusting the filter circuit 115 are shown. Figure 11 shows the steps performed by spectral analyzer 270 incorporated into the overall process previously described in the flow charts of Figures 5A and 5B for the first preferred embodiment.
When speech is not detected for the current frame, the spectral analyzer 270 first determines a noise profile for the current frame (step 600). The noise profile determined for the current frame includes energy calculations for different frequencies (i.e., frequency bins) within a selected low frequency range of speech for the current frame. In the preferred embodiment, the selected frequency range is approximately 300 to 800 hertz. The noise profile of the current frame can be determined by processing the current frame using a Fast Fourier Transform (FFT) having N frequency bins. Processing digital signals using an FFT is well known in the prior art and is advantageous in that very little processing power is required where the FFT is limited to a relatively small number of frequency bins such as 32. An FFT having N frequency bins produces energy calculations at N different frequencies. The energy calculations for the frequency bins falling within the selected frequency range form the noise profile for the current frame. To determine the noise profile estimate for the current frame (step 604), the noise profile for the current frame is averaged with a noise profile estimate determined for the previous frame of the audio signal. Where no previous noise profile estimate is available, such as after initialization, a stored, initial noise profile estimate can be used. The noise profile estimate includes noise energy estimates ei (where i = 1, 2, ..., n) located at successively lower frequencies (i.e., e1 is the noise energy estimate for the highest frequency and en is the noise energy estimate for the lowest frequency in the selected frequency range). In the preferred embodiment, each noise energy estimate ei corresponds to an average of the energy calculations at a particular frequency in the selected frequency range over a plurality of successive frames in which no speech was detected. By using a plurality of frames in determining the noise profile estimate, the filter circuit 115 is adjusted on a more gradual basis. In alternate embodiments, the noise profile estimate can be equated to the noise profile of the current frame. The energy estimates ei of the noise profile estimate are then compared with a reference noise profile (step 604). The reference noise profile includes reference energy thresholds eri (where i = 1, 2, ..., n) at frequencies corresponding to the frequencies of the noise energy estimates ei of the noise profile estimate. The reference energy thresholds eri can be determined empirically. The noise energy estimates ei are successively compared to the corresponding reference energy thresholds eri, from the highest frequency energy estimate e1 to the lowest frequency energy estimate en.
More specifically, noise energy estimate e1 is first compared to reference noise threshold er1. If e1 is greater than reference noise threshold er1, then a comparison value c1 is selected and inputted into filter selector 235. If noise energy estimate e1 is less than reference noise threshold er1, then noise energy estimate e2 (which is a noise energy estimate taken at a lower frequency than e1) is compared to reference noise threshold er2. If noise energy estimate e2 is greater than reference noise threshold er2, then a comparison value c2 is selected and inputted to filter selector 235. This comparison process is continued until a comparison value ci (where i = 1, 2, ..., n) is selected. The filter selector 235 uses the determined comparison value ci to determine a filter control value. The filter control value is selected from a look-up table such as that shown in Figure 12. The look-up table includes a series of comparison values ci and corresponding filter control values Fi. The filter circuit 115 is adjusted as a function of the selected filter control value. The filter circuit 115 is adjusted to exhibit a frequency response curve for extracting low frequency energy from the current frame. The filter circuit 115 is adjusted to extract increasing amounts of low frequency energy as noise energy estimates at successively higher frequencies surpass their corresponding reference energy thresholds. Figures 6A and 6B show example frequency response curves for selected filter control values.
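A minimal C sketch of the spectral analyzer's two steps described above: a running average that maintains the noise profile estimate from the per-bin FFT energies (the FFT routine itself and the averaging weight are assumptions of the sketch), and the highest-to-lowest threshold scan that yields the comparison value ci. Returning 0 when no threshold is exceeded, and leaving the mapping from ci to a filter control value to a separate look-up table, are likewise illustrative choices.

```c
/* Update the noise profile estimate for a speech-free frame.  The
 * per-bin energies of the current frame are assumed to be supplied by
 * an FFT routine (not shown); only bins inside the selected 300-800 Hz
 * range are passed in.  The recursive weight (e.g. 0.125) stands in for
 * "an average over a plurality of successive frames". */
void update_noise_profile(double profile_estimate[],
                          const double frame_bin_energy[],
                          int n_bins, double avg_weight)
{
    for (int i = 0; i < n_bins; i++) {
        profile_estimate[i] +=
            avg_weight * (frame_bin_energy[i] - profile_estimate[i]);
    }
}

/* Compare the noise profile estimate against the reference noise
 * profile, scanning from the highest-frequency estimate e1 (index 0)
 * down to the lowest-frequency estimate en.  Returns the comparison
 * value ci as an index 1..n_bins, or 0 when no estimate exceeds its
 * reference threshold. */
int compare_noise_profile(const double profile_estimate[],
                          const double reference_threshold[], int n_bins)
{
    for (int i = 0; i < n_bins; i++) {
        if (profile_estimate[i] > reference_threshold[i])
            return i + 1;   /* comparison value c(i+1) */
    }
    return 0;
}
```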
Use of noise profile estimates helps improve the ability to adaptively adjust the filter circuit to extract low frequency energy in a manner that improves the overall quality of the speech. Because the car environment is not the only environment in which a mobile telecommunications device is used, and the noise profile in certain situations could therefore be tilted more towards higher frequencies, the spectral analyzer 270 can be selectively disabled when the noise energy in the low frequencies is small. Also, when a significant portion of the noise frequency spectrum resides in lower frequencies, a steeper filtering slope could be applied even though some processing power may be sacrificed. This extra processing requirement is still fairly small.
As is evident from the description above, the adaptive noise filter system of the present invention is implemented simply and without a significant increase in DSP calculations. More complex methods of reducing noise, such as "spectral subtraction," require several MIPS of computation and a large amount of memory for data and program code storage. By comparison, the present invention may be implemented using only a fraction of the MIPS and memory required for the
"spectral subtraction" algorithm which also introduces more speech distortion. Reduced memory reduces the size of the DSP integrated circuits; decreased MIPS decreases power consumption. Both of these attributes are desirable for battery-powered portable/mobile radiotelephones.
While the invention has been particularly shown and described with reference to the preferred embodiments thereof, it is not limited to those embodiments. For example, although a DSP is disclosed as performing the functions of the frame energy estimator 210, noise estimator 230, speech detector 240, filter selector 235 and filter circuit 265, these functions could be implemented using other digital and/or analog components. In addition, an adaptive filtering system
100 could be implemented where the filter circuit 115 is adjusted as a function of both noise estimates and noise profile estimates. It will be understood by those skilled in the art that various alterations in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims

What is claimed is:
1. A method of increasing intelligibility of speech in audio signals, comprising: receiving frames of digitized audio signals which include speech information and background noise; detecting whether a current frame includes speech information; determining a noise estimate corresponding to the background noise for the current frame as a function of the detection of speech; outputting a filter control signal corresponding to the noise estimate to a filter circuit; adjusting the filter circuit to exhibit a frequency response curve for filtering speech in response to the filter control signal; and applying the filter circuit to the current frame for filtering the current frame as a function of the estimated background noise.
2. The method of claim 1, wherein the filter circuit is adjusted to exhibit a high-pass frequency response curve for passing a selected portion of speech falling within a high frequency range of speech and extracting a selected portion of speech falling within a low frequency range of speech.
3. The method according to claim 1, wherein the step of detecting whether a current frame includes speech includes determining the energy of the current frame and comparing the determined frame energy with the sum of the noise estimate and a speech threshold value, wherein speech is detected when the determined frame energy exceeds the sum of the noise estimate and the speech threshold value.
4. The method according to claim 1, wherein the noise estimate is an average of the background noise detected for a plurality of received frames determined to have no speech information.
5. The method of claim 1, wherein the step of adjusting the filter circuit further comprises adjusting the filter circuit so as to extract from the current frame a greater portion of background noise falling in a low frequency range for speech as the noise estimates increase.
6. The method of claim 5, wherein the step of adjusting the filter circuit further comprises adjusting the filter circuit to exhibit frequency response curves having higher cut-off frequencies as the calculated noise estimates increase.
7. The method of claim 5, wherein the step of adjusting the filter circuit further comprises adjusting the filter circuit to exhibit frequency response curves having steeper slopes as the calculated noise estimates increase.
8. The method of claim 1, wherein the filter circuit is adjusted to exhibit a selected frequency response curve that passes substantially all of the speech information of the current frame when the noise estimate for the current frame is below a predetermined reference noise estimate.
9. The method of claim 1, wherein the step of selectively adjusting the filter circuit includes adjusting the filter circuit a maximum of one time over N successive frames, where N is an integer greater than one.
10. An apparatus for reducing noise in received frames of digitized audio signals which include speech and background noise, comprising: a) a filter control circuit including: i) an energy level detector for detecting energy levels in frames of the digitized signal and generating frame energy outputs corresponding to the detected energy levels, ii) a speech detector connected to the energy level detector for detecting the absence or presence of speech in frames of the digitized speech and outputting a speech-indication signal identifying a frame as a speech-containing frame or background-noise frame, iii) a noise estimator connected to the energy level detector and speech detector for determining noise estimates for the frames as a function of the energy level output and the speech-indication signal, iv) a filter selector for generating filter control signals corresponding to noise estimates; and b) a high-pass filter circuit connected to the filter control circuit for filtering the received frames as a function of the noise estimate.
11. The apparatus of claim 10, wherein the filter circuit exhibits a high-pass frequency response curve for passing a selected portion of speech falling within a high frequency range of speech and extracting a selected portion of speech falling within a low frequency range of speech.
12. The apparatus according to claim 10, wherein the speech detector detects speech in a frame by comparing the determined frame energy with the sum of the noise estimate and a speech threshold value, wherein speech is detected when the determined frame energy exceeds the sum of the noise estimate and the speech threshold value.
13. The apparatus according to claim 10, wherein the noise estimate corresponds to an average of background noise detected for a plurality of received frames determined to have no speech information.
14. The apparatus of claim 10, wherein the filter circuit is adjusted so as to extract from the current frame a greater portion of background noise falling within a low frequency range of speech as the noise estimate increases.
15. The apparatus of claim 14, wherein the filter circuit is adjusted to exhibit frequency response curves having higher cut-off frequencies as the calculated noise estimates increase.
16. The apparatus of claim 14, wherein the filter circuit is adjusted to exhibit frequency response curves having steeper slopes as the calculated noise estimates increase.
17. The apparatus of claim 10, wherein the filter circuit is adjusted to exhibit a selected frequency response curve that passes substantially all of the speech information of the current frame when the noise estimate for the current frame is below a predetermined reference noise estimate.
18. The apparatus of claim 10, wherein the filter circuit is adjusted a maximum of one time over N successive frames, where N is an integer greater than one.
19. A telecommunications system in which portable radio transceivers communicate over RF channels, each transceiver comprising: an antenna; a receiver for converting radio signals received over an RF channel via the antenna into analog audio signals; and a transmitter including: a codec for digitizing analog audio signals into frames of digitized speech information, the digitized speech information including speech and background noise; a digital signal processor for detecting speech in the received frames and generating a noise estimate as a function of detecting speech, and for filtering background noise from a current frame as a function of the calculated background noise for the current frame.
20. The apparatus of claim 19, wherein the background noise is filtered by passing a selected portion of speech falling within a high frequency range of speech and extracting a selected portion of speech falling within a low frequency range of speech.
21. The apparatus of claim 20, wherein the digital signal processor adjustably filters the background noise by extracting from the current frame greater portions of background noise falling in a low frequency range for speech as the noise estimates increase.
22. A method of increasing intelligibility of speech in audio signals, comprising: receiving frames of digitized audio signals which include background noise and speech information; detecting whether a current frame includes speech information; determining a noise profile estimate for the current frame as a function of the detection of speech, the noise profile estimate including a plurality of noise energy estimates at a plurality of frequencies falling within a predetermined frequency range of speech; comparing the noise energy estimates of the noise profile estimate to a reference noise profile having a plurality of energy thresholds at frequencies corresponding to the frequencies of the noise energy estimates; generating a filter control signal as a function of the comparison between the noise profile estimate and the reference noise profile; adjusting the filter circuit to exhibit a selected high-pass frequency response curve in response to the filter control signal; and applying the filter circuit to the current frame for filtering the current frame as a function of the comparison between the noise profile estimate and the reference noise profile.
23. The method of claim 22, wherein the filter circuit is adjusted to extract increasing amounts of low frequency energy as noise energy estimates at successively higher frequencies surpass their corresponding energy thresholds in the reference noise profile.
24. The method of claim 23, wherein the step of adjusting the filter circuit further comprises adjusting the filter circuit to exhibit frequency response curves having higher cut-off frequencies as noise energy estimates at successively higher frequencies surpass their corresponding energy thresholds in the reference noise profile.
25. The method of claim 22, wherein the noise estimate is an average of the background noise detected for a plurality of received frames determined to have no speech information.
26. The method of claim 22, wherein the step of selectively adjusting the filter circuit includes adjusting the filter circuit a maximum of one time over N successive frames, where N is an integer greater than one.
27. An apparatus for reducing noise in received frames of digitized audio signals which include speech and background noise, comprising: a) a filter control circuit including: i) an energy level detector for detecting energy levels in frames of the digitized signal and generating frame energy outputs corresponding to the detected energy levels, ii) a speech detector connected to the energy level detector for detecting the absence or presence of speech in frames of the digitized speech and outputting a speech-indication signal identifying a frame as a speech-containing frame or background-noise frame, iii) a spectral analyzer connected to the speech detector for determining a noise profile estimate for a current frame as a function of the detection of speech, the noise profile estimate including a plurality of noise energy estimates at a plurality of frequencies falling within a predetermined frequency range of speech, the spectral analyzer comparing the noise energy estimates of the noise profile estimate to a reference noise profile having a plurality of energy thresholds at frequencies corresponding to the frequencies of the noise energy estimates; iv) a filter selector for generating filter control signals as a function of the comparison between the noise profile estimate and the reference noise profile; b) a high-pass filter circuit connected to the filter control circuit for filtering the received frames as a function of the comparison between the noise profile estimate and the reference noise profile.
28. The apparatus of claim 27, wherein the filter circuit is adjusted to extract increasing amounts of low frequency energy as noise energy estimates at successively higher frequencies surpass their corresponding energy thresholds in the reference noise profile.
29. The apparatus of claim 28, wherein the step of adjusting the filter circuit further comprises adjusting the filter circuit to exhibit frequency response curves having higher cut-off frequencies as noise energy estimates at successively higher frequencies surpass their corresponding energy thresholds in the reference noise profile.
30. The apparatus of claim 27, wherein the noise estimate is an average of the background noise detected for a plurality of received frames determined to have no speech information.
31. The apparatus of claim 27, wherein the filter circuit is adjusted a maximum of one time over N successive frames, where N is an integer greater than one.
32. A telecommunications system in which portable radio transceivers communicate over RF channels, each transceiver comprising: an antenna; a receiver for converting radio signals received over an RF channel via the antenna into analog audio signals; and a transmitter including: a codec for digitizing analog audio signals into frames of digitized speech information, the digitized speech information including speech and background noise; a digital signal processor for detecting speech in the received frames and generating a noise profile estimate as a function of detecting speech, and for filtering background noise from a current frame as a function of the calculated noise profile estimate for the current frame.
PCT/US1996/014665 1995-09-14 1996-09-13 System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions WO1997010586A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
EP96931552A EP0852052B1 (en) 1995-09-14 1996-09-13 System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
EE9800068A EE03456B1 (en) 1995-09-14 1996-09-13 Adaptive filtering system for audio signals to improve speech clarity in noisy environments
BR9610290A BR9610290A (en) 1995-09-14 1996-09-13 Process to increase speech intelligibility in audio signals apparatus to reduce noise in frames received from digitized audio signals and telecommunications system
DE69613380T DE69613380D1 (en) 1995-09-14 1996-09-13 SYSTEM FOR ADAPTIVELY FILTERING SOUND SIGNALS TO IMPROVE VOICE UNDER ENVIRONMENTAL NOISE
AU70784/96A AU724111B2 (en) 1995-09-14 1996-09-13 System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
JP9512112A JPH11514453A (en) 1995-09-14 1996-09-13 A system for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
PL96325532A PL185513B1 (en) 1995-09-14 1996-09-13 System for adaptively filtering audio signals in order to improve speech intellegibitity in presence a noisy environment
NO981074A NO981074L (en) 1995-09-14 1998-03-11 Improving speech information in audio signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US52800595A 1995-09-14 1995-09-14
US08/528,005 1995-09-14

Publications (1)

Publication Number Publication Date
WO1997010586A1 true WO1997010586A1 (en) 1997-03-20

Family

ID=24103874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/014665 WO1997010586A1 (en) 1995-09-14 1996-09-13 System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions

Country Status (15)

Country Link
EP (1) EP0852052B1 (en)
JP (1) JPH11514453A (en)
KR (1) KR100423029B1 (en)
CN (1) CN1121684C (en)
AU (1) AU724111B2 (en)
BR (1) BR9610290A (en)
CA (1) CA2231107A1 (en)
DE (1) DE69613380D1 (en)
EE (1) EE03456B1 (en)
MX (1) MX9801857A (en)
NO (1) NO981074L (en)
PL (1) PL185513B1 (en)
RU (1) RU2163032C2 (en)
TR (1) TR199800475T1 (en)
WO (1) WO1997010586A1 (en)

Cited By (135)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19747885A1 (en) * 1997-10-30 1999-05-06 Daimler Chrysler Ag Process for the reduction of acoustic signal interference using the adaptive filter method of spectral subtraction
WO1999022561A2 (en) * 1997-10-31 1999-05-14 Koninklijke Philips Electronics N.V. A method and apparatus for audio representation of speech that has been encoded according to the lpc principle, through adding noise to constituent signals therein
WO2002017299A1 (en) * 2000-08-21 2002-02-28 Conexant Systems, Inc. Method for noise robust classification in speech coding
WO2004004297A2 (en) * 2002-07-01 2004-01-08 Koninklijke Philips Electronics N.V. Stationary spectral power dependent audio enhancement system
DE10305369A1 (en) * 2003-02-10 2004-11-04 Siemens Ag User adaptive method for sound modeling
EP1339256A3 (en) * 2003-03-03 2005-06-22 Phonak Ag Method for manufacturing acoustical devices and for reducing wind disturbances
US7127076B2 (en) 2003-03-03 2006-10-24 Phonak Ag Method for manufacturing acoustical devices and for reducing especially wind disturbances
GB2429139A (en) * 2005-08-10 2007-02-14 Zarlink Semiconductor Inc Applying less aggressive noise reduction to an input signal when speech is dominant over noise
US7242763B2 (en) 2002-11-26 2007-07-10 Lucent Technologies Inc. Systems and methods for far-end noise reduction and near-end noise compensation in a mixed time-frequency domain compander to improve signal quality in communications systems
US7283879B2 (en) 2002-03-10 2007-10-16 Ycd Multimedia Ltd. Dynamic normalization of sound reproduction
WO2009035614A1 (en) * 2007-09-12 2009-03-19 Dolby Laboratories Licensing Corporation Speech enhancement with voice clarity
WO2009082302A1 (en) * 2007-12-20 2009-07-02 Telefonaktiebolaget L M Ericsson (Publ) Noise suppression method and apparatus
KR100978015B1 (en) * 2002-07-01 2010-08-25 코닌클리케 필립스 일렉트로닉스 엔.브이. Stationary spectral power dependent audio enhancement system
US7817677B2 (en) 2004-08-30 2010-10-19 Qualcomm Incorporated Method and apparatus for processing packetized data in a wireless communication system
US8019603B2 (en) 2007-04-03 2011-09-13 Samsung Electronics Co., Ltd Apparatus and method for enhancing speech intelligibility in a mobile terminal
CN102202038A (en) * 2010-03-24 2011-09-28 华为技术有限公司 Method and system for realizing voice energy display, conference server and terminal
US8085678B2 (en) 2004-10-13 2011-12-27 Qualcomm Incorporated Media (voice) playback (de-jitter) buffer adjustments based on air interface
CN101221767B (en) * 2008-01-23 2012-05-30 晨星半导体股份有限公司 Voice boosting device and method used on the same
EP2579254A1 (en) * 2010-05-24 2013-04-10 Nec Corporation Signal processing method, information processing device, and signal processing program
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
KR20140147687A (en) * 2013-06-20 2014-12-30 하만 베커 오토모티브 시스템즈 게엠베하 Identifying spurious signals in audio signals
US9177566B2 (en) 2007-12-20 2015-11-03 Telefonaktiebolaget L M Ericsson (Publ) Noise suppression method and apparatus
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
WO2017081092A1 (en) * 2015-11-09 2017-05-18 Nextlink Ipr Ab Method of and system for noise suppression
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
WO2018222683A1 (en) * 2017-06-02 2018-12-06 Bose Corporation Dynamic spectral filtering
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10446167B2 (en) 2010-06-04 2019-10-15 Apple Inc. User-specific noise suppression for voice quality improvements
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10607141B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
CN112927715A (en) * 2021-02-26 2021-06-08 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method and device and computer readable storage medium
US20220256267A1 (en) * 2018-03-30 2022-08-11 Panasonic Intellectual Property Management Co., Ltd. Noise reduction device
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000074236A (en) * 1999-05-19 2000-12-15 정몽규 Auto audio volume control means
JP2001318694A (en) * 2000-05-10 2001-11-16 Toshiba Corp Device and method for signal processing and recording medium
KR20030010432A (en) * 2001-07-28 2003-02-05 주식회사 엑스텔테크놀러지 Apparatus for speech recognition in noisy environment
WO2004008801A1 (en) * 2002-07-12 2004-01-22 Widex A/S Hearing aid and a method for enhancing speech intelligibility
KR100640865B1 (en) 2004-09-07 2006-11-02 엘지전자 주식회사 method and apparatus for enhancing quality of speech
EP1840874B1 (en) * 2005-01-11 2019-04-10 NEC Corporation Audio encoding device, audio encoding method, and audio encoding program
KR100667852B1 (en) * 2006-01-13 2007-01-11 삼성전자주식회사 Apparatus and method for eliminating noise in portable recorder
RU2453986C2 (en) * 2006-01-27 2012-06-20 Долби Интернэшнл Аб Efficient filtering with complex modulated filterbank
KR101414233B1 (en) 2007-01-05 2014-07-02 삼성전자 주식회사 Apparatus and method for improving speech intelligibility
KR100883896B1 (en) * 2007-01-19 2009-02-17 엘지전자 주식회사 Speech intelligibility enhancement apparatus and method
KR101238731B1 (en) 2008-04-18 2013-03-06 돌비 레버러토리즈 라이쎈싱 코오포레이션 Method and apparatus for maintaining speech audibility in multi-channel audio with minimal impact on surround experience
DE102009011583A1 (en) 2009-03-06 2010-09-09 Krones Ag Method and device for producing and filling thin-walled beverage containers
CN101859569B (en) * 2010-05-27 2012-08-15 上海朗谷电子科技有限公司 Method for lowering noise of digital audio-frequency signal
CN102128976B (en) * 2011-01-07 2013-05-15 钜泉光电科技(上海)股份有限公司 Energy pulse output method and device of electric energy meter and electric energy meter
AU2012232977A1 (en) * 2011-09-30 2013-04-18 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
CN102737646A (en) * 2012-06-21 2012-10-17 佛山市瀚芯电子科技有限公司 Real-time dynamic voice noise reduction method for single microphone
CN104095640A (en) * 2013-04-03 2014-10-15 达尔生技股份有限公司 Oxyhemoglobin saturation detecting method and device
US9697831B2 (en) * 2013-06-26 2017-07-04 Cirrus Logic, Inc. Speech recognition
EP2980801A1 (en) * 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for estimating noise in an audio signal, noise estimator, audio encoder, audio decoder, and system for transmitting audio signals
RU2589298C1 (en) * 2014-12-29 2016-07-10 Александр Юрьевич Бредихин Method for increasing the intelligibility and informativeness of audio signals in noisy conditions
CN105869650B (en) * 2015-12-28 2020-03-06 乐融致新电子科技(天津)有限公司 Digital audio data playing method and device
CN106060717A (en) * 2016-05-26 2016-10-26 广东睿盟计算机科技有限公司 High-definition dynamic noise-reduction pickup
US9748929B1 (en) * 2016-10-24 2017-08-29 Analog Devices, Inc. Envelope-dependent order-varying filter control
CN107039044B (en) * 2017-03-08 2020-04-21 Oppo广东移动通信有限公司 Voice signal processing method and mobile terminal
RU2680735C1 (en) * 2018-10-15 2019-02-26 Акционерное общество "Концерн "Созвездие" Method for separating speech and pauses by analyzing the phase values of the frequency components of noise and signal
WO2020107269A1 (en) * 2018-11-28 2020-06-04 深圳市汇顶科技股份有限公司 Self-adaptive speech enhancement method, and electronic device
US11438452B1 (en) 2019-08-09 2022-09-06 Apple Inc. Propagating context information in a privacy preserving manner
US11501758B2 (en) 2019-09-27 2022-11-15 Apple Inc. Environment aware voice-assistant devices, and related systems and methods
CN111370033B (en) * 2020-03-13 2023-09-22 北京字节跳动网络技术有限公司 Keyboard sound processing method and device, terminal equipment and storage medium
US20230305590A1 (en) * 2020-03-13 2023-09-28 University Of South Australia A data processing method
CN111402916B (en) * 2020-03-24 2023-08-04 青岛罗博智慧教育技术有限公司 Voice enhancement system, method and handwriting board
CN111916106B (en) * 2020-08-17 2021-06-15 牡丹江医学院 Method for improving pronunciation quality in English teaching
CN114550740B (en) * 2022-04-26 2022-07-15 天津市北海通信技术有限公司 Speech clarity algorithm under noise, and train audio playback method and system using the same

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3065739B2 (en) * 1991-10-14 2000-07-17 三菱電機株式会社 Voice section detection device
JPH05259928A (en) * 1992-03-09 1993-10-08 Oki Electric Ind Co Ltd Method and device for canceling adaptive control noise
JPH0695693A (en) * 1992-09-09 1994-04-08 Fujitsu Ten Ltd Noise reducing circuit for voice recognition device
JP3270866B2 (en) * 1993-03-23 2002-04-02 ソニー株式会社 Noise removal method and noise removal device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4461025A (en) * 1982-06-22 1984-07-17 Audiological Engineering Corporation Automatic background noise suppressor
US4630305A (en) * 1985-07-01 1986-12-16 Motorola, Inc. Automatic gain selector for a noise suppression system
US4811404A (en) * 1987-10-01 1989-03-07 Motorola, Inc. Noise suppression system
DE4012349A1 (en) * 1989-04-19 1990-10-25 Ricoh Kk Noise elimination device for speech recognition system - uses spectral subtraction of sampled noise values from sampled speech values (see the sketch after this list)
EP0558312A1 (en) * 1992-02-27 1993-09-01 Central Institute For The Deaf Adaptive noise reduction circuit for a sound reproduction system
US5251263A (en) * 1992-05-22 1993-10-05 Andrea Electronics Corporation Adaptive noise cancellation and speech enhancement system and apparatus therefor
EP0645756A1 (en) * 1993-09-29 1995-03-29 Ericsson Ge Mobile Communications Inc. System for adaptively reducing noise in speech signals
EP0665530A1 (en) * 1994-01-28 1995-08-02 AT&T Corp. Voice activity detection driven noise remediator
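
Several documents in this citation family identify spectral subtraction as the noise-reduction mechanism (for example DE4012349A1 above and DE19747885A1 / US6643619B1 in the list that follows). As a point of reference only, the short Python sketch below illustrates the generic spectral-subtraction technique; it is not the implementation of the present application or of any cited patent, and the frame length, hop size, spectral floor and function name are illustrative assumptions.

import numpy as np

def spectral_subtraction(noisy, noise_only, frame_len=256, hop=128, floor=0.01):
    # Average noise magnitude spectrum, estimated from a noise-only segment
    # (e.g. samples captured during speech pauses).
    window = np.hanning(frame_len)
    noise_frames = [noise_only[i:i + frame_len] * window
                    for i in range(0, len(noise_only) - frame_len + 1, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame_len + 1, hop):
        frame = noisy[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise estimate from the frame's magnitude spectrum and
        # clamp to a small floor to limit "musical noise" artifacts.
        clean = np.maximum(mag - noise_mag, floor * mag)
        out[i:i + frame_len] += np.fft.irfft(clean * np.exp(1j * phase), n=frame_len)
    return out

A practical system would additionally normalize the overlap-added windows, update the noise estimate whenever a voice-activity detector reports a speech pause, and smooth the resulting gain across frames.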

Cited By (200)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19747885A1 (en) * 1997-10-30 1999-05-06 Daimler Chrysler Ag Process for the reduction of acoustic signal interference using the adaptive filter method of spectral subtraction
WO1999023642A1 (en) * 1997-10-30 1999-05-14 Daimlerchrysler Ag Method for reducing interference in acoustic signals by means of an adaptive filter method involving spectral subtraction
US6643619B1 (en) 1997-10-30 2003-11-04 Klaus Linhard Method for reducing interference in acoustic signals using an adaptive filtering method involving spectral subtraction
DE19747885B4 (en) * 1997-10-30 2009-04-23 Harman Becker Automotive Systems Gmbh Method for reducing interference of acoustic signals by means of the adaptive filter method of spectral subtraction
WO1999022561A2 (en) * 1997-10-31 1999-05-14 Koninklijke Philips Electronics N.V. A method and apparatus for audio representation of speech that has been encoded according to the LPC principle, through adding noise to constituent signals therein
WO1999022561A3 (en) * 1997-10-31 1999-07-15 Koninkl Philips Electronics Nv A method and apparatus for audio representation of speech that has been encoded according to the LPC principle, through adding noise to constituent signals therein
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
WO2002017299A1 (en) * 2000-08-21 2002-02-28 Conexant Systems, Inc. Method for noise robust classification in speech coding
CN1302460C (en) * 2000-08-21 2007-02-28 曼德斯必德技术公司 Method for noise robust classification in speech coding
US7283879B2 (en) 2002-03-10 2007-10-16 Ycd Multimedia Ltd. Dynamic normalization of sound reproduction
KR100978015B1 (en) * 2002-07-01 2010-08-25 코닌클리케 필립스 일렉트로닉스 엔.브이. Stationary spectral power dependent audio enhancement system
WO2004004297A3 (en) * 2002-07-01 2004-06-03 Koninkl Philips Electronics Nv Stationary spectral power dependent audio enhancement system
WO2004004297A2 (en) * 2002-07-01 2004-01-08 Koninklijke Philips Electronics N.V. Stationary spectral power dependent audio enhancement system
US7242763B2 (en) 2002-11-26 2007-07-10 Lucent Technologies Inc. Systems and methods for far-end noise reduction and near-end noise compensation in a mixed time-frequency domain compander to improve signal quality in communications systems
DE10305369B4 (en) * 2003-02-10 2005-05-19 Siemens Ag User-adaptive method for noise modeling
DE10305369A1 (en) * 2003-02-10 2004-11-04 Siemens Ag User-adaptive method for noise modeling
EP1339256A3 (en) * 2003-03-03 2005-06-22 Phonak Ag Method for manufacturing acoustical devices and for reducing wind disturbances
US7127076B2 (en) 2003-03-03 2006-10-24 Phonak Ag Method for manufacturing acoustical devices and for reducing especially wind disturbances
US7492916B2 (en) 2003-03-03 2009-02-17 Phonak Ag Method for manufacturing acoustical devices and for reducing especially wind disturbances
US8094847B2 (en) 2003-03-03 2012-01-10 Phonak Ag Method for manufacturing acoustical devices and for reducing especially wind disturbances
US7817677B2 (en) 2004-08-30 2010-10-19 Qualcomm Incorporated Method and apparatus for processing packetized data in a wireless communication system
US7826441B2 (en) 2004-08-30 2010-11-02 Qualcomm Incorporated Method and apparatus for an adaptive de-jitter buffer in a wireless communication system
US7830900B2 (en) 2004-08-30 2010-11-09 Qualcomm Incorporated Method and apparatus for an adaptive de-jitter buffer
US8085678B2 (en) 2004-10-13 2011-12-27 Qualcomm Incorporated Media (voice) playback (de-jitter) buffer adjustments based on air interface
GB2429139B (en) * 2005-08-10 2010-06-16 Zarlink Semiconductor Inc A low complexity noise reduction method
GB2429139A (en) * 2005-08-10 2007-02-14 Zarlink Semiconductor Inc Applying less aggressive noise reduction to an input signal when speech is dominant over noise
US7908138B2 (en) 2005-08-10 2011-03-15 Zarlink Semiconductor Inc. Low complexity noise reduction method
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US8019603B2 (en) 2007-04-03 2011-09-13 Samsung Electronics Co., Ltd Apparatus and method for enhancing speech intelligibility in a mobile terminal
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8583426B2 (en) 2007-09-12 2013-11-12 Dolby Laboratories Licensing Corporation Speech enhancement with voice clarity
WO2009035614A1 (en) * 2007-09-12 2009-03-19 Dolby Laboratories Licensing Corporation Speech enhancement with voice clarity
RU2469423C2 (en) * 2007-09-12 2012-12-10 Долби Лэборетериз Лайсенсинг Корпорейшн Speech enhancement with voice clarity
WO2009082302A1 (en) * 2007-12-20 2009-07-02 Telefonaktiebolaget L M Ericsson (Publ) Noise suppression method and apparatus
US9177566B2 (en) 2007-12-20 2015-11-03 Telefonaktiebolaget L M Ericsson (Publ) Noise suppression method and apparatus
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
CN101221767B (en) * 2008-01-23 2012-05-30 晨星半导体股份有限公司 Speech enhancement device and method used therein
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US11410053B2 (en) 2010-01-25 2022-08-09 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607141B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984327B2 (en) 2010-01-25 2021-04-20 New Valuexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en) 2010-01-25 2021-04-20 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
CN102202038A (en) * 2010-03-24 2011-09-28 华为技术有限公司 Method and system for displaying voice energy, conference server, and terminal
EP2579254A1 (en) * 2010-05-24 2013-04-10 Nec Corporation Signal processing method, information processing device, and signal processing program
EP2579254A4 (en) * 2010-05-24 2014-07-02 Nec Corp Signal processing method, information processing device, and signal processing program
US9837097B2 (en) 2010-05-24 2017-12-05 Nec Corporation Signal processing method, information processing apparatus and signal processing program
US10446167B2 (en) 2010-06-04 2019-10-15 Apple Inc. User-specific noise suppression for voice quality improvements
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
KR20140147687A (en) * 2013-06-20 2014-12-30 하만 베커 오토모티브 시스템즈 게엠베하 Identifying spurious signals in audio signals
KR102180656B1 (en) * 2013-06-20 2020-11-19 하만 베커 오토모티브 시스템즈 게엠베하 Identifying spurious signals in audio signals
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10726859B2 (en) 2015-11-09 2020-07-28 Invisio Communication A/S Method of and system for noise suppression
WO2017081092A1 (en) * 2015-11-09 2017-05-18 Nextlink Ipr Ab Method of and system for noise suppression
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10157627B1 (en) 2017-06-02 2018-12-18 Bose Corporation Dynamic spectral filtering
WO2018222683A1 (en) * 2017-06-02 2018-12-06 Bose Corporation Dynamic spectral filtering
US20220256267A1 (en) * 2018-03-30 2022-08-11 Panasonic Intellectual Property Management Co., Ltd. Noise reduction device
US11665459B2 (en) * 2018-03-30 2023-05-30 Panasonic Intellectual Property Management Co., Ltd. Noise reduction device
CN112927715A (en) * 2021-02-26 2021-06-08 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method and device and computer readable storage medium

Also Published As

Publication number Publication date
MX9801857A (en) 1998-11-29
AU7078496A (en) 1997-04-01
NO981074D0 (en) 1998-03-11
EE03456B1 (en) 2001-06-15
TR199800475T1 (en) 1998-06-22
BR9610290A (en) 1999-03-16
KR100423029B1 (en) 2004-07-01
KR19990044659A (en) 1999-06-25
AU724111B2 (en) 2000-09-14
EE9800068A (en) 1998-08-17
PL185513B1 (en) 2003-05-30
PL325532A1 (en) 1998-08-03
RU2163032C2 (en) 2001-02-10
CA2231107A1 (en) 1997-03-20
DE69613380D1 (en) 2001-07-19
EP0852052B1 (en) 2001-06-13
CN1121684C (en) 2003-09-17
NO981074L (en) 1998-05-13
JPH11514453A (en) 1999-12-07
EP0852052A1 (en) 1998-07-08
CN1201547A (en) 1998-12-09

Similar Documents

Publication Publication Date Title
EP0852052B1 (en) System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
EP0645756B1 (en) System for adaptively reducing noise in speech signals
EP1017042B1 (en) Voice activity detection driven noise remediator
US5794199A (en) Method and system for improved discontinuous speech transmission
EP0699334B1 (en) Method and apparatus for group encoding signals
CA2348913C (en) Complex signal activity detection for improved speech/noise classification of an audio signal
US7613606B2 (en) Speech codecs
FI116643B (en) Noise reduction
US8977556B2 (en) Voice detector and a method for suppressing sub-bands in a voice detector
EP0599664B1 (en) Voice encoder and method of voice encoding
KR19990007936A (en) Battery-powered radio transceiver with improved battery life and method of operating the same
JP2003524796A (en) Method and apparatus for crossing line spectral information quantization method in speech coder
US5710862A (en) Method and apparatus for reducing an undesirable characteristic of a spectral estimate of a noise signal between occurrences of voice signals
EP1040467A1 (en) Communication terminal
US20060041426A1 (en) Noise detection for audio encoding
EP1238479A1 (en) Method and apparatus for suppressing acoustic background noise in a communication system

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 96198008.7

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 EP: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1996931552

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2231107

Country of ref document: CA

Ref document number: 2231107

Country of ref document: CA

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: PA/a/1998/001857

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 1997 512112

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1019980701913

Country of ref document: KR

Ref document number: 1998/00475

Country of ref document: TR

WWP Wipo information: published in national office

Ref document number: 1996931552

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1019980701913

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1996931552

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1019980701913

Country of ref document: KR