WO2004053839A1 - System and method for speech processing using independent component analysis under stability constraints - Google Patents


Info

Publication number
WO2004053839A1
Authority
WO
WIPO (PCT)
Prior art keywords
signals
speech
filter
ica
signal
Application number
PCT/US2003/039593
Other languages
French (fr)
Inventor
Erik Visser
Te-Won Lee
Original Assignee
Softmax, Inc.
Application filed by Softmax, Inc. filed Critical Softmax, Inc.
Priority to US10/537,985 (US7383178B2)
Priority to JP2005511772A (JP2006510069A)
Priority to AU2003296976A (AU2003296976A1)
Priority to EP03812979A (EP1570464A4)
Publication of WO2004053839A1
Priority to IL169587A (IL169587A0)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272 Voice signal separating

Definitions

  • the present invention relates to systems and methods for audio signal processing, in particular to systems and methods for enhancing speech quality in an acoustic environment.
  • Speech signal processing is important in many areas of everyday communication, particularly in those areas where noises are profuse.
  • Noises in the real world abound from multiple sources, including apparently single-source noises, which in the real world become multiple sounds with echoes and reverberations.
  • Background noise may include numerous noise signals generated by the general environment, signals generated by background conversations of other people, as well as the echoes, reflections, and reverberations generated from each of the signals.
  • Speech communication mediums such as cell phones, speakerphones, headsets, hearing aids, cordless telephones, teleconferences, CB radios, walkie-talkies, computer telephony applications, computer and automobile voice command applications and other hands-free applications, intercoms, microphone systems and so forth, can take advantage of speech signal processing to separate the desired speech signals from background noise.
  • PCT publication WO 00/41441 discloses using a specific ICA technique to process input audio signals to reduce noise in the output audio signal.
  • ICA is a technique for separating mixed source signals (components) which are presumably independent from each other.
  • independent component analysis applies an "un-mixing" matrix of weights to the mixed signals, for example multiplying the matrix with the mixed signals, to produce separated signals. The weights are assigned initial values and then adjusted to maximize the joint entropy of the signals, in order to minimize information redundancy.
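The following sketch illustrates the un-mixing idea described above for an instantaneous (non-echoic) mixture. It uses a generic natural-gradient infomax update with a tanh nonlinearity, a common textbook choice rather than this patent's own filter rule; the learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

def infomax_ica(x, lr=0.01, n_iter=200):
    """Instantaneous ICA sketch: adapt an un-mixing matrix W so that the
    outputs u = W @ x have maximal joint entropy (minimal redundancy).
    x is a (channels, samples) array of mixed signals."""
    n, T = x.shape
    W = np.eye(n)                      # initial un-mixing weights
    for _ in range(n_iter):
        u = W @ x                      # candidate separated signals
        g = np.tanh(u)                 # bounded nonlinearity (score function)
        dW = (np.eye(n) - (g @ u.T) / T) @ W   # natural-gradient update
        W += lr * dW
    return W @ x, W
```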
  • blind separation problems refer to the idea of separating mixed signals that come from multiple independent sources.
  • ICA algorithms are not able to effectively separate signals that have been recorded in a real environment, which inherently includes acoustic echoes, such as those due to room reflections. It is emphasized that the methods mentioned so far are restricted to the separation of signals resulting from a linear stationary mixture of source signals. The phenomenon resulting from the summing of direct-path signals and their echoic counterparts is termed reverberation and poses a major issue in artificial speech enhancement and recognition systems.
  • ICA algorithms require long filters to separate those time-delayed and echoed signals, thus precluding effective real-time use.
  • FIGURE 1 shows one embodiment of a prior art ICA signal separation system 100.
  • a network of filters, acting as a neural network, serves to resolve individual signals from any number of mixed signals inputted into the filter network.
  • the system 100 includes two input channels 110 and 120 that receive input signals X1 and X2.
  • for signal X1, an ICA direct filter W1 and an ICA cross filter C2 are applied.
  • for signal X2, an ICA direct filter W2 and an ICA cross filter C1 are applied.
  • the direct filters W1 and W2 communicate for direct adjustments.
  • the cross filters are feedback filters that merge their respective filtered signals with signals filtered by the direct filters. After convergence of the ICA filters, the produced output signals U1 and U2 represent the separated signals.
  • U.S. Patent No. 5,675,659, Torkkola et al. proposes methods and an apparatus for blind separation of delayed and filtered sources.
  • Torkkola suggests an ICA system maximizing the entropy of separated outputs but employing un-mixing filters instead of static coefficients like in Bell's patent.
  • the ICA calculations described in Torkkola to calculate the joint entropy and to adjust the cross filter weights are numerically unstable in the presence of input signals with time-varying input energy, like speech signals, and introduce reverberation artifacts into the separated output signals.
  • the proposed filtering scheme therefore does not achieve stable and perceptually acceptable blind source separation of real-life speech signals.
  • Typical ICA implementations also face additional hurdles, such as requiring substantial computing power to repeatedly calculate the joint entropy of signals and to adjust the filter weights.
  • Many ICA implementations also require multiple rounds of feedback filters and direct correlation of filters. As a result, it is difficult to accomplish ICA filtering of speech in real time and to use a large number of microphones to separate a large number of mixed source signals. In the case of sources originating from spatially localized locations, the un-mixing filter coefficients can be computed with a reasonable number of filter taps and recording microphones.
  • the present invention relates to systems and methods for speech processing useful to identify and separate desired audio signal(s), such as at least one speech signal, in a noisy acoustic environment.
  • the speech process operates on a device having at least two microphones, such as a wireless mobile phone, headset, or cell phone. At least two microphones are positioned on the housing of the device for receiving desired signals from a target, such as speech from a speaker. The microphones are positioned to receive the target user's speech, but also receive noise, speech from other sources, reverberations, echoes, and other undesirable acoustic signals. Both microphones thus receive audio signals that include the desired target speech and a mixture of other undesired acoustic information.
  • the mixed signals from the microphones are processed using a modified ICA (independent component analysis) process.
  • the speech process uses a predefined speech characteristic to assist in identifying the speech signal. In this way, the speech process generates a desired speech signal from the target user, and a noise signal.
  • the noise signal may be used to further filter and process the desired speech signal.
  • An aspect of the invention relates to a speech separation system that includes at least two channels of input signals, each comprising one or a combination of audio signals, and two improved independent component analysis cross filters.
  • the two channels of input signals are filtered by the cross filters, which are preferably infinite impulse response filters with nonlinear bounded functions.
  • the nonlinear bounded functions are nonlinear functions with pre-determined maximum and minimum values that can be computed quickly, for example a sign function that returns as output either a positive or a negative value based on the input value.
  • two channels of output signals are produced, with one channel containing substantially desired audio signals and the other channel containing substantially noise signals.
  • One aspect of the invention relates to systems and methods of separating audio signals into desired speech signals and noise signals.
  • Input signals, which are combinations of desired speech signals and noise signals, are received from at least two channels.
  • An equal number of independent component analysis cross filters are employed. Signals from the first channel are filtered by the first cross filter and combined with signals from the second channel to form augmented signals on the second channel.
  • the augmented signals on the second channel are filtered by the second cross filter and combined with signals from the first channel to form augmented signals on the first channel.
  • the augmented signals on the first channel can be further filtered by the first cross filter.
  • the filtering and combining processes are repeated to reduce information redundancy between the two channels of signals.
  • the produced two channels of output signals represent one channel of predominantly speech signals and one channel of predominantly non-speech signals. Additional speech enhancement methods, such as spectral subtraction, Wiener filtering, de-noising and speech feature extraction may be performed to further improve speech quality.
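As a rough illustration of the feedback structure just described, the sketch below augments each channel with a cross-filtered version of the other and adapts the cross-filter weights with a simple sign-based rule. The tap count, learning rate, and update form are illustrative assumptions, not the patent's exact adaptation rule.

```python
import numpy as np

def cross_filter_separate(x1, x2, taps=32, mu=1e-4):
    """Two-channel feedback cross-filter sketch: u1 is x1 minus a filtered
    version of u2, and vice versa; the weights w12, w21 adapt to reduce
    redundancy between the two output channels."""
    w12 = np.zeros(taps)               # filters u2 fed back into channel 1
    w21 = np.zeros(taps)               # filters u1 fed back into channel 2
    u1 = np.zeros(len(x1))
    u2 = np.zeros(len(x2))
    for t in range(taps, len(x1)):
        past_u2 = u2[t - taps:t][::-1]         # most recent samples first
        past_u1 = u1[t - taps:t][::-1]
        u1[t] = x1[t] - w12 @ past_u2          # augmented channel 1
        u2[t] = x2[t] - w21 @ past_u1          # augmented channel 2
        w12 += mu * np.sign(u1[t]) * past_u2   # cheap bounded weight update
        w21 += mu * np.sign(u2[t]) * past_u1
    return u1, u2   # predominantly speech vs. predominantly noise
```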
  • the filter weight adaptation rule is designed in such a manner that the weight adaptation dynamics are in pace with the overall stability requirement of the feedback structure. Unlike previous approaches, the overall system performance is thus not solely directed towards the desired entropy maximization of separated outputs but considers stability constraints to meet a more realistic objective. This objective is better described as a maximum likelihood principle under stability constraint. These stability constraints in maximum likelihood estimation correspond to modeling temporal characteristics of the source signals. In entropy maximization approaches, signal sources are assumed to be i.i.d. (independent and identically distributed) random variables. However, real signals such as sounds and speech signals are not random signals but have correlations in time and are smooth in frequency. This results in a corresponding modification of the original ICA filter coefficient learning rule.
  • the input channels are scaled down by an adaptive scaling factor to constrain the filter weight adaptation speed.
  • the scaling factor is determined from a recursive equation and is a function of the channel input energy. It is thus unrelated to the entropy maximization of the subsequent ICA filter operations.
  • the adaptive nature of the ICA filter structure implies that the separated output signals contain reverberation artifacts if filter coefficients are adjusted too fast or exhibit oscillating behavior.
  • the learned filter weights have to be smoothed in the time and frequency domains to avoid reverberation effects. Since this smoothing operation slows down the filter learning process, this enhanced speech intelligibility design aspect has an additional stabilizing effect on the overall system performance.
  • the ICA inputs and computed outputs can be pre-processed and post-processed, respectively.
  • an alternative embodiment of the present invention contemplates including voice activity detection and adaptive Wiener filtering since these methods exploit solely temporal or spectral information about the processed signals, and would thus complement the ICA filtering unit.
  • a final aspect of the invention is concerned with computational precision and power issues of the filter feedback structure. In a finite bit precision arithmetic environment (typically 16 bit or 32 bit), the filtering operation is subject to filter coefficient quantization errors. These typically result in deteriorated convergence performance and reduced overall system stability. Quantization effects can be controlled by limiting the cross filter lengths and by changing the original feedback structure so the post-processed ICA output is instead fed back into the ICA filter structure. It is emphasized that the down-scaling of input energy in a finite precision environment is not only necessary from a stability point of view, but also because of the finite range of computed numerical values. Although performance in finite precision environments is reliable and adjustable, the proposed speech processing scheme should preferably be implemented in floating point precision environments. Finally, implementation under computational constraints is accomplished by appropriately choosing the filter length and tuning the filter coefficient update frequency. Indeed, the computational complexity of the ICA filter structure is a direct function of these latter variables.
  • FIGURE 1 illustrates a block diagram of prior art ICA signal separation systems.
  • FIGURE 2 is a block diagram of one embodiment of a speech separation system in accordance with the present invention.
  • FIGURE 3 is a block diagram of one embodiment of an improved ICA processing sub-module in accordance with the present invention.
  • FIGURE 4 is a block diagram of one embodiment of an improved ICA speech separation process in accordance with the present invention.
  • FIGURE 5 is a flowchart of a speech processing method in accordance with the present invention.
  • FIGURE 6 is a flowchart of a speech de-noising process in accordance with the present invention.
  • FIGURE 7 is a flowchart of a speech feature extraction process in accordance with the present invention.
  • FIGURE 8 is a table showing examples of combinations of speech processing processes in accordance with the present invention.
  • FIGURE 9 is a block diagram of one embodiment of a cellular phone with a speech separation system in accordance with the present invention.
  • FIGURE 10 is a block diagram of another embodiment of a cellular phone with a speech separation system.
  • FIG. 2 illustrates one embodiment of a speech separation system 200.
  • the system 200 includes a speech enhancement module 210, an optional speech de- noising module 220, and an optional speech feature extraction module 230.
  • the speech enhancement module 210 includes an improved ICA processing sub-module 212 and optionally a post-processing sub-module 214.
  • the improved ICA processing sub-module 212 uses simplified and improved ICA processing to achieve real-time speech separation with relatively low computing power. In applications that do not require real-time speech separation, the improved ICA processing can further reduce the requirement on computing power.
  • ICA and BSS are interchangeable and refer to methods for minimizing or maximizing the mathematical formulation of mutual information directly or indirectly through approximations, including time- and frequency-domain decorrelation methods such as time delay decorrelation or any other second- or higher-order statistics based decorrelation methods.
  • a "module” or “sub-module” can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions.
  • the improved ICA processing sub-module 212, on its own or in combination with other modules, is embodied in a microprocessor chip located in a cell phone.
  • the elements of the present invention are essentially the code segments to perform the necessary tasks, such as with routines, programs, objects, components, data structures, and the like.
  • the program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
  • the "processor readable medium” may include any medium that can store or transfer information, including volatile, nonvolatile, removable and non-removable media. Examples of the processor readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash- memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk, a fiber optic medium, a radio frequency (RF) link, or any other medium which can be used to store the desired information and which can be accessed.
  • the computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc.
  • the code segments may be downloaded via computer networks such as the Internet, Intranet, etc. In any case, the present invention should not be construed as limited by such embodiments.
  • a speech separation system 200 may include various combinations of one or more speech enhancement modules 210, speech de-noising modules 220, and speech feature extraction modules 230.
  • the speech separation system 200 may also include one or more speech recognition modules (not shown) to be described below. Each of the modules can be used by itself as a stand-alone system or as part of a larger system.
  • the speech separation system is preferably incorporated into an electronic device that accepts speech input in order to control certain functions, or otherwise requires separation of desired noises from background noises. Many applications require enhancing or separating clear desired sound from background sounds originating from multiple directions.
  • Such applications include human-machine interfaces such as in electronic or computational devices which incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. Due to the lower processing power required by the invention's speech separation system, it is suitable for devices that provide only limited processing capabilities.
  • FIGURE 3 illustrates one embodiment 300 of an improved ICA or BSS processing sub-module 212.
  • Input signals X1 and X2 are received from channels 310 and 320, respectively. Typically, each of these signals would come from at least one microphone, but it will be appreciated that other sources may be used.
  • Cross filters W12 and W21 are applied to each of the input signals to produce a channel 330 of separated signals U1 and a channel 340 of separated signals U2.
  • Channel 330 (speech channel) contains predominantly desired signals and channel 340 (noise channel) contains predominantly noise signals.
  • although the terms "speech channel" and "noise channel" are used, "speech" and "noise" are interchangeable based on desirability; e.g., it may be that one speech and/or noise is desirable over other speeches and/or noises.
  • the method can also be used to separate the mixed noise signals from more than two sources.
  • Infinite impulse response filters are preferably used in the improved ICA process.
  • An infinite impulse response filter is a filter whose output signal is fed back into the filter as at least a part of an input signal.
  • A finite impulse response filter is a filter whose output signal is not fed back as input.
  • the cross filters W21 and W12 can have sparsely distributed coefficients over time to capture a long period of time delays. In their most simplified form, the cross filters W21 and W12 are gain factors with only one filter coefficient per filter, for example a delay gain factor for the time delay between the output signal and the feedback input signal, and an amplitude gain factor for amplifying the input signal. In other forms, the cross filters can each have dozens, hundreds or thousands of filter coefficients. A hypothetical single-coefficient cross filter of this kind is sketched below.
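A single-coefficient cross filter reduces to a delayed amplitude gain; the delay and gain values here are placeholders, not values from the patent.

```python
import numpy as np

def single_tap_cross_filter(u, delay=8, gain=0.3):
    """Most simplified cross filter: one coefficient acting as a delayed,
    scaled copy of the feedback signal."""
    out = np.zeros(len(u))
    out[delay:] = gain * u[:-delay]    # delay gain plus amplitude gain
    return out
```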
  • the output signals U1 and U2 can be further processed by a post-processing sub-module, a de-noising module or a speech feature extraction module.
  • although the ICA learning rule has been explicitly derived to achieve blind source separation, its practical implementation for speech processing in an acoustic environment may lead to unstable behavior of the filtering scheme.
  • the adaptation dynamics of W12, and similarly of W21, have to be stable in the first place.
  • the gain margin for such a system is low in general, meaning that an increase in input gain, such as that encountered with non-stationary speech signals, can lead to instability and therefore an exponential increase of weight coefficients.
  • because speech signals generally exhibit a sparse distribution with zero mean, the sign function will oscillate frequently in time and contribute to the unstable behavior.
  • although a large learning parameter is desired for fast convergence, there is an inherent trade-off between stability and performance, since a large input gain will make the system more unstable.
  • the known learning rules not only lead to instability, but also tend to oscillate due to the nonlinear sign function, especially when approaching the stability limit, leading to reverberation of the filtered output signals Y1[t] and Y2[t].
  • the adaptation rules for W12 and W21 need to be stabilized. Extensive analytical and empirical studies have shown that such systems are BIBO (bounded input, bounded output) stable if the learning rules for the filter coefficients are stable. The final objective of the overall processing scheme is thus blind source separation of noisy speech signals under stability constraints.
  • the principal way to ensure stability is therefore to scale the input appropriately, as illustrated by Figure 3.
  • the scaling factor sc_fact is adapted based on the incoming input signal characteristics. For example, if the input is too high, this will lead to an increase in sc_fact, thus reducing the input amplitude. There is a compromise between performance and stability: scaling the input down by sc_fact reduces the SNR, which leads to diminished separation performance. The input should thus be scaled only to the degree necessary to ensure stability. Additional stabilizing can be achieved for the cross filters by running a filter architecture that accounts for short-term fluctuation in weight coefficients at every sample, thereby avoiding the associated reverberation.
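One plausible form of the recursive scaling described above is a leaky average of channel input energy that raises sc_fact when the input runs hot; the smoothing constant and energy target below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def adapt_scale(x_block, sc_fact, alpha=0.9, target=0.1):
    """Adapt the input scaling factor from a recursive function of the
    channel input energy, then scale the block down so the filter
    adaptation stays inside its stability margin."""
    energy = float(np.mean(x_block ** 2))   # current channel input energy
    sc_fact = alpha * sc_fact + (1.0 - alpha) * max(energy / target, 1.0)
    return x_block / sc_fact, sc_fact
```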
  • This adaptation rule filtering can be viewed as time-domain smoothing. Further filter smoothing can be performed in the frequency domain to enforce coherence of the converged separating filter over neighboring frequency bins. This can be conveniently done by zero-padding the K-tap filter to length L, Fourier transforming this filter with increased time support, and then inverse transforming. Since the filter has effectively been windowed with a rectangular time-domain window, it is correspondingly smoothed by a sinc function in the frequency domain. This frequency-domain smoothing can be performed at regular time intervals to periodically reinitialize the adapted filter coefficients to a coherent solution.
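One way to realize the described sinc smoothing, assuming the adapted filter is held per frequency bin, is to constrain the filter to K time-domain taps; the rectangular truncation window is then equivalent to convolving the frequency response with a sinc. The rfft/irfft framing and the K parameter are implementation assumptions.

```python
import numpy as np

def smooth_separating_filter(W_f, K):
    """Enforce coherence across neighboring frequency bins: bring the
    filter to the time domain, keep only K taps (a rectangular window),
    and return the correspondingly sinc-smoothed frequency response."""
    w_time = np.fft.irfft(W_f)    # filter with increased time support
    w_time[K:] = 0.0              # rectangular time-domain window
    return np.fft.rfft(w_time)    # smoothed frequency-domain filter
```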
  • the function f(x) is a nonlinear bounded function, namely a nonlinear function with a predetermined maximum value and a predetermined minimum value.
  • f(x) is a nonlinear bounded function which quickly approaches the maximum value or the minimum value depending on the sign of the variable x.
  • Eq. 3 and Eq. 4 above use a sign function as a simple bounded function.
  • a sign function f(x) is a function with binary values of 1 or -1 depending on whether x is positive or negative.
  • Example nonlinear bounded functions include, but are not limited to:
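The page elides the list at this point; the following are representative bounded nonlinearities of the kind described (not necessarily the patent's own examples), each saturating at predetermined maximum and minimum values.

```python
import numpy as np

def f_sign(x):
    """Binary-valued: +1 or -1 depending on the sign of x."""
    return np.sign(x)

def f_tanh(x):
    """Smooth saturation between -1 and +1."""
    return np.tanh(x)

def f_clip(x, lim=1.0):
    """Hard clip to the interval [-lim, +lim]."""
    return np.clip(x, -lim, lim)
```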
  • Another factor which may affect separation performance is the filter coefficient quantization error effect. Because of the limited filter coefficient resolution, adaptation of filter coefficients yields only gradual additional separation improvements beyond a certain point, and this is therefore a consideration in determining convergence properties.
  • the quantization error effect depends on a number of factors but is mainly a function of the filter length and the bit resolution used.
  • the input scaling issues listed previously are also necessary in finite precision computations where they prevent numerical overflow. Because the convolutions involved in the filtering process could potentially add up to numbers larger than the available resolution range, the scaling factor has to ensure the filter input is sufficiently small to prevent this from happening.
  • the improved ICA processing sub-module 212 receives input signals from at least two audio input channels, such as microphones.
  • the number of audio input channels can be increased beyond the minimum of two channels.
  • speech separation quality may improve as channels are added, generally up to the point where the number of input channels equals the number of audio signal sources.
  • if the sources of the input audio signals include a speaker, a background speaker, a background music source, and general background noise produced by distant road noise and wind noise, then a four-channel speech separation system will normally outperform a two-channel system.
  • as more input channels are used, more filters and more computing power are required.
  • the improved ICA processing sub-module and process can be used to separate more than two channels of input signals.
  • one channel may contain substantially desired speech signal
  • another channel may contain substantially noise signals from one noise source
  • another channel may contain substantially audio signals from another noise source.
  • one channel may include speech predominantly from one target user, while another channel may include speech predominantly from a different target user.
  • a third channel may include noise, and be useful for further processing of the two speech channels. It will be appreciated that additional speech or target channels may be useful.
  • the improved ICA process can be used not only to separate one source of speech signals from background noise, but also to separate one speaker's speech signals from another speaker's speech signals.
  • peripheral processing techniques can be applied to the input and output signals and in varying degrees.
  • Pre-processing techniques as well as postprocessing techniques which complement the methods and systems described herein clearly will enhance the performance of blind source separation techniques applied to audio mixtures.
  • post-processing techniques can be used to improve the quality of the desired signal utilizing the undesirable output or the unseparated inputs.
  • pre-processing techniques or information can enhance the performance of blind source separation techniques applied to audio mixtures by improving the conditioning of the mixing scenario to complement the methods and systems described herein.
  • Improved ICA processing separates sound signals into at least two channels, for example one channel for noise signals (noise channel) and one channel for desired speech signals (speech channel).
  • channel 430 is the speech channel
  • channel 440 is the noise channel.
  • the speech channel contains an undesirable level of noise signals, and the noise channel still contains some speech signals.
  • improved ICA processing alone might not always adequately separate desired speech from noise.
  • the processed signals therefore may need to be post-processed to remove remaining levels of background noise and/or to further improve the quality of the speech signals.
  • a Wiener filter with the noise spectrum estimated from non-speech time intervals detected with a voice activity detector is used to achieve better SNR for signals degraded by background noise with long time support.
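A minimal sketch of such a Wiener post-filter is shown below, assuming the noise power spectrum has already been averaged over frames that a voice activity detector marked as non-speech; the frame size, lack of overlap, and simple SNR estimate are assumptions for illustration.

```python
import numpy as np

def wiener_postfilter(speech, noise_psd, frame=256):
    """Apply a per-bin Wiener gain snr/(snr+1) using a noise power
    spectrum (length frame//2 + 1) estimated from non-speech intervals."""
    out = np.zeros(len(speech))
    for start in range(0, len(speech) - frame + 1, frame):
        X = np.fft.rfft(speech[start:start + frame])
        snr = np.maximum(np.abs(X) ** 2 / (noise_psd + 1e-12) - 1.0, 0.0)
        gain = snr / (snr + 1.0)          # Wiener gain per frequency bin
        out[start:start + frame] = np.fft.irfft(gain * X, n=frame)
    return out
```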
  • the bounded functions are only simplified approximations to the joint entropy calculations, and might not always reduce the signals' information redundancy completely. Therefore, after signals are separated using improved ICA processing, post-processing may be performed to further improve the quality of the speech signals.
  • the separated noise signal channel could be discarded but may also be used for other purposes.
  • those signals in the desired speech channel whose signatures are similar to the signatures of the noise channel signals should be filtered out in the post-processing unit. For example, spectral subtraction techniques can be used to perform post processing. The signatures of the signals in the noise channel are identified.
  • the post processing is more flexible because it analyzes the noise signature of the particular environment and removes noise signals that represent the particular environment. It is therefore less likely to be over-inclusive or under-inclusive in noise removal.
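A minimal spectral-subtraction sketch along these lines is given below: the noise channel's average magnitude signature is subtracted from the speech channel's short-time spectra while the speech phase is kept. The frame size and non-overlapping framing are simplifying assumptions.

```python
import numpy as np

def spectral_subtract(speech, noise, frame=256):
    """Estimate the noise channel's magnitude signature and subtract it
    from the speech channel's short-time spectra."""
    n_frames = len(noise) // frame
    noise_mag = np.mean([np.abs(np.fft.rfft(noise[i * frame:(i + 1) * frame]))
                         for i in range(n_frames)], axis=0)
    out = np.zeros(len(speech))
    for start in range(0, len(speech) - frame + 1, frame):
        X = np.fft.rfft(speech[start:start + frame])
        mag = np.maximum(np.abs(X) - noise_mag, 0.0)   # subtract signature
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(X)),
                                                n=frame)
    return out
```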
  • Speech recognition applications can take advantage of speech signals separated by the speech enhancement process. With speech signals substantially separated from noise, speech recognition engines based on methods such as Hidden Markov Model chains, neural network learning and support vector machines can work with greater accuracy.
  • Method 500 may be used in a speech device, such as a portable wireless mobile phone, a telephone headset, or in a hands-free car kit, for example. It will be appreciated that method 500 may be used on other speech devices, and may be implemented on DSP processors, general computing processors, microprocessors, gate arrays, or other computational devices. In use, method 500 receives acoustic signals in the form of sound signals 502. These sound signals 502 may come from many sources, and may include the speech from a target user, speech from others in the vicinity, noise, reverberations, echoes, reflections, and other undesirable sounds. Although method 500 is shown identifying and separating a single target speech signal, it will be understood that method 500 may be modified to identify and separate additional target sound signals.
  • varying preprocessing techniques or information can be used to improve or facilitate the processing and separation of the mixed audio signals, such as utilizing a priori knowledge, maximizing divergent information or characteristics in the input signals and conditions, improving the conditioning of the mixing scenario, and the like.
  • an additional channel selection stage 510 processes the content of the separated channels based on a priori knowledge 501 about the desired speaker in an iterative manner.
  • the criteria 504 used to identify desired speaker speech characteristics can be based on, but are not limited to, spatial or temporal features, energy, volume, frequency content, zero crossing rate or speaker dependent and independent speech recognition scores computed in parallel to the separation process.
  • the criteria 504 could be configured to respond to constrained vocabulary such as a particular command, e.g., "wake up",
  • the speech device could respond to a sound signal emanating from a particular location or direction, such as the front driver's position in a car. In this way a hands-free car kit could be configured to respond only to speech from the driver, while ignoring speech from passengers and the radio.
  • the conditions of the mixing scenario can be improved by modulating or manipulating the characteristics of the input signals, for example by spatial, temporal, energy, spectral, and the like, modulations and manipulations.
  • the microphones are consistently placed based on predefined distance from the speech source, the background noises or in relation to the other microphones, or have certain characteristics themselves to condition the input signals, e.g., directional microphones.
  • two microphones may be spaced apart and placed on the housing of a speech device.
  • a telephone headset is typically adjusted so that the microphones are within about one inch of the speaker's mouth, and the speaker's voice is typically the closest sound source to the microphone. In a similar manner, the microphones for a handheld wireless phone, handset, or lapel microphone typically have a reasonably known distance to the target speaker's mouth.
  • the process 510 may select only a sound signal that comes from less than two inches away and that has a frequency component indicative of a male voice. In those cases where a two microphone setup is used, the microphones are arranged close to the desired speaker's mouth.
  • This setup makes it possible to isolate the desired speaker's voice signal into one separated ICA channel, so that the remaining separated output channel, containing only noise, can be used as a noise reference for subsequent post-processing of the desired speaker channel.
  • the two channel ICA algorithm is extended to a N-channel (microphone) algorithm in a similar fashion as explained earlier for the two channel scenario, with N*(N-1) ICA cross filters.
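The bookkeeping behind the N*(N-1) figure can be sketched as one cross filter per ordered channel pair; this is only the pair enumeration, not the separation algorithm itself.

```python
def cross_filter_pairs(n_channels):
    """One cross filter per ordered channel pair: N*(N-1) in total."""
    return [(i, j) for i in range(n_channels)
            for j in range(n_channels) if i != j]

# e.g., four microphones imply 4 * 3 = 12 cross filters
assert len(cross_filter_pairs(4)) == 12
```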
  • the latter is used for source localization purposes, along with the channel selection procedure described above, to select from among the N recorded channels the optimal two-channel combination, which is then processed by a two-channel ICA algorithm to separate the desired speaker.
  • All kinds of information sources resulting from the N-channel ICA separation, such as, but not limited to, relative energy changes from recorded input to separated output sources, as well as learned ICA cross filter coefficients, are exploited to this end.
  • Each of the spaced apart microphones receives a signal that is a mixture of the desired target sound and of several noise and reverberation sources.
  • the mixed sound signals 507 and 509 are received in the ICA process 508 for separation.
  • the ICA process 508 separates the mixed sounds into a desired speech signal and a noise signal.
  • The ICA process may use the noise signal to further process 512 the speech signal, for example, by using the noise signal to further refine and set weighting factors.
  • the noise signal may also be used by additional filtering 514 or processes to further remove noise content from the speech signal, as further described below.
  • FIGURE 6 is a flowchart showing one embodiment of a de-noising process.
  • de-noising is best used to separate out noise sources that are not spatially localized, such as wind noise that comes from all directions.
  • De-noising techniques can also be used to remove noise signals with fixed frequencies. From a start block 600, the process proceeds to a block 610. At the block 610, the process receives a block of speech signals x. The process proceeds to a block 620, where the system computes source coefficients s, preferably as the linear transform s_i = sum_j (w_ij * x_j), referred to below as Eq. 10.
  • w_ij represents an ICA weight matrix.
  • An ICA method described in U.S. Patent 5,706,402 or an ICA method described in U.S. patent 6,424,960 can be used in the de-noising process.
  • the process then proceeds to a block 630, a block 640, or a block 650.
  • the blocks 630, 640 and 650 represent alternative embodiments.
  • the process selects a number of significant source coefficients based on the power of the signal s_j.
  • the process applies a maximum likelihood shrinkage function to the computed source coefficients to eliminate the insignificant coefficients.
  • the process filters the speech signals x with one of the basis functions for each time sample t.
  • a_ij represents the training signals produced by filtering incoming signals with the weight factors.
  • the de-noising process thus removes noise and produces the reconstructed speech signals x_new.
  • Good de-noising results are obtained when information about the noise sources is available.
  • the signatures of signals in the noise channel can be used by the de-noising process to remove noise from signals in the speech channel. From the block 660, the process proceeds to an end block 670.
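The de-noising flow of FIGURE 6 might be sketched as follows, with a hard threshold standing in for the maximum-likelihood shrinkage function and the kept-coefficient fraction chosen arbitrarily; W is an ICA weight matrix assumed to have been learned beforehand.

```python
import numpy as np

def ica_denoise(x, W, keep=0.1):
    """Transform a block of speech into source coefficients s = W @ x
    (Eq. 10), shrink insignificant coefficients to zero, and reconstruct
    with the basis functions given by the inverse weight matrix."""
    s = W @ x                                      # source coefficients
    thresh = np.quantile(np.abs(s), 1.0 - keep)
    s_new = np.where(np.abs(s) >= thresh, s, 0.0)  # shrinkage step
    A = np.linalg.inv(W)                           # columns act as basis functions
    return A @ s_new                               # reconstructed speech x_new
```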
  • FIGURE 7 illustrates one embodiment of a speech feature extraction process using ICA.
  • the process starts from a start block 700 to a block 710, where the process receives speech signals x.
  • the speech signals x can be the input speech signals, signals processed by speech enhancement, signals processed by de-noising, or signals processed by speech enhancement and de-noising.
  • the process proceeds from the block 710 to a block 720, where the process computes source coefficients as described above by Eq. 10.
  • the process then proceeds to a block 730, where the received speech signals are decomposed into basis functions.
  • the process proceeds to a block 740, where the computed source coefficients are used as feature vectors. For example, the computed coefficients s_ij,new or 2*log(s_ij,new) are used in calculating feature vectors.
  • the process then proceeds to an end block 750.
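A minimal sketch of this feature-extraction flow, assuming a previously learned ICA weight matrix W and using the log-compressed coefficient form mentioned above; the eps guard is an implementation detail, not from the patent.

```python
import numpy as np

def ica_features(x, W, eps=1e-12):
    """Compute source coefficients with the ICA weight matrix (Eq. 10)
    and return log-compressed energies (2 * log s) as feature vectors."""
    s = W @ x                              # source coefficients
    return 2.0 * np.log(np.abs(s) + eps)   # feature vectors
```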
  • the extracted speech features can be used to recognize speech or to distinguish recognizable speech from other audio signals.
  • the extracted speech features can be used by themselves or in conjunction with cepstral features (MFCC).
  • the extracted speech features can also be used to identify speakers, for example to identify individual speakers from speech signals of multiple speakers, or to identify speech signals as belonging to certain classes such as speech from male or female speakers.
  • the extracted speech features can also be used by a classification algorithm to detect speech signals. For example, a maximum likelihood calculation can be used to determine the likelihood that the signals in question are human speech signals.
  • the extracted speech features can also be applied in text-to-speech applications that produce computer readings of texts.
  • Text-to-speech systems use a large database of speech signals.
  • One challenge is to obtain a good representative database of phonemes.
  • Prior art systems use cepstral features to classify the speech data into the phoneme database.
  • the improved speech feature extraction method can better classify speech into phoneme segments and therefore produce a better database, thus allowing better speech quality for text-to-speech systems.
  • one set of basis functions is used for all speech signals to recognize speech.
  • one set of basis functions is used for each speaker to recognize each speaker. This may be particularly advantageous for multiple-speaker applications such as teleconferences.
  • one set of basis functions is used for one class of speakers to recognize each class. For example, one set of basis functions is used for male speakers and another set is used for female speakers.
  • U.S. patent 6,424,960 describes using an ICA mixture model to identify voices of different classes. Such a model can be used to identify speech signals of different speakers or different genders of speakers.
  • Speech recognition applications can take advantage of speech signals separated by improved ICA processing. With speech signals substantially separated from noise, speech recognition applications can work with greater accuracy. Methods such as Hidden Markov Model, neural network learning and support vector machines can be used in speech recognition applications. As described above, in a two-microphone arrangement, improved ICA processing separates input signals into a speech channel of desired speech signals and some noise signals, and a noise channel of noise signals and some speech signals.
  • the noise channel can provide a noise reference signal used to remove noise from speech signals, for example by using speech spectral subtraction to remove, from a channel of substantially speech signals, signals that have the characteristics of the noise reference signal. Therefore, in a preferred speech recognition system for very noisy environments, the system receives a speech channel and a noise channel of signals and identifies a noise reference signal.
  • FIGURE 8 is a table 800 listing some of the typical combinations of speech enhancement, de-noising and speech feature extraction processes.
  • the left column of the table 800 lists the type of the signals and the right column lists the preferred processes for processing the corresponding type of signals.
  • input signals are first processed using speech enhancement, then processed using speech de-noising, and then processed using speech feature extraction.
  • Heavy noise refers to relatively low-amplitude noise signals that come from multiple sources, for example on a street where various types of noise come from different directions but no one type of noise is particularly loud.
  • Competing source refers to high-amplitude signals from one or a few sources that compete with the desired speech signals, for example a car radio turned to a high volume while the driver is speaking on a car phone. In another arrangement shown in row 820, input signals are first processed using speech enhancement and then processed using speech feature extraction. The speech de-noising process is omitted. The combination of speech enhancement and speech feature extraction processes works well when the original signals contain a competing source and do not contain heavy noise.
  • input signals are first processed using speech de-noising and then processed using speech feature extraction.
  • the speech enhancement process is omitted.
  • the combination of speech de-noising and speech feature extraction processes works well when input signals contain heavy noise and do not contain competing source.
  • only speech feature extraction is performed on the input signals. This process is sufficient to reach good results for relatively clean speech that does not contain heavy noise or competing source.
  • table 800 is only a list of examples and other embodiments can be used. For example, all of the speech enhancement, speech de-noising and speech feature extraction processes can be applied to process signals regardless of their types.
  • FIGURE 9 illustrates one embodiment of a cellular phone device.
  • the cell phone device 900 includes two microphones 910 and 920 for recording sound signals, and a speech separation system 200 for processing the recorded signals to separate the desired speech signal from background noise.
  • the speech separation system 200 includes at least an improved ICA processing sub-module that applies cross filters to the recorded signals to produce separated signals on channels 930 and 940.
  • the separated desired speech signals are then transmitted by transmitter 950 to an audio signal receiving device such as a wired phone or another cellular phone.
  • the separated noise signals may be discarded but may also be used for other purposes.
  • the separated noise signals may be used to determine environment characteristics and adjust cell phone parameters accordingly. For example, the noise signals may be used to determine the noise level of the speaker's environment. The cell phone then increases the volume of the microphones if the speaker is in an environment with a high noise level. As described above, the noise signals can also be used as reference signals to further remove remaining noise from the separated speech signals.
  • Although FIGURE 9 shows two microphones, more than two microphones can be used.
  • Existing manufacturing technology can produce microphones that are about the size of a dime, a pin head or smaller, and multiple microphones can be placed on a device 900.
  • the conventional echo-cancellation process performed in a cell phone is replaced by an ICA process such as the process performed by the improved ICA sub-module.
  • the microphones are preferably placed acoustically apart on a cell phone.
  • one microphone can be placed on the front side of the cell phone while another microphone can be placed on the back side of the cell phone.
  • One microphone can be placed near the top or left side of the cell phone while another microphone can be placed near the bottom or right side of the cell phone.
  • Two microphones can be placed on different locations of the cell phone headset. In one embodiment, two microphones are placed on the headset and two more microphones are placed on the cell phone handheld unit. Therefore two microphones can record the user's speech regardless whether the user uses the handheld unit or the headset.
  • although a cellular phone with improved ICA processing is described as an example, other speech communication mediums, such as voice command for electronic appliances, wired telephones, speakerphones, cordless telephones, teleconferences, CB radios, walkie-talkies, computer telephony applications, computer and automobile speech recognition applications, surveillance devices, intercoms and so forth, can also take advantage of improved ICA processing to separate desired speech signals from other signals.
  • FIGURE 10 illustrates another embodiment of a cellular phone device.
  • the cell phone device 1000 includes two channels 1010 and 1020 for receiving sound signals from another communication device such as another cellular phone.
  • the channels 1010 and 1020 receive sound signals of the same conversation recorded by two microphones. More than two receiving units can be used to receive more than two channels of input signals.
  • the device 1000 also includes a speech separation system 200 for processing the received signals to separate the desired speech signal from background noise.
  • the separated desired speech signals are then amplified by an amplifier 1030 to reach the ear of the cell phone user.
  • By placing the speech separation system 200 on the receiving cell phone, the user of the receiving cell phone can hear high-quality speech even if the transmitting cell phone does not have a speech separation system 200. However, this requires receiving two channels of signals of a conversation recorded by two microphones on the transmitting cell phone.
  • FIGURE 10 For ease of illustration, other cell phone parts such as the battery, the display panel and so forth are omitted from FIGURE 10.
  • Cell phone signal processing steps involving digital-to-analog conversion, or modulation and demodulation for FDMA (frequency division multiple access), TDMA (time division multiple access) or CDMA (code division multiple access), and so forth, are also omitted for ease of illustration.
  • Hyvaerinen, A. and Oja, E., "A fast fixed-point algorithm for independent component analysis," Neural Computation, 9, pp. 1483-1492, 1997.

Abstract

A system and method for separating a mixture of audio signals into desired audio signals (430) (e.g., speech) and a noise signal (440) is disclosed. Microphones (310, 320) are positioned to receive the mixed audio signals, and an independent component analysis (ICA) process (212) separates the sound mixture under stability constraints. The ICA process (508) uses predefined characteristics of the desired speech signal to identify and isolate a target sound signal (430). Filter coefficients are adapted with a learning rule, and filter weight update dynamics are stabilized to assist convergence to a stable separated ICA signal result. The separated signals may be peripherally processed to further reduce noise effects using post-processing (214) and pre-processing (220, 230) techniques and information. The proposed system is designed for, and easily adaptable to, implementation on DSP units or CPUs in audio communication hardware environments.

Description

SYSTEM AND METHOD FOR SPEECH PROCESSING USING IMPROVED INDEPENDENT COMPONENT ANALYSIS
Background of the Invention
Field of the Invention
[0001] The present invention relates to systems and methods for audio signal processing, in particular to systems and methods for enhancing speech quality in an acoustic environment.
Description of the Related Art
[0002] Speech signal processing is important in many areas of everyday communication, particularly in those areas where noises are profuse. Noises in the real world abound from multiple sources, including apparently single-source noises, which in the real world become multiple sounds with echoes and reverberations. Unless separated and isolated, it is difficult to extract the desired sound from the background noise. Background noise may include numerous noise signals generated by the general environment, signals generated by background conversations of other people, as well as the echoes, reflections, and reverberations generated from each of the signals. In communication where users often talk in noisy environments, it is desirable to separate the user's speech signals from background noise. Speech communication mediums, such as cell phones, speakerphones, headsets, hearing aids, cordless telephones, teleconferences, CB radios, walkie-talkies, computer telephony applications, computer and automobile voice command applications and other hands-free applications, intercoms, microphone systems and so forth, can take advantage of speech signal processing to separate the desired speech signals from background noise.
[0003] Many methods have been created to separate desired sound signals from background noise signals. Prior art noise filters identify signals with predetermined characteristics as white noise signals, and subtract such signals from the input signals. These methods, while simple and fast enough for real time processing of sound signals, are not easily adaptable to different sound environments, and can result in substantial degradation of the speech signal sought to be resolved. The predetermined assumptions of noise characteristics can be over-inclusive or under-inclusive. As a result, portions of a person's speech may be considered "noise" by these methods and therefore removed from the output speech signals, while portions of background noise such as music or conversation may be considered non-noise by these methods and therefore included in the output speech signals.
[0004] Other more recently developed methods, such as Independent Component Analysis ("ICA"), provide relatively accurate and flexible means for the separation of speech signals from background noise. For example, PCT publication WO 00/41441 discloses using a specific ICA technique to process input audio signals to reduce noise in the output audio signal. ICA is a technique for separating mixed source signals (components) which are presumably independent from each other. In its simplified form, independent component analysis applies an "un-mixing" matrix of weights to the mixed signals, for example multiplying the matrix with the mixed signals, to produce separated signals. The weights are assigned initial values, and then adjusted to maximize joint entropy of the signals in order to minimize information redundancy. This weight-adjusting and entropy-increasing process is repeated until the information redundancy of the signals is reduced to a minimum. Because this technique does not require information on the source of each signal, it is known as a "blind source separation" method ("BSS"). Blind separation problems refer to the idea of separating mixed signals that come from multiple independent sources.
[0005] One of the earliest discussions of ICA is that by Tony Bell in U.S. Patent No. 5,706,402, which spawned further research. There are now many different ICA techniques or algorithms. A summary of the most widely used algorithms and techniques can be found in books about ICA and the references therein (e.g., Te-Won Lee, Independent Component Analysis: Theory and Applications, Kluwer Academic Publishers, Boston, September 1998; Hyvarinen et al., Independent Component Analysis, 1st edition, Wiley-Interscience, May 18, 2001; Mark Girolami, Self-Organizing Neural Networks: Independent Component Analysis and Blind Source Separation (Perspectives in Neural Computing), Springer Verlag, September 1999; and Mark Girolami (Editor), Advances in Independent Component Analysis (Perspectives in Neural Computing), Springer Verlag, August 2000). Singular value decomposition algorithms are disclosed in Adaptive Filter Theory by Simon Haykin (Third Edition, Prentice-Hall, 1996).
[0006] Many popular ICA algorithms have been developed to optimize their performance, including a number which have evolved by significant modifications of those which existed only a decade ago. For example, the work described in A.J. Bell and T.J. Sejnowski, Neural Computation 7:1129-1159 (1995), and in Bell, A.J., U.S. Patent No. 5,706,402, is usually not used in its patented form. Instead, in order to optimize its performance, this algorithm has gone through several recharacterizations by a number of different entities. One such change includes the use of the "natural gradient", described in Amari, Cichocki, Yang (1996). Other popular ICA algorithms include methods that compute higher-order statistics such as cumulants (Cardoso, 1992; Comon, 1994; Hyvaerinen and Oja, 1997).
[0007] However, many known ICA algorithms are not able to effectively separate signals that have been recorded in a real environment, which inherently includes acoustic echoes, such as those due to room reflections. It is emphasized that the methods mentioned so far are restricted to the separation of signals resulting from a linear stationary mixture of source signals. The phenomenon resulting from the summing of direct-path signals and their echoic counterparts is termed reverberation and poses a major issue in artificial speech enhancement and recognition systems. Presently, ICA algorithms require long filters to separate those time-delayed and echoed signals, thus precluding effective real-time use.
[0008] FIGURE 1 shows one embodiment of a prior art ICA signal separation system 100. In such a prior art system, a network of filters, acting as a neural network, serves to resolve individual signals from any number of mixed signals inputted into the filter network. As shown in FIGURE 1, the system 100 includes two input channels 110 and 120 that receive input signals X1 and X2. For signal X1, an ICA direct filter W1 and an ICA cross filter C2 are applied. For signal X2, an ICA direct filter W2 and an ICA cross filter C1 are applied. The direct filters W1 and W2 communicate for direct adjustments. The cross filters are feedback filters that merge their respective filtered signals with signals filtered by the direct filters. After convergence of the ICA filters, the produced output signals U1 and U2 represent the separated signals.
[0009] U.S. Patent No. 5,675,659, Torkkola et al., proposes methods and an apparatus for blind separation of delayed and filtered sources. Torkkola suggests an ICA system maximizing the entropy of separated outputs but employing un-mixing filters instead of static coefficients like in Bell's patent. However, the ICA calculations described in Torkkola to calculate the joint entropy and to adjust the cross filter weights are numerically unstable in the presence of input signals with time-varying input energy like speech signals and introduce reverberation artifacts into the separated output signals. The proposed filtering scheme therefore does not achieve stable and perceptually acceptable blind source separation of real-life speech signals.
[0010] Typical ICA implementations also face additional hurdles, such as requiring substantial computing power to repeatedly calculate the joint entropy of signals and to adjust the filter weights. Many ICA implementations also require multiple rounds of feedback filters and direct correlation of filters. As a result, it is difficult to accomplish ICA filtering of speech in real time and to use a large number of microphones to separate a large number of mixed source signals. In the case of sources originating from spatially localized locations, the un-mixing filter coefficients can be computed with a reasonable number of filter taps and recording microphones. However, if the source signals are distributed in space, like background noise originating from vibrations, wind noise or background conversation, the signals recorded at the microphone locations emanate from many different directions, requiring either very long and complicated filter structures or a very large number of microphones. Since any real-life system is limited in processing power and hardware complexity, an additional processing approach has to complement the discussed ICA filter structure to provide a robust methodology for real-time speech signal enhancement. The computational complexity of such a system should be compatible with the processing power of small consumer devices such as cell phones, Personal Digital Assistants (PDAs), audio surveillance devices, radios, and the like.
[0011] What is desired is a simplified speech processing method that can separate speech signals from background noise in real time, does not require substantial computing power, yet still produces relatively accurate results and adapts flexibly to different environments.
Summary of the Invention
[0012] The present invention relates to systems and methods for speech processing useful to identify and separate desired audio signal(s), such as at least one speech signal, in a noisy acoustic environment. The speech process operates on a device having at least two microphones, such as a wireless mobile phone, headset, or cell phone. At least two microphones are positioned on the housing of the device for receiving desired signals from a target, such as speech from a speaker. The microphones are positioned to receive the target user's speech, but also receive noise, speech from other sources, reverberations, echoes, and other undesirable acoustic signals. Both microphones thus receive audio signals that include the desired target speech and a mixture of other undesired acoustic information. The mixed signals from the microphones are processed using a modified ICA (independent component analysis) process. The speech process uses a predefined speech characteristic to assist in identifying the speech signal. In this way, the speech process generates a desired speech signal from the target user, and a noise signal. The noise signal may be used to further filter and process the desired speech signal.
[0013] An aspect of the invention relates to a speech separation system that includes at least two channels of input signals, each comprising one or a combination of audio signals, and two improved independent component analysis cross filters. The two channels of input signals are filtered by the cross filters, which are preferably infinite impulse response filters with nonlinear bounded functions. The nonlinear bounded functions are nonlinear functions with pre-determined maximum and minimum values that can be computed quickly, for example a sign function that returns as output either a positive or a negative value based on the input value. Following repeated feedback of signals, two channels of output signals are produced, with one channel containing substantially desired audio signals and the other channel containing substantially noise signals.
[0014] One aspect of the invention relates to systems and methods of separating audio signals into desired speech signals and noise signals. Input signals, which are combinations of desired speech signals and noise signals, are received from at least two channels. An equal number of independent component analysis cross filters are employed. Signals from the first channel are filtered by the first cross filter and combined with signals from the second channel to form augmented signals on the second channel. The augmented signals on the second channel are filtered by the second cross filter and combined with signals from the first channel to form augmented signals on the first channel. The augmented signals on the first channel can be further filtered by the first cross filter. The filtering and combining processes are repeated to reduce information redundancy between the two channels of signals. The two channels of output signals produced represent one channel of predominantly speech signals and one channel of predominantly non-speech signals. Additional speech enhancement methods, such as spectral subtraction, Wiener filtering, de-noising and speech feature extraction, may be performed to further improve speech quality.
[0015] Another aspect of the invention relates to the inclusion of stabilizing elements in the design of the feedback filtering scheme. In one stabilization example, the filter weight adaptation rule is designed in such a manner that the weight adaptation dynamics are in pace with the overall stability requirement of the feedback structure. Unlike previous approaches, the overall system performance is thus not solely directed towards the desired entropy maximization of separated outputs but considers stability constraints to meet a more realistic objective. This objective is better described as a maximum likelihood principle under stability constraint. These stability constraints in maximum likelihood estimation correspond to modeling temporal characteristics of the source signals. In entropy maximization approaches, signal sources are assumed to be i.i.d. (independently, identically distributed) random variables. However, real signals such as sounds and speech signals are not random signals but have correlations in time and are smooth in frequency. This results in a corresponding original ICA filter coefficient learning rule.
[0016] In another stabilization example, since this learning rule is directly dependent on the recorded input amplitude, the input channels are scaled down by an adaptive scaling factor to constrain the filter weight adaptation speed. The scaling factor is determined from a recursive equation and is a function of the channel input energy. It is thus unrelated to the entropy maximization of the subsequent ICA filter operations. Furthermore, the adaptive nature of the ICA filter structure implies that the separated output signals contain reverberation artifacts if filter coefficients are adjusted too quickly or exhibit oscillating behavior. Thus the learned filter weights have to be smoothed in the time and frequency domains to avoid reverberation effects. Since this smoothing operation slows down the filter learning process, this enhanced speech intelligibility design aspect has an additional stabilizing effect on the overall system performance.
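As a rough illustration of such a scaling step, the sketch below keeps a recursive (exponentially averaged) estimate of the channel input energy and derives a scaling factor from it. The specific recursion, the smoothing constant alpha, and the target energy level are assumptions chosen for illustration; the document specifies only that the scaling factor follows a recursive equation in the channel input energy.

```python
import numpy as np

def scale_input_block(block, energy_state, alpha=0.99, target=0.01):
    """Scale one block of channel input by an adaptively tracked energy.

    energy_state is carried between blocks; the exponential average
    below is an assumed form of the recursive equation in the text.
    """
    block_energy = np.mean(block ** 2)
    energy_state = alpha * energy_state + (1.0 - alpha) * block_energy
    # sc_fact grows when the tracked input energy exceeds the target,
    # which reduces the input amplitude and slows weight adaptation.
    sc_fact = max(1.0, np.sqrt(energy_state / target))
    return block / sc_fact, energy_state
```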
[0017] To increase the performance of blind source separation of spatially distributed background noise, which may be constrained by limitations in computational resources and the number of microphones, the ICA inputs and outputs can each be pre-processed or post-processed, respectively. For example, an alternative embodiment of the present invention contemplates including voice activity detection and adaptive Wiener filtering, since these methods exploit solely temporal or spectral information about the processed signals and would thus complement the ICA filtering unit.
[0018] A final aspect of the invention is concerned with computational precision and power issues of the filter feedback structure. In a finite bit precision arithmetic environment (typically 16 bit or 32 bit), the filtering operation is subject to filter coefficient quantization errors. These typically result in deteriorated convergence performance and overall system stability. Quantization effects can be controlled by limiting the cross filter lengths and by changing the original feedback structure so the post-processed ICA output is instead fed back into the ICA filter structure. It is emphasized that the down scaling of input energy in a finite precision environment is not only necessary from a stability point of view, but also because of the finite range of computed numerical values. Although performance in finite precision environments is reliable and adjustable, the proposed speech processing scheme should preferably be implemented in floating point precision environments. Finally, implementation under computational constraints is accomplished by appropriately choosing the filter length and tuning the filter coefficient update frequency. Indeed, the computational complexity of the ICA filter structure is a direct function of these latter variables.
[0019] Other aspects and embodiments are illustrated in drawings, described below in the "Detailed Description" section, or defined by the scope of the claims.
Brief Description of the Drawings
[0020] FIGURE 1 illustrates a block diagram of a prior art ICA signal separation system.
[0021] FIGURE 2 is a block diagram of one embodiment of a speech separation system in accordance with the present invention.

[0022] FIGURE 3 is a block diagram of one embodiment of an improved ICA processing sub-module in accordance with the present invention.
[0023] FIGURE 4 is a block diagram of one embodiment of an improved ICA speech separation process in accordance with the present invention.
[0024] FIGURE 5 is a flowchart of a speech processing method in accordance with the present invention.
[0025] FIGURE 6 is a flowchart of a speech de-noising process in accordance with the present invention.
[0026] FIGURE 7 is a flowchart of a speech feature extraction process in accordance with the present invention.
[0027] FIGURE 8 is a table showing examples of combinations of speech processing processes in accordance with the present invention.
[0028] FIGURE 9 is a block diagram of one embodiment of a cellular phone with a speech separation system in accordance with the present invention.
[0029] FIGURE 10 is a block diagram of another embodiment of a cellular phone with a speech separation system.
Detailed Description of the Preferred Embodiment

[0030] Preferred embodiments of a speech separation system are described below in connection with the drawings. In order to enable real-time processing with limited computing power, the system uses an improved ICA processing sub-module of cross filters with simple and easy-to-compute bounded functions. Compared to conventional approaches, this simplified ICA method reduces the computing power requirement and successfully separates speech signals from non-speech signals.
Speech Separation System Overview
[0031] Figure 2 illustrates one embodiment of a speech separation system 200. The system 200 includes a speech enhancement module 210, an optional speech de-noising module 220, and an optional speech feature extraction module 230. The speech enhancement module 210 includes an improved ICA processing sub-module 212 and optionally a post-processing sub-module 214. The improved ICA processing sub-module 212 uses simplified and improved ICA processing to achieve real-time speech separation with relatively low computing power. In applications that do not require real-time speech separation, the improved ICA processing can further reduce the requirement on computing power. As used herein, the terms ICA and BSS are interchangeable and refer to methods for minimizing or maximizing the mathematical formulation of mutual information directly or indirectly through approximations, including time- and frequency-domain based decorrelation methods such as time delay decorrelation or any other second or higher order statistics based decorrelation methods.
[0032] As used herein, a "module" or "sub-module" can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions. In preferred embodiments with respect to cell phone applications, the improved ICA processing sub-module 212, on its own or in combination with other modules, is embodied in a microprocessor chip located in a cell phone. When implemented in software or other computer-executable instructions, the elements of the present invention are essentially the code segments to perform the necessary tasks, such as with routines, programs, objects, components, data structures, and the like. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link. The "processor readable medium" may include any medium that can store or transfer information, including volatile, nonvolatile, removable and non-removable media. Examples of the processor readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk, a fiber optic medium, a radio frequency (RF) link, or any other medium which can be used to store the desired information and which can be accessed. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet, Intranet, etc. In any case, the present invention should not be construed as limited by such embodiments.
[0033] A speech separation system 200 may include various combinations of one or more speech enhancement modules 210, speech de-noising modules 220, and speech feature extraction modules 230. The speech separation system 200 may also include one or more speech recognition modules (not shown) to be described below. Each of the modules can be used by itself as a stand-alone system or as part of a larger system. As described below, the speech separation system is preferably incorporated into an electronic device that accepts speech input in order to control certain functions, or otherwise requires separation of desired noises from background noises. Many applications require enhancing or separating clear desired sound from background sounds originating from multiple directions. Such applications include human-machine interfaces such as in electronic or computational devices which incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. Due to the lower processing power required by the inventive speech separation system, it is suitable for devices that only provide limited processing capabilities.
Improved ICA Processing
[0034] FIGURE 3 illustrates one embodiment 300 of an improved ICA or BSS processing sub-module 212. Input signals X1 and X2 are received from channels 310 and 320, respectively. Typically, each of these signals would come from at least one microphone, but it will be appreciated that other sources may be used. Cross filters W12 and W21 are applied to each of the input signals to produce a channel 330 of separated signals U1 and a channel 340 of separated signals U2. Channel 330 (speech channel) contains predominantly desired signals and channel 340 (noise channel) contains predominantly noise signals. It should be understood that although the terms "speech channel" and "noise channel" are used, the terms "speech" and "noise" are interchangeable based on desirability, e.g., it may be that one speech and/or noise is desirable over other speeches and/or noises. In addition, the method can also be used to separate the mixed noise signals from more than two sources.
[0035] Infinite impulse response filters are preferably used in the improved ICA process. An infinite impulse response filter is a filter whose output signal is fed back into the filter as at least a part of an input signal. A finite impulse response filter is a filter whose output signal is not fed back as input. The cross filters W21 and W12 can have sparsely distributed coefficients over time to capture a long period of time delays. In their most simplified form, the cross filters W21 and W12 are gain factors with only one filter coefficient per filter, for example a delay gain factor for the time delay between the output signal and the feedback input signal and an amplitude gain factor for amplifying the input signal. In other forms, the cross filters can each have dozens, hundreds or thousands of filter coefficients. As described below, the output signals U1 and U2 can be further processed by a post-processing sub-module, a de-noising module or a speech feature extraction module.
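In that most simplified form, a cross filter reduces to a single amplitude gain applied at a single sample delay. A minimal sketch of such a one-coefficient cross filter follows; the function name and the whole-sample delay are illustrative assumptions:

```python
import numpy as np

def one_tap_cross_filter(x, amp_gain, delay):
    """Apply a single-coefficient cross filter to float samples x:
    one amplitude gain at one time delay (delay in samples, >= 1)."""
    y = np.zeros_like(x)
    y[delay:] = amp_gain * x[:len(x) - delay]
    return y
```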
[0036] Although the ICA learning rule has been explicitly derived to achieve blind source separation, its practical implementation to speech processing in an acoustic environment may lead to unstable behavior of the filtering scheme. To ensure stability of this system, the adaptation dynamics of W12, and similarly of W21, have to be stable in the first place. The gain margin for such a system is low in general, meaning that an increase in input gain, such as encountered with non-stationary speech signals, can lead to instability and therefore an exponential increase of weight coefficients. Since speech signals generally exhibit a sparse distribution with zero mean, the sign function will oscillate frequently in time and contribute to the unstable behavior. Finally, since a large learning parameter is desired for fast convergence, there is an inherent trade-off between stability and performance, since a large input gain will make the system more unstable. The known learning rule not only leads to instability, but also tends to oscillate due to the nonlinear sign function, especially when approaching the stability limit, leading to reverberation of the filtered output signals Y1[t] and Y2[t]. To address these issues, the adaptation rules for W12 and W21 need to be stabilized. If the learning rules for the filter coefficients are stable, extensive analytical and empirical studies have shown that the systems are stable in the BIBO (bounded input, bounded output) sense. The final corresponding objective of the overall processing scheme will thus be blind source separation of noisy speech signals under stability constraints.
[0037] The principal way to ensure stability is therefore to scale the input appropriately, as illustrated by Figure 3. In this framework the scaling factor sc_fact is adapted based on the incoming input signal characteristics. For example, if the input is too high, this will lead to an increase in sc_fact, thus reducing the input amplitude. There is a compromise between performance and stability. Scaling the input down by sc_fact reduces the SNR, which leads to diminished separation performance. The input should thus only be scaled to the degree necessary to ensure stability. Additional stabilizing can be achieved for the cross filters by running a filter architecture that accounts for short term fluctuation in weight coefficients at every sample, thereby avoiding associated reverberation. This adaptation rule filter can be viewed as time domain smoothing. Further filter smoothing can be performed in the frequency domain to enforce coherence of the converged separating filter over neighboring frequency bins. This can be conveniently done by zero tapping the K-tap filter to length L, then Fourier transforming this filter with increased time support, followed by inverse transforming. Since the filter has effectively been windowed with a rectangular time domain window, it is correspondingly smoothed by a sinc function in the frequency domain. This frequency domain smoothing can be accomplished at regular time intervals to periodically reinitialize the adapted filter coefficients to a coherent solution.
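A sketch of this frequency-domain smoothing step is given below. Zero-padding the K-tap filter before transforming it yields the sinc-smoothed spectrum described above; the explicit averaging over neighboring bins used here to enforce coherence is an assumed concretization of that step, and the bin width is an illustrative parameter:

```python
import numpy as np

def smooth_filter_coefficients(w, L, width=3):
    """Periodically reinitialize a K-tap filter to a coherent solution.

    w : length-K array of learned cross-filter coefficients.
    L : zero-padded length (L > K), i.e. the increased time support.
    width : number of neighboring frequency bins averaged together.
    """
    K = len(w)
    spectrum = np.fft.rfft(w, n=L)                  # zero-tap to L, transform
    kernel = np.ones(width) / width
    coherent = np.convolve(spectrum, kernel, mode="same")  # smooth across bins
    return np.fft.irfft(coherent, n=L)[:K]          # inverse transform, keep K taps
```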
[0038] The following equations describe the filtering and weight adaptation calculations that can be used for each time sample window of size t, with k being a time variable and ⊗ denoting convolution:

U1(t) = X1(t) + W12(t) ⊗ X2(t)    (Eq. 1)

U2(t) = X2(t) + W21(t) ⊗ X1(t)    (Eq. 2)

Y1 = sign(U1)    (Eq. 3)

Y2 = sign(U2)    (Eq. 4)

ΔW12,k = −f(Y1) × U2[t−k]    (Eq. 5)

ΔW21,k = −f(Y2) × U1[t−k]    (Eq. 6)
[0039] The function f(x) is a nonlinear bounded function, namely a nonlinear function with a predetermined maximum value and a predetermined minimum value. Preferably, f(x) is a nonlinear bounded function which quickly approaches the maximum value or the minimum value depending on the sign of the variable x. For example, Eq. 3 and Eq. 4 above use a sign function as a simple bounded function. A sign function f(x) is a function with binary values of 1 or -1 depending on whether x is positive or negative. Example nonlinear bounded functions include, but are not limited to:
[Eq. 7: a first example nonlinear bounded function, given as an image in the original document]

f(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))    (Eq. 8)
f(x) = simple(x) = { 1 for x ≥ ε;  x/ε for ε > x > −ε;  −1 for x ≤ −ε }    (Eq. 9)

[0040] These rules assume that floating point precision is available to perform the necessary computations. Although floating point precision is preferred, fixed point arithmetic may be employed as well, more particularly in devices with minimized computational processing capabilities. Notwithstanding the capability to employ fixed point arithmetic, convergence to the optimal ICA solution is more difficult. Indeed, the ICA algorithm is based on the principle that the interfering source has to be cancelled out. Because of certain inaccuracies of fixed point arithmetic in situations where almost equal numbers are subtracted (or very different numbers are added), the ICA algorithm may show less than optimal convergence properties.
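A direct, minimal Python rendering of Eqs. 1 through 6 is sketched below for one block of samples, using the sign function of Eqs. 3 and 4. The learning rate mu, the per-sample tap update, and the absence of the scaling and smoothing safeguards discussed above are simplifications for illustration only:

```python
import numpy as np

def ica_block_update(x1, x2, w12, w21, mu=1e-4):
    """One pass over a sample block, literally following Eqs. 1-6.

    x1, x2 : 1-D arrays of mixed (and pre-scaled) input samples.
    w12, w21 : cross-filter coefficient arrays; w[k] acts at lag k.
    """
    K, n = len(w12), len(x1)
    u1, u2 = np.zeros(n), np.zeros(n)
    for t in range(n):
        lags = range(min(t + 1, K))
        # Eq. 1 / Eq. 2: input plus cross-filtered opposite channel.
        u1[t] = x1[t] + sum(w12[k] * x2[t - k] for k in lags)
        u2[t] = x2[t] + sum(w21[k] * x1[t - k] for k in lags)
        # Eq. 3 / Eq. 4: nonlinear bounded function (sign).
        y1, y2 = np.sign(u1[t]), np.sign(u2[t])
        # Eq. 5 / Eq. 6: update of every filter tap k.
        for k in lags:
            w12[k] -= mu * y1 * u2[t - k]
            w21[k] -= mu * y2 * u1[t - k]
    return u1, u2, w12, w21
```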
[0041] Another factor which may affect separation performance is the filter coefficient quantization error effect. Because of the limited filter coefficient resolution, adaptation of the filter coefficients will yield only gradual additional separation improvements beyond a certain point, and quantization is thus a consideration in determining convergence properties. The quantization error effect depends on a number of factors but is mainly a function of the filter length and the bit resolution used. The input scaling issues listed previously are also necessary in finite precision computations, where they prevent numerical overflow. Because the convolutions involved in the filtering process could potentially add up to numbers larger than the available resolution range, the scaling factor has to ensure the filter input is sufficiently small to prevent this from happening.
Multi-channel Improved ICA Processing
[0042] The improved ICA processing sub-module 212 receives input signals from at least two audio input channels, such as microphones. The number of audio input channels can be increased beyond the minimum of two channels. As the number of input channels increases, speech separation quality may improve, generally to the point where the number of input channels equals the number of audio signal sources. For example, if the sources of the input audio signals include a speaker, a background speaker, a background music source, and a general background noise produced by distant road noise and wind noise, then a four-channel speech separation system will normally outperform a two-channel system. Of course, as more input channels are used, more filters and more computing power are required.
[0043] The improved ICA processing sub-module and process can be used to separate more than two channels of input signals. For example, in a cellular phone application, one channel may contain substantially desired speech signals, another channel may contain substantially noise signals from one noise source, and another channel may contain substantially audio signals from another noise source. For example, in a multi-user environment, one channel may include speech predominantly from one target user, while another channel may include speech predominantly from a different target user. A third channel may include noise, and be useful for further processing of the two speech channels. It will be appreciated that additional speech or target channels may be useful.
[0044] Although some applications involve only one source of desired speech signals, in other applications there may be multiple sources of desired speech signals. For example, teleconference applications or audio surveillance applications may require separating the speech signals of multiple speakers from background noise and from each other. The improved ICA process can be used to not only separate one source of speech signals from background noise, but also to separate one speaker's speech signals from another speaker's speech signals.
Peripheral Processing
[0045] To increase the efficacy and robustness of the invention's methods and systems, varying peripheral processing techniques can be applied to the input and output signals in varying degrees. Pre-processing techniques as well as post-processing techniques which complement the methods and systems described herein will clearly enhance the performance of blind source separation techniques applied to audio mixtures. For example, post-processing techniques can be used to improve the quality of the desired signal utilizing the undesirable output or the unseparated inputs. Similarly, pre-processing techniques or information can enhance the performance of blind source separation techniques applied to audio mixtures by improving the conditioning of the mixing scenario to complement the methods and systems described herein.
[0046] Improved ICA processing separates sound signals into at least two channels, for example one channel for noise signals (noise channel) and one channel for desired speech signals (speech channel). As shown in FIGURE 4, channel 430 is the speech channel and channel 440 is the noise channel. It is quite possible that the speech channel contains an undesirable level of noise signals and the noise channel still contains some speech signals. For example, if there are more than two significant sound sources and only two microphones, or if the two microphones are located close together but the sound sources are located far apart, then improved ICA processing alone might not always adequately separate desired speech from noise. The processed signals therefore may need to be post-processed to remove remaining levels of background noise and/or to further improve the quality of the speech signals. This is achieved by feeding the separated ICA outputs through a single or multi channel speech enhancement algorithm, for example. A Wiener filter with the noise spectrum estimated from non-speech time intervals detected with a voice activity detector can be used to achieve a better SNR for signals degraded by background noise with long time support. In addition, the bounded functions are only simplified approximations to the joint entropy calculations, and might not always reduce the signals' information redundancy completely. Therefore, after signals are separated using improved ICA processing, post processing may be performed to further improve the quality of the speech signals.
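A sketch of such a Wiener post-filter is given below. It assumes that STFT frames of the separated speech channel and per-frame voice activity flags are already available; the framing and the voice activity detector itself are outside the sketch, and all names are illustrative:

```python
import numpy as np

def wiener_post_filter(frames, vad_is_speech, eps=1e-12):
    """Apply a Wiener gain per frequency bin.

    frames : complex STFT frames, shape (num_frames, num_bins).
    vad_is_speech : boolean numpy array, one flag per frame
    (at least one non-speech frame is assumed to exist).
    The noise power spectrum is estimated from the non-speech frames.
    """
    noise_psd = np.mean(np.abs(frames[~vad_is_speech]) ** 2, axis=0)
    psd = np.abs(frames) ** 2
    gain = np.maximum(psd - noise_psd, 0.0) / (psd + eps)  # Wiener gain
    return gain * frames
```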
[0047] The separated noise signal channel could be discarded but may also be used for other purposes. Based on the reasonable assumption that the remaining noise signals in the speech channel have signal signatures similar to those of the noise signals in the noise channel, those signals in the desired speech channel whose signatures are similar to the signatures of the noise channel signals should be filtered out in the post-processing unit. For example, spectral subtraction techniques can be used to perform post processing. The signatures of the signals in the noise channel are identified. Compared to prior art noise filters that rely on predetermined assumptions of noise characteristics, the post processing is more flexible because it analyzes the noise signature of the particular environment and removes noise signals that represent the particular environment. It is therefore less likely to be over-inclusive or under-inclusive in noise removal. Other filtering techniques such as Wiener filtering and Kalman filtering can also be used to perform post processing. Since the ICA filter solution will only converge to a limit cycle of the true solution, the filter coefficients will keep on adapting without resulting in better separation performance. Some coefficients have been observed to drift to their resolution limits. Therefore, a post-processed version of the ICA output containing the desired speaker signal is fed back through the ICA feedback structure as illustrated by Figure 4, so that the convergence limit cycle is overcome without destabilizing the ICA algorithm. A beneficial byproduct of this procedure is that convergence is accelerated considerably.
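A per-frame spectral subtraction sketch using the separated noise channel as the noise signature follows; the spectral floor value is an illustrative parameter:

```python
import numpy as np

def spectral_subtract(speech_frame, noise_frame, floor=0.02):
    """Subtract the noise channel's magnitude spectrum from the speech
    channel's spectrum, keeping the speech channel's phase.

    speech_frame, noise_frame : complex STFT frames of equal length.
    """
    s_mag = np.abs(speech_frame)
    n_mag = np.abs(noise_frame)
    cleaned = np.maximum(s_mag - n_mag, floor * s_mag)  # avoid negative magnitudes
    return cleaned * np.exp(1j * np.angle(speech_frame))
```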
[0048] Other processes such as de-noising and speech feature extraction can be used together with speech enhancement to further improve the quality of the speech signals. Speech recognition applications can take advantage of speech signals separated by the speech enhancement process. With speech signals substantially separated from noise, speech recognition engines based on methods such as Hidden Markov Model chains, neural network learning and support vector machines can work with greater accuracy.
[0049] Referring now to Fig. 5, a flowchart of a speech process is shown. Method 500 may be used in a speech device, such as a portable wireless mobile phone, a telephone headset, or in a hands-free car kit, for example. It will be appreciated that method 500 may be used on other speech devices, and may be implemented on DSP processors, general computing processors, microprocessors, gate arrays, or other computational devices. In use, method 500 receives acoustic signals in the form of sound signals 502. These sound signals 502 may come from many sources, and may include the speech from a target user, speech from others in the vicinity, noise, reverberations, echoes, reflections, and other undesirable sounds. Although method 500 is shown identifying and separating a single target speech signal, it will be understood that method 500 may be modified to identify and separate additional target sound signals.
[0050] In addition, varying preprocessing techniques or information can be used to improve or facilitate the processing and separation of the mixed audio signals, such as utilizing a priori knowledge, maximizing divergent information or characteristics in the input signals and conditions, improving the conditioning of the mixing scenario, and the like. For example, since the output order of the separated ICA sound channels is in general unknown beforehand, an additional channel selection stage 510 processes the content of the separated channels based on a priori knowledge 501 about the desired speaker in an iterative manner. The criteria 504 used to identify desired speaker speech characteristics can be based on, but are not limited to, spatial or temporal features, energy, volume, frequency content, zero crossing rate or speaker dependent and independent speech recognition scores computed in parallel to the separation process. For example, the criteria 504 could be configured to respond to constrained vocabulary such as a particular command, e.g., "wake up". In another example, the speech device could respond to a sound signal emanating from a particular location or direction, such as the front driver's position in a car. In this way a hands-free car kit could be configured to respond only to speech from the driver, while ignoring speech from passengers and the radio. Alternatively, the conditions of the mixing scenario can be improved by modulating or manipulating the characteristics of the input signals, for example by spatial, temporal, energy, spectral, and the like, modulations and manipulations.
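The sketch below scores the separated channels against two of the criteria named above, energy and zero crossing rate. The threshold value and the tie-breaking rule are illustrative assumptions only:

```python
import numpy as np

def select_speech_channel(channels, max_zcr=0.25):
    """Return the index of the channel most likely to hold desired speech.

    channels : list of 1-D sample arrays (the separated ICA outputs).
    Channels with a speech-like (low) zero crossing rate are preferred;
    among those, the one with the highest energy wins.
    """
    def zero_crossing_rate(x):
        return np.mean(np.abs(np.diff(np.sign(x)))) / 2.0

    candidates = [i for i, c in enumerate(channels)
                  if zero_crossing_rate(c) < max_zcr]
    if not candidates:                    # fall back to all channels
        candidates = list(range(len(channels)))
    return max(candidates, key=lambda i: np.mean(channels[i] ** 2))
```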
[0051] On some speech devices, the microphones are consistently placed based on a predefined distance from the speech source or the background noises, or in relation to the other microphones, or have certain characteristics themselves to condition the input signals, e.g., directional microphones. As shown in block 506, two microphones may be spaced apart and placed on the housing of a speech device. For example, a telephone headset is typically adjusted so that the microphones are within about one inch of the speaker's mouth, and the speaker's voice is typically the closest sound source to the microphone. In a similar manner, the microphones for a handheld wireless phone, handset, or lapel microphone typically have a reasonably known distance to the target speaker's mouth. Since the distance from the microphones to the target source is known, this distance may be used as a characteristic to identify the target speech signal. Also, it will be appreciated that multiple characteristics may be used. For example, the process 510 may select only a sound signal that comes from less than two inches away and that has a frequency component indicative of a male voice. In those cases where a two microphone setup is used, the microphones are arranged close to the desired speaker's mouth. This setup allows the desired speaker's voice signal to be isolated into one separated ICA channel so that the remaining separated output channel, containing only noise, can be used as a noise reference for subsequent post processing of the desired speaker channel.
[0052] In recording scenarios where more than two microphones are used, the two channel ICA algorithm is extended to an N-channel (microphone) algorithm in a similar fashion as explained earlier for the two channel scenario, with N*(N-1) ICA cross filters. The N-channel algorithm is used for source localization purposes along with the channel selection procedure presented above to select, among the N recorded channels, the optimal two channel combination, which is then processed in a two channel ICA algorithm to separate the desired speaker. All kinds of information sources resulting from the N-channel ICA separation, such as, but not limited to, relative energy changes from recorded input to separated output sources as well as learned ICA cross filter coefficients, are exploited to this end.
[0053] Each of the spaced apart microphones receives a signal that is a mixture of the desired target sound and of several noise and reverberation sources. The mixed sound signals 507 and 509 are received by the ICA process 508 for separation. After identifying the target speech signal using the identification process 510, the ICA process 508 separates the mixed sounds into a desired speech signal and a noise signal. The ICA process may use the noise signal to further process 512 the speech signal, for example, by using the noise signal to further refine and set weighting factors. Also, the noise signal may be used by additional filtering 514 or processes to further remove noise content from the speech signal, as further described below.
De-noising
[0054] FIGURE 6 is a flowchart showing one embodiment of a de-noising process. In cell phone applications, de-noising is best used to separate out noise sources that are not spatially localized, such as wind noise that comes from all directions. De-noising techniques can also be used to remove noise signals with fixed frequencies. From a start block 600, the process proceeds to a block 610. At the block 610, the process receives a block of speech signals x. The process proceeds to a block 620, where the system computes source coefficients s, preferably using the following formula:
s_i = Σ_j w_ij x_j    (Eq. 10)
[0055] In the formula above, w_ij represents an ICA weight matrix. An ICA method described in U.S. Patent 5,706,402 or an ICA method described in U.S. Patent 6,424,960 can be used in the de-noising process. The process then proceeds to a block 630, a block 640, or a block 650. The blocks 630, 640 and 650 represent alternative embodiments. At the block 630, the process selects a number of significant source coefficients based on the power of the signal s_j. At the block 640, the process applies a maximum likelihood shrinkage function to the computed source coefficients to eliminate the insignificant coefficients. At the block 650, the process filters the speech signals x with one of the basis functions for each time sample t.
[0056] From the block 630, 640, or 650, the process proceeds to a block 660, where the process reconstructs the speech signals, preferably using the following formula:
x_new = Σ_j a_ij s_j,shrinked
[0057] In the above formula, a_ij represents the training signals produced by filtering incoming signals with the weight factors. The de-noising process thus removes noise and produces the reconstructed speech signals x_new. Good de-noising results are obtained when information about the noise sources is available. As described above in connection with the improved ICA process, the signatures of signals in the noise channel can be used by the de-noising process to remove noise from signals in the speech channel. From the block 660, the process proceeds to an end block 670.
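A compact sketch of this de-noising path follows, assuming a learned un-mixing matrix W and a basis (mixing) matrix A are available from training, with a hard threshold standing in for the maximum likelihood shrinkage function of block 640:

```python
import numpy as np

def ica_denoise_block(x, W, A, threshold=0.1):
    """De-noise one block of speech samples.

    x : 1-D sample block; W : un-mixing matrix (Eq. 10); A : basis
    (mixing) matrix used for reconstruction. The hard threshold is an
    illustrative stand-in for the shrinkage function.
    """
    s = W @ x                                             # source coefficients (Eq. 10)
    s_shrinked = np.where(np.abs(s) > threshold, s, 0.0)  # drop insignificant ones
    return A @ s_shrinked                                 # reconstructed x_new
```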
Speech Feature Extraction
[0058] FIGURE 7 illustrates one embodiment of a speech feature extraction process using ICA. The process proceeds from a start block 700 to a block 710, where the process receives speech signals x. As described below in connection with FIGURE 9, the speech signals x can be the input speech signals, signals processed by speech enhancement, signals processed by de-noising, or signals processed by speech enhancement and de-noising.
[0059] Referring back to FIGURE 7, the process proceeds from the block 710 to a block 720, where the process computes source coefficients using
s_i = Σ_j w_ij x_j    (Eq. 10)
as described above by Eq. 10. The process then proceeds to a block 730, where the received speech signals are decomposed into basis functions. From the block 730, the process proceeds to a block 740, where the computed source coefficients are used as feature vectors. For example, the computed coefficients s_ij,new or 2 log s_ij,new are used in calculating feature vectors. The process then proceeds to an end block 750.
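A sketch of this feature computation is given below, again assuming a learned un-mixing matrix W; the log-scaled form mirrors the 2 log s coefficients mentioned above, and the epsilon guard is an illustrative detail:

```python
import numpy as np

def ica_feature_vectors(blocks, W, eps=1e-12):
    """Turn blocks of speech samples into ICA-based feature vectors.

    blocks : iterable of 1-D sample blocks; W : un-mixing matrix.
    Each block is projected onto the basis (Eq. 10) and the log-scaled
    coefficient magnitudes are collected as one feature vector.
    """
    return np.array([2.0 * np.log(np.abs(W @ x) + eps) for x in blocks])
```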
[0060] The extracted speech features can be used to recognize speech or to distinguish recognizable speech from other audio signals. The extracted speech features can be used by themselves or in conjunction with cepstral features (MFCC). The extracted speech features can also be used to identify speakers, for example to identify individual speakers from speech signals of multiple speakers, or to identify speech signals as belonging to certain classes such as speech from male or female speakers. The extracted speech features can also be used by a classification algorithm to detect speech signals. For example, a maximum likelihood calculation can be used to determine the likelihood that the signals in question are human speech signals.
[0061] The extracted speech features can also be applied in text-to-speech applications that produce computer readings of texts. Text-to-speech systems use a large database of speech signals. One challenge is to obtain a good representative database of phonemes. Prior art systems use cepstral features to classify the speech data into the phoneme database. By decomposing speech signals into basis functions, the improved speech feature extraction method can better classify speech into phoneme segments and therefore produce a better database, thus allowing better speech quality for text-to-speech systems.
[0062] In one embodiment of a speech feature extraction process, one set of basis functions is used for all speech signals to recognize speech. In another embodiment, one set of basis functions is used for each speaker to recognize each speaker. This may be particularly advantageous for multiple-speaker applications such as teleconferences. In yet another embodiment, one set of basis functions is used for one class of speakers to recognize each class. For example, one set of basis functions is used for male speakers and another set is used for female speakers. U.S. patent 6,424,960 describes using an ICA mixture model to identify voices of different classes. Such a model can be used to identify speech signals of different speakers or different genders of speakers.
Speech Recognition
[0063] Speech recognition applications can take advantage of speech signals separated by improved ICA processing. With speech signals substantially separated from noise, speech recognition applications can work with greater accuracy. Methods such as Hidden Markov Model, neural network learning and support vector machines can be used in speech recognition applications. As described above, in a two-microphone arrangement, improved ICA processing separates input signals into a speech channel of desired speech signals and some noise signals, and a noise channel of noise signals and some speech signals.
[0064] To improve speech recognition accuracy in noisy environments, it is preferable to have an accurate noise reference signal and to remove noise from the speech signals based on that noise reference signal, for example by using speech spectral subtraction to remove, from a channel of substantially speech signals, signals that have the characteristics of the noise reference signal. Therefore, in a preferred speech recognition system for very noisy environments, the system receives a speech channel and a noise channel of signals and identifies a noise reference signal.
Process Combinations
[0065] Certain embodiments of speech feature extraction, de-noising and speech recognition processes have been described along with the speech enhancement processes. It is worth noting that not all processes need to be used together. FIGURE 8 is a table 800 listing some of the typical combinations of speech enhancement, de-noising and speech feature extraction processes. The left column of the table 800 lists the type of the signals and the right column lists the preferred processes for processing the corresponding type of signals.
[0066] In one arrangement shown in row 810, input signals are first processed using speech enhancement, then processed using speech de-noising, and then processed using speech feature extraction. The combination of these three processes works well when input signals contain heavy noise and a competing source. Heavy noise refers to relatively low amplitude noise signals that come from multiple sources, for example on a street where various types of noises come from different directions but no one type of noise is particularly loud. Competing source refers to high amplitude signals from one or a few sources that compete with the desired speech signals, for example a car radio turned to a high volume when the driver is speaking on a car phone. In another arrangement shown in row 820, input signals are first processed using speech enhancement and then processed using speech feature extraction. The speech de-noising process is omitted. The combination of speech enhancement and speech feature extraction processes works well when original signals contain a competing source and do not contain heavy noise.
[0067] In yet another arrangement shown in row 830, input signals are first processed using speech de-noising and then processed using speech feature extraction. The speech enhancement process is omitted. The combination of speech de-noising and speech feature extraction processes works well when input signals contain heavy noise and do not contain competing source. In still another arrangement shown in row 840, only speech feature extraction is performed on the input signals. This process is sufficient to reach good results for relatively clean speech that does not contain heavy noise or competing source. Of course, table 800 is only a list of examples and other embodiments can be used. For example, all of the speech enhancement, speech de-noising and speech feature extraction processes can be applied to process signals regardless of their types.
Cellular Phone Applications
[0068] FIGURE 9 illustrates one embodiment of a cellular phone device. The cell phone device 900 includes two microphones 910 and 920 for recording sound signals, and a speech separation system 200 for processing the recorded signals to separate the desired speech signal from background noise. The speech separation system 200 includes at least an improved ICA processing sub-module that applies cross filters to the recorded signals to produce separated signals on channels 930 and 940. The separated desired speech signals are then transmitted by transmitter 950 to an audio signal receiving device such as a wired phone or another cellular phone.
[0069] The separated noise signals may be discarded but may also be used for other purposes. The separated noise signals may be used to determine environment characteristics and adjust cell phone parameters accordingly. For example, the noise signals may be used to determine the noise level of the speaker's environment. The cell phone then increases the volume of the microphones if the speaker is in an environment with a high noise level. As described above, the noise signals can also be used as reference signals to further remove remaining noise from the separated speech signals.
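A toy sketch of this volume adjustment follows; the energy thresholds and step size are illustrative assumptions:

```python
import numpy as np

def adapt_microphone_volume(noise_block, mic_gain,
                            high=0.05, low=0.005, step=1.25):
    """Use the separated noise channel to adjust microphone volume.

    noise_block : 1-D samples from the separated noise channel.
    mic_gain : current microphone volume setting (a scalar).
    """
    noise_level = np.mean(noise_block ** 2)
    if noise_level > high:
        mic_gain *= step                       # loud environment: raise volume
    elif noise_level < low:
        mic_gain = max(1.0, mic_gain / step)   # quiet: ease back toward nominal
    return mic_gain
```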
[0070] For ease of illustration, other cell phone parts such as the battery, the display panel and so forth are omitted from FIGURE 9. Cell phone signal processing steps involving analog-to-digital conversion, modulation to enable FDMA (frequency division multiple access), TDMA (time division multiple access) or CDMA (code division multiple access), and so forth are also omitted for ease of illustration.
[0071] Although FIGURE 9 shows two microphones, more than two microphones can be used. Existing manufacturing technology can produce microphones that are about the size of a dime, a pin head or smaller, and multiple microphones can be placed on a device 900.
[0072] In one embodiment, the conventional echo-cancellation process performed in a cell phone is replaced by an ICA process such as the process performed by the improved ICA sub-module.
[0073] Since the audio signal sources are typically apart from each other, the microphones are preferably placed acoustically apart on a cell phone. For example, one microphone can be placed on the front side of the cell phone while another microphone can be placed on the back side of the cell phone. One microphone can be placed near the top or left side of the cell phone while another microphone can be placed near the bottom or right side of the cell phone. Two microphones can be placed on different locations of the cell phone headset. In one embodiment, two microphones are placed on the headset and two more microphones are placed on the cell phone handheld unit. Therefore two microphones can record the user's speech regardless of whether the user uses the handheld unit or the headset.
[0074] Although a cellular phone with improved ICA processing is described as an example, other speech communication mediums, such as voice command for electronic appliances, wired telephones, speakerphones, cordless telephones, teleconferences, CB radios, walkie-talkies, computer telephony applications, computer and automobile speech recognition applications, surveillance devices, intercoms and so forth, can also take advantage of improved ICA processing to separate desired speech signals from other signals.
[0075] FIGURE 10 illustrates another embodiment of a cellular phone device. The cell phone device 1000 includes two channels 1010 and 1020 for receiving sound signals from another communication device such as another cellular phone. The channels 1010 and 1020 receive sound signals of the same conversation recorded by two microphones. More than two receiving units can be used to receive more than two channels of input signals. The device 1000 also includes a speech separation system 200 for processing the received signals to separate the desired speech signal from background noise. The separated desired speech signals are then amplified by an amplifier 1030 to reach the ear of the cell phone user. By placing the speech separation system 200 on the receiving cell phone, the user of the receiving cell phone can hear high-quality speech even if the transmitting cell phone does not have a speech separation system 200. However, this requires receiving two channels of signals of a conversation recorded by two microphones on the transmitting cell phone.
[0076] For ease of illustration, other cell phone parts such as the battery, the display panel and so forth are omitted from FIGURE 10. Cell phone signal processing steps involving digital-to-analog conversion, demodulation to enable FDMA (frequency division multiple access), TDMA (time division multiple access) or CDMA (code division multiple access), and so forth are also omitted for ease of illustration.
[0077] Certain aspects, advantages and novel features of the invention have been described herein. Of course, it is to be understood that not necessarily all such aspects, advantages or features will be embodied in any particular embodiment of the invention. The embodiments discussed herein are provided as examples of the invention, and are subject to additions, alterations and adjustments. For example, although equations 7, 8, and 9 present examples of nonlinear bounded functions, nonlinear bounded functions are not limited to these examples but can include any nonlinear function with pre-determined maximum and minimum values. Therefore, the scope of the invention should be defined by the following claims.
References

Hyvaerinen, A., Karhunen, J., Oja, E., Independent Component Analysis, John Wiley & Sons, Inc., 2001
Te-Won Lee, Independent Component Analysis: Theory and Applications, Kluwer Academic Publishers, Boston, September 1998
Mark Girolami, Self-Organizing Neural Networks: Independent Component Analysis and Blind Source Separation. In Perspectives in Neural Computing, Springer Verlag, September 1999
Mark Girolami (Editor), Advances in Independent Component Analysis, In Perspectives in Neural Computing, Springer Verlag, August 2000
Simon Haykin, Adaptive Filter Theory, Third Edition, Prentice-Hall (NJ), 1996

Bell, A., Sejnowski, T., Neural Computation 7:1129-1159, 1995
Amari, S., Cichocki, A., Yang, H., A New Learning Algorithm for Blind Signal Separation, In: Advances in Neural Information Processing Systems 8, Editors D. Touretzky, M. Mozer, and M. Hasselmo, pp. 757-763, MIT Press, Cambridge MA, 1996

Cardoso, J.-F., Iterative techniques for blind source separation using only fourth order cumulants, In Proc. EUSIPCO, pages 739-742, 1992
Comon, P., Independent component analysis, a new concept? Signal Processing, 36(3):287-314, April 1994.
Hyvaerinen, A. and Oja, E., A fast fixed-point algorithm for independent component analysis, Neural Computation, 9, pp. 1483-1492, 1997

Claims

WHAT IS CLAIMED IS:
1. A method of separating a desired speech signal in an acoustic environment, comprising:
receiving a plurality of input signals, the input signals being generated responsive to the desired speech signal and other acoustic signals;
processing the received input signals using an independent component analysis (ICA) or blind source separation (BSS) process under stability constraints; and
separating the received input signals into one or more desired audio signals and one or more noise signals.
2. The method according to claim 1, wherein one of the desired audio signals is the desired speech signal.
3. The method according to claim 1, wherein the ICA or BSS process includes minimizing or maximizing the mathematical formulation of mutual information directly or indirectly through approximations.
4. The method according to claim 1 further comprising the step of stabilizing the ICA process by pacing ICA weight adaptation dynamics.
5. The method according to claim 1 further comprising the step of stabilizing the ICA process by scaling ICA inputs using an adaptive scaling factor to constrain weight adaptation speed.
6. The method according to claim 1 further comprising the step of stabilizing the ICA process by filtering learned filter weights in the time domain and the frequency domain to avoid reverberation effects.
7. The method according to claim 1, wherein peripheral processing techniques are applied to the input and separated signals in varying degrees.
8. The method according to claim 1, further comprising utilizing pre-processing techniques or information to enhance the performance of the separation.
9. The method according to claim 8, further comprising improving the conditioning of a mixing scenario applied to the input signals.
10. The method according to claim 2 further comprising utilizing characteristic information of the desired speech signal to identify the channel containing the separated desired speech signal.
11. The method according to claim 10 wherein the characteristic information is spatial, spectral or temporal information.
12. The method according to claim 1, wherein post-processing techniques are used to improve the quality of the desired signal utilizing at least one of the noise signals or at least one of the input signals.
13. The method according to claim 12 further including the step of using the separated noise signal to further separate and enhance the desired speech signal.
14. The method according to claim 13 wherein the using step includes using the noise signal to estimate the noise spectrum for a noise filter.
15. The method according to claim 1 further including: spacing apart at least two microphones; and generating one of the input signals at each respective microphone.
16. The method according to claim 15 wherein the spacing step includes spacing the microphones between about 1 mm and about 1 m apart.
17. The method according to claim 15 wherein the spacing step includes spacing the microphones apart on a telephone receiver, a headset, or a hands-free kit.
18. The method according to claim 15, wherein the ICA process includes:
a first adaptive independent component analysis (ICA) filter connected to a first output channel and to a second input channel, the first filter being adapted by a recursive learning rule involving the application of a nonlinear bounded sign function to the noise signal channel;
a second adaptive independent component analysis filter connected to a first input channel and to a second output channel, the second filter being adapted by a recursive learning rule involving the application of a nonlinear bounded sign function to the desired speech signal channel;
wherein the first filter and the second filter are repeatedly applied to produce the desired speech signal.
19. The method according to claim 18, wherein (a) the desired speech channel recursively filtered by the first adaptive independent component analysis filter is fed back and added to the input channel from the second microphone, thereby producing the noise signal channel, and (b) the noise signal channel recursively filtered by the second adaptive independent component analysis filter is fed back and added to the input channel from the first microphone, producing the desired speech signal channel.
20. The method according to claim 19, wherein the input channel signals are scaled down by an adaptive scaling factor computed from a recursive equation as a function of the incoming signal energy.
21. The method according to claim 18, wherein the filter weight learning rule for the first adaptive ICA cross filter is stabilized by smoothing the filter coefficients in time, and wherein the filter weight learning rule for the second adaptive ICA cross filter is stabilized by smoothing the filter coefficients in time.
22. The method according to claim 18, wherein the first adaptive ICA cross filter weights are filtered in the frequency domain, and wherein the second adaptive ICA cross filter weights are filtered in the frequency domain.
23. The method according to claim 18, further comprising a post processing module connected to the desired speech signal which applies a single or multi channel speech enhancement module including voice activity detection and wherein the post-processed outputs are not fed back to the input channels.
24. The method according to claim 18 wherein the ICA process is implemented in a fixed point precision environment where the adaptive ICA cross filters are applied at every sampling instant but where filter coefficients are updated at multiples of the sampling instant and filter lengths of variable size are used to fit the computational power available.
25. The method according to claim 18, further comprising post processing the desired speech signal using the noise signal, the post processing module applying spectral subtraction to the desired speech signal based on the noise signal.
26. The method according to claim 18, further comprising post processing the desired speech signal using the noise signal, the post processing module applying Wiener filtering to the desired speech signal based on the noise signal.
27. The method according to claim 18, further comprising receiving a third set of audio input signals from a third channel, and applying a nonlinear bounded function to incoming signals using a third filter.
28. A speech device, comprising:
at least two spaced-apart microphones constructed to receive acoustic sound signals, the microphones being an expected distance from a speech source; and
an ICA or BSS processor coupled to the microphones, the processor operating steps comprising: receiving sound signals from the two microphones, and separating the sound signals under stability constraints into at least one desired speech signal line and at least one noise signal line.
29. The speech device according to claim 28, further comprising a post process filter coupled to the noise line and to the desired speech signal line.
30. The speech device according to claim 28, wherein the microphones are spaced apart about 1 mm to about 1 m.
31. The speech device according to claim 30, further including pre-processing the acoustic sound signals received at each microphone.
32. The speech device according to claim 28, wherein one of the microphones is on a face of the device housing and the other microphone is on another face of the device housing.
33. The speech device according to claim 28, wherein the speech device is constructed to be a wireless phone.
34. The speech device according to claim 28, wherein the speech device is constructed to be a wireless phone.
35. The speech device according to claim 28, wherein the speech device is constructed to be a hands-free car kit.
36. The speech device according to claim 28, wherein the speech device is constructed to be a headset.
37. The speech device according to claim 28, wherein the speech device is constructed to be a personal data assistant.
38. The speech device according to claim 28, wherein the speech device is constructed to be a handheld bar-code scanning device.
39. A system for separating desired speech signals in an acoustic environment, comprising: a plurality of input channels, each receiving one or more acoustic signals; at least one ICA or BSS filter, wherein the filter separates the received signals under stability constraints into one or more desired audio signals and one or more noise signals; and a plurality of output channels transmitting the separated signals.
40. The system according to claim 39, wherein the desired audio signal is a speech signal received in the plurality of acoustic signals.
41. The system according to claim 39, wherein the filter modulates the mathematical formulation of mutual information directly or indirectly through approximations.
42. The system according to claim 39, wherein the filter stabilizes the ICA process by pacing ICA weight adaptation dynamics.
43. The system according to claim 39, wherein the filter stabilizes the ICA process by scaling ICA inputs using an adaptive scaling factor to constrain weight adaptation speed.
44. The system according to claim 39, wherein the filter stabilizes the ICA process by filtering learned filter weights in the time domain and the frequency domain to avoid reverberation effects.
45. The system according to claim 39, further comprising one or more peripheral processing filters applied to the input and/or output signals.
46. The system according to claim 45, further comprising one or more pre-processing filters.
47. The system according to claim 45, further comprising one or more post-processing filters.
48. The system according to claim 39, further comprising one or more microphones connected to the input channels.
49. The system according to claim 48, comprising two or more microphones spaced between about 1 mm and about 1 m apart.
50. The system according to claim 39, wherein the system is constructed on a handheld device.
51. The system according to claim 39, wherein the filter includes: a first adaptive independent component analysis (ICA) filter connected to a first output channel and to a second input channel, the first filter being adapted by a recursive learning rule involving the application of a nonlinear bounded sign function to the noise signal channel; a second adaptive independent component analysis filter connected to a first input channel and to a second output channel, the second filter being adapted by a recursive learning rule involving the application of a nonlinear bounded sign function to the desired speech signal channel; wherein the first filter and the second filter are repeatedly applied to produce the desired speech signal.
52. A system for isolating a speech signal, comprising: a set of signal generators, each signal generator arranged to generate a mixed signal indicative of a mixture of the speech signal and other acoustic signals; a processor configured to receive each of the mixed signals, the processor operating a method comprising: processing the set of mixed signals using independent component analysis (ICA) or blind source separation (BSS) under stability constraints; and separating the mixed signals into the speech signal and at least one noise signal; and a speech enabled unit receiving the speech signal.
53. The system according to claim 52, wherein the signal generators are constructed as acoustic transducers.
54. The system according to claim 53, wherein the acoustic transducers are microphones constructed to receive acoustic signals in the human-speech frequency range.
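
The feedback cross-filter structure recited in claims 18, 19 and 51 can be made concrete with a short sketch. The Python fragment below is only one plausible reading of those claims, not a reference implementation: the filter length L, learning rate mu, and all names (feedback_ica_separate, w12, w21, u1, u2) are our illustrative choices.

import numpy as np

def feedback_ica_separate(x1, x2, L=64, mu=1e-4):
    """Two-channel feedback cross-filter separation (illustrative sketch).

    x1 -- samples from the microphone nearer the desired speaker
    x2 -- samples from the second microphone
    Returns (u1, u2): the desired-speech channel and the noise channel.
    """
    n = len(x1)
    w12 = np.zeros(L)   # cross filter feeding the noise channel back into mic 1
    w21 = np.zeros(L)   # cross filter feeding the speech channel back into mic 2
    u1 = np.zeros(n)    # desired speech signal channel
    u2 = np.zeros(n)    # noise signal channel
    for t in range(n):
        h1 = u1[max(0, t - L):t][::-1]   # recent speech-channel history
        h2 = u2[max(0, t - L):t][::-1]   # recent noise-channel history
        # Each output is its microphone input plus the other output,
        # recursively filtered and fed back (the claim 19 structure).
        u1[t] = x1[t] + np.dot(w12[:len(h2)], h2)
        u2[t] = x2[t] + np.dot(w21[:len(h1)], h1)
        # Recursive learning rule with a nonlinear bounded sign function
        # (claim 18); the sign keeps every update bounded.
        w12[:len(h2)] -= mu * np.sign(u1[t]) * h2
        w21[:len(h1)] -= mu * np.sign(u2[t]) * h1
    return u1, u2

Because the sign nonlinearity bounds each weight update regardless of how large the separated outputs grow, loud inputs cannot blow up the recursion, which is one way a bounded sign function supports the stability constraints the claims refer to.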
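Claims 20 and 43 scale the inputs down by an adaptive factor computed from a recursive equation in the incoming signal energy. A minimal sketch, assuming a first-order recursive energy tracker; the patent's exact recursion may differ, and alpha and eps are illustrative constants:

import numpy as np

def adaptive_scale(x, alpha=0.99, eps=1e-8):
    """Scale the input down by a recursively tracked energy estimate."""
    e = 0.0
    y = np.empty(len(x))
    for t in range(len(x)):
        e = alpha * e + (1.0 - alpha) * x[t] * x[t]   # recursive energy equation
        y[t] = x[t] / np.sqrt(e + eps)                # louder input, stronger scale-down
    return y

Scaling the inputs this way indirectly paces the weight adaptation speed: the learning rule sees unit-energy signals, so its effective step size no longer depends on the raw input level.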
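Claims 21, 22 and 44 stabilize the learned cross-filter weights by smoothing the coefficients in time and filtering them in the frequency domain. One minimal interpretation, with the blend factor lam and the spectral cutoff keep chosen purely for illustration:

import numpy as np

def smooth_weights_in_time(w_new, w_prev, lam=0.9):
    """Time smoothing: exponentially blend the fresh taps with the previous taps."""
    return lam * w_prev + (1.0 - lam) * w_new

def filter_weights_in_frequency(w, keep=0.75):
    """Frequency-domain filtering of the taps: discard the upper part of the
    coefficient sequence's spectrum, damping the sharp weight structure that
    tends to produce long, reverberant filter tails."""
    W = np.fft.rfft(w)
    W[int(len(W) * keep):] = 0.0
    return np.fft.irfft(W, n=len(w))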
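Claim 24 separates filtering from adaptation for fixed-point hardware: the cross filters run at every sampling instant, while the coefficients are updated only at multiples of the sampling instant. A sketch of that scheduling, reusing the feedback structure above; the update period K and filter length L are placeholders to be sized to the available computational power:

import numpy as np

def scheduled_separation(x1, x2, L=32, mu=1e-4, K=8):
    """Apply the cross filters every sample; adapt the weights every K-th sample."""
    n = len(x1)
    w12, w21 = np.zeros(L), np.zeros(L)
    u1, u2 = np.zeros(n), np.zeros(n)
    for t in range(n):
        h1 = u1[max(0, t - L):t][::-1]
        h2 = u2[max(0, t - L):t][::-1]
        u1[t] = x1[t] + np.dot(w12[:len(h2)], h2)   # filtering at every sampling instant
        u2[t] = x2[t] + np.dot(w21[:len(h1)], h1)
        if t % K == 0:                              # coefficient update only every K samples
            w12[:len(h2)] -= mu * np.sign(u1[t]) * h2
            w21[:len(h1)] -= mu * np.sign(u2[t]) * h1
    return u1, u2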
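Claims 25 and 26 post-process the separated speech channel using the separated noise channel, by spectral subtraction and Wiener filtering respectively. A frame-wise sketch under simplifying assumptions (rectangular frames, no overlap-add; frame, floor and eps are illustrative parameters, and both function names are ours):

import numpy as np

def spectral_subtract(speech, noise, frame=256, floor=0.05):
    """Subtract the noise channel's magnitude spectrum from the speech channel's."""
    out = np.asarray(speech, dtype=float).copy()
    for i in range(0, len(speech) - frame + 1, frame):
        S = np.fft.rfft(speech[i:i + frame])
        N = np.fft.rfft(noise[i:i + frame])
        mag = np.maximum(np.abs(S) - np.abs(N), floor * np.abs(S))  # spectral floor
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(S)), n=frame)
    return out

def wiener_filter(speech, noise, frame=256, eps=1e-8):
    """Apply a frame-wise Wiener gain built from the two separated channels."""
    out = np.asarray(speech, dtype=float).copy()
    for i in range(0, len(speech) - frame + 1, frame):
        S = np.fft.rfft(speech[i:i + frame])
        N = np.fft.rfft(noise[i:i + frame])
        G = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + eps)  # Wiener gain
        out[i:i + frame] = np.fft.irfft(G * S, n=frame)
    return out

Consistent with claim 23, these post-processed outputs are final-stage results only; nothing here is fed back into the input channels or into the adaptation loop.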
PCT/US2003/039593 2002-12-11 2003-12-11 System and method for speech processing using independent component analysis under stability constraints WO2004053839A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US10/537,985 US7383178B2 (en) 2002-12-11 2003-12-11 System and method for speech processing using independent component analysis under stability constraints
JP2005511772A JP2006510069A (en) 2002-12-11 2003-12-11 System and method for speech processing using improved independent component analysis
AU2003296976A AU2003296976A1 (en) 2002-12-11 2003-12-11 System and method for speech processing using independent component analysis under stability constraints
EP03812979A EP1570464A4 (en) 2002-12-11 2003-12-11 System and method for speech processing using independent component analysis under stability constraints
IL169587A IL169587A0 (en) 2005-07-07 System and method for speech processing using independent component analysis under stability constraints

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US43269102P 2002-12-11 2002-12-11
US60/432,691 2002-12-11
US50225303P 2003-09-12 2003-09-12
US60/502,253 2003-09-12

Publications (1)

Publication Number Publication Date
WO2004053839A1 (en) 2004-06-24

Family

ID=32511658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/039593 WO2004053839A1 (en) 2002-12-11 2003-12-11 System and method for speech processing using independent component analysis under stability constraints

Country Status (6)

Country Link
US (1) US7383178B2 (en)
EP (1) EP1570464A4 (en)
JP (1) JP2006510069A (en)
KR (1) KR20050115857A (en)
AU (1) AU2003296976A1 (en)
WO (1) WO2004053839A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006084928A (en) * 2004-09-17 2006-03-30 Nissan Motor Co Ltd Sound input device
EP1784820A2 (en) * 2004-07-22 2007-05-16 Softmax, Inc. Separation of target acoustic signals in a multi-transducer arrangement
US7409375B2 (en) 2005-05-23 2008-08-05 Knowmtech, Llc Plasticity-induced self organizing nanotechnology for the extraction of independent components from a data stream
KR100875264B1 (en) 2006-08-29 2008-12-22 학교법인 동의학원 Post-processing method for blind signal separation
WO2009051959A1 (en) 2007-10-18 2009-04-23 Motorola, Inc. Robust two microphone noise suppression system
WO2009106918A1 (en) * 2008-02-27 2009-09-03 Sony Ericsson Mobile Communications Ab Electronic devices and methods that adapt filtering of a microphone signal responsive to recognition of a targeted speaker's voice
US8160273B2 (en) 2007-02-26 2012-04-17 Erik Visser Systems, methods, and apparatus for signal separation using data driven techniques
US8175291B2 (en) 2007-12-19 2012-05-08 Qualcomm Incorporated Systems, methods, and apparatus for multi-microphone based speech enhancement
KR101184394B1 (en) 2006-05-10 2012-09-20 에이펫(주) method of noise source separation using Window-Disjoint Orthogonal model
US8311236B2 (en) 2007-10-04 2012-11-13 Panasonic Corporation Noise extraction device using microphone
US8321214B2 (en) 2008-06-02 2012-11-27 Qualcomm Incorporated Systems, methods, and apparatus for multichannel signal amplitude balancing
WO2012161555A2 (en) * 2011-05-26 2012-11-29 주식회사 마이티웍스 Signal-separation system using a directional microphone array and method for providing same
KR101280253B1 (en) 2008-12-22 2013-07-05 한국전자통신연구원 Method for separating source signals and its apparatus
US8553922B2 (en) 2010-02-24 2013-10-08 Yamaha Corporation Earphone microphone
US8898056B2 (en) 2006-03-01 2014-11-25 Qualcomm Incorporated System and method for generating a separated signal by reordering frequency components
CN107924685A (en) * 2015-12-21 2018-04-17 华为技术有限公司 Signal processing apparatus and method
CN111402883A (en) * 2020-03-31 2020-07-10 云知声智能科技股份有限公司 Nearby response system and method in distributed voice interaction system in complex environment

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7266501B2 (en) * 2000-03-02 2007-09-04 Akiba Electronics Institute Llc Method and apparatus for accommodating primary content audio and secondary content remaining audio capability in the digital audio production process
DE60304859T2 (en) * 2003-08-21 2006-11-02 Bernafon Ag Method for processing audio signals
KR100600313B1 (en) * 2004-02-26 2006-07-14 남승현 Method and apparatus for frequency domain blind separation of multipath multichannel mixed signal
KR100653173B1 (en) * 2005-11-01 2006-12-05 한국전자통신연구원 Multi-channel blind source separation mechanism for solving the permutation ambiguity
KR100741608B1 (en) * 2005-11-18 2007-07-20 엘지노텔 주식회사 Mobile communication system having a virtual originating call generating function and controlling method therefore
JP2007215163A (en) * 2006-01-12 2007-08-23 Kobe Steel Ltd Sound source separation apparatus, program for sound source separation apparatus and sound source separation method
US8874439B2 (en) * 2006-03-01 2014-10-28 The Regents Of The University Of California Systems and methods for blind source signal separation
US8068627B2 (en) 2006-03-14 2011-11-29 Starkey Laboratories, Inc. System for automatic reception enhancement of hearing assistance devices
US8494193B2 (en) * 2006-03-14 2013-07-23 Starkey Laboratories, Inc. Environment detection and adaptation in hearing assistance devices
US7986790B2 (en) * 2006-03-14 2011-07-26 Starkey Laboratories, Inc. System for evaluating hearing assistance device settings using detected sound environment
US7970564B2 (en) * 2006-05-02 2011-06-28 Qualcomm Incorporated Enhancement techniques for blind source separation (BSS)
US20080010065A1 (en) * 2006-06-05 2008-01-10 Harry Bratt Method and apparatus for speaker recognition
KR100776803B1 (en) * 2006-09-26 2007-11-19 한국전자통신연구원 Apparatus and method for recognizing speaker using fuzzy fusion based multichannel in intelligence robot
EP1912472A1 (en) * 2006-10-10 2008-04-16 Siemens Audiologische Technik GmbH Method for operating a hearing aid and hearing aid
KR100848789B1 (en) * 2006-10-31 2008-07-30 한국전력공사 Postprocessing method for removing cross talk
US8380494B2 (en) * 2007-01-24 2013-02-19 P.E.S. Institute Of Technology Speech detection using order statistics
JP4449987B2 (en) * 2007-02-15 2010-04-14 ソニー株式会社 Audio processing apparatus, audio processing method and program
JP2010519602A (en) * 2007-02-26 2010-06-03 クゥアルコム・インコーポレイテッド System, method and apparatus for signal separation
US8348839B2 (en) * 2007-04-10 2013-01-08 General Electric Company Systems and methods for active listening/observing and event detection
US7742746B2 (en) * 2007-04-30 2010-06-22 Qualcomm Incorporated Automatic volume and dynamic range adjustment for mobile audio devices
KR100890708B1 (en) * 2007-06-04 2009-03-27 에스케이 텔레콤주식회사 Apparatus and method for removing residual noise
US20080310751A1 (en) * 2007-06-15 2008-12-18 Barinder Singh Rai Method And Apparatus For Providing A Variable Blur
ATE532324T1 (en) * 2007-07-16 2011-11-15 Nuance Communications Inc METHOD AND SYSTEM FOR PROCESSING AUDIO SIGNALS IN A MULTIMEDIA SYSTEM OF A VEHICLE
WO2009020001A1 (en) * 2007-08-07 2009-02-12 Nec Corporation Voice mixing device, and its noise suppressing method and program
US8175871B2 (en) 2007-09-28 2012-05-08 Qualcomm Incorporated Apparatus and method of noise and echo reduction in multiple microphone audio systems
US8954324B2 (en) 2007-09-28 2015-02-10 Qualcomm Incorporated Multiple microphone voice activity detector
US8223988B2 (en) * 2008-01-29 2012-07-17 Qualcomm Incorporated Enhanced blind source separation algorithm for highly correlated mixtures
US8045661B2 (en) * 2008-02-04 2011-10-25 Texas Instruments Incorporated System and method for blind identification of multichannel finite impulse response filters using an iterative structured total least-squares technique
US8144896B2 (en) * 2008-02-22 2012-03-27 Microsoft Corporation Speech separation with microphone arrays
DE102008023370B4 (en) * 2008-05-13 2013-08-01 Siemens Medical Instruments Pte. Ltd. Method for operating a hearing aid and hearing aid
KR101178801B1 (en) * 2008-12-09 2012-08-31 한국전자통신연구원 Apparatus and method for speech recognition by using source separation and source identification
JP5605575B2 (en) * 2009-02-13 2014-10-15 日本電気株式会社 Multi-channel acoustic signal processing method, system and program thereof
WO2010092913A1 (en) * 2009-02-13 2010-08-19 日本電気株式会社 Method for processing multichannel acoustic signal, system thereof, and program
JP2011107603A (en) * 2009-11-20 2011-06-02 Sony Corp Speech recognition device, speech recognition method and program
JP5641186B2 (en) * 2010-01-13 2014-12-17 ヤマハ株式会社 Noise suppression device and program
US9357307B2 (en) 2011-02-10 2016-05-31 Dolby Laboratories Licensing Corporation Multi-channel wind noise suppression system and method
JP5568530B2 (en) * 2011-09-06 2014-08-06 日本電信電話株式会社 Sound source separation device, method and program thereof
WO2013093569A1 (en) * 2011-12-23 2013-06-27 Nokia Corporation Audio processing for mono signals
CN103325383A (en) 2012-03-23 2013-09-25 杜比实验室特许公司 Audio processing method and audio processing device
US10497381B2 (en) 2012-05-04 2019-12-03 Xmos Inc. Methods and systems for improved measurement, entity and parameter estimation, and path propagation effect measurement and mitigation in source signal separation
KR102118411B1 (en) 2012-05-04 2020-06-03 액스모스 인코포레이티드 Systems and methods for source signal separation
US9881616B2 (en) * 2012-06-06 2018-01-30 Qualcomm Incorporated Method and systems having improved speech recognition
US8958586B2 (en) 2012-12-21 2015-02-17 Starkey Laboratories, Inc. Sound environment classification by coordinated sensing using hearing assistance devices
EP3042377B1 (en) 2013-03-15 2023-01-11 Xmos Inc. Method and system for generating advanced feature discrimination vectors for use in speech recognition
US9466310B2 (en) 2013-12-20 2016-10-11 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Compensating for identifiable background content in a speech recognition device
US9390712B2 (en) 2014-03-24 2016-07-12 Microsoft Technology Licensing, Llc. Mixed speech recognition
KR20170063618A (en) 2014-10-07 2017-06-08 삼성전자주식회사 Electronic device and its reverberation removing method
US9668066B1 (en) * 2015-04-03 2017-05-30 Cedar Audio Ltd. Blind source separation systems
WO2017084397A1 (en) 2015-11-19 2017-05-26 The Hong Kong University Of Science And Technology Method, system and storage medium for signal separation
US20170206904A1 (en) * 2016-01-19 2017-07-20 Knuedge Incorporated Classifying signals using feature trajectories
US10318813B1 (en) * 2016-03-11 2019-06-11 Gracenote, Inc. Digital video fingerprinting using motion segmentation
US10249305B2 (en) 2016-05-19 2019-04-02 Microsoft Technology Licensing, Llc Permutation invariant training for talker-independent multi-talker speech separation
CN107437420A (en) * 2016-05-27 2017-12-05 富泰华工业(深圳)有限公司 Method of reseptance, system and the device of voice messaging
US10431211B2 (en) * 2016-07-29 2019-10-01 Qualcomm Incorporated Directional processing of far-field audio
US10957337B2 (en) 2018-04-11 2021-03-23 Microsoft Technology Licensing, Llc Multi-microphone speech separation
CN108766455B (en) 2018-05-16 2020-04-03 南京地平线机器人技术有限公司 Method and device for denoising mixed signal
CN110738990B (en) * 2018-07-19 2022-03-25 南京地平线机器人技术有限公司 Method and device for recognizing voice
JP7044040B2 (en) * 2018-11-28 2022-03-30 トヨタ自動車株式会社 Question answering device, question answering method and program
CN112002339B (en) * 2020-07-22 2024-01-26 海尔优家智能科技(北京)有限公司 Speech noise reduction method and device, computer-readable storage medium and electronic device
CN113470689B (en) * 2021-08-23 2024-01-30 杭州国芯科技股份有限公司 Voice separation method
CN114333897B (en) * 2022-03-14 2022-05-31 青岛科技大学 BrBCA blind source separation method based on multi-channel noise variance estimation

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5383164A (en) * 1993-06-10 1995-01-17 The Salk Institute For Biological Studies Adaptive system for broadband multisignal discrimination in a channel with reverberation
US5706402A (en) * 1994-11-29 1998-01-06 The Salk Institute For Biological Studies Blind signal processing system employing information maximization to recover unknown signals through unsupervised minimization of output redundancy
US5770841A (en) * 1995-09-29 1998-06-23 United Parcel Service Of America, Inc. System and method for reading package information
US5999567A (en) * 1996-10-31 1999-12-07 Motorola, Inc. Method for recovering a source signal from a composite signal and apparatus therefor
US6002776A (en) * 1995-09-18 1999-12-14 Interval Research Corporation Directional acoustic signal processor and method therefor
EP1006652A2 (en) * 1998-12-01 2000-06-07 Siemens Corporate Research, Inc. An estimator of independent sources from degenerate mixtures
US6167417A (en) * 1998-04-08 2000-12-26 Sarnoff Corporation Convolutive blind source separation using a multiple decorrelation method
WO2001027874A1 (en) * 1999-10-14 2001-04-19 The Salk Institute Unsupervised adaptation and classification of multi-source data using a generalized gaussian mixture model
US20020110256A1 (en) * 2001-02-14 2002-08-15 Watson Alan R. Vehicle accessory microphone
US20020136328A1 (en) * 2000-11-01 2002-09-26 International Business Machines Corporation Signal separation method and apparatus for restoring original signal from observed data

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4649505A (en) 1984-07-02 1987-03-10 General Electric Company Two-input crosstalk-resistant adaptive noise canceller
US4912767A (en) 1988-03-14 1990-03-27 International Business Machines Corporation Distributed noise cancellation system
US5327178A (en) 1991-06-17 1994-07-05 Mcmanigal Scott P Stereo speakers mounted on head
US5208786A (en) 1991-08-28 1993-05-04 Massachusetts Institute Of Technology Multi-channel signal separation
US5251263A (en) 1992-05-22 1993-10-05 Andrea Electronics Corporation Adaptive noise cancellation and speech enhancement system and apparatus therefor
US5375174A (en) 1993-07-28 1994-12-20 Noise Cancellation Technologies, Inc. Remote siren headset
US5675659A (en) * 1995-12-12 1997-10-07 Motorola Methods and apparatus for blind separation of delayed and filtered sources
US6130949A (en) 1996-09-18 2000-10-10 Nippon Telegraph And Telephone Corporation Method and apparatus for separation of source, program recorded medium therefor, method and apparatus for detection of sound source zone, and program recorded medium therefor
AU4826697A (en) 1996-10-17 1998-05-11 Andrea Electronics Corporation Noise cancelling acoustical improvement to wireless telephone or cellular phone
FR2759824A1 (en) * 1997-02-18 1998-08-21 Philips Electronics Nv SYSTEM FOR SEPARATING NON-STATIONARY SOURCES
US7072476B2 (en) 1997-02-18 2006-07-04 Matech, Inc. Audio headset
JP3927701B2 (en) * 1998-09-22 2007-06-13 日本放送協会 Sound source signal estimation device
US6606506B1 (en) 1998-11-19 2003-08-12 Albert C. Jones Personal entertainment and communication device
US6381570B2 (en) 1999-02-12 2002-04-30 Telogy Networks, Inc. Adaptive two-threshold method for discriminating noise from speech in a communication signal
US6526148B1 (en) * 1999-05-18 2003-02-25 Siemens Corporate Research, Inc. Device and method for demixing signal mixtures using fast blind source separation technique based on delay and attenuation compensation, and for selecting channels for the demixed signals
US6321200B1 (en) 1999-07-02 2001-11-20 Mitsubishi Electric Research Laboratories, Inc Method for extracting features from a mixture of signals
US6549630B1 (en) 2000-02-04 2003-04-15 Plantronics, Inc. Signal expander with discrimination between close and distant acoustic source
US8903737B2 (en) 2000-04-25 2014-12-02 Accenture Global Service Limited Method and system for a wireless universal mobile product interface
US6879952B2 (en) 2000-04-26 2005-04-12 Microsoft Corporation Sound source separation using convolutional mixing and a priori sound source knowledge
US20030179888A1 (en) 2002-03-05 2003-09-25 Burnett Gregory C. Voice activity detection (VAD) devices and methods for use with noise suppression systems
DE60203379T2 (en) * 2001-01-30 2006-01-26 Thomson Licensing S.A., Boulogne SIGNAL PROCESSING TECHNOLOGY FOR GEOMETRIC SOURCE DISTRACTION
US7206418B2 (en) 2001-02-12 2007-04-17 Fortemedia, Inc. Noise suppression for a wireless communication device
AU2002309146A1 (en) 2002-06-14 2003-12-31 Nokia Corporation Enhanced error concealment for spatial audio
US7142682B2 (en) 2002-12-20 2006-11-28 Sonion Mems A/S Silicon-based transducer for use in hearing instruments and listening devices
US7099821B2 (en) 2003-09-12 2006-08-29 Softmax, Inc. Separation of target acoustic signals in a multi-transducer arrangement

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5383164A (en) * 1993-06-10 1995-01-17 The Salk Institute For Biological Studies Adaptive system for broadband multisignal discrimination in a channel with reverberation
US5706402A (en) * 1994-11-29 1998-01-06 The Salk Institute For Biological Studies Blind signal processing system employing information maximization to recover unknown signals through unsupervised minimization of output redundancy
US6002776A (en) * 1995-09-18 1999-12-14 Interval Research Corporation Directional acoustic signal processor and method therefor
US5770841A (en) * 1995-09-29 1998-06-23 United Parcel Service Of America, Inc. System and method for reading package information
US5999567A (en) * 1996-10-31 1999-12-07 Motorola, Inc. Method for recovering a source signal from a composite signal and apparatus therefor
US6167417A (en) * 1998-04-08 2000-12-26 Sarnoff Corporation Convolutive blind source separation using a multiple decorrelation method
EP1006652A2 (en) * 1998-12-01 2000-06-07 Siemens Corporate Research, Inc. An estimator of independent sources from degenerate mixtures
WO2001027874A1 (en) * 1999-10-14 2001-04-19 The Salk Institute Unsupervised adaptation and classification of multi-source data using a generalized gaussian mixture model
US20020136328A1 (en) * 2000-11-01 2002-09-26 International Business Machines Corporation Signal separation method and apparatus for restoring original signal from observed data
US20020110256A1 (en) * 2001-02-14 2002-08-15 Watson Alan R. Vehicle accessory microphone

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HYVÄRINEN, AAPO: "Fast and Robust Fixed-Point Algorithms for Independent Component Analysis", IEEE TRANSACTIONS ON NEURAL NETWORKS, 1999, pages 1-17, XP002980698 *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7983907B2 (en) 2004-07-22 2011-07-19 Softmax, Inc. Headset for separation of speech signals in a noisy environment
EP1784820A2 (en) * 2004-07-22 2007-05-16 Softmax, Inc. Separation of target acoustic signals in a multi-transducer arrangement
EP1784816A2 (en) * 2004-07-22 2007-05-16 Softmax, Inc. Headset for separation of speech signals in a noisy environment
EP1784816A4 (en) * 2004-07-22 2009-06-24 Softmax Inc Headset for separation of speech signals in a noisy environment
EP1784820A4 (en) * 2004-07-22 2009-11-11 Softmax Inc Separation of target acoustic signals in a multi-transducer arrangement
JP2006084928A (en) * 2004-09-17 2006-03-30 Nissan Motor Co Ltd Sound input device
US7409375B2 (en) 2005-05-23 2008-08-05 Knowmtech, Llc Plasticity-induced self organizing nanotechnology for the extraction of independent components from a data stream
US8898056B2 (en) 2006-03-01 2014-11-25 Qualcomm Incorporated System and method for generating a separated signal by reordering frequency components
KR101184394B1 (en) 2006-05-10 2012-09-20 에이펫(주) method of noise source separation using Window-Disjoint Orthogonal model
KR100875264B1 (en) 2006-08-29 2008-12-22 학교법인 동의학원 Post-processing method for blind signal separation
US8160273B2 (en) 2007-02-26 2012-04-17 Erik Visser Systems, methods, and apparatus for signal separation using data driven techniques
US8311236B2 (en) 2007-10-04 2012-11-13 Panasonic Corporation Noise extraction device using microphone
EP2207168A3 (en) * 2007-10-18 2010-10-20 Motorola, Inc. Robust two microphone noise suppression system
EP2183853A4 (en) * 2007-10-18 2010-11-03 Motorola Inc Robust two microphone noise suppression system
EP2183853A1 (en) * 2007-10-18 2010-05-12 Motorola, Inc. Robust two microphone noise suppression system
US8046219B2 (en) 2007-10-18 2011-10-25 Motorola Mobility, Inc. Robust two microphone noise suppression system
WO2009051959A1 (en) 2007-10-18 2009-04-23 Motorola, Inc. Robust two microphone noise suppression system
KR101171494B1 (en) * 2007-10-18 2012-08-07 모토로라 모빌리티, 인크. Robust two microphone noise suppression system
KR101184806B1 (en) * 2007-10-18 2012-09-20 모토로라 모빌리티 엘엘씨 Robust two microphone noise suppression system
US8175291B2 (en) 2007-12-19 2012-05-08 Qualcomm Incorporated Systems, methods, and apparatus for multi-microphone based speech enhancement
WO2009106918A1 (en) * 2008-02-27 2009-09-03 Sony Ericsson Mobile Communications Ab Electronic devices and methods that adapt filtering of a microphone signal responsive to recognition of a targeted speaker's voice
US7974841B2 (en) 2008-02-27 2011-07-05 Sony Ericsson Mobile Communications Ab Electronic devices and methods that adapt filtering of a microphone signal responsive to recognition of a targeted speaker's voice
US8321214B2 (en) 2008-06-02 2012-11-27 Qualcomm Incorporated Systems, methods, and apparatus for multichannel signal amplitude balancing
KR101280253B1 (en) 2008-12-22 2013-07-05 한국전자통신연구원 Method for separating source signals and its apparatus
US8553922B2 (en) 2010-02-24 2013-10-08 Yamaha Corporation Earphone microphone
WO2012161555A3 (en) * 2011-05-26 2013-01-24 주식회사 마이티웍스 Signal-separation system using a directional microphone array and method for providing same
WO2012161555A2 (en) * 2011-05-26 2012-11-29 주식회사 마이티웍스 Signal-separation system using a directional microphone array and method for providing same
US9516411B2 (en) 2011-05-26 2016-12-06 Mightyworks Co., Ltd. Signal-separation system using a directional microphone array and method for providing same
CN107924685A (en) * 2015-12-21 2018-04-17 华为技术有限公司 Signal processing apparatus and method
US10679642B2 (en) 2015-12-21 2020-06-09 Huawei Technologies Co., Ltd. Signal processing apparatus and method
CN107924685B (en) * 2015-12-21 2021-06-29 华为技术有限公司 Signal processing apparatus and method
CN111402883A (en) * 2020-03-31 2020-07-10 云知声智能科技股份有限公司 Nearby response system and method in distributed voice interaction system in complex environment
CN111402883B (en) * 2020-03-31 2023-05-26 云知声智能科技股份有限公司 Nearby response system and method in distributed voice interaction system under complex environment

Also Published As

Publication number Publication date
KR20050115857A (en) 2005-12-08
EP1570464A4 (en) 2006-01-18
US20060053002A1 (en) 2006-03-09
US7383178B2 (en) 2008-06-03
EP1570464A1 (en) 2005-09-07
JP2006510069A (en) 2006-03-23
AU2003296976A1 (en) 2004-06-30

Similar Documents

Publication Publication Date Title
US7383178B2 (en) System and method for speech processing using independent component analysis under stability constraints
CN100392723C (en) System and method for speech processing using independent component analysis under stability restraints
US7099821B2 (en) Separation of target acoustic signals in a multi-transducer arrangement
KR101340215B1 (en) Systems, methods, apparatus, and computer-readable media for dereverberation of multichannel signal
US7386135B2 (en) Cardioid beam with a desired null based acoustic devices, systems and methods
KR101210313B1 (en) System and method for utilizing inter?microphone level differences for speech enhancement
US7464029B2 (en) Robust separation of speech signals in a noisy environment
EP2306457B1 (en) Automatic sound recognition based on binary time frequency units
CN106663445A (en) Voice processing device, voice processing method, and program
GB2398913A (en) Noise estimation in speech recognition
US9406293B2 (en) Apparatuses and methods to detect and obtain desired audio
Prasad et al. Two microphone technique to improve the speech intelligibility under noisy environment
The et al. A Method for Extracting Target Speaker in Dual–Microphone System
Choi et al. Blind separation of delayed and superimposed acoustic sources: learning algorithms and experimental study
Okuma et al. Two-channel microphone system with variable arbitrary directional pattern
CN113936687A (en) Method for real-time voice separation voice transcription
Chen et al. An improved phase-error based dual-microphone noise reduction method
Kouhi-Jelehkaran et al. Phone-based filter parameter optimization for robust speech recognition using likelihood maximization
Kouhi-Jelehkaran et al. Maximum-Likelihood Phone-Based Filter Parameter Optimization for Microphone Array Speech Recognition
Kang et al. On-line speech enhancement by time-frequency masking under prior knowledge of source location
Rana A Survey on Speech Enhancement.
Thea Speech Source Separation Based on Dual–Microphone System
Tsujikawa et al. Hands-free speech recognition using blind source separation post-processed by two-stage spectral subtraction.

Legal Events

Date Code Title Description
AK Designated states (kind code of ref document: A1): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents (kind code of ref document: A1): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information, entry into national phase: ref document 2005511772 (JP)
ENP Entry into the national phase: ref document 2006053002 (US), kind code A1
WWE WIPO information, entry into national phase: ref documents 2003812979 (EP) and 10537985 (US)
WWE WIPO information, entry into national phase: ref document 1020057010611 (KR)
WWE WIPO information, entry into national phase: ref document 2003296976 (AU)
WWE WIPO information, entry into national phase: ref document 169587 (IL)
WWE WIPO information, entry into national phase: ref document 1571/CHENP/2005 (IN)
WWE WIPO information, entry into national phase: ref document 20038A96815 (CN)
WWP WIPO information, published in national office: ref document 2003812979 (EP)
WWP WIPO information, published in national office: ref document 1020057010611 (KR)
WWP WIPO information, published in national office: ref document 10537985 (US)