WO1992008330A1 - Bimodal speech processor - Google Patents


Info

Publication number
WO1992008330A1
Authority
WO
WIPO (PCT)
Prior art keywords
aid
processor
speech
signal
acoustic
Prior art date
Application number
PCT/AU1991/000506
Other languages
French (fr)
Inventor
Gary John Dooley
Peter John Blamey
Graeme Milbourne Clark
Peter Misha Seligman
Original Assignee
Cochlear Pty. Limited
The University Of Melbourne
Priority date
Filing date
Publication date
Application filed by Cochlear Pty. Limited and The University Of Melbourne
Priority to JP3517611A (published as JPH06506322A)
Publication of WO1992008330A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00 Electrotherapy; Circuits therefor
    • A61N1/18 Applying electric currents by contact electrodes
    • A61N1/32 Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/36036 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
    • A61N1/36038 Cochlear stimulation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00 Electrotherapy; Circuits therefor
    • A61N1/02 Details
    • A61N1/04 Electrodes
    • A61N1/05 Electrodes for implantation or insertion into the body, e.g. heart electrode
    • A61N1/0526 Head electrodes
    • A61N1/0541 Cochlear electrodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/35 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using translation techniques
    • H04R25/356 Amplitude, e.g. amplitude shift or compression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/60 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
    • H04R25/606 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window

Definitions

  • the present invention relates to improvements in the processing of sound for the purposes of supplying an information signal to either an acoustic hearing aid, a cochlear implant aid device or both so as to improve the quality of hearing of a patient.
  • an acoustic hearing aid refers to an aid of the type adapted to fit in or adjacent to an ear of a patient and which provides an acoustic output suitable to at least partially compensate for hearing deficiencies of the patient.
  • a cochlear implant aid will refer to a device which includes components which are fitted within the body of a patient and which are adapted to electrically stimulate the nervous system of a patient in order to at least partially compensate for usually profound hearing loss of the patient.
  • Cochlear implants may be provided to patients with some residual hearing in the contralateral ear. Many patients recognise speech better using conventional acoustic hearing aids together with the cochlear implant than they do using either device alone, but find the use of the combination unacceptable.
  • Such patients may then use either the acoustic hearing aid or the cochlear implant aid, but not both devices together. It is an object of the present invention to provide a bimodal aid device which can drive both an acoustic hearing aid and a cochlear implant aid and which thereby improves the quality of binaural information received by a patient.
  • Two further problems experienced in the prior art of hearing aids are (1) the difficulty of quickly and easily measuring the nature and degree of hearing impairment of a client for the purposes of providing an appropriate hearing aid and (2) the difficulty of matching appropriate hearing aid qualities and capabilities to the specific requirements of the user.
  • a bimodal aid for the hearing impaired which includes processing means adapted to receive and process audio information received from a microphone; said processing means supplying processed information derived from said audio information to an implant aid adapted to be implanted in a first ear of a patient and to an acoustic aid adapted to be worn in or adjacent a second ear of said patient whereby binaural information is provided to said patient.
  • a bimodal aid for the hearing impaired comprising a sound/speech processor electrically connected to a hearing aid transducer adapted to be worn adjacent to or in an ear of a patient and electrically connected to an electrical signal transducer adapted to be worn in an ear of a patient; said speech processor receiving and processing audio input information so as to produce an acoustic signal from said hearing aid transducer and an electrical signal from said electrical transducer whereby coherent binaural information is provided to said patient.
  • an electronically configurable sound/speech processor for the hearing impaired; said processor including configuration means and signal processing means; said processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate a hearing aid transducer; said configuration means adapted to receive one or more of electronic signal input or software input for the purpose of modifying said parameters.
  • said sound/speech processor utilises the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5 and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer, wherein said features are defined as follows:- F0 is the fundamental frequency, F1 is the first formant frequency, F2 is the second formant frequency, A0 is the amplitude of F0, A1 is the amplitude of F1, and A2 is the amplitude of F2.
  • said signal processing means includes means for dynamically changing the amplitude of different frequency bands in the defined speech spectrum as a function of the speech feature parameters A1, A2, A3, A4 and A5 so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user at the F1 and F2 frequencies and in the higher frequency bands.
  • said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted from said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
  • said signal processing means within said processor includes means for reconstructing signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled optimally so as to enhance speech recognition in a user.
  • said filter means set by said configuration means are provided based on measurements made by an audiologist during a hearing aid fitting procedure. The filter settings remain fixed after completion of the procedure.
  • said processor includes filter means whose parameters are changed dynamically while said processor is in use providing said output signal to said hearing aid transducer in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
  • said output signal is synthesised by said signal processing means utilising only speech parameters.
  • Fig. 1 is a schematic representation of a bimodal aid according to a first embodiment of the invention
  • Fig. 2 is a schematic representation of the main functional components which comprise the entire bimodal aid
  • Fig. 3 is a schematic block diagram of the implant processing circuitry together with the acoustic processor circuitry
  • Fig. 4 is a chart showing an example of the pattern of electrical stimulation of the implant electrodes for various steady state phonemes using the multi-peak coding strategy
  • Fig. 5 is a graph showing the standard loudness growth function for the speech processor portion when driving the implant
  • Fig. 6 is a schematic block diagram of the functional components of the bimodal processor which drive the acoustic aid
  • Fig. 7 is a schematic block diagram of the components comprising the acoustic processor which processes the speech signal in accordance with particular forms of the invention to drive the acoustic aid,
  • Fig. 8 is a component block diagram of the acoustic processor
  • Fig. 9 is a component block diagram of a Biquad filter as utilised in the acoustic processor of Fig. 8
  • Fig. 10 is a component block diagram of the output driver for the acoustic aid
  • Fig. 11 is a schematic representation of preferred modes of operation of the acoustic processor for voiced vowel input
  • Fig. 12 shows a plot of sound intensity against frequency for an acoustic aid operating according to mode 1,
  • Fig. 13 shows comparative plots of sound intensity against frequency in relation to an example of modes 1 and 3 operation of the acoustic processor
  • Fig. 14 outlines a fitting strategy for the bimodal aid
  • Fig. 15 is a flowchart of a fitting strategy for the acoustic aid portion of the device in mode 1
  • Fig. 16 is a schematic block diagram of the utilisation of the diagnostic and programming unit for use in association with the bimodal aid.

Description of Preferred Embodiments
  • the bimodal aid is a hearing aid device which has the capability to provide information through a cochlear implant aid in one ear and a speech processing acoustic hearing aid in the other ear of a patient. Both the implant aid and the acoustic aid are controlled by the same speech processor which derives its raw audio input from a single microphone.
  • the speech processor extracts certain parameters from the incoming acoustic waveform that are relevant to the perception of speech sounds. Some of the speech parameters extracted by the speech processor are translated into an electrical signal and delivered to a cochlear implant. These features can also be used as a basis for modification of the speech waveform following which it is then amplified and presented to the acoustic aid.
  • the cochlear implant produces stimulation at positions in the cochlea that correspond to higher frequencies (usually above 700 Hz).
  • the frequency and temporal resolution of residual hearing can be better than that provided by the pulsatile electric signal of the cochlear implant aid portion of the bimodal aid.
  • the acoustic aid driver part of the device can also be used as a speech processing hearing aid independent of the cochlear implant aid output.
  • Conventional hearing aids are limited in practice because the adjustments to the frequency/gain characteristics are restricted to a small number of options and there are many users who are not optimally aided. There is a need for a hearing aid with a more flexible frequency/gain characteristic and this can be achieved with this aid.
  • the feature extraction circuits which are the basis of the cochlear implant aid allow the hardware to measure important characteristics of the speech signal in quiet conditions and in conditions of moderate amounts of background noise.
  • acoustic signal processor (12) which outputs to an acoustic aid.
  • the synthesized waveform is used to overcome special problems. For example, high frequency sounds above the limit of a user's hearing can be presented as lower frequencies within the user's hearing range. Broad peaks in the speech spectrum can be made narrower if this provides better frequency resolution and helps to reduce masking of one peak by other adjacent peaks. There is no other single, wearable device capable of implementing all these processes.
  • the sound/speech processor can take in a speech signal from a microphone, measure selected features of that signal (including the frequency and amplitude of formants for voiced speech) and control the outputs to both a cochlear implant aid and an acoustic hearing aid in the case of the first embodiment or only the acoustic hearing aid in the case of the second embodiment.
  • FIG. 2 shows a schematic diagram of the operation of the device.
  • the cochlear implant aid portion of the device is covered by existing patents or patent applications to the same applicant and the implant aid operates upon one ear of a patient using similar strategies to those already developed for implant users.
  • the users of the bimodal device will receive an auditory signal via an acoustic aid in the non-implanted ear.
  • the capabilities of the bimodal aid allow this signal to be specially tailored in order to convey information complementary to the implant and utilise the residual hearing of the patient maximally.
  • Fig. 2 shows that the body worn portion of the bimodal device comprises a speech processor 11 intimately connected to an acoustic aid processor 12, together with a microphone 13, an acoustic hearing aid 14 and an implant aid 15.
  • the implant aid 15 comprises an electrode array 16 electrically connected by harness 17 to a receiver stimulator 18 which is in radio communication with speech processor 11 by way of internal coil 19 and external coil 20.
  • Fig. 2 shows auxiliary items being the diagnostic and programming unit 21 and the diagnostic programming interface 22.
  • the diagnostic and programming unit 21 is implemented as a program running on a personal computer whilst the diagnostic programming interface 22 is a communications card connected to the PC bus.
  • the diagnostic and programming unit 21 is utilised in a clinical situation to test for and control device parameters of operation for the speech processor 11 and/or acoustic aid processor 12 which optimise hearing performance for a patient according to defined criteria. These parameters are communicated via the diagnostic programming interface 22 to a map memory storage 23 in the speech processor 11. It is by reference to the parameters stored in the map memory storage 23 that the manner of processing of the audio signal received from microphone 13 is determined both for the speech processor 11 when driving the implant aid 15 and the acoustic aid processor 12 when driving the acoustic aid 14.
  • The components illustrated in Fig. 2, other than the acoustic aid processor 12, the acoustic aid 14 and the computer program controlling the function of the diagnostic and programming unit 21, have been described elsewhere in earlier filed patents and patent applications and remain the same in so far as operation of the cochlear implant aid is concerned.
  • the speech processor 11 and the precise methodology for exciting electrically the implant aid has varied since the inception of these devices and can be expected to continue to vary.
  • excitation of the stimulating electrodes placed within the ear of a patient can be either digital or analogue in nature.
  • Cochlear Pty. Limited has pursued a strategy of digital electronic stimulation using what have been termed pulsatile electrical signals applied to a pulsatile electrical signal transducer.
  • the speech processor 11 has been commercially available in a number of forms since around 1982 from Cochlear Pty. Limited (one of the co-applicants for the present application).
  • the early units and, indeed, even the most recent units are primarily aimed at improving speech perception in preference to all other sounds received from microphone 13. This is done by causing speech processor 11 to discern and process, from the raw audio input received from microphone 13, acoustic features of speech which have been determined to best characterise speech information as perceived by the human auditory system.
  • This latest coding scheme provides all of the information available in the F0F1F2 scheme while providing additional information from three high frequency band pass filters. These filters cover the following frequency ranges: 2000 to 2800 Hz, 2800 to 4000 Hz and 4000 to 8000 Hz. The energy within these ranges controls the amplitude of electrical stimulation of three fixed electrode pairs in the basal end of the electrode array. Thus, additional information about high frequency sounds is presented at a tonotopically appropriate place within the cochlea.
  • the overall stimulation rate for voiced sounds remains as F0 (fundamental frequency or voice pitch) but in the new scheme four electrical stimulation pulses occur for each glottal pulse. This compares with the F0F1F2 strategy in which only two pulses occur per voice pitch period.
  • the two pulses representing the first and second formant are still provided and additional stimulation pulses occur representing energy in the 2000 to 2800 Hz and the 2800 to 4000 Hz ranges.
  • the latest noise suppression algorithm operates in a continuous manner, rather than as the voice activated switch that has previously been used. This removes the perceptually annoying switching on and off of the earlier system.
  • the noise floor is continuously assessed in each frequency band over a period of ten seconds. The lowest level over this period is assumed to be background noise and is subtracted from the amplitude relevant to that frequency band. Thus any increase in signal amplitude above the noise level is presented to the patient while the ambient noise level itself is reduced to near threshold.
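The continuous noise-suppression scheme described above can be sketched in software: in each frequency band the minimum level seen over a rolling ten-second window is taken as the background noise and subtracted from the current amplitude. The class name, frame rate and band count below are illustrative assumptions, not part of the specification.

```python
from collections import deque

class NoiseFloorTracker:
    def __init__(self, n_bands, frames_per_10s):
        # One rolling window of recent levels per frequency band.
        self.history = [deque(maxlen=frames_per_10s) for _ in range(n_bands)]

    def suppress(self, band_levels):
        """Return band levels with the estimated noise floor removed."""
        out = []
        for band, level in enumerate(band_levels):
            self.history[band].append(level)
            floor = min(self.history[band])   # lowest level over ~10 s
            out.append(max(level - floor, 0.0))
        return out

tracker = NoiseFloorTracker(n_bands=5, frames_per_10s=2500)
# Steady background of 0.2 in band 0, then a speech burst on top of it:
for _ in range(100):
    tracker.suppress([0.2, 0.0, 0.0, 0.0, 0.0])
clean = tracker.suppress([0.7, 0.0, 0.0, 0.0, 0.0])
```

Because only the minimum is subtracted, any rise above the ambient level passes through while the steady background is pushed toward threshold, matching the behaviour described in the text.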
  • Fig. 3 illustrates the basic filter and processing structure of a bimodal aid incorporating means to implement the latest processing scheme described above.
  • International Patent Application PCT/AU90/00407 to the present applicant entitled Multi-peak Speech Processor describes in detail the operation of these components. The entire text and drawings of the specification of that application are incorporated herein by cross reference. The most pertinent portions of that specification are included immediately below.
  • the coding strategy extracts and codes the Fl and F2 spectral peaks from the microphone audio signal, using the extracted frequency estimates to select a more apical and a more basal pair of electrodes for stimulation. Each selected electrode is stimulated at a pulse rate equal to the fundamental frequency F0.
  • In addition to F1 and F2, three high frequency bands of spectral information are extracted.
  • the amplitude estimates from band three (2000-2800 Hz), band four (2800-4000 Hz), and band five (above 4000 Hz) are presented to fixed electrodes, for example the seventh, fourth and first electrodes, respectively, of the electrode array 16 (Fig. 2 and Fig. 4).
  • the first, fourth and seventh electrodes are selected as the default electrodes for the high-frequency bands because they are spaced far enough apart so that most patients will be able to discriminate between stimulation at these three locations. Note that these default assignments may be reprogrammed as required. If the three high frequency bands were assigned only to the three most basal electrodes in the MAP, many patients might not find the additional high frequency information as useful since patients often do not demonstrate good place-pitch discrimination between adjacent basal electrodes. Additionally, the overall pitch percept resulting from the electrical stimulation might be too high.
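The fixed-electrode assignment for the three high-frequency bands can be illustrated as a simple table. The band edges and default electrodes come from the text above; the data layout and function name are assumptions for clarity only.

```python
# (band frequency range in Hz, default electrode); electrode 1 is most basal.
HIGH_BANDS = [
    ((2000, 2800), 7),   # band 3 -> electrode 7
    ((2800, 4000), 4),   # band 4 -> electrode 4
    ((4000, 8000), 1),   # band 5 -> electrode 1
]

def electrodes_for_bands(band_amplitudes, mapping=HIGH_BANDS):
    """Pair each band's amplitude estimate with its (reprogrammable) electrode."""
    return [(electrode, amp)
            for ((lo, hi), electrode), amp in zip(mapping, band_amplitudes)]

stim = electrodes_for_bands([10, 20, 30])
```

Because the mapping is passed as a parameter, reprogramming the defaults (as the text allows) is just a matter of supplying a different table.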
  • Table I indicates the frequency ranges of the various formants employed in the speech coding scheme for the present invention.
  • FIG. 4 illustrates the pattern of electrical stimulation for various steady state phonemes when using this coding strategy.
  • a primary function of the MAP is to translate the frequency of the dominant spectral peaks (Fl and F2) to electrode selection.
  • Electrode 1 is the most basal electrode and electrode 22 is the most apical in the electrode array. Stimulation of different electrodes normally results in pitch perceptions that reflect the tonotopic organization of the cochlea. Electrode 22 elicits the lowest place-pitch percept, or the "dullest" sound. Electrode 1 elicits the highest place-pitch percept, or "sharpest" sound. To allocate the frequency range for the F1 and F2 spectral peaks to the total number of electrodes, a default mapping algorithm splits the total number of available electrodes into a ratio of approximately 1:2, as shown in FIG. 4.
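The default 1:2 split of the 22 electrodes between the F1 and F2 ranges can be sketched as follows. The exact boundary electrode chosen by the real mapping algorithm is not given in the text, so the rounding here is an assumption.

```python
N_ELECTRODES = 22

def default_split(n=N_ELECTRODES, ratio=(1, 2)):
    """Split electrodes into an F1 (apical) group and an F2 (more basal) group."""
    f1_count = round(n * ratio[0] / sum(ratio))
    # Electrode 22 is most apical (lowest pitch), electrode 1 most basal.
    f1_electrodes = list(range(n, n - f1_count, -1))   # apical block for F1
    f2_electrodes = list(range(n - f1_count, 0, -1))   # remainder for F2
    return f1_electrodes, f2_electrodes

f1, f2 = default_split()
```

With 22 electrodes this yields roughly 7 apical electrodes for F1 and 15 for F2, i.e. the approximately 1:2 allocation described above.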
  • a random access memory stores a set of number tables, referred to collectively as the MAP memory storage 23.
  • the MAP determines both stimulus parameters for F1, F2 and bands 3-5, and the amplitude estimates.
  • the encoding of the stimulus parameters follows a sequence of distinct steps. The steps may be summarized as follows:
  • the first formant frequency (F1) is converted to a number based on the dominant spectral peak in the region between 280 and 1000 Hz.
  • the F1 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the first formant.
  • the indifferent electrode is determined by the mode.
  • the second formant frequency (F2) is converted to a number based on the dominant spectral peak in the region between 800 and 4000 Hz.
  • the F2 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the second formant.
  • the indifferent electrode is determined by the mode.
  • the amplitude estimates for bands 3, 4 and 5 are assigned to the three default electrodes 7, 4 and 1 for bands 3, 4 and 5, respectively, or such other electrodes that may be selected when the MAP is being prepared.
  • the amplitude of the acoustic signal in each of the frequency bands is converted to a number ranging from 0 to 150.
  • the level of stimulation that will be delivered is determined by referring to set MAP tables that relate acoustic amplitude (in the range 0-150) to stimulation level for the specific electrodes selected in steps 2, 4 and 5, above.
  • the data are further encoded in the speech processor and transmitted to the receiver/stimulator 18. It, in turn, decodes the data and sends the stimuli to the appropriate electrodes. Stimulus pulses are presented at a rate equal to F0 during voiced periods and at a random aperiodic rate within the range of F0 and F1 (typically 200 to 300 Hz) during unvoiced periods.
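The encoding steps above can be sketched end-to-end in software. The frequency regions, default electrodes and the 0-150 amplitude scale come from the text; `quantize()`, the table sizes and the identity loudness table used in the example are illustrative assumptions, not the patented MAP format.

```python
def quantize(value_hz, lo, hi, steps=10):
    """Map a frequency in [lo, hi] Hz to a table index 0..steps-1 (assumed)."""
    clamped = min(max(value_hz, lo), hi)
    return min(int((clamped - lo) / (hi - lo) * steps), steps - 1)

def to_amplitude_number(amplitude, full_scale=1.0):
    """Convert an acoustic amplitude to the 0-150 digital scale."""
    return min(int(amplitude / full_scale * 150), 150)

def encode_frame(f1_hz, f2_hz, amps, f1_table, f2_table, level_table):
    """One analysis frame -> list of (electrode, stimulation level) pairs."""
    stimuli = [
        # F1 spectral peak (280-1000 Hz) selects a more apical electrode.
        (f1_table[quantize(f1_hz, 280, 1000)],
         level_table[to_amplitude_number(amps["A1"])]),
        # F2 spectral peak (800-4000 Hz) selects a more basal electrode.
        (f2_table[quantize(f2_hz, 800, 4000)],
         level_table[to_amplitude_number(amps["A2"])]),
    ]
    # Bands 3-5 go to the default electrodes 7, 4 and 1.
    for key, electrode in (("A3", 7), ("A4", 4), ("A5", 1)):
        stimuli.append((electrode, level_table[to_amplitude_number(amps[key])]))
    return stimuli

f1_table = list(range(22, 12, -1))   # 10 apical electrodes (assumed split)
f2_table = list(range(12, 2, -1))    # 10 more basal electrodes (assumed)
level_table = list(range(151))       # identity loudness map for illustration
frame = encode_frame(300, 2200,
                     {"A1": 0.5, "A2": 0.3, "A3": 0.1, "A4": 0.0, "A5": 0.0},
                     f1_table, f2_table, level_table)
```

In the real device `level_table` would be the patient-specific, non-linear loudness growth table stored in the MAP rather than the identity map used here.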
  • the speech processor 11 additionally includes a non-linear loudness growth algorithm that converts acoustic signal amplitude to electrical stimulation parameters.
  • the speech processor 11 converts the amplitude of the acoustic signal into a digital linear scale with values from 0 to 150 as shown in FIG. 5. That digital scale (in combination with the information stored in the patient's MAP) determines the actual charge delivered to the electrodes in the electrode array 16. Improvements on this assembly are disclosed in co-pending applications to Cochlear Pty. Limited. Specifically, International Application PCT/AU90/00406 discloses an improved connection system between microphone 13 and speech processor 11 and between the external coil assembly 20 and the speech processor 11. The text and drawings of the specification of that application are incorporated herein by cross reference.
  • FIG. 6 is a block diagram of the processing circuitry showing the functional interconnection of components for driving the acoustic hearing aid 14.
  • the main components comprise microphone 13, automatic gain control 24, speech parameter extractor 25, encoder 26, patient MAP memory storage 23, noise generator 27 and acoustic aid signal processor 12.
  • the heart of the bimodal aid, in so far as it allows the acoustic aid 14 to be driven from the speech processor 11, is the acoustic aid signal processor 12.
  • the acoustic aid signal processor is software configurable and contains three two-pole filters, each of which can be used in either bandpass, lowpass or highpass configuration. The centre frequency, bandwidth and output amplitude of these filters are controlled by the processor.
  • the filters can be used in series or in parallel and the input waveform can be the speech waveform, pulses, a noise signal or external signal.
  • the external signal can be from another microphone, other acoustic output or another acoustic signal processor. This results in a particularly flexible aid that can operate either in a manner similar to a conventional acoustic hearing aid (though with more accurate gain fitting than most currently available aids can provide) or as an aid providing different types of processed speech information.
  • the acoustic aid signal processor including the three programmable filters has been implemented on a single silicon chip.
  • Each filter is usable as a high-pass, band-pass, or low-pass filter.
  • This chip has the flexibility to cover a wide range of frequencies, amplitudes and spectral shapes.
  • DAC digital-to-analog converter
  • the DAC can produce waveforms of arbitrary shape (such as sinusoidal or pulsatile) controlled directly by the processor, or can be switched to provide excitation by the speech waveform, or a white noise generator.
  • a schematic diagram of the three-filter circuit is shown in Fig 7.
  • a functional specification for a single chip implementation of the acoustic aid signal processor 12 is provided by FIGS. 8, 9 and 10. Details of the specification are as follows:

Topology
  • Fig. 8 shows the overall topology of the chip.
  • Three programmable filters in which centre frequency and band width can be independently controlled are provided. The outputs of these filters can be independently attenuated or amplified and then mixed. The output of one of the three filters can be inverted if necessary by setting an INV bit.
  • Fig. 9 shows details of one of the Biquad filters forming the three filter array together with the frequency latches and Q latches which determine the parameters of the Biquad filter.
  • the topology of the chip can be altered from serial to parallel or a mixed structure by three PARn bits.
  • the signal source for this structure can be selected by a four channel multiplexer (MUX). This selects +5 volts, a buffered output of the audio signal, an internally generated noise source, or an external signal. This signal source is fed to a 7 bit digital to analog converter (DAC) as a reference voltage.
  • MUX four channel multiplexer
  • the multiplying DAC can convert the DC level into a pulse generator, or provide a fine gain control on the audio or external signal, or noise source.
  • the most significant bit (MSB) is used to invert the output. All filter outputs are summed and passed to a push-pull earphone driver which can provide effectively 10 volts peak-to-peak across a 270 ohm (nominal) earphone.
  • the chip uses a single supply of 5 volts.
  • the earphone has a DC resistance of 88 ohm with the impedance rising gradually to 270 ohm at 1 kHz.
  • the output stage consists of a bridge of P and N transistor switches as shown in Fig. 10. The switches are pulse width modulated by a signal derived from a comparator driven from a triangle wave on one input and the audio signal on the other. The on resistance of the switches should be less than 5 ohms (lower if possible).
  • the chip is programmed by writing to the MAP of the speech processor. To distinguish between chip and MAP writes, bits A8 - A12 are decoded. Any write to the block 1800 - 18FF in MAP will also write to the chip. Two addresses, one odd and one even (Y13 and Y14) are decoded and ORed and the output of these (R/W new) can be used to write to the filter chip in a more selective manner. Odd writes are used to select the MUX and even writes set the auto sensitivity control (ASC) latch on that chip. The four lowest address bits are used to write to 14 registers which contain the programming information for the acoustic processor chip. Registers Y0 to Y11 program frequency, Q, gain (attenuation or amplification) and configuration in turn for each of the 3 filters.
  • Register Y15 is used to write to the DAC.
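A simplified software model of the address decoding described above: any write into the MAP block 1800-18FF is mirrored to the filter chip, with the four lowest address bits selecting a register and the odd/even distinction steering the MUX-select versus ASC-latch behaviour. This is a sketch of the decode logic only; the function name and dictionary layout are assumptions.

```python
def decode_write(address):
    """Classify a write to the speech processor MAP address space."""
    if not (0x1800 <= address <= 0x18FF):
        return None                        # ordinary MAP write; chip not addressed
    return {
        "register": address & 0xF,         # four lowest bits select registers Y0-Y15
        "selects_mux": bool(address & 1),  # odd write -> MUX select
        "sets_asc": not (address & 1),     # even write -> ASC latch
    }

info = decode_write(0x1803)   # an odd write into the chip block
```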
  • the topology latch Y12 is as follows:
  • D1 - PAR1 sends Filter 1 output to summer
  • D2 - PAR2 sends Filter 2 output to summer
  • D3 - PAR3 sends Filter 3 output to summer
  • D6 - DIR sends the DAC output direct to summer
  • the cascaded filter and attenuator/amplifier are sent to the summer but the filter output itself is sent to a bus so that it can be made available to other filters.
  • the filter-attenuator/amplifier combination is sent to the bus. In this way the filter gains may be scaled if the filters are cascaded.
  • the filter inputs selected by D6 and D5 vary as shown below
  • a clock pre-scaler is provided to extend the frequency ranges of the filters. This is done by dividing the clock by 2 or 4 before feeding it to the filter's own divider. Decoding is as follows:
  • the filters consist of two integrators in a loop with a variable gain feedback path.
  • the input may be a switched or an unswitched capacitor, it may be applied to either the first or second input and the output may be taken from either the first or second integrator. This produces various different transfer functions as given below.
  • the programmable filter clock divider comprises a 7 bit ripple counter which is compared with the contents of the frequency latch. The first match produces a counter reset.
  • the frequency latch must be fed with the complement of the count required. If for example a reset is required after a count of 2 then the latch is all high except for bit D1 which is low. The outputs of all the NOR gates connected to the latch will be low except for the one connected to D1. Now if the counter counts up from zero, on the first occasion of Q2 going high, this NOR will also go low and the 7 input NOR at the output will reset the counter.
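The complemented-latch reset decode can be sketched in a few lines of Python. This is a behavioural model only (the per-bit NOR decode and 7-bit width are taken from the description; ripple-counter timing is not modelled); it confirms that loading the complement of n makes the counter reset when it first reaches n:

```python
def divider_period(n):
    """Simulate the 7-bit counter / complemented-latch reset logic.

    The latch holds the bitwise complement of n; one NOR gate per bit
    combines the latch bit with the counter bit.  When every NOR output
    is low (the counter has a 1 wherever n has a 1) the 7-input output
    NOR fires and resets the counter.  Returns clock ticks per reset.
    """
    latch = (~n) & 0x7F          # complement of the required count
    count, ticks = 0, 0
    while True:
        count = (count + 1) & 0x7F
        ticks += 1
        nor_outputs = [
            not ((latch >> b) & 1 or (count >> b) & 1) for b in range(7)
        ]
        if not any(nor_outputs):  # all NOR outputs low -> reset pulse
            return ticks

# For n = 2 the latch is 1111101 (all high except D1), and the first
# match occurs when the counter reaches 2.
```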
  • the filter Q is programmed by using a capacitor which has a 4 bit binary sequence, together with a 3 bit programmable resistive divider.
  • the resistors are programmed by a number n which is represented by bits D4-D6 in the Y1 latch (Y5 and Y9 for filters 2 and 3).
  • the capacitors are programmed by m, represented by bits D0-D3.
  • the Q is given by:
  • the feedback resistor of an amplifier can be configured to be like a buffered attenuator - i.e. the output can drive another attenuator or inverter.
  • Two separate sections are used: one gives eight 4 dB steps for a total of +/- 28 dB, and another gives eight 0.5 dB steps with a maximum of +/- 3.5 dB. The total range is therefore +/- 31.5 dB.
  • Two 8 channel MUXes are used to select the required taps on the potential dividers.
  • the attenuator/amplifiers can be selected by addressing latches Y2, Y6 and Y10.
  • Bit D6 = 1 gives attenuation, 0 gives amplification.
  • For amplification: Y2 = gain in dB x 2.
  • For attenuation: Y2 = 64 - (atten in dB x 2). If Y2 is set to 0 (all bits low) then the attenuator/amplifier is powered down and switched off. Note that setting HF adds 6 dB to the gain.
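One plausible reading of the gain encoding above, sketched in Python. The placement of D6 relative to the low-order bits is an assumption, and the extra 6 dB from the HF bit is not modelled:

```python
def encode_gain(db):
    """Encode a signed gain in 0.5 dB steps over +/- 31.5 dB.

    Returns (d6, value): amplification (db >= 0) uses D6 = 0 and
    value = dB x 2; attenuation (db < 0) uses D6 = 1 and
    value = 64 - (attenuation in dB x 2), per the register description.
    """
    steps = round(abs(db) * 2)
    if steps > 63:
        raise ValueError("outside the +/- 31.5 dB range")
    return (0, steps) if db >= 0 else (1, 64 - steps)

def decode_gain(d6, value):
    """Inverse of encode_gain; a register value of 0 (all bits low)
    would instead power the attenuator/amplifier down."""
    return -(64 - value) / 2.0 if d6 else value / 2.0
```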
  • the attenuator is arranged so that its gain is not changed except when the signal passes through zero. This is to prevent clicks and pops during gain changes.
  • a zero crossing detector which produces a pulse on either a positive or negative going zero crossing is used to strobe a latch which transfers the required gain to the attenuator MUXes.
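The click-free gain update can be modelled as follows. This is a behavioural sketch only; the sample-indexed gain schedule is invented for illustration of the strobe-latch behaviour:

```python
def apply_gain_at_zero_crossings(samples, gain_schedule):
    """Apply gain changes only when the signal crosses zero.

    gain_schedule maps sample index -> new linear gain.  A requested
    gain is held pending and latched at the next positive- or
    negative-going zero crossing, mimicking the strobe latch in front
    of the attenuator MUXes, so no step occurs mid-waveform.
    """
    gain, pending = 1.0, None
    out, prev = [], 0.0
    for i, s in enumerate(samples):
        if i in gain_schedule:
            pending = gain_schedule[i]     # requested, not yet applied
        crossing = (prev <= 0.0 < s) or (prev >= 0.0 > s)
        if crossing and pending is not None:
            gain, pending = pending, None  # latch on the crossing
        out.append(s * gain)
        prev = s
    return out
```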
  • the multiplying DAC is a 6 bit resistive ladder type and multiplies the input REF, selected by the SOURCE SELECT (Fig. 8), by the digital quantity in latch Y15. Bit D6 inverts the output. The settling time is 50 microseconds.
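A behavioural sketch of the DAC transfer function. The code/64 scaling is an assumption consistent with a 6-bit ladder; settling time is not modelled:

```python
def mdac_output(ref, latch):
    """Model the 6-bit multiplying DAC: output = REF * code / 64,
    where code is the low 6 bits of latch Y15 and bit D6 (0x40)
    inverts the result."""
    code = latch & 0x3F
    out = ref * code / 64.0
    return -out if latch & 0x40 else out
```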
  • the acoustic aid processor 12, by its flexible and programmable construction, allows many signal processing strategies to be tried and ultimately settled upon so as to best adapt the acoustic aid to provide information to the wearer which complements information received from an implant aid worn in the other ear of the wearer. This same flexibility and programmability can also be used to tailor the bimodal processor for operation of an acoustic hearing aid only. In both cases it is the combination of a single microphone together with the preprocessing capabilities of the speech processor 11, combined with the flexibility and programmability of the acoustic aid processor 12, which provides features and advantages not found in hearing aid devices to date.
  • the bimodal aid will now be described when used to drive an acoustic hearing aid only.
  • the modes of operation to be described in relation thereto are equally usable to help obtain the complementary behaviour mentioned above in the first embodiment in relation to the use of both an acoustic aid and an implant aid by a single wearer. To that extent the following description should be taken as applying equally to the first embodiment.
  • the inherent flexibility of the acoustic aid signal processor 12 incorporating the three software configurable filters provides for an almost unlimited degree of flexibility of processing signals in the frequency domain received from the speech processor 11 and destined for the acoustic aid 14.
  • Four particular modes of operation have been identified as desirable and achievable by the acoustic aid signal processor 12.
  • mode 1 the filter parameters are set by the audiologist during the iterative fitting procedure and remain fixed thereafter.
  • modes 2-4 the speech parameter extraction circuits provide instantaneous information about the speech signal that is used to change the filter parameters dynamically while the aid is in use.
  • the input signal is the speech waveform.
  • the output signal is manipulated in different ways by controlling the filters to emphasize the chosen parts of the waveform (such as the formants) and to attenuate other parts (such as the background noise).
  • mode 4 the speech waveform is used only by the speech parameter extractor, and the output waveform is synthesized completely using the speech parameters. The differences between the original speech waveform and the output of the hearing aid become greater as one progresses from mode 1 to 4, and the control over the frequency spectrum and intensity of the output signal also increases.
  • the acoustic output is tailored to match the patient's hearing loss.
  • the 6 poles of filtering enable this to be done accurately (usually within 2 dB of the ideal gain specified by the audiologist at all frequencies) and the Automatic Gain Control allows the limited dynamic range of the residual hearing to be used.
  • the acoustic signal processor of the second embodiment configured in mode 1 provides both operational and practical advantages over conventional hearing aids. These advantages can best be appreciated by considering the steps involved in setting up both types of hearing aid for operation:
  • the conventional aid: the majority of commercially available hearing aids merely amplify, and sometimes compress, the incoming sound. To fit one of these aids the audiologist would normally measure the user's thresholds using an audiometer and calculate the appropriate ideal gain by hand using a prescribed fitting rule (e.g. the National Acoustics Laboratory (NAL) rule, Byrne and Dillon, 1986). The audiologist would then search through the specifications of the aids stocked in the clinic to find one with a gain that most closely resembled the ideal gain. On all aids some changes can be made by the audiologist, although the amount of control depends on the type of aid. The features that can be varied may include any combination of the overall gain, the maximum output and the level at which compression begins.
  • NAL National Acoustics Laboratory
  • Frequency specific variation of gain is usually only in two frequency bands corresponding to 'high' frequencies and 'low' frequencies respectively.
  • Behind-the-ear aids and body-worn aids usually offer greater scope for change by the audiologist than in-the-ear aids. This is because, with these types of aid, the acoustic properties of the tube and earmould can be varied in addition to the controls on the aid itself.
  • In-the-ear aids also require an earmould to be made specifically for that aid by the manufacturer. This is a costly and time consuming business. This makes testing and comparing in-the-ear aids difficult and expensive and many clinics avoid using them.
  • In-the-ear aids are also usually more limited in their maximum output and are therefore not often suitable for more severe hearing losses.
  • When the aid is configured it is then tested on the client. If it proves unacceptable the audiologist must choose and reconfigure another sort of aid. This is repeated until an aid is found that the client considers acceptable.
  • the speech-processing hearing aid of the second embodiment (mode 1): the audiologist measures the client's hearing thresholds and any other hearing levels that might be needed for the strategies to be tested, e.g. maximum comfortable level (MCL).
  • MCL maximum comfortable level
  • the measurements are made using the hearing aid and diagnostic and programming unit with associated configuring software rather than a separate audiometer. These values are then stored in a data file automatically.
  • a strategy is chosen and the aid is configured accordingly taking a maximum time of about five minutes. Calculation and fitting of ideal gain is done automatically and can be quickly accessed in a graphical form at any time by the audiologist.
  • the configured aid is then presented to the subject for evaluation. Different fittings can be tried in quick succession until an appropriate one is found.
  • the audiologist can change the ideal gain function at will if he/she believes that the ideal gain based on the client's threshold measurements is not optimal for that client. With many conventional hearing aids this can only be done grossly by changing the gain in "high" or "low" frequency bands or by choosing a different aid with quoted specifications closer to the new requirement for the client.
  • the device of the second embodiment can be configured exactly like many conventional aids, and often more accurately. Setting up and testing the device are quicker, more efficient, and less prone to sources of error.
  • Fig. 12 provides an example fitting in mode 1 which is achievable utilising the acoustic aid signal processor 12 of the bimodal speech processor.
  • mode 2 This is similar to mode 1 except that the level output at any specific frequency is mapped non-linearly on a frequency specific basis by dynamically changing the gain parameters of the three filters in response to amplitude and frequency variations measured by the processor.
  • This requires that the audiologist measure the maximum comfort levels to which maximum amplitude can be mapped in addition to client thresholds.
  • the advantage of this mode is that it makes a more accurate mapping of dynamic range possible.
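The frequency-specific dynamic-range mapping of mode 2 can be illustrated as a simple linear compression between the measured threshold and MCL for one band; the dB values used in the example are hypothetical:

```python
def map_to_residual_range(level_db, floor_db, ceiling_db,
                          threshold_db, mcl_db):
    """Compress an input level (dB) from the normal speech range
    [floor, ceiling] into the user's reduced dynamic range
    [threshold, MCL] for one frequency band, as in mode 2."""
    frac = (level_db - floor_db) / (ceiling_db - floor_db)
    frac = min(max(frac, 0.0), 1.0)        # clamp out-of-range input
    return threshold_db + frac * (mcl_db - threshold_db)
```

In the real device this mapping would be applied per band by dynamically changing the gain parameters of the three filters.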
  • mode 3 the frequency parameters of the three filters are changed dynamically (unlike modes 1 and 2 where they are fixed).
  • salient speech features can be enhanced.
  • the centre frequencies of two bandpass filters can be used to track the Fl and F2 (first and second formant) peaks in the signal. This acts as both a form of noise cancellation and also a removal of parts of the signal that might mask the information in the peaks to be traced.
  • the resulting signal after filtering is amplified to the appropriate loudness for the user on a frequency-specific basis as in mode 2. This may be most useful for users with impaired frequency resolution as well as raised thresholds.
  • the device used in this mode can also be used to amplitude modulate the signal at the fundamental frequency (F0) which can be another way of enhancing this parameter.
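The formant-tracking filtering of mode 3 can be sketched with two band-pass filters re-centred on the extractor's F1/F2 estimates each frame. This sketch uses textbook audio-cookbook biquad coefficients rather than the switched-capacitor filters of the actual chip; the sample rate, Q and formant estimates are assumed values:

```python
import math

def bandpass_biquad(fc, q, fs):
    """RBJ-cookbook band-pass biquad (constant 0 dB peak gain)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [c / a[0] for c in b], [1.0, a[1] / a[0], a[2] / a[0]]

def filter_signal(x, coeffs):
    """Direct-form I filtering of a sample list."""
    b, a = coeffs
    y, x1, y1 = [], [0.0, 0.0], [0.0, 0.0]
    for s in x:
        out = (b[0] * s + b[1] * x1[0] + b[2] * x1[1]
               - a[1] * y1[0] - a[2] * y1[1])
        x1, y1 = [s, x1[0]], [out, y1[0]]
        y.append(out)
    return y

# Track formants by re-centring the two filters on the measured
# F1 and F2 estimates for each analysis frame (one frame shown here).
fs = 16000
f1_est, f2_est = 500.0, 1500.0          # hypothetical extractor output
filt1 = bandpass_biquad(f1_est, q=5.0, fs=fs)
filt2 = bandpass_biquad(f2_est, q=5.0, fs=fs)
```

Energy near the tracked formant passes largely unchanged while energy well away from it (e.g. broadband noise) is attenuated, which is the enhancement-rather-than-cancellation behaviour described for this mode.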
  • the most similar commercially available devices are the noise-cancelling hearing aids such as those containing the "Zeta-noise-blocker" chip. These devices calculate an average long-term spectrum that represents the background noise and this noise is then filtered out of the signal along with any speech that happens to be at the same frequencies.
  • This mode 3 scheme is based on enhancement of the speech signal at the measured formant frequencies rather than cancellation of noise. This means that speech information which is close in frequency to the noise will not be lost although the noise further from the formants will be reduced. The scheme will also enhance the selected speech features in quiet conditions as well as in noisy conditions.
  • Fig. 13 provides an example of mode 3 wherein selective peak sharpening is performed by the acoustic aid signal processor 12.
  • Mode 4 - Speech Reconstruction: this mode differs from the other modes of operation of the second embodiment in that the user does not receive a modified version of the input signal, but a completely synthesized signal constructed using parameters extracted by the speech processor. The signal can be reconstructed in many different ways depending on the user's hearing loss. This reconstruction provides very tight control over the signals presented and hence allows very accurate mapping onto the user's residual hearing abilities. It may be most useful for users with very limited hearing for whom normal amplification would provide no open set speech recognition. A second example of the use of this mode is for frequency transposition.
  • Each mode of operation allows a wide range of potential strategies.
  • the modes are not discrete and some strategies that combine elements from different modes can be implemented.
  • a reconstructed signal representing F0 information (mode 4) can be added to a filtered speech signal (mode 3).

3.0 USE OF THE BIMODAL AID
  • the bimodal aid is programmed by use of a diagnostic and programming unit 21 which communicates with the speech processor 11 and, in turn, with the acoustic aid processor 12 by way of a diagnostic programming interface 22.
  • the diagnostic and programming unit 21 is implemented as a program on a personal computer.
  • the interface 22 is a communications card connected on the PC bus.
  • Fig. 15 outlines the procedure in flow chart form with particular reference to obtaining an optimum setting for the acoustic aid 14 in mode 1.
  • Fig. 16 outlines the basic interaction between the control program in the diagnostic and programming unit and the fitting and mapping procedures performed by the Audiologist.

Abstract

A bimodal aid comprising a speech processor (11) linked to an acoustic aid processor (12). Both processors derive audible information, particularly speech information, from a microphone (13). The speech processor processes the audio information according to patient-specific settings stored in a memory (23) in order to apply a control signal to an implant aid (15) in one ear of a patient. The acoustic aid signal processor (12) further processes information derived from and by the speech processor (11) in accordance with patient-specific settings in memory (23) so as to supply a control signal to an acoustic aid (14) located in the other ear of the patient. The acoustic aid signal processor (12) incorporates a programmable filter device which allows for rapid, iterative adaptation of the bimodal aid to the subjective auditory requirements of the patient. The bimodal aid can be used to drive an implant aid (15) only or to drive an acoustic aid (14) only.

Description

BIMODAL SPEECH PROCESSOR

Technical Field
The present invention relates to improvements in the processing of sound for the purposes of supplying an information signal to either an acoustic hearing aid, a cochlear implant aid device or both so as to improve the quality of hearing of a patient.

Background of the Invention
Throughout this specification, reference to an acoustic hearing aid is reference to an aid of the type adapted to fit in or adjacent an ear of a patient and which provides an acoustic output suitable to at least partially compensate for hearing deficiencies of the patient. Throughout this specification a cochlear implant aid will refer to a device which includes components which are fitted within the body of a patient and which are adapted to electrically stimulate the nervous system of a patient in order to at least partially compensate for usually profound hearing loss of the patient. There is a trend towards fitting cochlear implants to patients with some residual hearing in the contralateral ear. Many patients recognise speech better using conventional acoustic hearing aids together with the cochlear implant than they do using either device alone but find the use of the combination unacceptable. These patients opt to use either the acoustic hearing aid or the cochlear implant aid but not both devices together. It is an object of the present invention to provide a bimodal aid device which can drive both an acoustic hearing aid and a cochlear implant aid which thereby improves the quality of binaural information received by a patient. Two further problems experienced in the prior art of hearing aids are (1) quickly and easily measuring the nature and degree of hearing impairment of a client for the purposes of providing an appropriate hearing aid and (2) the difficulty in matching appropriate hearing aid qualities and capabilities to the specific requirements of the user.
Recently, a few hearing aid devices have appeared on the market which allow on-line control of the gain characteristics of the device at different frequencies. However, these devices do not provide speech processing capability such as is provided by formant extraction and like feature extraction circuits.
It is a further object of particular embodiments of the present invention to provide a signal processing device for use in association with an acoustic hearing aid which addresses these problems.

Summary of the Invention
Accordingly, in one broad form of the invention there is provided a bimodal aid for the hearing impaired which includes processing means adapted to receive and process audio information received from a microphone; said processing means supplying processed information derived from said audio information to an implant aid adapted to be implanted in a first ear of a patient and to an acoustic aid adapted to be worn in or adjacent a second ear of said patient whereby binaural information is provided to said patient.
In yet a further broad form of the invention there is provided a bimodal aid for the hearing impaired comprising a sound/speech processor electrically connected to a hearing aid transducer adapted to be worn adjacent to or in an ear of a patient and electrically connected to an electrical signal transducer adapted to be worn in an ear of a patient; said speech processor receiving and processing audio input information so as to produce an acoustic signal from said hearing aid transducer and an electrical signal from said electrical transducer whereby coherent binaural information is provided to said patient.
In a further broad form there is provided an electronically configurable sound/speech processor for the hearing impaired; said processor including configuration means and signal processing means; said processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate a hearing aid transducer; said configuration means adapted to receive one or more of electronic signal input or software input for the purpose of modifying said parameters. In a further broad form of the invention there is provided a method of and a means for control of a hearing aid and a cochlear implant by means of said sound/speech processor. In a particular form said sound/speech processor utilises the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5 and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer, wherein said features are defined as follows:- F0 is the fundamental frequency, F1 is the first formant frequency, F2 is the second formant frequency, A0 is the amplitude of F0, A1 is the amplitude of F1,
A2 is the amplitude of F2,
A3 is the amplitude of band 3: 2000 to 2800 Hz, A4 is the amplitude of band 4: 2800 to 4000 Hz, A5 is the amplitude of band 5: 4000 Hz and above. In a further particular form of said sound/speech processor, said signal processing means includes means for dynamically changing the amplitude of different frequency bands in the defined speech spectrum as a function of the speech features A1, A2, A3, A4 and A5 so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user at the F1 and F2 frequencies and in the higher frequency bands.
In a further particular form of said sound/speech processor, said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted from said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
In yet a further particular form of said sound/speech processor, said signal processing means within said processor includes means for reconstructing signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled optimally so as to enhance speech recognition in a user. In a first mode of operation of said sound/speech processor, the settings of said filter means are set by said configuration means based on measurements made by an audiologist during a hearing aid fitting procedure. The filter settings remain fixed after completion of the procedure.
In an alternative mode of operation of said sound/speech processor, said processor includes filter means whose parameters are changed dynamically while said processor is in use providing said output signal to said hearing aid transducer in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
In a further particular mode of operation of said sound/speech processor said output signal is synthesised by said signal processing means utilising only speech parameters.

Brief Description of the Drawings
Embodiments of the invention will now be described with reference to the drawings wherein:-
Fig. 1 is a schematic representation of a bimodal aid according to a first embodiment of the invention,
Fig. 2 is a schematic representation of the main functional components which comprise the entire bimodal aid,
Fig. 3 is a schematic block diagram of the implant processing circuitry together with the acoustic processor circuitry,
Fig. 4 is a chart showing an example of the pattern of electrical stimulation of the implant electrodes for various steady state phonemes using the multi-peak coding strategy,
Fig. 5 is a graph showing the standard loudness growth function for the speech processor portion when driving the implant,
Fig. 6 is a schematic block diagram of the functional components of the bimodal processor which drive the acoustic aid,
Fig. 7 is a schematic block diagram of the components comprising the acoustic processor which processes the speech signal in accordance with particular forms of the invention to drive the acoustic aid,
Fig. 8 is a component block diagram of the acoustic processor,
Fig. 9 is a component block diagram of a Biquad filter as utilised in the acoustic processor of Fig. 8,
Fig. 10 is a component block diagram of the output driver for the acoustic aid,
Fig. 11 is a schematic representation of preferred modes of operation of the acoustic processor for voiced vowel input,
Fig. 12 shows a plot of sound intensity against frequency for an acoustic aid operating according to mode 1,
Fig. 13 shows comparative plots of sound intensity against frequency in relation to an example of modes 1 and 3 operation of the acoustic processor,
Fig. 14 outlines a fitting strategy for the bimodal aid,
Fig. 15 is a flowchart of a fitting strategy for the acoustic aid portion of the device in mode 1, and
Fig. 16 is a schematic block diagram of the utilisation of the diagnostic and programming unit for use in association with the bimodal aid.

Description of Preferred Embodiments
With reference to Fig. 1, the bimodal aid is a hearing aid device which has the capability to provide information through a cochlear implant aid in one ear and a speech processing acoustic hearing aid in the other ear of a patient. Both the implant aid and the acoustic aid are controlled by the same speech processor which derives its raw audio input from a single microphone.
The speech processor extracts certain parameters from the incoming acoustic waveform that are relevant to the perception of speech sounds. Some of the speech parameters extracted by the speech processor are translated into an electrical signal and delivered to a cochlear implant. These features can also be used as a basis for modification of the speech waveform following which it is then amplified and presented to the acoustic aid.
There are some patients with a small amount of residual hearing who have already received an implant, and who have previously worn hearing aids in the non-implanted ear. These patients often report that the sounds produced by conventional acoustic hearing aids are incompatible with those produced by the implant. Such patients tend to resort to one or the other and thus do not make maximal use of their limited auditory capacities. These patients are candidates for the bimodal aid. Such a device incorporating a cochlear implant aid and a speech processing acoustic aid can provide information which will allow these patients to discriminate speech better than any currently available hearing device alone.
Generally, if patients have some residual hearing, it tends to be low frequency. The cochlear implant produces stimulation at positions in the cochlea that correspond to higher frequencies (usually above 700 Hz). Thus, by combining the two channels it is possible to provide useful information over a much wider range of frequencies than either channel could provide alone. Furthermore, the frequency and temporal resolution of residual hearing can be better than that provided by the pulsatile electric signal of the cochlear implant aid portion of the bimodal aid.
In addition to the above "bimodal" uses the acoustic aid driver part of the device can also be used as a speech processing hearing aid independent of the cochlear implant aid output. When used in this manner it has advantages over conventional acoustic hearing aids. Conventional hearing aids are limited in practice because the adjustments to the frequency/gain characteristics are restricted to a small number of options and there are many users who are not optimally aided. There is a need for a hearing aid with a more flexible frequency/gain characteristic and this can be achieved with this aid. In addition, the feature extraction circuits which are the basis of the cochlear implant aid allow the hardware to measure important characteristics of the speech signal in quiet conditions and in conditions of moderate amounts of background noise. These characteristics can then be amplified selectively and enhanced relative to the rest of the acoustic signal, or used to synthesize a new speech-like waveform that carries the same information exclusively. This is performed by the acoustic signal processor (12) which outputs to an acoustic aid.
The synthesized waveform is used to overcome special problems. For example, high frequency sounds above the limit of a user's hearing can be presented as lower frequencies within the user's hearing range. Broad peaks in the speech spectrum can be made narrower if this provides better frequency resolution and helps to reduce masking of one peak by other adjacent peaks. There is no other single, wearable device capable of implementing all these processes.
The sound/speech processor can take in a speech signal from a microphone, measure selected features of that signal (including the frequency and amplitude of formants for voiced speech) and control the outputs to both a cochlear implant aid and an acoustic hearing aid in the case of the first embodiment or only the acoustic hearing aid in the case of the second embodiment.
1.0 FIRST EMBODIMENT - BIMODAL AID
Figure 2 shows a schematic diagram of the operation of the device. The cochlear implant aid portion of the device is covered by existing patents or patent applications to the same applicant and the implant aid operates upon one ear of a patient using similar strategies to those already developed for implant users. In addition, the users of the bimodal device will receive an auditory signal via an acoustic aid in the non-implanted ear. The capabilities of the bimodal aid allow this signal to be specially tailored in order to convey information complementary to the implant and utilise the residual hearing of the patient maximally. Specifically, Fig. 2 discloses that the body worn portion of the bimodal device comprises a speech processor 11 intimately connected to an acoustic aid processor 12 together with a microphone 13, an acoustic hearing aid 14 and an implant aid 15. The implant aid 15 comprises an electrode array 16 electrically connected by harness 17 to a receiver stimulator 18 which is in radio communication with speech processor 11 by way of internal coil 19 and external coil 20. In addition Fig. 2 shows auxiliary items being the diagnostic and programming unit 21 and the diagnostic programming interface 22.
Currently the diagnostic and programming unit 21 is implemented as a program running on a personal computer whilst the diagnostic programming interface 22 is a communications card connected to the PC bus. The diagnostic and programming unit 21 is utilised in a clinical situation to test for and control device parameters of operation for the speech processor 11 and/or acoustic aid processor 12 which optimise hearing performance for a patient according to defined criteria. These parameters are communicated via the diagnostic programming interface 22 to a map memory storage 23 in the speech processor 11. It is by reference to the parameters stored in the map memory storage 23 that the manner of processing of the audio signal received from microphone 13 is determined both for the speech processor 11 when driving the implant aid 15 and the acoustic aid processor 12 when driving the acoustic aid 14.
The components illustrated in Fig. 2 other than the acoustic aid processor 12 and the acoustic aid 14 and the computer program controlling the function of the diagnostic and programming unit 21 have been described elsewhere in earlier filed patents and patent applications and remain the same in so far as operation of the Cochlear implant aid is concerned.
The speech processor 11 and the precise methodology for electrically exciting the implant aid have varied since the inception of these devices and can be expected to continue to vary. For example excitation of the stimulating electrodes placed within the ear of a patient can be either digital or analogue in nature. To date, one of the present applicants, Cochlear Pty. Limited, has pursued a strategy of digital electronic stimulation using what have been termed pulsatile electrical signals applied to a pulsatile electrical signal transducer.
Particularly, the speech processor 11 has been commercially available in a number of forms since around 1982 from Cochlear Pty. Limited (one of the co-applicants for the present application). The early units and, indeed, even the most recent units are primarily aimed at improving speech perception in favour of all other sounds received from microphone 13. This is done by causing speech processor 11 to discern and process from the raw audio input received from microphone 13 acoustic features of speech which have been determined to best characterise speech information as perceived by the human auditory system.
Early forms of the speech processor 11 presented three acoustic features of speech to implant users. These were amplitude, presented as current level of electrical stimulation; fundamental frequency (F0) or voice pitch, presented as rate of pulsatile stimulation; and the second formant frequency (F2), represented by the position of the stimulation electrode pair located within the ear of the patient. The F2 frequency is usually found within the frequency range 800 to 2500 Hz. Later a second stimulating electrode pair was added representing the first formant (F1) of speech. The F1 signal is typically found within the frequency range 280 Hz to 1000 Hz. This scheme (known as the F0F1F2 scheme) provided improved performance in areas of speech perception as against the earlier F0F2 scheme. In most recent times the information provided to and processed by the speech processor 11 has been increased, with one particular purpose being to improve speech intelligibility under moderate levels of background noise. This latest coding scheme provides all of the information available in the F0F1F2 scheme while providing additional information from three high frequency band pass filters. These filters cover the following frequency ranges: 2000 to 2800 Hz, 2800 to 4000 Hz and 4000 to 8000 Hz. The energy within these ranges controls the amplitude of electrical stimulation of three fixed electrode pairs in the basal end of the electrode array. Thus, additional information about high frequency sounds is presented at a tonotopically appropriate place within the cochlea. The overall stimulation rate for voiced sounds remains as F0 (fundamental frequency or voice pitch) but in the new scheme four electrical stimulation pulses occur for each glottal pulse. This compares with the F0F1F2 strategy in which only two pulses occur per voice pitch period.
In the new coding scheme, for voiced speech sounds, the two pulses representing the first and second formant are still provided and additional stimulation pulses occur representing energy in the 2000 to 2800 Hz and the 2800 to 4000 Hz ranges.
For unvoiced phonemes, yet another pulse representing energy above 4000 Hz is provided while no stimulation for the first formant is provided, since there is usually little energy in this frequency range. Stimulation occurs at a random pulse rate of approximately 260 Hz which is about double that used in the earlier strategy.
The latest noise suppression algorithm operates in a continuous manner, rather than as a voice activated switch as had previously been used. This removes the perceptually annoying switching on and off of the earlier system. In the new algorithm the noise floor is continuously assessed in each frequency band over a period of ten seconds. The lowest level over this period is assumed to be background noise and is subtracted from the amplitude relevant to that frequency band. Thus any increase in signal amplitude above the noise level is presented to the patient while the ambient noise level itself is reduced to near threshold.
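The continuous minimum-tracking scheme described above can be sketched as follows. This is an illustrative reconstruction only: the class name, window handling and frame rate are assumptions, not the chip's actual implementation.

```python
from collections import deque

class BandNoiseTracker:
    """Per-band noise suppression: the minimum amplitude seen over a
    sliding window (~10 seconds of frames) is taken as the noise floor
    and subtracted from the current band amplitude."""

    def __init__(self, window_frames):
        # window_frames = number of amplitude estimates in ~10 seconds
        self.history = deque(maxlen=window_frames)

    def suppress(self, amplitude):
        self.history.append(amplitude)
        noise_floor = min(self.history)  # lowest recent level ~= background
        return max(amplitude - noise_floor, 0.0)
```

Because the floor estimate updates continuously, the output never switches abruptly on and off as the earlier voice-activated scheme did.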
Fig. 3 illustrates the basic filter and processing structure of a bimodal aid incorporating means to implement the latest processing scheme described above. International Patent Application PCT/AU90/00407 to the present applicant entitled Multi-peak Speech Processor describes in detail the operation of these components. The entire text and drawings of the specification of that application are incorporated herein by cross reference. The most pertinent portions of that specification are included immediately below.
The nature of the electrode array that is utilised in conjunction with the latest coding strategy and the manner and nature of its implantation is described in the literature, for example Cochlear Prostheses, editors Clark G.M., Tong G., Patrick; published by Churchill Livingstone 1990. Chapter 9 of that book entitled "The Surgery of Cochlear Implantation" by Webb R.L., Pyman B.C., Franz B. K-H and Clark G.M. is particularly pertinent. The text and drawings of that chapter are incorporated herein by cross reference.
The coding strategy extracts and codes the F1 and F2 spectral peaks from the microphone audio signal, using the extracted frequency estimates to select a more apical and a more basal pair of electrodes for stimulation. Each selected electrode is stimulated at a pulse rate equal to the fundamental frequency F0. In addition to F1 and F2, three high frequency bands of spectral information are extracted. The amplitude estimates from band three (2000-2800 Hz), band four (2800-4000 Hz), and band five (above 4000 Hz) are presented to fixed electrodes, for example the seventh, fourth and first electrodes, respectively, of the electrode array 16 (Fig. 2 and Fig. 4).
The first, fourth and seventh electrodes are selected as the default electrodes for the high-frequency bands because they are spaced far enough apart so that most patients will be able to discriminate between stimulation at these three locations. Note that these default assignments may be reprogrammed as required. If the three high frequency bands were assigned only to the three most basal electrodes in the MAP, many patients might not find the additional high frequency information as useful since patients often do not demonstrate good place-pitch discrimination between adjacent basal electrodes. Additionally, the overall pitch percept resulting from the electrical stimulation might be too high.
Table I below indicates the frequency ranges of the various formants employed in the speech coding scheme for the present invention.
TABLE I
Frequency Range       Formant or Band
280 - 1000 Hz         F1
800 - 4000 Hz         F2
2000 - 2800 Hz        Band 3 - Electrode 7
2800 - 4000 Hz        Band 4 - Electrode 4
4000 Hz and above     Band 5 - Electrode 1

If the input signal is voiced, it has a periodic fundamental frequency. The electrode pairs selected from the estimates of F1, F2 and bands 3 and 4 are stimulated sequentially at a rate equal to F0. The most basal electrode pair is stimulated first, followed by progressively more apical electrode pairs, as shown in Fig. 4. Band 5 is not presented in Fig. 4 because negligible information is contained in this frequency band for most voiced sounds.
If the input signal is unvoiced, energy in the F1 band (280-1000 Hz) is usually zero. Consequently it is replaced with the frequency band that extracts information above 4000 Hz. In this situation, the electrode pairs selected from the estimates of F2 and bands 3, 4 and 5 receive the pulsatile stimulation. The rate of stimulation is aperiodic and varies between 200-300 Hz. The coding strategy thus may be seen to extract and code five spectral peaks, but only four spectral peaks are encoded for any one stimulus sequence. FIG. 4 illustrates the pattern of electrical stimulation for various steady state phonemes when using this coding strategy. A primary function of the MAP is to translate the frequency of the dominant spectral peaks (F1 and F2) to electrode selection. To perform this function, the electrodes are numbered sequentially starting at the round window of the cochlea. Electrode 1 is the most basal electrode and electrode 22 is the most apical in the electrode array. Stimulation of different electrodes normally results in pitch perceptions that reflect the tonotopic organization of the cochlea. Electrode 22 elicits the lowest place-pitch percept, or the "dullest" sound. Electrode 1 elicits the highest place-pitch percept, or "sharpest" sound. To allocate the frequency range for the F1 and F2 spectral peaks to the total number of electrodes, a default mapping algorithm splits up the total number of electrodes available into a ratio of approximately 1:2, as shown in FIG. 4.
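The voiced/unvoiced electrode-set selection described above can be expressed as a short sketch. The default band electrodes (7, 4, 1) come from the text; the helper name and the treatment of rate as a single number per frame are illustrative assumptions.

```python
import random

# Default electrodes for bands 3, 4 and 5 (as stated in the text)
BAND3_ELECTRODE, BAND4_ELECTRODE, BAND5_ELECTRODE = 7, 4, 1

def select_stimulation(voiced, f1_electrode, f2_electrode, f0_hz):
    """Return (electrodes in basal-to-apical order, stimulation rate)."""
    if voiced:
        # F1, F2, bands 3 and 4; most basal (lowest number) first
        electrodes = sorted({BAND4_ELECTRODE, BAND3_ELECTRODE,
                             f2_electrode, f1_electrode})
        rate = f0_hz                         # pulse rate follows voice pitch
    else:
        # F1 replaced by band 5; aperiodic rate in the 200-300 Hz range
        electrodes = sorted({BAND5_ELECTRODE, BAND4_ELECTRODE,
                             BAND3_ELECTRODE, f2_electrode})
        rate = random.uniform(200.0, 300.0)
    return electrodes, rate
```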
Inside the speech processor a random access memory stores a set of number tables, referred to collectively as the MAP memory storage 23. The MAP determines both the stimulus parameters for F1, F2 and bands 3-5, and the amplitude estimates. The encoding of the stimulus parameters follows a sequence of distinct steps. The steps may be summarized as follows:
1. The first formant frequency (F1) is converted to a number based on the dominant spectral peak in the region between 280-1000 Hz.

2. The F1 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the first formant. The indifferent electrode is determined by the mode.

3. The second formant frequency (F2) is converted to a number based on the dominant spectral peak in the region between 800-4000 Hz.

4. The F2 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the second formant. The indifferent electrode is determined by the mode.

5. The amplitude estimates for bands 3, 4 and 5 are assigned to the three default electrodes 7, 4 and 1, respectively, or such other electrodes as may be selected when the MAP is being prepared.

6. The amplitude of the acoustic signal in each of the frequency bands is converted to a number ranging from 0 - 150. The level of stimulation that will be delivered is determined by referring to set MAP tables that relate acoustic amplitude (in the range of 0-150) to stimulation level for the specific electrodes selected in steps 2, 4 and 5 above.
7. The data are further encoded in the speech processor and transmitted to the receiver/stimulator 18. It, in turn, decodes the data and sends the stimuli to the appropriate electrodes. Stimulus pulses are presented at a rate equal to F0 during voiced periods and at a random aperiodic rate (typically 200 to 300 Hz) during unvoiced periods.
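The table lookups in the steps above can be sketched as follows. The band edges, electrode numbers and the linear interpolation between threshold and comfort level are invented placeholders for illustration; a real MAP is measured per patient.

```python
def electrode_for_formant(freq_hz, band_edges, electrodes):
    """Map a formant frequency onto an electrode via a MAP-style table:
    band_edges[i] .. band_edges[i+1] selects electrodes[i]."""
    for i in range(len(electrodes)):
        if band_edges[i] <= freq_hz < band_edges[i + 1]:
            return electrodes[i]
    return electrodes[-1]          # above the top edge: most basal entry

def stimulus_level(acoustic_amp_0_150, threshold, comfort):
    """Relate the 0-150 acoustic amplitude number to a stimulation level
    for one electrode (here a simple linear interpolation)."""
    frac = min(max(acoustic_amp_0_150, 0), 150) / 150.0
    return threshold + frac * (comfort - threshold)
```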
The speech processor 11 additionally includes a non-linear loudness growth algorithm that converts acoustic signal amplitude to electrical stimulation parameters. The speech processor 11 converts the amplitude of the acoustic signal into a digital linear scale with values from 0 - 150 as shown in FIG. 5. That digital scale (in combination with the information stored in the patient's MAP) determines the actual charge delivered to the electrodes in the electrode array 16. Improvements on this assembly are disclosed in co-pending applications to Cochlear Pty. Limited. Specifically, International Application PCT/AU90/00406 discloses an improved connection system between microphone 13 and speech processor 11 and between the external coil assembly 20 and the speech processor 11. The text and drawings of the specification of that application are incorporated herein by cross reference.
A noise suppression circuit is disclosed in International Patent Application PCT/AU90/00404. The text and drawings of the specification which accompanied that application are incorporated herein by cross reference.
FIG. 6 is a block diagram of the processing circuitry showing the functional interconnection of components for driving the acoustic hearing aid 14. The main components comprise microphone 13, automatic gain control 24, speech parameter extractor 25, encoder 26, patient MAP memory storage 23, noise generator 27 and acoustic aid signal processor 12. The heart of the bimodal aid, so far as driving the acoustic aid 14 from the speech processor 11 is concerned, is the acoustic aid signal processor 12.
The acoustic aid signal processor is software configurable and contains three two-pole filters, each of which can be used in either bandpass, lowpass or highpass configuration. The centre frequency, bandwidth and output amplitude of these filters are controlled by the processor. The filters can be used in series or in parallel, and the input waveform can be the speech waveform, pulses, a noise signal or an external signal. The external signal can be from another microphone, other acoustic output or another acoustic signal processor. This results in a particularly flexible aid that can operate either in a manner similar to a conventional acoustic hearing aid (though with more accurate gain fitting than most currently available aids can provide) or as an aid providing different types of processed speech information. The acoustic aid signal processor including the three programmable filters has been implemented on a single silicon chip. Each filter is usable as a high-pass, band-pass, or low-pass filter. 128 centre frequencies between 100 Hz and 16000 Hz, 128 Q values between 0.53 and 120, and 128 amplitude values between 0 and 64 dB are available for each filter (Q = centre frequency/bandwidth). This chip has the flexibility to cover a wide range of frequencies, amplitudes and spectral shapes.
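Because each filter parameter is drawn from a quantized set (128 values each), programming a filter amounts to choosing the nearest available setting to each target. The sketch below illustrates that selection; the candidate grids are illustrative, since the real chip derives its values from clock dividers and latch fields.

```python
def nearest(value, candidates):
    """Pick the candidate closest to the requested value."""
    return min(candidates, key=lambda c: abs(c - value))

def program_filter(target_fc, target_q, target_gain_db,
                   fc_grid, q_grid, gain_grid):
    """Quantize a (centre frequency, Q, gain) request onto the chip's
    available parameter grids."""
    return (nearest(target_fc, fc_grid),
            nearest(target_q, q_grid),
            nearest(target_gain_db, gain_grid))
```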
It also includes a digital-to-analog converter (DAC) that is used to produce the excitation waveform for the filters. The DAC can produce waveforms of arbitrary shape (such as sinusoidal or pulsatile) controlled directly by the processor, or can be switched to provide excitation by the speech waveform or a white noise generator. A schematic diagram of the three-filter circuit is shown in Fig. 7. A functional specification for a single chip implementation of the acoustic aid signal processor 12 is provided by FIGS. 8, 9 and 10. Details of the specification are as follows:

Topology
Fig. 8 shows the overall topology of the chip. Three programmable filters in which centre frequency and band width can be independently controlled are provided. The outputs of these filters can be independently attenuated or amplified and then mixed. The output of one of the three filters can be inverted if necessary by setting an INV bit.
Fig. 9 shows details of one of the Biquad filters forming the three filter array together with the frequency latches and Q latches which determine the parameters of the Biquad filter.
The topology of the chip can be altered from serial to parallel or a mixed structure by three PARn bits.
The signal source for this structure can be selected by a four channel multiplexer (MUX) . This selects +5 volts, a buffered output of the audio signal, an internally generated noise source, or an external signal. This signal source is fed to a 7 bit digital to analog converter (DAC) as a reference voltage.
The multiplying DAC can convert the DC level into a pulse generator, or provide a fine gain control on the audio or external signal, or noise source. The most significant bit (MSB) is used to invert the output. All filter outputs are summed and passed to a push pull earphone driver which can provide effectively 10 volts peak-to-peak across a 270 ohm (nominal) earphone. The chip uses a single supply of 5 volts. Note that the earphone has a DC resistance of 88 ohm with the impedance rising gradually to 270 ohm at 1 kHz. The output stage consists of a bridge of P and N transistor switches as shown in Fig. 10. The switches are pulse width modulated by a signal derived from a comparator driven from a triangle wave on one input and the audio signal on the other. The on resistance of the switches should be less than 5 ohms (lower if possible).
Apart from the class D output, there is a single ended linear output. This should be capable of sourcing or sinking 5 mA with less than 1 volt drop.
The chip is programmed by writing to the MAP of the speech processor. To distinguish between chip and MAP writes, bits A8 - A12 are decoded. Any write to the block 1800 - 18FF in MAP will also write to the chip. Two addresses, one odd and one even (Y13 and Y14), are decoded and ORed, and the output of these (R/W new) can be used to write to the filter chip in a more selective manner. Odd writes are used to select the MUX and even writes set the auto sensitivity control (ASC) latch on that chip. The four lowest address bits are used to write to 14 registers which contain the programming information for the acoustic processor chip. Registers Y0 to Y11 program frequency, Q, gain (attenuation or amplification) and configuration in turn for each of the 3 filters. Register Y12 sets the chip topology. Register Y15 is used to write to the DAC. Referring to Fig. 8, the topology latch Y12 is as follows:
D0          INV
D1, D2, D3  Topology bits (PAR1, PAR2, PAR3)
D4, D5      DAC source
D6          DIR

(The DAC source selection table appears as a figure in the original and is not reproduced here.)

D0 - INV inverts output of Filter 1
D1 - PAR1 sends Filter 1 output to summer
D2 - PAR2 sends Filter 2 output to summer
D3 - PAR3 sends Filter 3 output to summer
D6 - DIR sends the DAC output direct to summer
When a filter's PAR bit is set, the output of the cascaded filter and attenuator/amplifier is sent to the summer, but the filter output itself is sent to a bus so that it can be made available to other filters. When the PAR bit is not set, the output of the filter-attenuator/amplifier combination is sent to the bus. In this way the filter gains may be scaled if the filters are cascaded.
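The Y12 bit assignments listed above can be packed into a register byte as follows. This is an illustrative encoding of the stated bit layout (D0 = INV, D1-D3 = PAR1-PAR3, D4-D5 = DAC source, D6 = DIR), not vendor programming code.

```python
def pack_y12(inv, par1, par2, par3, dac_source, dir_bit):
    """Pack the topology latch fields into a single Y12 register value."""
    assert 0 <= dac_source <= 3            # two bits, D4-D5
    return (inv | (par1 << 1) | (par2 << 2) | (par3 << 3)
            | (dac_source << 4) | (dir_bit << 6))
```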
Filter programming bits
Configuration latches Y3, Y7, Y11:
D0, D1, D2  filter type select
D3, D4      clock select
D5, D6      filter input

Filter input selection

The filter inputs selected by D6 and D5 vary as shown below. (The filter input selection table appears as a figure in the original and is not reproduced here.)
A clock pre-scaler is provided to extend the frequency ranges of the filters. This is done by dividing the clock by 2 or 4 before feeding it to the filter's own divider. Decoding is as follows:
(The prescaler decoding table appears as a figure in the original and is not reproduced here. In the fastest setting, additional paths are opened in the filter to give double the centre frequency.)

Filter type
With reference to Fig. 9, the filters consist of two integrators in a loop with a variable gain feedback path. The input may be a switched or an unswitched capacitor, it may be applied to either the first or second input and the output may be taken from either the first or second integrator. This produces various different transfer functions as given below.
(The filter type selection table appears as a figure in the original and is only partly recoverable. The eight modes selected by D0, D1 and D2 comprise power down together with 1st and 2nd order lowpass, bandpass and highpass configurations.) In this table a bit value of zero selects the particular condition. Thus mode 0, i.e. D0, D1, D2 all zero, powers down the filter and switches its output off.
In some cases it is desirable to shut all other functions off. This is done by an external pin, PDB, which powers down bimodal operation. Only the Ram Cell monitor remains operational.

The programmable filter clock divider comprises a 7 bit ripple counter which is compared with the contents of the frequency latch. The first match produces a counter reset. The frequency latch must be fed with the complement of the count required. If, for example, a reset is required after a count of 2, then the latch is all high except for bit D1, which is low. The outputs of all the NOR gates connected to the latch will be low except for the one connected to D1. Now if the counter counts up from zero, on the first occasion of Q2 going high this NOR will also go low, and the 7 input NOR at the output will reset the counter.
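The complement-coded latch scheme can be checked with a small simulation. The function below mimics the counter/latch comparison described above; the degenerate all-ones latch case is handled with an assumed early return, since the hardware behaviour for that case is not stated in the text.

```python
def divider_period(latch_value_7bit):
    """Simulate the clock divider: the 7-bit latch holds the one's
    complement of the desired count; the counter resets on match."""
    desired_count = (~latch_value_7bit) & 0x7F   # recover the count
    if desired_count == 0:
        return 0          # assumption: all-ones latch never matches
    ticks, count = 0, 0
    while True:
        ticks += 1
        count += 1
        if count == desired_count:               # match -> counter reset
            return ticks
```

For the worked example in the text (reset after a count of 2), the latch is all high except bit D1: 0b1111101.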
The filter Q is programmed by using a binary-weighted capacitor array controlled by a 4 bit value, together with a 3 bit programmable resistive divider. The resistors are programmed by a number n which is represented by bits D4-D6 in the Y1 latch (Y5 and Y9 for filters 2 and 3). The capacitors are programmed by m, represented by bits D0-D3. The Q is given by:
Q = 8(1 + 1.875n)/m

By using switches, the feedback resistor of an amplifier can be configured to act like a buffered attenuator - i.e. the output can drive another attenuator or inverter. Two separate sections are used: one gives eight 4 dB steps for a total of +/- 28 dB, and another gives eight 0.5 dB steps with a maximum of +/- 3.5 dB. The total range is therefore +/- 31.5 dB. Two 8 channel MUXes are used to select the required taps on the potential dividers. The attenuator/amplifiers can be selected by addressing latches Y2, Y6 and Y10. Bit D6 = 1 gives attenuation, 0 gives amplification. To set the gain: Y2 = gain in dB x 2. To set the attenuation: Y2 = 64 - (atten in dB x 2). If Y2 is set to 0 (all bits low) then the attenuator/amplifier is powered down and switched off. Note that setting HF adds 6 dB to the gain.
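The Q formula and the Y2 gain/attenuation encodings quoted above translate directly into code. The range limits on n and m are assumptions drawn from the stated bit widths (3 and 4 bits respectively).

```python
def filter_q(n, m):
    """Q = 8(1 + 1.875n)/m, with n from the 3-bit resistor field and
    m from the 4-bit capacitor field (m must be non-zero)."""
    return 8 * (1 + 1.875 * n) / m

def encode_gain(db, attenuation=False):
    """Y2 register encoding: gain -> dB x 2; attenuation -> 64 - dB x 2."""
    return 64 - int(db * 2) if attenuation else int(db * 2)
```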
The attenuator is arranged so that its gain is not changed except when the signal passes through zero. This is to prevent clicks and pops during gain changes. A zero crossing detector, which produces a pulse on either a positive or negative going zero crossing, is used to strobe a latch which transfers the required gain to the attenuator MUXes. The multiplying DAC is a 6 bit resistive ladder type and multiplies the input REF, selected by the SOURCE SELECT (Fig. 8), by the digital quantity in latch Y15. Bit D6 inverts the output. The settling time is 50 microseconds.
The acoustic aid processor 12, by its flexible and programmable construction, allows many signal processing strategies to be tried and ultimately settled upon so as to best adapt the acoustic aid to provide information to the wearer which complements information received from an implant aid worn in the other ear of the wearer. This same flexibility and programmability also can be used to tailor the bimodal processor for operation of an acoustic hearing aid only. In both cases it is the combination of use of a single microphone together with the preprocessing capabilities of the speech processor 11, combined with the flexibility and programmability of the acoustic aid processor 12, which provides features and advantages not found in hearing aid devices to date.
The bimodal aid will now be described when used to drive an acoustic hearing aid only. However, the modes of operation to be described in relation thereto are equally usable to help obtain the complementary behaviour mentioned above in the first embodiment in relation to the use of both an acoustic aid and an implant aid by a single wearer. The following description, therefore and to that extent, should be taken as applying equally to the first embodiment.
It should be understood that the nature of the complementary behaviour between the two aids is subjective and is determined by a combination of iterative testing and wearing experience. The structure of the bimodal aid described herein allows this complementarity to be achieved. The testing procedures and methods for storing desired patient parameters in the MAP memory storage 23 will be described later in the specification.

2.0 SECOND EMBODIMENT - BIMODAL AID USED AS ACOUSTIC AID ONLY
The inherent flexibility of the acoustic aid signal processor 12 incorporating the three software configurable filters provides for an almost unlimited degree of flexibility of processing signals in the frequency domain received from the speech processor 11 and destined for the acoustic aid 14. Four particular modes of operation have been identified as desirable and achievable by the acoustic aid signal processor 12.
The four basic modes of operation of the acoustic output to the acoustic hearing aid are available and these are shown schematically in Figure 11. Each mode encompasses a large number of variations.
In mode 1, the filter parameters are set by the audiologist during the iterative fitting procedure and remain fixed thereafter. In modes 2-4, the speech parameter extraction circuits provide instantaneous information about the speech signal that is used to change the filter parameters dynamically while the aid is in use. In modes 2 and 3 the input signal is the speech waveform. The output signal is manipulated in different ways by controlling the filters to emphasize the chosen parts of the waveform (such as the formants) and to attenuate other parts (such as the background noise). In mode 4 the speech waveform is used only by the speech parameter extractor, and the output waveform is synthesized completely using the speech parameters. The differences between the original speech waveform and the output of the hearing aid become greater as one progresses from mode 1 to 4, and the control over the frequency spectrum and intensity of the output signal also increases.

2.1 Mode 1 - Frequency Response Tailoring
In this mode the acoustic output is tailored to match the patient's hearing loss. The 6 poles of filtering enable this to be done accurately (usually within 2 dB of the ideal gain specified by the audiologist at all frequencies) and the Automatic Gain Control allows the limited dynamic range of the residual hearing to be used. The acoustic signal processor of the second embodiment, configured in mode 1, provides both operational and practical advantages over conventional hearing aids. These advantages can best be appreciated by considering the steps involved in setting up both types of hearing aid for operation:
(a) The conventional aid: The majority of commercially available hearing aids merely amplify, and sometimes compress, the incoming sound. To fit one of these aids the audiologist would normally measure the user's thresholds using an audiometer and calculate the appropriate ideal gain by hand using a prescribed fitting rule (e.g. the National Acoustics Laboratory (NAL) rule, Byrne and Dillon, 1986). The audiologist would then search through the specifications of the aids stocked in the clinic to find one with a gain that most closely resembled the ideal gain. On all aids some changes can be made by the audiologist, although the amount of control depends on the type of aid. The features that can be varied may include any combination of the overall gain, the maximum output, and the level at which compression begins. Frequency specific variation of gain, if available at all, is usually only in two frequency bands corresponding to 'high' frequencies and 'low' frequencies respectively. Behind-the-ear aids and body-worn aids, though less cosmetically acceptable, usually offer greater scope for change by the audiologist than in-the-ear aids. This is because, with these types of aid, the acoustic properties of the tube and earmould can be varied in addition to the controls on the aid itself. In-the-ear aids also require an earmould to be made specifically for that aid by the manufacturer. This is a costly and time consuming business. This makes testing and comparing in-the-ear aids difficult and expensive and many clinics avoid using them. In-the-ear aids are also usually more limited in their maximum output and are therefore not often suitable for more severe hearing losses.
When the aid is configured it is then tested on the client. If it proves unacceptable the audiologist must choose and reconfigure another sort of aid. This is repeated until an aid is found that the client considers acceptable.
(b) The speech-processing hearing aid of the second embodiment (mode 1): the audiologist measures the client's hearing thresholds and any other hearing levels that might be needed for the strategies to be tested, e.g. maximum comfortable level (MCL). The measurements are made using the hearing aid and diagnostic and programming unit with associated configuring software rather than a separate audiometer. These values are then stored in a data file automatically. A strategy is chosen and the aid is configured accordingly, taking a maximum time of about five minutes. Calculation and fitting of ideal gain is done automatically and can be quickly accessed in a graphical form at any time by the audiologist. The configured aid is then presented to the subject for evaluation. Different fittings can be tried in quick succession until an appropriate one is found. Hence, the advantages are:
1) The actual device, earmould and transducer are used for measurement of client thresholds, allowing more accurate assessment of the ideal gain required for the device. For conventional hearing aids these measures are usually made using headphones and the effect of the earmould acoustics is estimated separately. For in-the-ear aids it is not possible to measure earmould acoustics before fitting because the mould and the aid are manufactured together.
2) Different fitting procedures (e.g. NAL formulation, Byrne and Dillon, 1986) can be implemented for testing very quickly without requiring a change of aid because the changes are programmed in software rather than by hardware adjustment.

3) The gain fitting can often be more accurate than is possible on many commercially available aids because of the flexible programming of frequency responses, allowing various available aids to be modelled.
4) The audiologist can change the ideal gain function at will if he/she believes that the ideal gain based on the client's threshold measurements is not optimal for that client. With many conventional hearing aids this can only be done grossly by changing the gain in "high" or "low" frequency bands or by choosing a different aid with quoted specifications closer to the new requirement for the client.
5) Information about the fitting is available to the audiologist at any stage, thus giving them more "on-line" control over the fitting than with any aid on the market.
6) The calculation of ideal gain is done automatically for most fitting procedures (with the exception of those used on insertion gain bridges, for which this must be done by hand), and thus the new device saves time and removes a possible source of error. In summary, the device of the second embodiment can be configured exactly as many conventional aids, and often more accurately. Setting up and testing the device are quicker, more efficient, and less prone to sources of error.
Fig. 12 provides an example fitting in mode 1 which is achievable utilising the acoustic aid signal processor 12 of the bimodal speech processor.
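The mode 1 fitting target stated earlier (realised gain within 2 dB of the ideal gain at all frequencies) can be checked with a simple sketch. The gain values below are invented for illustration, not taken from Fig. 12.

```python
def max_fitting_error(ideal_gain_db, realised_gain_db):
    """Largest deviation between ideal and realised gain across the
    audiometric frequencies (both given as matched lists of dB values)."""
    return max(abs(i - r) for i, r in zip(ideal_gain_db, realised_gain_db))

# Illustrative check: e.g. gains at 500/1000/2000/4000 Hz
ideal    = [20.0, 25.0, 30.0, 35.0]
realised = [21.0, 24.5, 31.5, 34.0]
assert max_fitting_error(ideal, realised) <= 2.0   # within the 2 dB target
```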
2.2 Mode 2 - Loudness Mapping
This is similar to mode 1 except that the level output at any specific frequency is mapped non-linearly on a frequency specific basis by dynamically changing the gain parameters of the three filters in response to amplitude and frequency variations measured by the processor. This requires that the audiologist measure the maximum comfort levels to which maximum amplitude can be mapped in addition to client thresholds. The advantage of this mode is that it makes a more accurate mapping of dynamic range possible.
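A minimal sketch of the frequency-specific loudness mapping just described: each band's input level is mapped onto the user's dynamic range between threshold and maximum comfortable level for that band. The normal-hearing reference range of 0-100 dB is an assumption for illustration.

```python
def map_loudness(input_db, threshold_db, mcl_db,
                 normal_floor=0.0, normal_ceiling=100.0):
    """Map an input level (dB) onto the user's dynamic range for one
    frequency band, clipping at threshold and maximum comfortable level."""
    frac = (input_db - normal_floor) / (normal_ceiling - normal_floor)
    frac = min(max(frac, 0.0), 1.0)
    return threshold_db + frac * (mcl_db - threshold_db)
```

Because the mapping is applied per band, a user whose residual dynamic range varies strongly with frequency receives a correspondingly different compression in each band.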
Hence, if used appropriately, the relative loudness of the spectral components is preserved. This may be better than mode 1 for users whose dynamic range changes a lot as a function of frequency. This method of loudness control avoids many of the undesirable spectral distortions that accompany more commonly used schemes such as peak limiting and non-linear compression.

2.3 Mode 3 - Dynamic Enhancement of Spectral Features
In this mode the frequency parameters of the three filters are changed dynamically (unlike modes 1 and 2 where they are fixed). When the values are made to change in a manner depending on the speech parameters measured by the processor, salient speech features can be enhanced. This gives rise to a wide range of possible speech-processing strategies in this mode. For example, the centre frequencies of two bandpass filters can be used to track the F1 and F2 (first and second formant) peaks in the signal. This acts as both a form of noise cancellation and also a removal of parts of the signal that might mask the information in the peaks to be tracked. The resulting signal after filtering is amplified to the appropriate loudness for the user on a frequency-specific basis as in mode 2. This may be most useful for users with impaired frequency resolution as well as raised thresholds. The device used in this mode can also be used to amplitude modulate the signal at the fundamental frequency (F0), which can be another way of enhancing this parameter. The most similar commercially available devices are the
"noise-cancelling" hearing aids such as those containing the "Zeta-noise-blocker" chip. These devices calculate an average long-term spectrum that represents the background noise and this noise is then filtered out of the signal along with any speech that happens to be at the same frequencies. This mode 3 scheme is based on enhancement of the speech signal at the measured formant frequencies rather than cancellation of noise. This means that speech information which is close in frequency to the noise will not be lost although the noise further from the formants will be reduced. The scheme will also enhance the selected speech features in quiet conditions as well as in noisy conditions.
Fig. 13 provides an example of mode 3 wherein selective peak sharpening is performed by the acoustic aid signal processor 12.

2.4 Mode 4 - Speech Reconstruction

This mode differs from the other modes of operation of the second embodiment in that the user does not receive a modified version of the input signal, but a completely synthesized signal constructed using parameters extracted by the speech processor. The signal can be reconstructed in many different ways depending on the user's hearing loss. This reconstruction provides very tight control over the signals presented and hence allows very accurate mapping onto the user's residual hearing abilities. It may be most useful for users with very limited hearing for whom normal amplification would provide no open set speech recognition. A second example of the use of this mode is frequency transposition. Sounds normally occurring at frequencies that are inaudible to the user can be represented by synthesized signals within the audible range for that user. Such schemes have been attempted in the past, but not using a completely re-synthesized waveform as in the present case. The re-synthesis scheme has been shown to work for electrical stimulation with cochlear implant users and may be of benefit to severely-to-profoundly impaired hearing aid users as well.
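A hedged sketch of the mode 4 re-synthesis idea: an output tone is built entirely from extracted parameters (here just F0 and an amplitude estimate), optionally transposed down into the user's audible range. The single-sinusoid frame and the transposition factor are illustrative simplifications of whatever reconstruction a real strategy would use.

```python
import math

def synthesize_frame(f0_hz, amplitude, n_samples, sample_rate,
                     transpose=1.0):
    """Build one frame of a fully synthesized output signal from the
    extracted F0 and amplitude; transpose=0.5 would shift an octave down."""
    f_out = f0_hz * transpose
    return [amplitude * math.sin(2 * math.pi * f_out * n / sample_rate)
            for n in range(n_samples)]
```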
Each mode of operation allows a wide range of potential strategies. The modes are not discrete, and some strategies that combine elements from different modes can be implemented. For example, a reconstructed signal representing F0 information (mode 4) can be added to a filtered speech signal (mode 3).

3.0 USE OF THE BIMODAL AID
With reference to Fig. 2, the bimodal aid is programmed by use of a diagnostic and programming unit 21 which communicates with the speech processor 11 and, in turn, with the acoustic aid processor 12 by way of a diagnostic programming interface 22. The diagnostic and programming unit 21 is implemented as a program on a personal computer, and the interface 22 is a communications card connected to the PC bus.
Software has been written to find the optimum filter settings to produce the frequency/gain characteristic specified by the audiologist, for use in the frequency response tailoring mode of operation described above. Software for the other modes of operation has also been written and tested. With reference to Figs. 14, 15 and 16, the basic procedure for use of the bimodal device is as follows. In bimodal use the fitting procedure is as outlined in flow chart form in Fig. 14. The bimodal MAP is produced on the personal computer following an iterative procedure that tests the subjective performance of the bimodal aid for a multiplicity of trial settings of the MAP.
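Finding filter settings that match a prescribed frequency/gain characteristic is, at bottom, a fitting problem. The sketch below shows one assumed form of it: a least-squares grid search over candidate centre frequencies for a single peaking element. The Gaussian-on-log-frequency response model and the single-parameter search are stand-ins for the configurable three-filter array described in the specification, not its actual optimisation software.

```python
import numpy as np

def peak_response_db(freqs, fc, q, gain_db):
    """Gaussian-on-log-frequency stand-in for a peaking filter's dB response."""
    return gain_db * np.exp(-0.5 * (q * np.log(freqs / fc)) ** 2)

def fit_centre_frequency(freqs, target_db, candidates, q=2.0):
    """Pick the candidate centre frequency whose modelled response best
    matches the audiologist's frequency/gain target (least-squares in dB)."""
    errors = [np.sum((peak_response_db(freqs, fc, q, target_db.max()) - target_db) ** 2)
              for fc in candidates]
    return candidates[int(np.argmin(errors))]
```

A real fitting program would search jointly over centre frequencies, Q values, gains, and the interconnection of the filters, but the objective, minimising the mismatch to the prescribed characteristic, is the same.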
Fig. 15 outlines the procedure in flow chart form with particular reference to obtaining an optimum setting for the acoustic aid 14 in mode 1.
Fig. 16 outlines the basic interaction between the control program in the diagnostic and programming unit and the fitting and mapping procedures performed by the audiologist.
The above describes only some embodiments of the present invention and modifications obvious to those skilled in the art can be made without departing from the scope and spirit of the present invention.

Claims

1. A bimodal aid for the hearing impaired which includes processing means adapted to receive and process audio information received from a microphone; said processing means supplying processed information derived from said audio information to an implant aid adapted to be implanted in a first ear of a patient and to an acoustic aid adapted to be worn in or adjacent a second ear of said patient whereby binaural information is provided to said patient.
2. The aid of claim 1 wherein said processing means comprises an implant aid signal processor and an acoustic aid signal processor; said implant aid signal processor adapted to operate on said audio information so as to electrically stimulate said implant aid; said acoustic aid signal processor operating on said audio information and processed information received from said implant aid processor so as to stimulate said acoustic aid.
3. The bimodal aid of claim 2 wherein said implant aid includes a plurality of electrodes which, when stimulated by said implant aid processor apply electrical stimuli directly to the cochlea of a patient.
4. The bimodal aid of claim 2 wherein said implant aid processor processes said audio information according to a multi-peak strategy as defined in the specification on the basis of the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5.
5. The bimodal aid of claim 2 wherein said acoustic aid signal processor includes an electronically configurable sound/speech processor which includes filter means whose parameters can be electronically varied according to information stored in said aid.
6. The aid of claim 5 wherein said filter means comprises an array of three filters whose parameters and interconnection can be varied according to said information stored in said aid.
7. The bimodal aid of claim 2 wherein said implant aid speech processor includes configuration means and signal processing means; said processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate an acoustic aid; said configuration means adapted to receive one or more of electronic signal input or software input for the purpose of modifying said parameters.
8. The processor of claim 7, wherein said sound/speech processor utilises the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5 (as defined in the specification) and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer.
9. The aid of claim 8 wherein said signal processing means includes means for dynamically changing the amplitude of different frequency bands in the defined speech spectrum as a function of the speech features A1, A2, A3, A4 and A5 so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user at the F1 and F2 frequencies and in the higher frequency bands.
10. The aid of claim 9 wherein said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted from said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
11. The aid of claim 10, wherein said signal processing means within said processor includes means for reconstructing speech signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled optimally so as to enhance speech recognition in a user.
12. The aid of claim 11 wherein in a first mode of operation of said electronically configurable sound/speech processor, said filter means is set by said configuration means based on measurements made by an audiologist during a hearing aid fitting procedure and remains fixed thereafter.
13. The aid of claim 11 wherein said processor includes filter means whose parameters are changed dynamically while said processor is in use providing said output signal to said hearing aid transducer in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
14. The aid of any preceding claim wherein said output signal is synthesised by said signal processing means utilising only speech parameters.
15. A method of control of a hearing aid and a cochlear implant by means of the aid of any one of claims 1 to 14.
16. A bimodal aid for the hearing impaired comprising a sound/speech processor electrically connected to a hearing aid transducer adapted to be worn adjacent to or in an ear of a patient and electrically connected to an electrical signal transducer adapted to be located in an ear of a patient; said speech processor receiving and processing audio input information so as to produce an acoustic signal from said hearing aid transducer and an electrical signal from said electrical transducer whereby coherent binaural information is provided to said patient.
17. An electronically configurable sound/speech processor for the hearing impaired, said processor including configuration means and signal processing means; said processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate a hearing aid transducer; said configuration means adapted to receive one or more of electronic signal input or software input for the purpose of modifying said parameters.
18. The processor of claim 17, wherein said sound/speech processor utilises the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5 (as defined in the specification) and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer.
19. The electronically configurable sound/speech processor of claim 18 wherein said signal processing means includes means for dynamically changing the amplitude of different frequency bands in the defined speech spectrum as a function of the speech features A1, A2, A3, A4 and A5 so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user at the F1 and F2 frequencies and in the higher frequency bands.
20. The electronically configurable sound/speech processor of claim 19 wherein said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted from said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
21. The electronically configurable sound/speech processor of claim 20, wherein said signal processing means within said processor includes means for reconstructing speech signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled optimally so as to enhance speech recognition in a user.
22. The processor of claim 21 wherein in a first mode of operation of said electronically configurable sound/speech processor, said filter means is set by said configuration means based on measurements made by an audiologist during a hearing aid fitting procedure and remains fixed thereafter.
23. The electronically configurable sound/speech processor of claim 21 wherein said processor includes filter means whose parameters are changed dynamically while said processor is in use providing said output signal to said hearing aid transducer in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
24. The electronically configurable sound/speech processor of any one of the claims 17 to 23 wherein said output signal is synthesised by said signal processing means utilising only speech parameters.
25. A method of control of both a hearing aid and a cochlear implant by means including the sound/speech processor of any one of claims 17 to 24.
26. A bimodal aid including the sound/speech processor of any one of claims 17 to 24.
PCT/AU1991/000506 1990-11-01 1991-11-01 Bimodal speech processor WO1992008330A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3517611A JPH06506322A (en) 1990-11-01 1991-11-01 Bimodal audio processing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPK314490 1990-11-01
AUPK3144 1990-11-01

Publications (1)

Publication Number Publication Date
WO1992008330A1 true WO1992008330A1 (en) 1992-05-14

Family

ID=3775048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1991/000506 WO1992008330A1 (en) 1990-11-01 1991-11-01 Bimodal speech processor

Country Status (4)

Country Link
EP (1) EP0555278A4 (en)
JP (1) JPH06506322A (en)
CA (1) CA2095344A1 (en)
WO (1) WO1992008330A1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995001709A1 (en) * 1993-07-01 1995-01-12 The University Of Melbourne Cochlear implant devices
WO1995008248A1 (en) * 1993-09-17 1995-03-23 Audiologic, Incorporated Noise reduction system for binaural hearing aid
WO1995020305A1 (en) * 1994-01-21 1995-07-27 Audiologic, Incorporated Dynamic intensity beamforming system for noise reduction in a binaural hearing aid
WO1996039005A1 (en) * 1995-05-31 1996-12-05 Advanced Bionics Corporation Programming of a speech processor for an implantable cochlear stimulator
WO1998006237A1 (en) * 1996-08-07 1998-02-12 St. Croix Medical, Inc. Hearing aid transducer support
WO1998006236A1 (en) * 1996-08-07 1998-02-12 St. Croix Medical, Inc. Middle ear transducer
US5814095A (en) * 1996-09-18 1998-09-29 Implex Gmbh Spezialhorgerate Implantable microphone and implantable hearing aids utilizing same
US6214046B1 (en) 1996-11-25 2001-04-10 St. Croix Medical, Inc. Method of implanting an implantable hearing assistance device with remote electronics unit
WO2002032501A1 (en) * 2000-10-19 2002-04-25 Universite De Sherbrooke Programmable neurostimulator
US6730015B2 (en) 2001-06-01 2004-05-04 Mike Schugt Flexible transducer supports
AT500375A1 (en) * 2003-10-13 2005-12-15 Cochlear Ltd EXTERNAL LANGUAGE PROCESSOR UNIT FOR A HEARING PROTECTION
DE102008060056A1 (en) * 2008-12-02 2010-08-12 Siemens Medical Instruments Pte. Ltd. Bimodal hearing aid system adjusting method, involves adjusting hearing aid using audiometric data of ears and information regarding cochlea implant and/or adjustment of implant, and adjusting base range and/or depth
US8150527B2 (en) 2004-04-02 2012-04-03 Advanced Bionics, Llc Electric and acoustic stimulation fitting systems and methods
US8280087B1 (en) 2008-04-30 2012-10-02 Arizona Board Of Regents For And On Behalf Of Arizona State University Delivering fundamental frequency and amplitude envelope cues to enhance speech understanding
US20130006328A1 (en) * 2004-05-10 2013-01-03 Ibrahim Bouchataoui Simultaneous delivery of electrical and acoustical stimulation in a hearing prosthesis
WO2014114337A1 (en) * 2013-01-24 2014-07-31 Advanced Bionics Ag Hearing system comprising an auditory prosthesis device and a hearing aid
WO2014123890A1 (en) * 2013-02-05 2014-08-14 Med-El Elektromedizinische Geraete Gmbh Fitting unilateral electric acoustic stimulation for binaural hearing
WO2015000528A1 (en) * 2013-07-05 2015-01-08 Advanced Bionics Ag Cochlear implant system
WO2015130318A1 (en) * 2014-02-28 2015-09-03 Advanced Bionics Ag Systems and methods for facilitating post-implant acoustic-only operation of an electro-acoustic stimulation ("eas") sound processor
WO2015170140A1 (en) * 2014-05-06 2015-11-12 Advanced Bionics Ag Systems and methods for cancelling tonal noise in a cochlear implant system
US10721574B2 (en) 2011-11-04 2020-07-21 Med-El Elektromedizinische Geraete Gmbh Fitting unilateral electric acoustic stimulation for binaural hearing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3818149A (en) * 1973-04-12 1974-06-18 Shalako Int Prosthetic device for providing corrections of auditory deficiencies in aurally handicapped persons
US3894196A (en) * 1974-05-28 1975-07-08 Zenith Radio Corp Binaural hearing aid system
US4187413A (en) * 1977-04-13 1980-02-05 Siemens Aktiengesellschaft Hearing aid with digital processing for: correlation of signals from plural microphones, dynamic range control, or filtering using an erasable memory
AU7848181A (en) * 1980-12-12 1982-06-17 Commonwealth Of Australia, The Speech processor
US4596902A (en) * 1985-07-16 1986-06-24 Samuel Gilman Processor controlled ear responsive hearing aid and method
AU2956289A (en) * 1988-02-03 1989-08-03 Siemens Aktiengesellschaft Hearing aid signal-processing system
WO1990005437A1 (en) * 1988-11-10 1990-05-17 Nicolet Instrument Corporation Adaptive, programmable signal processing and filtering for hearing aids
AU6339290A (en) * 1989-09-08 1991-04-08 Cochlear Pty. Limited Multi-peak speech processor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3629521A (en) * 1970-01-08 1971-12-21 Intelectron Corp Hearing systems
EP0349599B2 (en) * 1987-05-11 1995-12-06 Jay Management Trust Paradoxical hearing aid

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0555278A4 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995001709A1 (en) * 1993-07-01 1995-01-12 The University Of Melbourne Cochlear implant devices
US7627377B2 (en) 1993-07-01 2009-12-01 Cochlear Limited Cochlear implant devices
US6611717B1 (en) 1993-07-01 2003-08-26 The University Of Melbourne Implant device
WO1995008248A1 (en) * 1993-09-17 1995-03-23 Audiologic, Incorporated Noise reduction system for binaural hearing aid
WO1995020305A1 (en) * 1994-01-21 1995-07-27 Audiologic, Incorporated Dynamic intensity beamforming system for noise reduction in a binaural hearing aid
WO1996039005A1 (en) * 1995-05-31 1996-12-05 Advanced Bionics Corporation Programming of a speech processor for an implantable cochlear stimulator
US5626629A (en) * 1995-05-31 1997-05-06 Advanced Bionics Corporation Programming of a speech processor for an implantable cochlear stimulator
US6488616B1 (en) 1996-08-07 2002-12-03 St. Croix Medical, Inc. Hearing aid transducer support
WO1998006237A1 (en) * 1996-08-07 1998-02-12 St. Croix Medical, Inc. Hearing aid transducer support
WO1998006236A1 (en) * 1996-08-07 1998-02-12 St. Croix Medical, Inc. Middle ear transducer
US6005955A (en) * 1996-08-07 1999-12-21 St. Croix Medical, Inc. Middle ear transducer
US6050933A (en) * 1996-08-07 2000-04-18 St. Croix Medical, Inc. Hearing aid transducer support
US5814095A (en) * 1996-09-18 1998-09-29 Implex Gmbh Spezialhorgerate Implantable microphone and implantable hearing aids utilizing same
US6214046B1 (en) 1996-11-25 2001-04-10 St. Croix Medical, Inc. Method of implanting an implantable hearing assistance device with remote electronics unit
US6235056B1 (en) 1996-11-25 2001-05-22 St. Croix Medical, Inc. Implantable hearing assistance device with remote electronics unit
WO2002032501A1 (en) * 2000-10-19 2002-04-25 Universite De Sherbrooke Programmable neurostimulator
US6730015B2 (en) 2001-06-01 2004-05-04 Mike Schugt Flexible transducer supports
US11147969B2 (en) 2003-10-13 2021-10-19 Cochlear Limited External speech processor unit for an auditory prosthesis
AT500375A1 (en) * 2003-10-13 2005-12-15 Cochlear Ltd EXTERNAL LANGUAGE PROCESSOR UNIT FOR A HEARING PROTECTION
AT500375B1 (en) * 2003-10-13 2012-11-15 Cochlear Ltd EXTERNAL LANGUAGE PROCESSOR UNIT FOR A HEARING PROTECTION
US8315706B2 (en) 2003-10-13 2012-11-20 Cochlear Limited External speech processor unit for an auditory prosthesis
US8700170B2 (en) 2003-10-13 2014-04-15 Cochlear Limited External speech processor unit for an auditory prosthesis
US9700720B2 (en) 2003-10-13 2017-07-11 Cochlear Limited External speech processor unit for an auditory prosthesis
US8150527B2 (en) 2004-04-02 2012-04-03 Advanced Bionics, Llc Electric and acoustic stimulation fitting systems and methods
US8155747B2 (en) 2004-04-02 2012-04-10 Advanced Bionics, Llc Electric and acoustic stimulation fitting systems and methods
US20130006328A1 (en) * 2004-05-10 2013-01-03 Ibrahim Bouchataoui Simultaneous delivery of electrical and acoustical stimulation in a hearing prosthesis
US9008339B1 (en) 2008-04-30 2015-04-14 Arizona Board Of Regents For And On Behalf Of Arizona State University Delivering fundamental frequency and amplitude envelope cues to enhance speech understanding
US8280087B1 (en) 2008-04-30 2012-10-02 Arizona Board Of Regents For And On Behalf Of Arizona State University Delivering fundamental frequency and amplitude envelope cues to enhance speech understanding
DE102008060056A1 (en) * 2008-12-02 2010-08-12 Siemens Medical Instruments Pte. Ltd. Bimodal hearing aid system adjusting method, involves adjusting hearing aid using audiometric data of ears and information regarding cochlea implant and/or adjustment of implant, and adjusting base range and/or depth
DE102008060056B4 (en) * 2008-12-02 2011-12-15 Siemens Medical Instruments Pte. Ltd. Method and hearing aid system for adapting a bimodal supply
US10721574B2 (en) 2011-11-04 2020-07-21 Med-El Elektromedizinische Geraete Gmbh Fitting unilateral electric acoustic stimulation for binaural hearing
US9511225B2 (en) 2013-01-24 2016-12-06 Advanced Bionics Ag Hearing system comprising an auditory prosthesis device and a hearing aid
WO2014114337A1 (en) * 2013-01-24 2014-07-31 Advanced Bionics Ag Hearing system comprising an auditory prosthesis device and a hearing aid
WO2014123890A1 (en) * 2013-02-05 2014-08-14 Med-El Elektromedizinische Geraete Gmbh Fitting unilateral electric acoustic stimulation for binaural hearing
WO2015000528A1 (en) * 2013-07-05 2015-01-08 Advanced Bionics Ag Cochlear implant system
WO2015130318A1 (en) * 2014-02-28 2015-09-03 Advanced Bionics Ag Systems and methods for facilitating post-implant acoustic-only operation of an electro-acoustic stimulation ("eas") sound processor
US9717909B2 (en) 2014-02-28 2017-08-01 Advanced Bionics Ag Systems and methods for facilitating post-implant acoustic-only operation of an electro-acoustic stimulation (“EAS”) sound processor
WO2015170140A1 (en) * 2014-05-06 2015-11-12 Advanced Bionics Ag Systems and methods for cancelling tonal noise in a cochlear implant system

Also Published As

Publication number Publication date
CA2095344A1 (en) 1992-05-02
EP0555278A4 (en) 1994-08-10
JPH06506322A (en) 1994-07-14
EP0555278A1 (en) 1993-08-18

Similar Documents

Publication Publication Date Title
EP0555278A4 (en) Bimodal speech processor
Loizou et al. The effect of parametric variations of cochlear implant processors on speech understanding
US5095904A (en) Multi-peak speech procession
US5271397A (en) Multi-peak speech processor
US4532930A (en) Cochlear implant system for an auditory prosthesis
US7171272B2 (en) Sound-processing strategy for cochlear implants
US9008786B2 (en) Determining stimulation signals for neural stimulation
US8843205B2 (en) Stimulation channel selection for a stimulating medical device
Geurts et al. Enhancing the speech envelope of continuous interleaved sampling processors for cochlear implants
AU2001281585A1 (en) Sound-processing strategy for cochlear implants
EP1210847B1 (en) Improved sound processor for cochlear implants
AU2016285966B2 (en) Selective stimulation with cochlear implants
US10357655B2 (en) Frequency-dependent focusing systems and methods for use in a cochlear implant system
McDermott et al. A portable programmable digital sound processor for cochlear implant research
CN110520188B (en) Dual mode auditory stimulation system
US11478640B2 (en) Fitting device for bimodal hearing stimulation system
Kaiser et al. Using a personal computer to perform real-time signal processing in cochlear implant research
AU2009220707A1 (en) Method for adjusting voice converting processor
EP2931359A2 (en) Systems and methods for controlling a width of an excitation field created by current applied by a cochlear implant system
Yang Design, Fabrication & Evaluation of a Biomimetic Filter-bank Architecture for Low-power Noise-robust Cochlear Implant Processors
Webster et al. Mechanisms of Hearing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 2095344

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1991918663

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1991918663

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1991918663

Country of ref document: EP