US20140270316A1 - Sound Induction Ear Speaker for Eye Glasses - Google Patents

Sound Induction Ear Speaker for Eye Glasses

Info

Publication number
US20140270316A1
Authority
US
United States
Prior art keywords
acoustic
speaker
sound
microphone
eyewear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/180,986
Inventor
Dashen Fan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kopin Corp
Original Assignee
Kopin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Kopin Corp filed Critical Kopin Corp
Priority to US14/180,986 priority Critical patent/US20140270316A1/en
Priority to TW103108575A priority patent/TW201508376A/en
Assigned to KOPIN CORPORATION reassignment KOPIN CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FAN, DASHEN
Publication of US20140270316A1 publication Critical patent/US20140270316A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02C - SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C 11/00 - Non-optical adjuncts; Attachment thereof
    • G02C 11/10 - Electronic devices other than hearing aids
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K - SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K 11/00 - Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K 11/002 - Devices for damping, suppressing, obstructing or conducting sound in acoustic devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 - Details of transducers, loudspeakers or microphones
    • H04R 1/02 - Casings; Cabinets; Supports therefor; Mountings therein
    • H04R 1/028 - Casings; Cabinets; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 5/00 - Stereophonic arrangements
    • H04R 5/033 - Headphones for stereophonic communication
    • H04R 5/0335 - Earpiece support, e.g. headbands or neckrests
    • G - PHYSICS
    • G02 - OPTICS
    • G02C - SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C 11/00 - Non-optical adjuncts; Attachment thereof
    • G02C 11/06 - Hearing aids
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02166 - Microphone arrays; Beamforming
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10T - TECHNICAL SUBJECTS COVERED BY FORMER US CLASSIFICATION
    • Y10T 29/00 - Metal working
    • Y10T 29/49 - Method of mechanical manufacture
    • Y10T 29/49826 - Assembling or joining

Definitions

  • earphones have been used to present acoustic sounds to an individual when privacy is desired or it is desired not to disturb others.
  • Examples of traditional earphone devices include over-the-head headphones having an ear cup speaker (e.g., Beats® by Dr. Dre headphones), ear bud style earphones (e.g., Apple iPod® earphones and Bluetooth® headsets), and bone-conducting speakers (e.g., Google Glass).
  • Another known way to achieve the desired privacy or peace and quiet for others is by using directional multi-speaker beam-forming.
  • Also well-known but not conventionally used to present acoustic sounds to an individual who is not hearing-impaired are hearing aids.
  • Such a hearing aid typically includes a clear “hook” that acts as an acoustic duct tube to channel audio speaker (also referred to as a receiver in telephony applications) sound to the inner ear of a user and acts as the mechanical support so that the user can wear the hearing aid, the speaker being housed in the behind-the-ear portion of the hearing aid body.
  • the aforementioned techniques all have drawbacks, namely, they are either bulky, cumbersome, unreliable, or immature.
  • the present invention relates in general to eyewear, and more particularly to eyewear devices and corresponding methods for presenting sound to a user of the eyewear.
  • an eyewear sound induction ear speaker device of the invention includes an eyewear frame, a speaker including an audio channel integrated with the eyewear frame, and an acoustic duct coupled to the speaker and arranged to channel sound emitted by the speaker to an ear of the user wearing the eyewear frame.
  • the invention is an eyewear sound induction ear speaker device that includes means for receiving an audio sound, means for processing and amplifying the audio sound, and means for channeling the amplified and processed audio sound to an ear of a user wearing an eyewear frame.
  • the invention is a method of providing sound for eyewear, including the steps of receiving a processed electrical audio signal at a speaker integrated with an eyewear frame, wherein the speaker includes an audio channel.
  • the speaker is induced to produce acoustic sound at the audio channel, and the acoustic sound is channeled through an acoustic duct to be presented to a user wearing the eyewear frame.
  • the invention is a method of channeling sound from an eyewear device that includes the steps of receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device, inducing audible sound from the electrical audio signal at the speaker, and channeling the audio sound to an ear of the user of the eyewear using the audio duct, the audio duct not blocking the ear canal of the ear.
  • the eyewear spectacle of the invention is relatively compact, unobtrusive, and durable.
  • the device and method can be integrated with noise cancellation apparatus and methods that are also, optionally, components of the eyewear itself.
  • the noise cancellation apparatus, including microphones, electrical circuitry, and software, can be integrated with and, optionally, carried on board the eyewear worn by the user.
  • microphones mounted on board the eyewear can be integrated with the speakers and with circuitry, such as a computer, receiver or transmitter to thereby process signals received from an external source or the microphones, or to process and transmit signals from the microphone, and to selectively transmit those signals, whether processed or unprocessed, to the user of the eyewear through the speakers mounted in the eyewear.
  • human-machine interaction through the use of a speech recognition user interface is becoming increasingly popular.
  • accurate recognition of speech is useful.
  • Such a machine output presentation facilitates hands-free activities of a user, which is increasingly popular.
  • Users also do not have to hold a speaker or device in place, nor do they need to have electronics behind their ear or an ear bud blocking their ear. There are also no flimsy wires, and users do not have to tolerate the skin contact or pressure associated with bone conduction speakers.
  • FIG. 1A is a side view of one embodiment of an eyewear sound induction ear speaker device of the invention
  • FIG. 1B is a perspective view of the embodiment of the invention shown in FIG. 1A
  • FIG. 1C is a cross-sectional view of one embodiment of an acoustic duct of the embodiment of the invention shown in FIG. 1A
  • FIG. 2 is a cross-sectional view of an alternative embodiment of an acoustic duct of the invention
  • FIG. 3 is an illustration of an embodiment of an eyewear and sound induction ear speaker device of the invention that includes two remote microphones that are electronically linked with the eyewear frame of the eyewear sound induction ear speaker device.
  • FIG. 4 is an illustration of another embodiment of eyewear of the invention that includes three remote microphones.
  • FIG. 5A is an exploded view of a rubber boot and microphone according to one embodiment of the invention.
  • FIG. 5B is a perspective view of the assembled rubber boot shown in FIG. 5A .
  • FIG. 6 is a representation of another embodiment of the invention showing alternate and optional positions of placements of the microphones.
  • FIG. 7 is an embodiment of a noise cancellation circuit employed in one embodiment of the eyewear sound induction user speaker device of the invention.
  • FIG. 8 is an illustration of a beam-forming module suitable for use in the embodiment of the invention illustrated in FIG. 7.
  • FIG. 9 is a block diagram illustrating an example embodiment of a desired voice activity detection module employed in another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 10 is a block diagram illustrating an example embodiment of a noise cancellation circuit employed in an embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 11 is an example embodiment of a boom tube housing three microphones, in an arrangement of one embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 12 is an example embodiment of a boom tube housing four microphones in an arrangement of another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 13 is a block diagram illustrating an example embodiment of a beam-forming module accepting three signals and another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 14 is a block diagram illustrating an example embodiment of a desired voice activity detection module of yet another embodiment of the eyewear sound induction ear speaker device of the invention.
  • the terms “speaker” and “audio speaker” are used interchangeably throughout the present application and are used to refer to a small (relative to the size of a human ear), narrow band (e.g., voice-band, for example 300 Hz-20 kHz) speaker or receiver that converts electrical signals at audio frequencies into acoustic signals.
  • eyewear 10, for example a pair of prescription glasses, includes eyewear frame 14.
  • Audio speaker 16 is integrated with eyewear frame 14 , and acoustic duct 18 is coupled to audio speaker 16 and arranged to channel sound emitted by speaker 16 to an ear of a user wearing eyewear frame 14 .
  • Speaker 16 is operatively linked, for example, to embedded receiver 28 .
  • “Operatively coupled,” as that term is used herein, means electronically linked, such as by a wireless or hardwire connection.
  • Acoustic duct 18 can be made from a pliable material and further arranged such that acoustic duct 18 does not block the ear canal of the user 20 .
  • Acoustic duct 18 can include point 22 and be horn-shaped, as shown in FIG. 1A .
  • a cross-section of acoustic duct 18 shown in FIG. 1C , can be oval-shaped or, as shown in FIG. 2 , acoustic duct 20 can be rectangular shaped.
  • Acoustic duct 18 does not have to be weight-bearing and is not designed to be weight-bearing since eyewear frame 14 is used to support the weight.
  • eyewear 10 further includes a second acoustic duct 22 and second audio speaker 24 coupled to audio duct 26 and, like speaker 16, operatively coupled to an electrical audio source, such as receiver 28, which is integrated within eyewear frame 14.
  • Second acoustic duct 22 is coupled to second audio speaker 24 and is arranged to channel sound emitted by audio speaker 24 to a second ear of a user wearing eyewear frame 14 .
  • speaker 24 can further include a second audio channel, the second audio channel being coupled through second speaker 24 to second acoustic duct 22 to provide stereo sound to the user wearing eyewear frame 14 .
  • Receiver 28 is operatively coupled to first speaker 16 , either alone or in conjunction with second audio speaker 24 .
  • Receiver 28 can be a wired or a wireless receiver, and receive an electrical audio signal from any electrical audio source.
  • the receiver can be operatively coupled to a 3.5 mm audio jack, Bluetooth wireless radio, memory storage device or other such source.
  • the wireless receiver can include an audio codec, digital signal processor, and amplifiers; the audio codec can be coupled to the audio speaker and to at least one microphone.
  • the microphone can be an analog microphone coupled to an analog-to-digital (A/D) converter, which can in turn be coupled to a DSP.
  • the audio microphone can be a micro-electro-mechanical system (MEMS) microphone.
  • Further example embodiments can include a digital microphone, such as a digital MEMS microphone, coupled to an all-digital voice processing chip, obviating the need for a CODEC altogether.
  • the speaker can be driven by a digital-to-analog (D/A) driver, or can be driven by a pulse width modulation (PWM) digital signal.
  • eyewear 10 of the invention can include a transmitter, whereby sounds captured electronically by microphones of the eyewear are processed for transmission to an external receiver or to at least one of audio speakers 16 and 24.
  • An example method of the present invention includes channeling sound from an eyewear device.
  • the method includes receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device, inducing audio sound from the electrical audio signal at the speaker, and channeling the audio sound to an ear of a user of the eyewear using an audio duct, the audio duct not blocking an ear canal of the ear.
  • the electrical audio signal can be supplied from any electrical audio source, for example, a 3.5 mm audio-jack, Bluetooth® wireless radio, and a media storage device, such as a hard disk or solid-state memory device.
  • a corresponding example method of providing sound for eyewear 10 can include: receiving and electrically processing sound at at least one of speakers 16 and 24 , integrated with eyewear frame 14 , speakers 16 and 24 including audio channels; inducing the electrically processed sound at audio speakers 16 and 24 integrated within eyewear frame 14 to produce acoustic processed sound; and, channeling the acoustic processed sound through acoustic ducts 18 , 22 to be presented to a user wearing eyewear 10 .
  • Example methods can further include arranging at least one of acoustic ducts 18, 30 such that at least one of acoustic ducts 18, 22 does not block the ear canal of the user, acoustic ducts 18, 22 being comprised of a pliable non-load bearing material.
  • the processing can include preamplifying sound received from a wired or wireless receiver 28 using a pre-amplifier (not shown), further processing the amplified sound using a digital signal processor (not shown), converting the further processed sound into an analog signal, and postamplifying the analog signal to produce the electrically processed sound.
  • the processing can further include processing sound in a second audio channel, inducing the electrically processed sound of the second audio channel at second audio speaker 24 integrated with eyewear frame 14 to produce stereo acoustic processed sound and, channeling the second acoustic processed sound through second acoustic duct 30 to present stereo sound to a user wearing eyewear frame 14 .
  • an eyewear sound induction ear speaker device can include a means for receiving an audio sound, a means for amplifying and processing the audio sound, and a means for channeling the audio sound to an ear of a user wearing an eyewear frame.
  • FIG. 3 is an illustration of an example embodiment of an eyewear sound induction speaker device of the invention 300 .
  • eyewear sound induction speaker device of the invention includes eye-glasses 302 having embedded pressure-gradient microphones.
  • Each pressure-gradient microphone element can, optionally, and independently, be replaced with two omni-directional microphones at the location of each acoustic port, resulting in four total microphones.
  • the signals from these two omni-directional microphones can be processed by electronic or digital beam-forming circuitry described above to produce a pressure gradient beam pattern. This pressure gradient beam pattern replaces the equivalent pressure-gradient microphone.
  • a long boom dual-microphone headset can look like a conventional close-talk boom microphone, but has a larger boom with two microphones in parallel.
  • An end microphone of the boom is placed in front of user's mouth.
  • the close-talk long boom dual-microphone design targets heavy-noise usage in military, aviation, and industrial settings, and has unparalleled noise cancellation performance.
  • one main microphone can be positioned directly in front of the user's mouth.
  • a second microphone can be positioned at the side of the mouth.
  • the two microphones can be identical with identical casing.
  • the two microphones can be placed in parallel, perpendicular to the boom. Each microphone has front and back openings.
  • DSP circuitry can be in the housing between the two microphones.
  • each microphone is housed in a rubber or silicone holder (e.g., the rubber boot) with an air duct extending to the acoustic ports as needed.
  • the housing keeps the microphone in an air-tight container and provides shock absorption.
  • the microphone front and back ports are covered with a wind-screen layer, made of woven fabric layers or wind-screen foam material, to reduce wind noise.
  • the outlet holes on the microphone plastic housing can be covered with water-resistant thin film material or special water-resistant coating.
  • a conference gooseneck microphone can provide noise cancellation.
  • echoes can be a problem for sound recording. Echoes recorded by a microphone can cause howling. Severe echo prevents the user from turning up the speaker volume and limits audibility.
  • conference halls and conference rooms can be treated with expensive sound-absorbing materials on their walls to reduce echo, achieve higher speaker volume, and provide an even distribution of the sound field across the entire audience.
  • Electronic echo cancellation equipment is used to reduce echo and increase speaker volume, but such equipment is expensive, can be difficult to set up, and often requires an acoustic expert.
  • a dual-microphone noise cancellation conference microphone can provide an inexpensive, easy to implement solution to the problem of echo in a conference hall or conference room.
  • the dual-microphone system described above can be placed in a desktop gooseneck microphone.
  • Each microphone in the tube is a pressure-gradient bi-directional, uni-directional, or super-directional microphone.
  • a user can desire a noise-canceling close-talk microphone without a boom microphone in front of his or her mouth.
  • the microphone in front of the user's mouth can be viewed as annoying.
  • moisture from the user's mouth can condense on the surface of the Electret Condenser Microphone (ECM) membrane, which after long usage can deteriorate microphone sensitivity.
  • a short tube boom headset can solve these problems by shortening the boom, moving the ECM away from the user's mouth, and using a rubber boot to extend the acoustic port of the noise-canceling microphone. This can extend the effective close-talk range of the ECM while maintaining its noise-canceling property for far-away noises.
  • the boom tube can be lined with wind-screen foam material. This solution further allows the headset computer to be suitable for enterprise call center, industrial, and general mobile usage.
  • the respective rubber boots of each microphone can also be identical.
  • the short tube boom headset can be a wired or wireless headset.
  • the headset includes the short microphone (e.g., an ECM) tube boom.
  • the tube boom can extend from the housing of the headset along the user's cheek, where the tube boom is either straight or curved.
  • the tube boom can extend the length of the cheek to the side of the user's mouth, for instance.
  • the tube boom can include a single noise-cancelling microphone on its inside.
  • the tube boom can further include a dual microphone inside of the tube.
  • a dual microphone can be more effective in cancelling out non-stationary noise, human noise, music, and high frequency noises.
  • a dual microphone can be more suitable for mobile communication, speech recognition, or a Bluetooth headset.
  • the two microphones can be identical, however a person of ordinary skill in the art can also design a tube boom having microphones of different models.
  • the two microphones, enclosed in their respective rubber boots, are placed in series along the inside of the tube.
  • the tube can have a cylindrical shape, although other shapes are possible (e.g., a rectangular prism, etc.).
  • the short tube boom can have two openings, one at the tip, and a second at the back.
  • the tube surface can be covered with a pattern of one or more holes or slits to allow sound to reach the microphone inside the tube boom.
  • the short tube boom can have three openings, one at the tip, another in the middle, and another in the back.
  • the openings can be equally spaced; however, a person of ordinary skill in the art can design other spacings.
  • the microphone in the tube boom is a bi-directional noise-cancelling microphone having pressure-gradient microphone elements.
  • the microphone can be enclosed in a rubber boot that extends the acoustic ports on the front and back sides of the microphone with acoustic ducts. Inside of the boot, the microphone element is sealed in the air-tight rubber boot.
  • the microphone with the rubber boot is placed along the inside of the tube.
  • an acoustic port at the tube tip aligns with the boom opening at the tip, and an acoustic port at the tube back aligns with the boom opening at the back.
  • the rubber boot can be offset from the tube ends to allow for spacing between the tube ends and the rubber boot. The spacing further allows breathing room and space to place a wind-screen of appropriate thickness.
  • the seal between the rubber boot and the inner wall of the tube remains air-tight, however.
  • a wind-screen foam material (e.g., wind guard sleeves over the rubber boot) fills the air duct and the open space between the acoustic port and the tube interior/opening.
  • the eye-glasses 302 have two microphones 304 and 306 , a first microphone 304 being arranged in the middle of the eye-glasses 302 frame and a second microphone 306 being arranged on the side of the eye-glasses 302 frame.
  • the microphones 304 and 306 can be pressure-gradient microphone elements, either bi- or uni-directional.
  • Each microphone 304 and 306 is a microphone assembly that includes a microphone (not shown) within a rubber boot, as further described infra with reference to FIGS. 5A-5B .
  • the rubber boot provides an acoustic port on the front and the back side of the microphone with acoustic ducts.
  • the two microphones 304 and 306 and their respective boots can be identical.
  • the microphone elements 304 and 306 can be sealed air-tight (e.g., hermetically sealed) inside the rubber boots.
  • the acoustic ducts are filled with wind-screen material.
  • the ports are sealed with woven fabric layers.
  • the lower and upper acoustic ports are sealed with a water-proof membrane.
  • the microphones can be built into the structure of the eye glasses frame. Each microphone has top and bottom holes, being acoustic ports.
  • the two microphones 304 and 306, which can be pressure-gradient microphone elements, can each be replaced by two omni-directional microphones.
  • FIG. 4 is an illustration of another embodiment of an eyewear sound induction ear speaker device 450 of the invention.
  • eyewear sound induction ear speaker 450 includes eye-glasses 452 having three embedded microphones.
  • the eye-glasses 452 of FIG. 4 are similar to the eye-glasses 302 of FIG. 3, but employ three microphones instead of two.
  • the eye-glasses 452 of FIG. 4 have a first microphone 454 arranged in the middle of the eye-glasses 452, a second microphone 456 arranged on the left side of the eye-glasses 452, and a third microphone 458 arranged on the right side of the eye-glasses 452.
  • the three microphones can be employed in the three-microphone embodiment described above.
  • FIG. 5A is an exploded view of a microphone assembly 500 of the invention.
  • the rubber boot 502 a - b is separated into a first half of the rubber boot 502 a and a second half of the rubber boot 502 b .
  • Microphone 504 is between the rubber boot halves.
  • each rubber boot half 502 a-b is lined with a wind-screen material 508; however, FIG. 5A shows the wind-screen only in the second half of the rubber boot 502 b.
  • the air duct and the open space between the acoustic port and boom interior are filled with wind-screen foam material, such as wind guard sleeves over the rubber boots.
  • a microphone 504 is arranged to be placed between the two halves of the rubber boot 502 a-b.
  • the microphone 504 and rubber boot 502 a - b are sized such that the microphone 504 fits in a cavity within the halves of the rubber boot 502 a - b .
  • the microphone is coupled with a wire 506 that extends out of the rubber boot 502 a-b and can be connected to, for instance, the noise cancellation circuit described below.
  • FIG. 5B is a perspective view of microphone assembly 500 when assembled.
  • the rubber boot 552 of FIG. 5B is shown to have both halves 502 a-b joined together, where a microphone (not shown) is inside.
  • a wire 556 coupled to the microphone exits the rubber boot 552 such that it can be connected to, for instance, the noise cancellation circuit described below with reference to FIGS. 7 through 10.
  • FIG. 6 is an illustration of an embodiment of the invention 600 showing various optional positions of placement of the microphones 604 a - e .
  • the microphones are pressure-gradient.
  • microphones can be placed in any of the locations shown in FIG. 6 , or any combination of the locations shown in FIG. 6 .
  • the microphone closest to the user's mouth is referred to as MIC1
  • the microphone further from the user's mouth is referred to as MIC2.
  • both MIC1 & MIC2 can be inline at position 1 604 a .
  • the microphones can be positioned as follows:
  • if position 4 604 d has a microphone, it is employed within a pendant.
  • the microphones can also be employed at other combinations of positions 604 a - e , or at positions not shown in FIG. 6 .
  • FIG. 7 is a block diagram 700 illustrating an example embodiment of a noise cancellation circuit employed in the present invention.
  • Signals 710 and 712 from two microphones are digitized and fed into the noise cancelling circuit 701 .
  • the noise cancelling circuit 701 can be a digital signal processing (DSP) unit (e.g., software executing on a processor, hardware block, or multiple hardware blocks).
  • the noise cancellation circuit 701 can be a digital signal processing (DSP) chip, a system-on-a-chip (SOC), a Bluetooth chip, a voice CODEC with DSP chip, etc.
  • the noise cancellation circuit 701 can be located in a Bluetooth headset near the user's ear, in an inline control case with battery, or inside the connector, etc.
  • the noise cancellation circuit 701 can be powered by a battery or by a power source of the device that the headset is connected to, such as the device's battery, or power from a USB, micro-USB, or Lightning connector.
  • the noise cancellation circuit 701 includes four functional blocks all of which are electronically linked, either wirelessly or by hardwire: a beam-forming (BF) module 702 , a Desired Voice Activity Detection (VAD) Module 708 , an adaptive noise cancellation (ANC) module 704 and a single signal noise reduction (NR) module 706 .
  • the two signals 710 and 712 are fed into the BF module 702 , which generates a main signal 730 and a reference signal 732 to the ANC module 704 .
  • a closer (i.e., relatively close to the desired sound) microphone signal 710 is collected from a microphone closer to the user's mouth, and a further (i.e., relatively distant from the desired sound) microphone signal 712 is collected from a microphone further from the user's mouth.
  • the BF module 702 also generates a main signal 720 and reference signal 722 for the desired VAD module 708 .
  • the main signal 720 and reference signal 722 can, in certain embodiments, be different from the main signal 730 and reference signal 732 generated for the ANC module 704.
  • the ANC module 704 processes the main signal 730 and the reference signal 732 to cancel out noises from the two signals and output a noise cancelled signal 742 to the single channel NR module 706 .
  • the single signal NR module 706 post-processes the noise cancelled signal 742 from the ANC module 704 to remove any further residue noises.
  • the VAD module 708 derives, from the main signal 720 and reference signal 722 , a desired voice activity detection (DVAD) signal 740 that indicates the presence or absence of speech in the main signal 720 and reference signal 722 .
  • the DVAD signal 740, derived from the output of the BF module 702, can then be used to control the ANC module 704 and the NR module 706.
  • the DVAD signal 740 indicates to the ANC module 704 and the Single Channel NR module 706 which sections of the signal have voice data to analyze, which can increase the efficiency of processing of the ANC module 704 and single channel NR module 706 by ignoring sections of the signal without voice data. Desired speech signal 744 is generated by single channel NR module 706 .
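  • To make this signal flow concrete, the following sketch wires the four blocks together. It is only an illustration of the described topology, not the patent's implementation; the four callables (bf, anc, nr, dvad) are hypothetical stand-ins for the modules of FIG. 7.

```python
import numpy as np

def noise_cancelling_circuit(closer_mic, further_mic, bf, anc, nr, dvad):
    """Wire the four blocks of FIG. 7: BF -> (ANC, DVAD) -> NR.

    closer_mic, further_mic: 1-D arrays of digitized microphone samples.
    bf, anc, nr, dvad: callables standing in for the four modules.
    """
    # Beam-forming produces one main/reference pair for the ANC module
    # and a second (low-pass filtered) pair for the desired-VAD module.
    main_anc, ref_anc, main_vad, ref_vad = bf(closer_mic, further_mic)

    # The desired voice activity detector flags sections that contain speech.
    speech_flags = dvad(main_vad, ref_vad)

    # Adaptive noise cancellation processes the ANC pair; per the text, the
    # speech flags tell it which sections contain voice data to analyze.
    noise_cancelled = anc(main_anc, ref_anc, speech_flags)

    # Single-channel noise reduction removes residual noise, also gated.
    desired_speech = nr(noise_cancelled, speech_flags)
    return desired_speech
```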
  • the BF module 702, ANC module 704, single channel NR module 706, and desired VAD module 708 employ linear processing (e.g., linear filters).
  • a linear system (which employs linear processing) satisfies the properties of superposition and scaling or homogeneity.
  • the property of superposition means that the response of the system to a sum of inputs is the sum of its responses to each input taken separately.
  • a function F(x) is a linear system if:
  • F(x1 + x2 + . . . ) = F(x1) + F(x2) + . . .
  • a system satisfies the property of scaling or homogeneity of degree one if the output scales proportionally to the input.
  • a function F(x) satisfies the property of scaling or homogeneity if, for a scalar α: F(α·x) = α·F(x)
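  • As a concrete illustration of these two properties, the short check below verifies numerically that an FIR filter, a linear time-invariant operation, satisfies both superposition and scaling. The filter coefficients are arbitrary and chosen only for the demonstration.

```python
import numpy as np

def fir(x, h=np.array([0.5, 0.3, 0.2])):
    """A simple FIR filter: linear, time-invariant processing."""
    return np.convolve(x, h, mode="full")

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
alpha = 2.5

# Superposition: F(x1 + x2) == F(x1) + F(x2)
assert np.allclose(fir(x1 + x2), fir(x1) + fir(x2))

# Scaling (homogeneity): F(alpha * x1) == alpha * F(x1)
assert np.allclose(fir(alpha * x1), alpha * fir(x1))
```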
  • Prior noise cancellation systems employ non-linear processing.
  • by using linear processing, increasing the input changes the output proportionally.
  • with non-linear processing, increasing the input changes the output non-proportionally.
  • Using linear processing provides an advantage for speech recognition by improving feature extraction.
  • speech recognition algorithms are typically developed using noiseless voice recorded in a quiet environment with no distortion.
  • a linear noise cancellation algorithm does not introduce nonlinear distortion to noise cancelled speech.
  • Speech recognition can deal with linear distortion on speech, but not non-linear distortion of speech.
  • a linear noise cancellation algorithm is “transparent” to the speech recognition engine. Training speech recognition on the many variations of non-linearly distorted speech is impractical, and non-linear distortion can disrupt the feature extraction necessary for speech recognition.
  • a Wiener filter is used to produce an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known stationary signal and noise spectra and additive noise.
  • the Wiener filter minimizes the mean square error between the estimated random process and the desired process.
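  • A minimal frequency-domain sketch of that idea follows, applying the classical Wiener gain S/(S + N) per frequency bin under the assumption that the stationary signal and noise spectra are known. The toy signal, noise level, and function name are illustrative only.

```python
import numpy as np

def wiener_denoise(noisy, signal_psd, noise_psd):
    """Apply the classical Wiener gain H = S / (S + N) per frequency bin,
    assuming the (stationary) signal and noise power spectra are known."""
    spectrum = np.fft.rfft(noisy)
    gain = signal_psd / (signal_psd + noise_psd + 1e-12)
    return np.fft.irfft(gain * spectrum, n=len(noisy))

# Toy example: a sine wave (known PSD) buried in white noise (flat PSD).
fs, n = 8000, 1024
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(n)

signal_psd = np.abs(np.fft.rfft(clean)) ** 2 / n   # assumed known here
noise_psd = np.full(n // 2 + 1, 0.25)              # white noise variance
estimate = wiener_denoise(noisy, signal_psd, noise_psd)
```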
  • FIG. 8 is a block diagram 800 illustrating an example embodiment of a beam-forming module 802 that can be employed in the noise cancelling circuit 701 of FIG. 7 .
  • the BF module 802 receives the closer microphone signal 810 and further microphone signal 812 .
  • a further microphone signal 812 is inputted to a frequency response matching filter 804 .
  • the frequency response matching filter 804 adjusts gain, phase, and shapes the frequency response of the further microphone signal 812 .
  • the frequency response matching filter 804 can adjust the signal for the distance between the two microphones, such that an outputted reference signal 832 representative of the further microphone signal 812 can be processed with the main signal 830 , representative of the closer microphone signal 810 .
  • the main signal 830 and reference signal 832 are sent to the ANC module.
  • a closer microphone signal 810 is outputted to the ANC module as a main signal 830 .
  • the closer microphone signal 810 is also inputted to a low-pass filter 806 .
  • the reference signal 832 is inputted to a low-pass filter 808 to create a reference signal 822 sent to the Desired VAD module.
  • the low-pass filters 806 and 808 adjust the signal for a “close talk case” by, for example, having a gradual roll-off from 2 kHz to 4 kHz, in one embodiment. Other frequencies can be used for different designs and distances of the microphones to the user's mouth, however.
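  • The following sketch mirrors the FIG. 8 structure under simplifying assumptions: the frequency response matching filter is reduced to a gain-and-delay stand-in, and the low-pass filters use the 2 kHz example cut-off mentioned above. All parameter values and helper names are hypothetical.

```python
import numpy as np
from scipy.signal import butter, lfilter

def beam_former(closer, further, fs=16000, match_gain=1.0, match_delay=0):
    """Sketch of the FIG. 8 beam-forming module.

    Returns (main_anc, ref_anc, main_vad, ref_vad)."""
    # Frequency response matching filter, reduced here to gain + delay;
    # a real design would also shape the further microphone's response.
    matched = match_gain * np.roll(further, match_delay)
    main_anc, ref_anc = closer, matched

    # Low-pass filters emphasize the close-talk band for the VAD path
    # (the text gives a gradual roll-off from 2 kHz to 4 kHz as an example).
    b, a = butter(2, 2000 / (fs / 2))
    main_vad = lfilter(b, a, closer)
    ref_vad = lfilter(b, a, matched)
    return main_anc, ref_anc, main_vad, ref_vad
```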
  • FIG. 9 is a block diagram illustrating an example embodiment of a Desired Voice Activity Detection Module 902 .
  • the DVAD module 902 receives a main signal 920 and a reference signal 922 from the beam-forming module.
  • the main signal 920 and reference signal 922 are processed by respective short-time power modules 904 and 906 .
  • the short-time power modules 904 and 906 can include a root mean square (RMS) detector, a power (PWR) detector, or an energy detector.
  • the short-time power modules 904 and 906 output signals to respective amplifiers 908 and 910 .
  • the amplifiers can be logarithmic converters (or log/logarithmic amplifiers).
  • the logarithmic converters 908 and 910 output to a combiner 912 .
  • the combiner 912 is configured to combine signals, such as the main signal and one of the at least one reference signals, to produce a voice activity difference signal by subtracting the detection(s) of the reference signal from the main signal (or vice-versa).
  • the voice activity difference signal is inputted into a single channel VAD module 914 .
  • the single channel VAD module can be a conventional VAD module.
  • the single channel VAD 914 outputs the desired voice activity signal.
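  • A compact sketch of the FIG. 9 chain follows, assuming frame-based processing. The RMS power detector, logarithmic conversion, subtraction, and a simple threshold standing in for the single-channel VAD are illustrative choices, not the patent's specific implementation.

```python
import numpy as np

def desired_vad(main, reference, frame=256, threshold_db=3.0):
    """FIG. 9 sketch: short-time power -> log -> subtract -> single-channel VAD.

    Returns one boolean per frame: True where desired (near-field) speech
    dominates, i.e. where the main channel is louder than the reference."""
    n_frames = min(len(main), len(reference)) // frame
    flags = []
    for i in range(n_frames):
        seg_m = main[i * frame:(i + 1) * frame]
        seg_r = reference[i * frame:(i + 1) * frame]
        # Short-time power (RMS) of each channel for this frame.
        rms_m = np.sqrt(np.mean(seg_m ** 2) + 1e-12)
        rms_r = np.sqrt(np.mean(seg_r ** 2) + 1e-12)
        # Logarithmic conversion and subtraction give the level difference.
        diff_db = 20 * np.log10(rms_m) - 20 * np.log10(rms_r)
        # Stand-in for the single-channel VAD: threshold the difference.
        flags.append(diff_db > threshold_db)
    return np.array(flags)
```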
  • FIG. 10 is a block diagram 1000 illustrating an example embodiment of a noise cancellation circuit 1001 employed to receive a closer microphone signal 1010 and a first and second further microphone signal 1012 and 1014 , respectively.
  • the noise cancellation circuit 1001 is similar to the noise cancellation circuit 701 described in relation to FIG. 7; however, the noise cancellation circuit 1001 is employed to receive three signals instead of two.
  • a beam-forming (BF) module 1002 is arranged to receive the signals 1010 , 1012 and 1014 and output a main signal 1030 , a first reference signal 1032 and second reference signal 1034 to an adaptive noise cancellation module 1004 .
  • the beam-forming module is further configured to output a main signal 1022 , first reference signal 1020 and second reference signal 1024 to a voice activity detection (VAD) module 1008 .
  • the ANC module 1004 produces a noise cancelled signal 1042 to a Single Channel Noise Reduction (NR) module 1006, similar to the ANC module 704 of FIG. 7.
  • the single channel NR module 1006 then outputs desired speech 1044.
  • the VAD module 1008 outputs the DVAD signal to the ANC module 1004 and the single channel NR module 1006 .
  • FIG. 11 is an example embodiment of beam-forming from a boom tube 1102 housing three microphones 1106 , 1108 , and 1110 .
  • a first microphone 1106 is arranged closest to a tip 1104 of the boom tube 1102
  • a second microphone 1108 is arranged in the boom tube 1102 further away from the tip 1104
  • a third microphone 1110 is arranged in the boom tube 1102 even further away from the tip 1104 .
  • the first microphone 1106 and second microphone 1108 are arranged to provide data to output a left signal 1126 .
  • the first microphone is arranged to output its signal to a gain module 1112 and a delay module 1114 , which is outputted to a combiner 1122 .
  • the second microphone is connected directly to the combiner 1122 .
  • the combiner 1122 subtracts the two provided signals to cancel noise, which creates the left signal 1126 .
  • the second microphone 1108 is connected to a gain module 1116 and a delay module 1118 , which is outputted to a combiner 1120 .
  • the third microphone 1110 is connected directly to the combiner 1120 .
  • the combiner 1120 subtracts the two provided signals to cancel noise, which creates the right signal.
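  • The gain, delay, and combiner chain of FIG. 11 can be sketched as below. The default gain and delay values are placeholders only (in practice they would depend on microphone spacing and sampling rate), and the function names are hypothetical.

```python
import numpy as np

def gain_delay_subtract(front, rear, gain=1.0, delay_samples=1):
    """One FIG. 11 branch: scale and delay the front microphone's signal,
    then subtract the rear microphone's signal at the combiner."""
    delayed = np.zeros_like(front)
    delayed[delay_samples:] = front[:len(front) - delay_samples]
    return gain * delayed - rear

def boom_tube_beamformer(mic1, mic2, mic3):
    """Three in-line boom-tube microphones -> left and right signals.

    mic1 is closest to the tip; mic2 and mic3 are successively further back."""
    left = gain_delay_subtract(mic1, mic2)   # first/second microphone pair
    right = gain_delay_subtract(mic2, mic3)  # second/third microphone pair
    return left, right
```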
  • FIG. 12 is an example embodiment of beam-forming from a boom tube 1252 housing four microphones 1256 , 1258 , 1260 and 1262 .
  • a first microphone 1256 is arranged closest to a tip 1254 of the boom tube 1252
  • a second microphone 1258 is arranged in the boom tube 1252 further away from the tip 1254
  • a third microphone 1260 is arranged in the boom tube 1252 even further away from the tip 1254
  • a fourth microphone 1262 is arranged in the boom tube 1252 furthest away from the tip 1254.
  • the first microphone 1256 and second microphone 1258 are arranged to provide data to output a left signal 1286 .
  • the first microphone is arranged to output its signal to a gain module 1272 and a delay module 1274 , which is outputted to a combiner 1282 .
  • the second microphone is connected directly to the combiner 1282.
  • the combiner 1282 subtracts the two provided signals to cancel noise, which creates the left signal 1286 .
  • the third microphone 1260 is connected to a gain module 1276 and a delay module 1278 , which is outputted to a combiner 1280 .
  • the fourth microphone 1262 is connected directly to the combiner 1280 .
  • the combiner 1280 subtracts the two provided signals to cancel noise, which creates the right signal 1284 .
  • FIG. 13 is a block diagram 1300 illustrating an example embodiment of a beam-forming module 1302 accepting three signals 1310 , 1312 and 1314 .
  • a closer microphone signal 1310 is output as a main signal 1330 to the ANC module and also inputted to a low-pass filter 1317 , to be outputted as a main signal 1320 to the VAD module.
  • a first further microphone signal 1312 and a second further microphone signal 1314 are inputted to respective frequency response matching filters 1306 and 1304, the outputs of which are provided as a first reference signal 1332 and a second reference signal 1334 to the ANC module.
  • the outputs of the frequency response matching filters 1306 and 1304 are also outputted to low-pass filters 1316 and 1318, respectively, which output a first reference signal 1322 and second reference signal 1324, respectively, to the VAD module.
  • FIG. 14 is a block diagram 1400 illustrating an example embodiment of a desired voice activity detection (VAD) module 1402 accepting three signals 1420 , 1422 and 1424 .
  • the VAD module 1402 receives a main signal 1420 , a first reference signal 1422 and a second reference signal 1424 at short-time power modules 1404 , 1405 and 1406 , respectively.
  • the short-time power modules 1404 , 1405 , and 1406 are similar to the short-time power modules described in relation to FIG. 9 .
  • the short-time power modules 1404 , 1405 , and 1406 output to respective amplifiers 1408 , 1409 and 1410 , which can each be a logarithmic converter.
  • Amplifiers 1408 and 1409 output to a combiner module 1411 , which subtracts the two signals and outputs the difference to a single channel VAD module 1414 .
  • Amplifiers 1410 and 1408 output to a combiner module 1412 , which subtracts the two signals and outputs the difference to a single channel VAD module 1416 .
  • the single channel VAD modules 1414 and 1416 output to a logical OR-gate 1418, which outputs a DVAD signal 1440.
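  • Reusing a two-channel detector such as the one sketched after FIG. 9, the three-signal variant amounts to two pairwise detections combined by the OR-gate, as described above. This is again an illustrative sketch with hypothetical names.

```python
import numpy as np

def desired_vad_three(main, ref1, ref2, pairwise_vad):
    """FIG. 14 sketch: run a two-channel detector on (main, ref1) and on
    (main, ref2), then OR the per-frame decisions (the OR-gate 1418)."""
    flags_1 = pairwise_vad(main, ref1)
    flags_2 = pairwise_vad(main, ref2)
    n = min(len(flags_1), len(flags_2))
    return np.logical_or(flags_1[:n], flags_2[:n])

# Usage idea, with the earlier two-channel sketch as the pairwise detector:
# dvad_flags = desired_vad_three(main_sig, ref_sig_1, ref_sig_2, desired_vad)
```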
  • Further example embodiments of the present invention may be configured using a computer program product; for example, controls may be programmed in software for implementing example embodiments of the present invention. Further example embodiments of the present invention may include a non-transitory computer readable medium containing instructions that may be executed by a processor, and, when executed, cause the processor to complete methods described herein. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other similar implementation determined in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein.
  • the software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read only memory (CD-ROM), and so forth.
  • a general purpose or application specific processor loads and executes software in a manner well understood in the art.
  • the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention.

Abstract

An eyewear sound induction ear speaker device includes an eyewear frame, at least one speaker including an audio channel integrated with the eyewear frame and an acoustic duct coupled to the speaker and arranged to channel sound emitted by the speaker to an ear of the user wearing the eyewear frame. A method of providing sound for eyewear includes receiving an audio signal at a speaker integrated with an eyewear frame, inducing the speaker to produce an acoustic sound, and channeling the sound through an acoustic duct to be presented to a user wearing the eyewear frame. The eyewear can include microphones and at least one of a receiver and a transmitter that are integral to the eyewear frame and electronically linked to the at least one speaker. The microphones can be employed in noise cancellation.

Description

    RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/780,108, filed on Mar. 13, 2013. This application also claims the benefit of U.S. Provisional Application No. 61/839,227, filed on Jun. 25, 2013. This application also claims the benefit of U.S. Provisional Application No. 61/839,211, filed on Jun. 25, 2013. This application also claims the benefit of U.S. Provisional Application No. 61/912,844, filed on Dec. 6, 2013.
  • This application is being co-filed on the same day, Feb. 14, 2014, with “Eye Glasses With Microphone Array” by Dashen Fan, Attorney Docket No. 0717.2220-001. This application is being co-filed on the same day, Feb. 14, 2014, with “Eyewear Spectacle With Audio Speaker In The Temple” by Kenny W. Y. Chow, et al., Attorney Docket No. 0717.2229-001. This application is being co-filed on the same day, Feb. 14, 2014, with “Noise Cancelling Microphone Apparatus” by Dashen Fan, Attorney Docket No. 0717.2216-001.
  • The entire teachings of the above applications are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • Traditionally, earphones have been used to present acoustic sounds to an individual when privacy is desired or it is desired not to disturb others. Examples of traditional earphone devices include over-the-head headphones having an ear cup speaker (e.g., Beats® by Dr. Dre headphones), ear bud style earphones (e.g., Apple iPod® earphones and Bluetooth® headsets), and bone-conducting speakers (e.g., Google Glass). Another known way to achieve the desired privacy or peace and quiet for others is by using directional multi-speaker beam-forming. Also well-known, but not conventionally used to present acoustic sounds to an individual who is not hearing-impaired, are hearing aids, an example of which is the open-ear mini-Behind-the-Ear (BTE) with Receiver-In-The-Aid (RITA) device. Such a hearing aid typically includes a clear “hook” that acts as an acoustic duct tube to channel audio speaker (also referred to as a receiver in telephony applications) sound to the inner ear of a user and acts as the mechanical support so that the user can wear the hearing aid, the speaker being housed in the behind-the-ear portion of the hearing aid body. However, the aforementioned techniques all have drawbacks, namely, they are either bulky, cumbersome, unreliable, or immature.
  • Therefore, a need exists for earphones that overcome or minimize the above-referenced problems.
  • SUMMARY OF THE INVENTION
  • The present invention relates in general to eyewear, and more particularly to eyewear devices and corresponding methods for presenting sound to a user of the eyewear.
  • In one embodiment, an eyewear sound induction ear speaker device of the invention includes an eyewear frame, a speaker including an audio channel integrated with the eyewear frame, and an acoustic duct coupled to the speaker and arranged to channel sound emitted by the speaker to an ear of the user wearing the eyewear frame.
  • In another embodiment, the invention is an eyewear sound induction ear speaker device that includes means for receiving an audio sound, means for processing and amplifying the audio sound, and means for channeling the amplified and processed audio sound to an ear of a user wearing an eyewear frame.
  • In still another embodiment, the invention is a method of providing sound for eyewear, including the steps of receiving a processed electrical audio signal at a speaker integrated with an eyewear frame, wherein the speaker includes an audio channel. The speaker is induced to produce acoustic sound at the audio channel, and the acoustic sound is channeled through an acoustic duct to be presented to a user wearing the eyewear frame.
  • In yet another embodiment, the invention is a method of channeling sound from an eyewear device that includes the steps of receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device, inducing audible sound from the electrical audio signal at the speaker, and channeling the audio sound to an ear of the user of the eyewear using the audio duct, the audio duct not blocking the ear canal of the ear.
  • The present invention has many advantages. For example, the eyewear spectacle of the invention is relatively compact, unobtrusive, and durable. Further, the device and method can be integrated with noise cancellation apparatus and methods that are also, optionally, components of the eyewear itself. In one embodiment, the noise cancellation apparatus, including microphones, electrical circuitry, and software, can be integrated with and, optionally, carried on board the eyewear worn by the user. In another embodiment, microphones mounted on board the eyewear can be integrated with the speakers and with circuitry, such as a computer, receiver, or transmitter, to thereby process signals received from an external source or the microphones, or to process and transmit signals from the microphone, and to selectively transmit those signals, whether processed or unprocessed, to the user of the eyewear through the speakers mounted in the eyewear. For example, human-machine interaction through the use of a speech recognition user interface is becoming increasingly popular. To facilitate such human-machine interaction, accurate recognition of speech is useful. It is also useful for a machine to be able to present information to the user through spoken words, for example by reading a text to the user. Such a machine output presentation facilitates hands-free activities of a user, which is increasingly popular. Users also do not have to hold a speaker or device in place, nor do they need to have electronics behind their ear or an ear bud blocking their ear. There are also no flimsy wires, and users do not have to tolerate the skin contact or pressure associated with bone conduction speakers.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a side view of one embodiment of an eyewear sound induction ear speaker device of the invention
  • FIG. 1B is a perspective view of the embodiment of the invention shown in FIG. 1A
  • FIG. 1C is a cross-sectional view of one embodiment of an acoustic duct of the embodiment of the invention shown in FIG. 1A
  • FIG. 2 is a cross-sectional view of an alternative embodiment of an acoustic duct of the invention
  • FIG. 3 is an illustration of an embodiment of an eyewear and sound induction ear speaker device of the invention that includes two remote microphones that are electronically linked with the eyewear frame of the eyewear sound induction ear speaker device.
  • FIG. 4 is an illustration of another embodiment of eyewear of the invention that includes three remote microphones.
  • FIG. 5A is an exploded view of a rubber boot and microphone according to one embodiment of the invention.
  • FIG. 5B is a perspective view of the assembled rubber boot shown in FIG. 5A.
  • FIG. 6 is a representation of another embodiment of the invention showing alternate and optional positions of placements of the microphones.
  • FIG. 7 is an embodiment of a noise cancellation circuit employed in one embodiment of the eyewear sound induction user speaker device of the invention.
  • FIG. 8 is an illustration of a beam-forming module suitable for use in the embodiment of the invention illustrated in FIG. 7.
  • FIG. 9 is a block diagram illustrating an example embodiment of a desired voice activity detection module employed in another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 10 is a block diagram illustrating an example embodiment of a noise cancellation circuit employed in an embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 11 is an example embodiment of a boom tube housing three microphones, in an arrangement of one embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 12 is an example embodiment of a boom tube housing four microphones in an arrangement of another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 13 is a block diagram illustrating an example embodiment of a beam-forming module accepting three signals and another embodiment of the eyewear sound induction ear speaker device of the invention.
  • FIG. 14 is a block diagram illustrating an example embodiment of a desired voice activity detection module of yet another embodiment of the eyewear sound induction ear speaker device of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like references refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
  • The terms “speaker” and “audio speaker” are used interchangeably throughout the present application and are used to refer to a small (relative to the size of a human ear), narrow band (e.g., voice-band, for example 300 Hz-20 kHz) speaker or receiver that converts electrical signals at audio frequencies into acoustic signals.
  • In one embodiment of the present invention, shown in FIGS. 1A and 1B, eyewear 10, for example a pair of prescription glasses, includes eyewear frame 14. Audio speaker 16 is integrated with eyewear frame 14, and acoustic duct 18 is coupled to audio speaker 16 and arranged to channel sound emitted by speaker 16 to an ear of a user wearing eyewear frame 14. Speaker 16 is operatively linked, for example, to embedded receiver 28. “Operatively coupled,” as that term is used herein, means electronically linked, such as by a wireless or hardwire connection.
  • Acoustic duct 18 can be made from a pliable material and further arranged such that acoustic duct 18 does not block the ear canal of the user 20. Acoustic duct 18 can include point 22 and be horn-shaped, as shown in FIG. 1A. A cross-section of acoustic duct 18, shown in FIG. 1C, can be oval-shaped or, as shown in FIG. 2, acoustic duct 20 can be rectangular shaped. Acoustic duct 18 does not have to be weight-bearing and is not designed to be weight-bearing since eyewear frame 14 is used to support the weight.
  • As shown in FIG. 1B, eyewear 10 further includes a second acoustic duct 22 and second audio speaker 24 coupled to audio duct 26 and, like speaker 16, operatively coupled to an electrical audio source, such as receiver 28, which is integrated within eyewear frame 14. Second acoustic duct 22 is coupled to second audio speaker 24 and is arranged to channel sound emitted by audio speaker 24 to a second ear of a user wearing eyewear frame 14. In an alternative example embodiment, speaker 24 can further include a second audio channel, the second audio channel being coupled through second speaker 24 to second acoustic duct 22 to provide stereo sound to the user wearing eyewear frame 14.
  • Receiver 28 is operatively coupled to first speaker 16, either alone or in conjunction with second audio speaker 24. Receiver 28 can be a wired or a wireless receiver, and receive an electrical audio signal from any electrical audio source. For example, the receiver can be operatively coupled to a 3.5 mm audio jack, Bluetooth wireless radio, memory storage device, or other such source. The wireless receiver can include an audio codec, digital signal processor, and amplifiers; the audio codec can be coupled to the audio speaker and to at least one microphone. The microphone can be an analog microphone coupled to an analog-to-digital (A/D) converter, which can in turn be coupled to a DSP. The audio microphone can be a micro-electro-mechanical system (MEMS) microphone. Further example embodiments can include a digital microphone, such as a digital MEMS microphone, coupled to an all-digital voice processing chip, obviating the need for a CODEC altogether. The speaker can be driven by a digital-to-analog (D/A) driver, or can be driven by a pulse width modulation (PWM) digital signal.
  • Alternatively, or in addition to receiver 28, eyewear 10 of the invention can include a transmitter, whereby sounds captured electronically by microphones of eyewear 10 are processed for transmission to an external receiver or to at least one of audio speakers 16 and 24.
  • An example method of the present invention includes channeling sound from an eyewear device. The method includes receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device, inducing audio sound from the electrical audio signal at the speaker, and channeling the audio sound to an ear of a user of the eyewear using an audio duct, the audio duct not blocking an ear canal of the ear. The electrical audio signal can be supplied from any electrical audio source, for example, a 3.5 mm audio-jack, Bluetooth® wireless radio, and a media storage device, such as a hard disk or solid-state memory device.
  • A corresponding example method of providing sound for eyewear 10 can include: receiving and electrically processing sound at at least one of speakers 16 and 24, integrated with eyewear frame 14, speakers 16 and 24 including audio channels; inducing the electrically processed sound at audio speakers 16 and 24 integrated within eyewear frame 14 to produce acoustic processed sound; and channeling the acoustic processed sound through acoustic ducts 18, 22 to be presented to a user wearing eyewear 10.
  • Example methods can further include arranging at least one of acoustic ducts 18, 30 such that at least one of acoustic ducts 18, 22 does not block the ear canal of the user, acoustic ducts 18, 22 being comprised of a pliable non-load bearing material.
  • The processing can include preamplifying sound received from a wired or wireless receiver 28 using a pre-amplifier (not shown), further processing the amplified sound using a digital signal processor (not shown), converting the further processed sound into an analog signal, and postamplifying the analog signal to produce the electrically processed sound. The processing can further include processing sound in a second audio channel, inducing the electrically processed sound of the second audio channel at second audio speaker 24 integrated with eyewear frame 14 to produce stereo acoustic processed sound, and channeling the second acoustic processed sound through second acoustic duct 30 to present stereo sound to a user wearing eyewear frame 14.
  • In a further example embodiment, an eyewear sound induction ear speaker device can include a means for receiving an audio sound, a means for amplifying and processing the audio sound, and a means for channeling the audio sound to an ear of a user wearing an eyewear frame.
  • Horn-shaped acoustic ducts 18, 22 will amplify the sound coming out of respective speakers 16, 24, thereby bringing the sound to the user's ear and increasing the effective sound volume. The larger acoustic port can be oval-shaped, thinner in the thickness dimension of the acoustic duct wall, to better fit the ear, or can act as a clamp-shaped position holder.
  • FIG. 3 is an illustration of an example embodiment of an eyewear sound induction speaker device of the invention 300. As shown in FIG. 3, the eyewear sound induction speaker device of the invention includes eye-glasses 302 having embedded pressure-gradient microphones. Each pressure-gradient microphone element can, optionally and independently, be replaced with two omni-directional microphones at the location of each acoustic port, resulting in four total microphones. The signals from these two omni-directional microphones can be processed by the electronic or digital beam-forming circuitry described above to produce a pressure gradient beam pattern. This pressure gradient beam pattern replaces the equivalent pressure-gradient microphone.
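  • The substitution of two omni-directional microphones for a single pressure-gradient element can be illustrated numerically. The following sketch is illustrative only (the port spacing, test frequency, and function names are assumptions): it computes the magnitude response of a front-minus-delayed-rear combination versus arrival angle, which gives the figure-eight (pressure-gradient) pattern when no internal delay is applied and a cardioid when the internal delay matches the acoustic spacing.

```python
# Illustrative sketch: a first-order differential (pressure-gradient-like)
# beam pattern formed from two omni-directional microphone ports.
import numpy as np

def first_order_pattern(theta, freq, spacing_m=0.01, internal_delay_s=0.0, c=343.0):
    """Magnitude response of (front omni) - (internally delayed rear omni)
    for a plane wave arriving from angle theta (0 = on-axis)."""
    tau_acoustic = spacing_m * np.cos(theta) / c           # travel time between the two ports
    w = 2.0 * np.pi * freq
    return np.abs(1.0 - np.exp(-1j * w * (tau_acoustic + internal_delay_s)))

angles = np.linspace(0.0, 2.0 * np.pi, 360)
figure_eight = first_order_pattern(angles, freq=1000.0)                    # bi-directional
cardioid = first_order_pattern(angles, freq=1000.0,
                               internal_delay_s=0.01 / 343.0)              # uni-directional
```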
  • In an embodiment of the present invention, if a pressure-gradient microphone is employed, each microphone is within a rubber boot that extends an acoustic port on the front and the back side of the microphone with acoustic ducts. At the end of the rubber boot, the new acoustic port is aligned with the opening in the tube, where the empty space is filled with wind-screen material. If two omni-directional microphones are employed in place of one pressure-gradient microphone, then the acoustic port of each microphone is aligned with the opening.
  • In an embodiment, a long boom dual-microphone headset can look like a conventional close-talk boom microphone, but has a big boom with two microphones in parallel. An end microphone of the boom is placed in front of the user's mouth. The close-talk long boom dual-microphone design targets heavy-noise usage in military, aviation, and industrial settings and has unparalleled noise cancellation performance. For example, one main microphone can be positioned directly in front of the mouth. A second microphone can be positioned at the side of the mouth. The two microphones can be identical with identical casing. The two microphones can be placed in parallel, perpendicular to the boom. Each microphone has front and back openings. DSP circuitry can be in the housing between the two microphones.
  • The microphone is housed in a rubber or silicone holder (e.g., the rubber boot) with an air duct extending to the acoustic ports as needed. The housing keeps the microphone in an air-tight container and provides shock absorption. The microphone front and back ports are covered with a wind-screen layer made of woven fabric layers or wind-screen foam material to reduce wind noise. The outlet holes on the microphone plastic housing can be covered with water-resistant thin film material or a special water-resistant coating.
  • In another embodiment, a conference gooseneck microphone can provide noise cancellation. In a large conference hall, echoes can be a problem for sound recording. Echoes recorded by a microphone can cause howling. Severe echo prevents the user from turning up speaker volume and causes limited audibility. Conference halls and conference rooms can be decorated with expensive sound-absorbing materials on their walls to reduce echo, achieve higher speaker volume, and provide an even distribution of the sound field across the entire audience. Electronic echo cancellation equipment is used to reduce echo and increase speaker volume, but such equipment is expensive, can be difficult to set up, and often requires an acoustic expert.
  • In an embodiment, a dual-microphone noise cancellation conference microphone can provide an inexpensive, easy to implement solution to the problem of echo in a conference hall or conference room. The dual-microphone system described above can be placed in a desktop gooseneck microphone. Each microphone in the tube is a pressure-gradient bi-directional, uni-directional, or super-directional microphone.
  • In a head mounted computer, a user can desire a noise-canceling close-talk microphone without a boom microphone in front of his or her mouth. A microphone in front of the user's mouth can be viewed as annoying. In addition, moisture from the user's mouth can condense on the surface of the Electret Condenser Microphone (ECM) membrane, which after long usage can degrade microphone sensitivity.
  • In an embodiment, a short tube boom headset can solve these problems by shortening the boom, moving the ECM away from the user's mouth, and using a rubber boot to extend the acoustic port of the noise-canceling microphone. This can extend the effective close-talk range of the ECM and maintains the noise-canceling ECM property for far-away noises. In addition, the boom tube can be lined with wind-screen foam material. This solution further allows the headset computer to be suitable for enterprise call center, industrial, and general mobile usage. In an embodiment with identical dual-microphones within the tube boom, the respective rubber boots of each microphone can also be identical.
  • In an embodiment, the short tube boom headset can be a wired or wireless headset. The headset includes the short microphone (e.g., an ECM) tube boom. The tube boom can extend from the housing of the headset along the user's cheek, where the tube boom is either straight or curved. The tube boom can extend the length of the cheek to the side of the user's mouth, for instance. The tube boom can include a single noise-cancelling microphone on its inside.
  • The tube boom can further include a dual microphone inside of the tube. A dual microphone can be more effective in cancelling out non-stationary noise, human noise, music, and high frequency noises. A dual microphone can be more suitable for mobile communication, speech recognition, or a Bluetooth headset. The two microphones can be identical; however, a person of ordinary skill in the art can also design a tube boom having microphones of different models.
  • In an embodiment having dual-microphones, the two microphones enclosed in their respective rubber boots are placed in series along the inside of the tube.
  • The tube can have a cylindrical shape, although other shapes are possible (e.g., a rectangular prism, etc.). The short tube boom can have two openings, one at the tip and a second at the back. The tube surface can be covered with a pattern of one or more holes or slits to allow sound to reach the microphone inside the tube boom. In another embodiment, the short tube boom can have three openings, one at the tip, another in the middle, and another at the back. The openings can be equally spaced; however, a person of ordinary skill in the art can design other spacings.
  • The microphone in the tube boom is a bi-directional noise-cancelling microphone having pressure-gradient microphone elements. The microphone can be enclosed in a rubber boot extending an acoustic port on the front and the back side of the microphone with acoustic ducts. Inside the boot, the microphone element is sealed in the air-tight rubber boot.
  • Within the tube, the microphone with the rubber boot is placed along the inside of the tube. An acoustic port at the tube tip aligns with a boom opening, and an acoustic port at the tube back aligns with a boom opening. The rubber boot can be offset from the tube ends to allow for spacing between the tube ends and the rubber boot. The spacing further allows breathing room and room to place a wind-screen of appropriate thickness. The rubber boot and inner wall of the tube remain air-tight, however. A wind-screen foam material (e.g., wind guard sleeves over the rubber boot) fills the air duct and the open space between the acoustic port and the tube interior/opening.
  • Referring back to FIG. 3, the eye-glasses 302 have two microphones 304 and 306, a first microphone 304 being arranged in the middle of the eye-glasses 302 frame and a second microphone 306 being arranged on the side of the eye-glasses 302 frame. The microphones 304 and 306 can be pressure-gradient microphone elements, either bi- or uni-directional. Each microphone 304 and 306 is a microphone assembly that includes a microphone (not shown) within a rubber boot, as further described infra with reference to FIGS. 5A-5B. The rubber boot provides an acoustic port on the front and the back side of the microphone with acoustic ducts. The two microphones 304 and 306 and their respective boots can be identical. The microphone elements 304 and 306 can be sealed air-tight (e.g., hermetically sealed) inside the rubber boots. The acoustic ducts are filled with wind-screen material. The ports are sealed with woven fabric layers. The lower and upper acoustic ports are sealed with a water-proof membrane. The microphones can be built into the structure of the eye-glasses frame. Each microphone has top and bottom holes, which are acoustic ports. In an embodiment, the two microphones 304 and 306, which can be pressure-gradient microphone elements, can each be replaced by two omni-directional microphones.
  • FIG. 4 is an illustration of another embodiment of an eyewear sound induction ear speaker device 450 of the invention. As shown therein, eyewear sound induction ear speaker 450 includes eye-glasses 452 having three embedded microphones. The eye-glasses 452 of FIG. 4 are similar to the eye-glasses 302 of FIG. 3, but employ three microphones instead of two. The eye-glasses 452 of FIG. 4 have a first microphone 454 arranged in the middle of the eye-glasses 452, a second microphone 456 arranged on the left side of the eye-glasses 452, and a third microphone 458 arranged on the right side of the eye-glasses 452. The three microphones can be employed in the three-microphone embodiment described above.
  • FIG. 5A is an exploded view of a microphone assembly 500 of the invention. As shown therein, the rubber boot 502 a-b is separated into a first half of the rubber boot 502 a and a second half of the rubber boot 502 b. Microphone 501 is between the rubber boot halves. Each rubber boot half 502 a-b is lined with a wind-screen material 508; however, FIG. 5A shows the wind-screen only in the second half of the rubber boot 502 b. In the case of a pressure-gradient microphone, the air duct and the open space between the acoustic port and the boom interior are filled with wind-screen foam material, such as wind guard sleeves over the rubber boots.
  • A microphone 504 is arranged to be placed between the two halves of the rubber boot 502 a-b. The microphone 504 and rubber boot 502 a-b are sized such that the microphone 504 fits in a cavity within the halves of the rubber boot 502 a-b. The microphone is coupled to a wire 506 that extends out of the rubber boot 502 a-b and can be connected to, for instance, the noise cancellation circuit described above.
  • FIG. 5B is a perspective view of microphone assembly 500 when assembled. The rubber boot 552 of FIG. 5B is shown with both halves 502 a-b joined together, where a microphone (not shown) is inside. A wire 556 coupled to the microphone exits the rubber boot 552 such that it can be connected to, for instance, the noise cancellation circuit described below with reference to FIGS. 7 through 10.
  • FIG. 6 is an illustration of an embodiment of the invention 600 showing various optional positions of placement of the microphones 604 a-e. As described above, the microphones are pressure-gradient. In an embodiment, microphones can be placed in any of the locations shown in FIG. 6, or any combination of the locations shown in FIG. 6. In a two-microphone system, the microphone closest to the user's mouth is referred to as MIC1, and the microphone further from the user's mouth is referred to as MIC2. In an embodiment, both MIC1 and MIC2 can be inline at position 1 604 a. In other embodiments, the microphones can be positioned as follows:
      • MIC1 at position 1 604 a and MIC2 at position 2 604 b;
      • MIC1 at position 1 604 a and MIC2 at position 3 604 c;
      • MIC1 at position 1 604 a and MIC2 at position 4 604 d;
      • MIC1 at position 4 604 d and MIC2 at position 5 604 e;
      • Both MIC1 and MIC2 at position 4 604 d.
  • If position 4 604 d has a microphone, it is employed within a pendant.
  • The microphones can also be employed at other combinations of positions 604 a-e, or at positions not shown in FIG. 6.
  • FIG. 7 is a block diagram 700 illustrating an example embodiment of a noise cancellation circuit employed in the present invention. Signals 710 and 712 from two microphones are digitized and fed into the noise cancelling circuit 701. The noise cancelling circuit 701 can be a digital signal processing (DSP) unit (e.g., software executing on a processor, a hardware block, or multiple hardware blocks). In an embodiment, the noise cancellation circuit 701 can be a digital signal processing (DSP) chip, a system-on-a-chip (SOC), a Bluetooth chip, a voice CODEC with DSP chip, etc. The noise cancellation circuit 701 can be located in a Bluetooth headset near the user's ear, in an inline control case with battery, or inside the connector, etc. The noise cancellation circuit 701 can be powered by a battery or by a power source of the device that the headset is connected to, such as the device's battery, or power from a USB, micro-USB, or Lightning connector.
  • The noise cancellation circuit 701 includes four functional blocks, all of which are electronically linked, either wirelessly or by hardwire: a beam-forming (BF) module 702, a Desired Voice Activity Detection (VAD) module 708, an adaptive noise cancellation (ANC) module 704, and a single channel noise reduction (NR) module 706. The two signals 710 and 712 are fed into the BF module 702, which generates a main signal 730 and a reference signal 732 for the ANC module 704. A closer (i.e., relatively close to the desired sound) microphone signal 710 is collected from a microphone closer to the user's mouth, and a further (i.e., relatively distant from the desired sound) microphone signal 712 is collected from a microphone further from the user's mouth. The BF module 702 also generates a main signal 720 and reference signal 722 for the desired VAD module 708. The main signal 720 and reference signal 722 can, in certain embodiments, be different from the main signal 730 and reference signal 732 generated for the ANC module 704.
  • The ANC module 704 processes the main signal 730 and the reference signal 732 to cancel noise from the two signals and outputs a noise cancelled signal 742 to the single channel NR module 706. The single channel NR module 706 post-processes the noise cancelled signal 742 from the ANC module 704 to remove any residual noise. Meanwhile, the VAD module 708 derives, from the main signal 720 and reference signal 722, a desired voice activity detection (DVAD) signal 740 that indicates the presence or absence of speech in the main signal 720 and reference signal 722. The DVAD signal 740 can then be used to control the ANC module 704 and the NR module 706 based on the result of the BF module 702. The DVAD signal 740 indicates to the ANC module 704 and the single channel NR module 706 which sections of the signal have voice data to analyze, which can increase the efficiency of processing of the ANC module 704 and single channel NR module 706 by ignoring sections of the signal without voice data. The desired speech signal 744 is generated by the single channel NR module 706.
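  • A compact software sketch of the signal flow just described is given below: the BF module feeds both the ANC module and the desired VAD, and the VAD decision gates both the ANC adaptation and the single channel NR. This is a minimal illustration under stated assumptions (the NLMS adaptation, frame length, threshold, and attenuation values are all placeholders), not the patent's implementation.

```python
# Skeleton of the two-microphone circuit of FIG. 7 as a plain NumPy sketch.
# The block boundaries (BF -> VAD / ANC -> NR) follow the description above;
# the specific filters, thresholds, and step sizes are assumptions for
# illustration, not values taken from the patent.
import numpy as np

def beam_former(close_sig, far_sig):
    """Produce a main channel and a reference channel (trivial pass-through here)."""
    return close_sig, far_sig

def desired_vad(main, ref, frame=160, margin_db=6.0):
    """Frame-wise voice flag: speech is assumed present when the main channel
    is sufficiently louder than the reference channel."""
    flags = []
    for i in range(0, len(main) - frame, frame):
        p_main = 10 * np.log10(np.mean(main[i:i + frame] ** 2) + 1e-12)
        p_ref = 10 * np.log10(np.mean(ref[i:i + frame] ** 2) + 1e-12)
        flags.append(p_main - p_ref > margin_db)
    return np.array(flags)

def adaptive_noise_canceller(main, ref, vad, frame=160, taps=32, mu=0.05):
    """NLMS canceller: adapt the reference-to-main noise path only on frames
    flagged as noise, then subtract the noise estimate from the main channel."""
    w = np.zeros(taps)
    out = np.copy(main)
    for f, speech in enumerate(vad):
        for n in range(max(f * frame, taps), (f + 1) * frame):
            x = ref[n - taps:n][::-1]
            e = main[n] - w @ x
            out[n] = e
            if not speech:                      # freeze adaptation while speech is present
                w += mu * e * x / (x @ x + 1e-9)
    return out

def single_channel_nr(sig, vad, frame=160, atten=0.3):
    """Very simple post-filter: attenuate frames that carry no desired speech."""
    out = np.copy(sig)
    for f, speech in enumerate(vad):
        if not speech:
            out[f * frame:(f + 1) * frame] *= atten
    return out

def noise_cancellation_circuit(close_mic, far_mic):
    """Wiring of FIG. 7: BF feeds the VAD and the ANC; the VAD gates the ANC
    and the single channel NR; the NR emits the desired speech estimate."""
    main, ref = beam_former(close_mic, far_mic)
    vad = desired_vad(main, ref)
    cancelled = adaptive_noise_canceller(main, ref, vad)
    return single_channel_nr(cancelled, vad)
```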
  • In an embodiment, the BF module 702, ANC module 704, single channel NR module 706, and desired VAD module 708 employ linear processing (e.g., linear filters). A linear system (which employs linear processing) satisfies the properties of superposition and scaling (homogeneity). The property of superposition means that the response of the system to a sum of inputs is the sum of its responses to the individual inputs. For example, a function F(x) satisfies superposition if:
  • F(x₁ + x₂ + …) = F(x₁) + F(x₂) + …
  • A system satisfies the property of scaling, or homogeneity of degree one, if the output scales proportionally to the input. For example, a function F(x) satisfies scaling or homogeneity if, for a scalar α:
  • F(αx) = αF(x)
  • In contrast, a non-linear function does not satisfy both of these conditions.
  • Prior noise cancellation systems employ non-linear processing. With linear processing, increasing the input changes the output proportionally; with non-linear processing, increasing the input changes the output non-proportionally. Using linear processing provides an advantage for speech recognition by improving feature extraction. Speech recognition algorithms are developed based on noiseless voice recorded in a quiet environment with no distortion. A linear noise cancellation algorithm does not introduce nonlinear distortion to the noise-cancelled speech. Speech recognition can deal with linear distortion of speech, but not non-linear distortion of speech. A linear noise cancellation algorithm is therefore “transparent” to the speech recognition engine. Training speech recognition on the variations of nonlinearly distorted noise is impossible, and non-linear distortion can disrupt the feature extraction necessary for speech recognition.
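  • The linearity property discussed above can be checked numerically. In the sketch below (illustrative only, with arbitrary signals and coefficients), a FIR filter stands in for a linear processing stage and hard clipping stands in for a non-linear stage; only the former satisfies superposition and homogeneity.

```python
# Numerical check of linearity: a FIR filter (linear) satisfies superposition
# and homogeneity, hard clipping (non-linear) does not. Signals and
# coefficients are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(256), rng.standard_normal(256)
h = np.array([0.5, 0.3, -0.2])                       # arbitrary FIR coefficients

fir = lambda x: np.convolve(x, h, mode="same")       # linear processing
clip = lambda x: np.clip(x, -0.5, 0.5)               # non-linear processing

print(np.allclose(fir(x1 + x2), fir(x1) + fir(x2)))      # True: superposition holds
print(np.allclose(fir(2.0 * x1), 2.0 * fir(x1)))         # True: homogeneity holds
print(np.allclose(clip(x1 + x2), clip(x1) + clip(x2)))   # False: superposition fails
```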
  • An example of a linear system is a Wiener filter, which is a linear single channel noise removal filter. The Wiener filter produces an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known stationary signal and noise spectra and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
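  • As a concrete illustration of such a linear single-channel filter, the sketch below applies a per-bin Wiener gain G = S/(S + N) frame by frame in the frequency domain. The noise-spectrum estimate, frame length, and windowing are assumptions made for the sketch, not values from the present application.

```python
# Minimal frequency-domain Wiener filter sketch: per-bin gain G = S / (S + N),
# with the speech PSD S crudely estimated by spectral subtraction against a
# supplied noise PSD N. Frame/hop/window choices are assumptions.
import numpy as np

def wiener_filter(noisy, noise_psd, frame=512, hop=256):
    window = np.hanning(frame)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame, hop):
        seg = noisy[start:start + frame] * window
        spec = np.fft.rfft(seg)
        noisy_psd = np.abs(spec) ** 2
        speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)    # rough speech PSD estimate
        gain = speech_psd / (speech_psd + noise_psd + 1e-12)   # Wiener gain per bin
        out[start:start + frame] += np.fft.irfft(gain * spec) * window
    return out

# Usage: estimate the noise PSD from a stretch known to contain no speech.
fs = 16000
t = np.arange(fs) / fs
speech_like = 0.5 * np.sin(2 * np.pi * 440 * t)      # stand-in for the desired signal
noise = 0.1 * np.random.randn(fs)
noise_frames = [noise[i:i + 512] * np.hanning(512) for i in range(0, fs - 512, 256)]
noise_psd = np.mean([np.abs(np.fft.rfft(f)) ** 2 for f in noise_frames], axis=0)
enhanced = wiener_filter(speech_like + noise, noise_psd)
```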
  • FIG. 8 is a block diagram 800 illustrating an example embodiment of a beam-forming module 802 that can be employed in the noise cancelling circuit 701 of FIG. 7. The BF module 802 receives the closer microphone signal 810 and further microphone signal 812.
  • A further microphone signal 812 is inputted to a frequency response matching filter 804. The frequency response matching filter 804 adjusts the gain and phase of, and shapes the frequency response of, the further microphone signal 812. For example, the frequency response matching filter 804 can adjust the signal for the distance between the two microphones, such that an outputted reference signal 832, representative of the further microphone signal 812, can be processed with the main signal 830, representative of the closer microphone signal 810. The main signal 830 and reference signal 832 are sent to the ANC module.
  • A closer microphone signal 810 is outputted to the ANC module as a main signal 830. The closer microphone signal 810 is also inputted to a low-pass filter 806. The reference signal 832 is inputted to a low-pass filter 808 to create a reference signal 822 sent to the Desired VAD module. The low-pass filters 806 and 808 adjust the signal for a “close talk case” by, for example, having a gradual roll-off from 2 kHz to 4 kHz, in one embodiment. Other frequencies can be used, however, for different designs and distances of the microphones to the user's mouth.
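  • A minimal software sketch of this beam-forming block is given below, under the simplifying assumption that the frequency response matching can be approximated by a gain and an integer delay and that the VAD path uses a fourth-order Butterworth low-pass filter. The filter order, cutoff, gain, and delay are illustrative placeholders only.

```python
# Sketch of the FIG. 8 beam-forming block: the far-microphone signal passes
# through a matching filter (approximated here by a gain and an integer delay)
# to form the ANC reference, and both paths are low-pass filtered before the
# desired VAD. Filter order, cutoff, gain, and delay are placeholder values.
import numpy as np
from scipy.signal import butter, lfilter

def matching_filter(far_sig, gain=1.0, delay_samples=2):
    """Crude frequency-response matching: line the far microphone up with the
    close microphone using a gain and an integer-sample delay."""
    padded = np.concatenate([np.zeros(delay_samples), far_sig[:len(far_sig) - delay_samples]])
    return gain * padded

def beam_forming_module(close_sig, far_sig, fs=16000, cutoff_hz=3000):
    main_anc = close_sig                               # main signal to the ANC
    ref_anc = matching_filter(far_sig)                 # reference signal to the ANC
    b, a = butter(4, cutoff_hz / (fs / 2))             # gradual roll-off for the close-talk case
    main_vad = lfilter(b, a, close_sig)                # low-pass main path to the desired VAD
    ref_vad = lfilter(b, a, ref_anc)                   # low-pass reference path to the desired VAD
    return main_anc, ref_anc, main_vad, ref_vad
```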
  • FIG. 9 is a block diagram illustrating an example embodiment of a Desired Voice Activity Detection Module 902. The DVAD module 902 receives a main signal 920 and a reference signal 922 from the beam-forming module. The main signal 920 and reference signal 922 are processed by respective short-time power modules 904 and 906. The short-time power modules 904 and 906 can include a root mean square (RMS) detector, a power (PWR) detector, or an energy detector. The short-time power modules 904 and 906 output signals to respective amplifiers 908 and 910. The amplifiers can be logarithmic converters (or log/logarithmic amplifiers). The logarithmic converters 908 and 910 output to a combiner 912. The combiner 912 is configured to combine signals, such as the main signal and one of the at least one reference signals, to produce a voice activity difference signal by subtracting the detection(s) of the reference signal from the main signal (or vice-versa). The voice activity difference signal is inputted into a single channel VAD module 914. The single channel VAD module can be a conventional VAD module. The single channel VAD 914 outputs the desired voice activity signal.
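  • The sketch below mirrors the block structure of FIG. 9: short-time power detection of the main and reference channels, logarithmic conversion, a combiner that forms the voice activity difference signal, and a simple threshold with hangover standing in for the conventional single channel VAD. The frame length, threshold, and hangover count are assumptions made for illustration.

```python
# Illustrative sketch of the FIG. 9 structure: short-time power detectors,
# logarithmic converters, a combiner forming the voice activity difference
# signal, and a thresholding single channel VAD with a short hangover.
import numpy as np

def short_time_power_db(sig, frame):
    """Power detector followed by a logarithmic converter, one value per frame."""
    n_frames = len(sig) // frame
    frames = sig[:n_frames * frame].reshape(n_frames, frame)
    return 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)

def single_channel_vad(difference_db, threshold_db=6.0, hangover=3):
    """Threshold the difference signal and hold the decision for a few frames
    after it drops (a stand-in for a conventional VAD)."""
    decisions = np.zeros(len(difference_db), dtype=bool)
    hold = 0
    for i, d in enumerate(difference_db):
        if d > threshold_db:
            decisions[i], hold = True, hangover
        elif hold > 0:
            decisions[i], hold = True, hold - 1
    return decisions

def desired_vad_module(main, ref, frame=320):
    n = min(len(main), len(ref))
    main_db = short_time_power_db(main[:n], frame)     # short-time power + log (cf. 904, 908)
    ref_db = short_time_power_db(ref[:n], frame)       # short-time power + log (cf. 906, 910)
    difference = main_db - ref_db                      # combiner (cf. 912)
    return single_channel_vad(difference)              # single channel VAD (cf. 914)
```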
  • FIG. 10 is a block diagram 1000 illustrating an example embodiment of a noise cancellation circuit 1001 employed to receive a closer microphone signal 1010 and first and second further microphone signals 1012 and 1014, respectively. The noise cancellation circuit 1001 is similar to the noise cancellation circuit 701 described in relation to FIG. 7; however, the noise cancellation circuit 1001 is employed to receive three signals instead of two. A beam-forming (BF) module 1002 is arranged to receive the signals 1010, 1012 and 1014 and output a main signal 1030, a first reference signal 1032 and a second reference signal 1034 to an adaptive noise cancellation module 1004. The beam-forming module is further configured to output a main signal 1022, a first reference signal 1020 and a second reference signal 1024 to a voice activity detection (VAD) module 1008.
  • The ANC module 1004 produces a noise cancelled signal 1042 to a Single Channel Noise Reduction (NR) module 1006, similar to the ANC module 704 of FIG. 7. The single channel NR module 1006 then outputs desired speech 1044. The VAD module 1008 outputs the DVAD signal to the ANC module 1004 and the single channel NR module 1006.
  • FIG. 11 is an example embodiment of beam-forming from a boom tube 1102 housing three microphones 1106, 1108, and 1110. A first microphone 1106 is arranged closest to a tip 1104 of the boom tube 1102, a second microphone 1108 is arranged in the boom tube 1102 further away from the tip 1104, and a third microphone 1110 is arranged in the boom tube 1102 even further away from the tip 1104. The first microphone 1106 and second microphone 1108 are arranged to provide data to output a left signal 1126. The first microphone is arranged to output its signal to a gain module 1112 and a delay module 1114, which is outputted to a combiner 1122. The second microphone is connected directly to the combiner 1122. The combiner 1122 subtracts the two provided signals to cancel noise, which creates the left signal 1126.
  • Likewise, the second microphone 1108 is connected to a gain module 1116 and a delay module 1118, which is outputted to a combiner 1120. The third microphone 1110 is connected directly to the combiner 1120. The combiner 1120 subtracts the two provided signals to cancel noise, which creates the right signal 1120.
  • FIG. 12 is an example embodiment of beam-forming from a boom tube 1252 housing four microphones 1256, 1258, 1260 and 1262. A first microphone 1256 is arranged closest to a tip 1254 of the boom tube 1252, a second microphone 1258 is arranged in the boom tube 1252 further away from the tip 1254, a third microphone 1260 is arranged in the boom tube 1252 even further away from the tip 1254, and a fourth microphone 1262 is arranged in the boom tube 1252 furthest away from the tip 1254. The first microphone 1256 and second microphone 1258 are arranged to provide data to output a left signal 1286. The first microphone is arranged to output its signal to a gain module 1272 and a delay module 1274, which is outputted to a combiner 1282. The second microphone is connected directly to the combiner 1282. The combiner 1282 subtracts the two provided signals to cancel noise, which creates the left signal 1286.
  • Likewise, the third microphone 1260 is connected to a gain module 1276 and a delay module 1278, which is outputted to a combiner 1280. The fourth microphone 1262 is connected directly to the combiner 1280. The combiner 1280 subtracts the two provided signals to cancel noise, which creates the right signal 1284.
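  • The gain/delay/subtract structure of FIGS. 11 and 12 can be expressed compactly as in the sketch below; the gain and delay values are placeholders, since the description above does not specify them.

```python
# Sketch of the gain/delay/subtract beam-forming of FIGS. 11 and 12: each output
# is the difference between one microphone (after a gain and a delay) and the
# adjacent microphone connected directly to the combiner.
import numpy as np

def delay(sig, samples):
    return np.concatenate([np.zeros(samples), sig[:len(sig) - samples]])

def differential_pair(mic_a, mic_b, gain=1.0, delay_samples=1):
    """Combiner output for one pair: (gain- and delay-processed mic_a) minus mic_b."""
    return delay(gain * mic_a, delay_samples) - mic_b

def boom_tube_beams(mic1, mic2, mic3):
    """Three-microphone arrangement of FIG. 11: microphones 1 and 2 form the
    left beam, microphones 2 and 3 form the right beam."""
    left = differential_pair(mic1, mic2)
    right = differential_pair(mic2, mic3)
    return left, right
```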
  • FIG. 13 is a block diagram 1300 illustrating an example embodiment of a beam-forming module 1302 accepting three signals 1310, 1312 and 1314. A closer microphone signal 1310 is output as a main signal 1330 to the ANC module and is also inputted to a low-pass filter 1317, to be outputted as a main signal 1320 to the VAD module. A first further microphone signal 1312 and a second further microphone signal 1314 are inputted to respective frequency response matching filters 1306 and 1304, the outputs of which are outputted as a first reference signal 1332 and a second reference signal 1334 to the ANC module. The outputs of the frequency response matching filters 1306 and 1304 are also outputted to low-pass filters 1316 and 1318, respectively, which output a first reference signal 1322 and a second reference signal 1324, respectively.
  • FIG. 14 is a block diagram 1400 illustrating an example embodiment of a desired voice activity detection (VAD) module 1402 accepting three signals 1420, 1422 and 1424. The VAD module 1402 receives a main signal 1420, a first reference signal 1422 and a second reference signal 1424 at short-time power modules 1404, 1405 and 1406, respectively. The short-time power modules 1404, 1405, and 1406 are similar to the short-time power modules described in relation to FIG. 9. The short-time power modules 1404, 1405, and 1406 output to respective amplifiers 1408, 1409 and 1410, which can each be a logarithmic converter. Amplifiers 1408 and 1409 output to a combiner module 1411, which subtracts the two signals and outputs the difference to a single channel VAD module 1414. Amplifiers 1410 and 1408 output to a combiner module 1412, which subtracts the two signals and outputs the difference to a single channel VAD module 1416. The single channel VAD modules 1414 and 1416 output to a logical OR-gate 1418, which outputs a DVAD signal 1440.
  • Further example embodiments of the present invention may be configured using a computer program product; for example, controls may be programmed in software for implementing example embodiments of the present invention. Further example embodiments of the present invention may include a non-transitory computer readable medium containing instructions that may be executed by a processor and, when executed, cause the processor to complete methods described herein. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other similar implementation determined in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein. The software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read only memory (CD-ROM), and so forth. In operation, a general purpose or application specific processor loads and executes software in a manner well understood in the art. It should be understood further that the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention.
  • The relevant teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
  • While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims (31)

What is claimed is:
1. An eyewear sound induction ear speaker device, comprising:
a) an eyewear frame;
b) a speaker including an audio channel integrated with the eyewear frame; and
c) an acoustic duct coupled to the speaker and arranged to channel sound emitted by the speaker to an ear of a user wearing the eyewear frame.
2. The device of claim 1, wherein the acoustic duct is comprised of a pliable non-load bearing material and further arranged such that the acoustic duct is not blocking the ear canal of the user.
3. The device of claim 1, wherein the acoustic duct is pointed.
4. The device of claim 1, wherein the acoustic duct is horn-shaped and includes an acoustic port.
5. The device of claim 4, wherein the acoustic port is oval-shaped or rectangular-shaped.
6. The device of claim 1, further including a wireless receiver operatively coupled to the speaker.
7. The device of claim 6, wherein the wireless receiver includes an audio codec, digital signal processor, and amplifiers, the audio codec being coupled to an audio speaker and coupled to at least one microphone.
8. The device of claim 6, further including an all-digital voice processing integrated circuit coupled to a digital micro-electro-mechanical system (MEMs) microphone, and the speaker is modulated by a pulse-width modulation (PWM) digital signal.
9. The device of claim 1, further including:
a) a second speaker integrated with the eyewear frame; and
b) a second acoustic duct coupled to the second speaker and arranged to channel sound emitted by the second speaker to a second ear of a user wearing the eyewear frame.
10. The device of claim 1, wherein the speaker further includes a second audio channel, the second audio channel being coupled to a second acoustic port to provide stereo sound to the user wearing the eyewear frame.
11. The device of claim 1, further including a noise-cancelling digital signal processor integrated into the eyewear frame.
12. The eyewear device of claim 1, further including an array of microphones coupled to at least one of the front frame and the at least one side frame member, the array of microphones including at least a first and second microphone, the first microphone coupled to the eyewear at a temple region, the temple region being located approximately between a top corner of a lens opening defined by the front frame and having an inner edge, and the at least one side frame member, and the second microphone at an inner edge of the lens opening, and a first and second audio channel output from the first and second microphones, respectively.
13. The eyewear device of claim 12, further including a digital signal processor having:
a) a beam-former electronically linked to the first and second microphones, for receiving at least the first and second audio channels and outputting a main channel and one or more reference channels;
b) a voice activity detector electronically linked to the beam-former, for receiving the main and reference channels and outputting a desired voice activity channel;
c) an adaptive noise canceller electronically linked to the beam-former and the voice activity detector for receiving the main, reference, and desired voice activity channels and outputting an adaptive noise cancellation channel; and
d) a noise reducer electronically linked to the voice activity detector and the adaptive noise canceller for receiving the desired voice activity and adaptive noise cancellation channels and outputting a desired speech channel.
14. The device of claim 13, wherein the array of microphones includes two pressure-gradient microphone elements, each pressure-gradient microphone element including two acoustic ports.
15. The device of claim 14, wherein the two pressure-gradient microphone elements are bi-directional and identical.
16. The device of claim 15, wherein the two pressure-gradient microphone elements are each sealed within an acoustic extension, the acoustic extension including an acoustic duct for each acoustic port, the acoustic duct extending a range of each acoustic port, respectively.
17. The device of claim 16, wherein the two acoustic extension sealed pressure-gradient microphone elements are mounted air-tight in series within a substantially cylindrical tube, the tube further including:
a) three or more acoustic openings, being longitudinally equally spaced at a distance equal to or greater than the range of each acoustic port; and
b) a wind-screen material, filling the tube interior between the acoustic openings and the acoustic ports.
18. The device of claim 13, wherein at least a first and a second microphone of the array of microphones is located within a housing of an eyeglasses frame, the first microphone being located proximate to a bridge support and having a first top and a first bottom acoustic port, and the second microphone being located proximate to an end piece between a lens and a support arm and having a second top and a second bottom acoustic port.
19. The device of claim 18, further including a third microphone located within the housing proximate to an opposite end piece and having a third top and third bottom acoustic port.
20. The device of claim 13, wherein the array of microphones includes three or more omni-directional microphone elements and the beam-former is further configured to receive an audio channel for each respective microphone element.
21. The device of claim 20, wherein the beam-former further includes splitters, combiners, amplifiers, and phase shifters.
22. The device of claim 20, wherein the beam-former is further arranged such that adjacent audio channels are combined to produce two or more audio difference channels, wherein the two or more audio difference channels have equivalent phase lengths.
23. The device of claim 12, wherein the DSP is a system on a chip (SoC), a Bluetooth chip, a DSP chip, or codec with DSP integrated circuit.
24. An eyewear sound induction ear speaker device, comprising:
a) means for receiving an audio sound;
b) means for processing and amplifying the audio sound; and
c) means for channeling the amplified and processed audio sound to an ear of a user wearing an eyewear frame.
25. A method of providing sound for eyewear, comprising the steps of:
a) receiving a processed electrical audio signal at a speaker integrated with an eyewear frame, the speaker including an audio channel;
b) inducing the speaker to produce acoustic sound at the audio channel; and
c) channeling the acoustic sound through an acoustic duct to be presented to a user wearing the eyewear frame.
26. The method of claim 25, further including arranging the acoustic duct such that the acoustic duct does not block the ear canal of the user, the acoustic duct being comprised of a pliable non-load bearing material.
27. The method of claim 25, further including the steps of receiving an unprocessed electrical audio signal at a receiver integrated with the eyewear frame, processing the electrical audio signal at the receiver to produce a processed electrical audio signal, and transmitting the processed electrical audio signal to the speaker.
28. The method of claim 25, further including the steps of:
a) preamplifying sound received at the speaker using a pre-amplifier to produce an amplified electrical audio signal;
b) further processing the amplified electrical audio signal using a digital signal processor to produce a digital electrical audio signal;
c) converting the digital electrical audio signal into an analog signal; and
d) postamplifying the analog signal to produce the processed electrical audio signal.
29. The method of claim 25, wherein the acoustic signal is a first acoustic sound and the processing step further includes the steps of:
a) receiving a second processed electrical audio signal at a second speaker integrated with the eyewear frame, the second speaker including an audio channel;
b) inducing the second speaker to produce a second acoustic sound independent of and distinct from the first acoustic sound;
c) channeling the second acoustic sound through a second acoustic duct to present stereo sound to a user wearing the eyewear frame.
30. The method of claim 25, further including the steps of:
a) forming beams at a beam-former, the beam-former receiving at least two audio channels and outputting a main channel and one or more reference channels;
b) detecting voice activity at a voice activity detector, the voice activity detector receiving the main and reference channels and outputting a desired voice activity channel;
c) adaptively cancelling noise at an adaptive noise canceller, the adaptive noise canceller receiving the main, reference, and desired voice activity channels and outputting an adaptive noise cancellation channel; and
d) reducing noise at a noise reducer receiving the desired voice activity and adaptive noise cancellation channels and outputting a desired speech channel.
31. A method of channeling sound from an eyewear device, comprising the steps of:
a) receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device;
b) inducing audible sound from the electrical audio signal at the speaker; and
c) channeling the audio sound to an ear of a user of the eyewear using an audio duct, the audio duct not blocking an ear canal of the ear.
US14/180,986 2013-03-13 2014-02-14 Sound Induction Ear Speaker for Eye Glasses Abandoned US20140270316A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/180,986 US20140270316A1 (en) 2013-03-13 2014-02-14 Sound Induction Ear Speaker for Eye Glasses
TW103108575A TW201508376A (en) 2013-06-25 2014-03-12 Sound induction ear speaker for eye glasses

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201361780108P 2013-03-13 2013-03-13
US201361839227P 2013-06-25 2013-06-25
US201361839211P 2013-06-25 2013-06-25
US201361912844P 2013-12-06 2013-12-06
US14/180,986 US20140270316A1 (en) 2013-03-13 2014-02-14 Sound Induction Ear Speaker for Eye Glasses

Publications (1)

Publication Number Publication Date
US20140270316A1 true US20140270316A1 (en) 2014-09-18

Family

ID=50179966

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/180,994 Active 2034-04-02 US9753311B2 (en) 2013-03-13 2014-02-14 Eye glasses with microphone array
US14/180,986 Abandoned US20140270316A1 (en) 2013-03-13 2014-02-14 Sound Induction Ear Speaker for Eye Glasses
US14/181,059 Active 2034-05-20 US9810925B2 (en) 2013-03-13 2014-02-14 Noise cancelling microphone apparatus
US14/181,037 Abandoned US20140268016A1 (en) 2013-03-13 2014-02-14 Eyewear spectacle with audio speaker in the temple
US15/726,620 Active US10379386B2 (en) 2013-03-13 2017-10-06 Noise cancelling microphone apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/180,994 Active 2034-04-02 US9753311B2 (en) 2013-03-13 2014-02-14 Eye glasses with microphone array

Family Applications After (3)

Application Number Title Priority Date Filing Date
US14/181,059 Active 2034-05-20 US9810925B2 (en) 2013-03-13 2014-02-14 Noise cancelling microphone apparatus
US14/181,037 Abandoned US20140268016A1 (en) 2013-03-13 2014-02-14 Eyewear spectacle with audio speaker in the temple
US15/726,620 Active US10379386B2 (en) 2013-03-13 2017-10-06 Noise cancelling microphone apparatus

Country Status (6)

Country Link
US (5) US9753311B2 (en)
EP (1) EP2973556B1 (en)
JP (1) JP6375362B2 (en)
CN (1) CN105229737B (en)
TW (1) TWI624829B (en)
WO (4) WO2014158426A1 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160134958A1 (en) * 2014-11-07 2016-05-12 Microsoft Technology Licensing, Llc Sound transmission systems and devices having earpieces
US9451068B2 (en) 2001-06-21 2016-09-20 Oakley, Inc. Eyeglasses with electronic components
US9494807B2 (en) 2006-12-14 2016-11-15 Oakley, Inc. Wearable high resolution audio visual interface
CN106200009A (en) * 2016-07-14 2016-12-07 深圳前海零距物联网科技有限公司 Audio frequency output intelligent glasses
US9619201B2 (en) 2000-06-02 2017-04-11 Oakley, Inc. Eyewear with detachable adjustable electronics module
US9720260B2 (en) 2013-06-12 2017-08-01 Oakley, Inc. Modular heads-up display system
US9720258B2 (en) 2013-03-15 2017-08-01 Oakley, Inc. Electronic ornamentation for eyewear
WO2017144269A1 (en) 2016-02-26 2017-08-31 USound GmbH Audio system with beamforming loudspeakers and spectacles with such an audio system
US9753311B2 (en) 2013-03-13 2017-09-05 Kopin Corporation Eye glasses with microphone array
WO2017158507A1 (en) * 2016-03-16 2017-09-21 Radhear Ltd. Hearing aid
US10063958B2 (en) 2014-11-07 2018-08-28 Microsoft Technology Licensing, Llc Earpiece attachment devices
US10222617B2 (en) 2004-12-22 2019-03-05 Oakley, Inc. Wearable electronically enabled interface system
US10306389B2 (en) 2013-03-13 2019-05-28 Kopin Corporation Head wearable acoustic system with noise canceling microphone geometry apparatuses and methods
US10339952B2 (en) 2013-03-13 2019-07-02 Kopin Corporation Apparatuses and systems for acoustic channel auto-balancing during multi-channel signal extraction
US10567888B2 (en) 2018-02-08 2020-02-18 Nuance Hearing Ltd. Directional hearing aid
US10627633B2 (en) * 2016-06-28 2020-04-21 Hiscene Information Technology Co., Ltd Wearable smart glasses
US10638248B1 (en) * 2019-01-29 2020-04-28 Facebook Technologies, Llc Generating a modified audio experience for an audio system
US10904667B1 (en) * 2018-03-19 2021-01-26 Amazon Technologies, Inc. Compact audio module for head-mounted wearable device
US11036052B1 (en) * 2018-05-30 2021-06-15 Facebook Technologies, Llc Head-mounted display systems with audio delivery conduits
USD933635S1 (en) * 2020-04-17 2021-10-19 Bose Corporation Audio accessory
KR20210145215A (en) * 2019-03-29 2021-12-01 스냅 인코포레이티드 Head-wearable device generating binaural audio
US20210407513A1 (en) * 2020-06-29 2021-12-30 Innovega, Inc. Display eyewear with auditory enhancement
EP4009662A1 (en) 2020-12-04 2022-06-08 USound GmbH Spectacles with parametric audio unit
US11412318B2 (en) * 2020-04-28 2022-08-09 Pegatron Corporation Virtual reality head-mounted display device
US11418875B2 (en) 2019-10-14 2022-08-16 VULAI Inc End-fire array microphone arrangements inside a vehicle
US11631421B2 (en) 2015-10-18 2023-04-18 Solos Technology Limited Apparatuses and methods for enhanced speech recognition in variable environments
US11711645B1 (en) * 2019-12-31 2023-07-25 Meta Platforms Technologies, Llc Headset sound leakage mitigation
US11743640B2 (en) 2019-12-31 2023-08-29 Meta Platforms Technologies, Llc Privacy setting for sound leakage control
US11765522B2 (en) 2019-07-21 2023-09-19 Nuance Hearing Ltd. Speech-tracking listening device

Families Citing this family (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11750965B2 (en) 2007-03-07 2023-09-05 Staton Techiya, Llc Acoustic dampening compensation system
US8653354B1 (en) * 2011-08-02 2014-02-18 Sonivoz, L.P. Audio synthesizing systems and methods
CN102820032B (en) * 2012-08-15 2014-08-13 歌尔声学股份有限公司 Speech recognition system and method
WO2014210530A1 (en) * 2013-06-28 2014-12-31 Kopin Corporation Digital voice processing method and system for headset computer
US9392353B2 (en) * 2013-10-18 2016-07-12 Plantronics, Inc. Headset interview mode
ITMI20131797A1 (en) * 2013-10-29 2015-04-30 Buhel S R L ELECTROMAGNETIC TRANSDUCER TO GENERATE VIBRATIONS FOR BONE CONDUCTION OF SOUNDS AND / OR WORDS
JP6411780B2 (en) * 2014-06-09 2018-10-24 ローム株式会社 Audio signal processing circuit, method thereof, and electronic device using the same
CN105575397B (en) * 2014-10-08 2020-02-21 展讯通信(上海)有限公司 Voice noise reduction method and voice acquisition equipment
WO2016063587A1 (en) * 2014-10-20 2016-04-28 ソニー株式会社 Voice processing system
US20160132285A1 (en) * 2014-11-12 2016-05-12 Blackberry Limited Portable electronic device including touch-sensitive display and method of controlling audio output
KR102369124B1 (en) * 2014-12-26 2022-03-03 삼성디스플레이 주식회사 Image display apparatus
IL236506A0 (en) 2014-12-29 2015-04-30 Netanel Eyal Wearable noise cancellation deivce
US9781499B2 (en) * 2015-03-27 2017-10-03 Intel Corporation Electronic device with wind resistant audio
US9554207B2 (en) * 2015-04-30 2017-01-24 Shure Acquisition Holdings, Inc. Offset cartridge microphones
US20180130482A1 (en) * 2015-05-15 2018-05-10 Harman International Industries, Incorporated Acoustic echo cancelling system and method
US9596536B2 (en) * 2015-07-22 2017-03-14 Google Inc. Microphone arranged in cavity for enhanced voice isolation
TWI671737B (en) * 2015-08-07 2019-09-11 圓剛科技股份有限公司 Echo-cancelling apparatus and echo-cancelling method
US20180224673A1 (en) * 2015-09-16 2018-08-09 Robert Therrien Relaxation and meditation eyewear
WO2017065092A1 (en) * 2015-10-13 2017-04-20 ソニー株式会社 Information processing device
EP3364663B1 (en) * 2015-10-13 2020-12-02 Sony Corporation Information processing device
DE102015224382A1 (en) * 2015-12-07 2017-06-08 Bayerische Motoren Werke Aktiengesellschaft System and method for active noise compensation in motorcycles and motorcycle with a system for active noise compensation
WO2017163286A1 (en) * 2016-03-25 2017-09-28 パナソニックIpマネジメント株式会社 Sound pickup apparatus
EP3236672B1 (en) * 2016-04-08 2019-08-07 Oticon A/s A hearing device comprising a beamformer filtering unit
WO2017183003A1 (en) * 2016-04-22 2017-10-26 Cochlear Limited Microphone placement
CN105892099B (en) * 2016-05-27 2020-10-27 北京云视智通科技有限公司 Intelligent glasses
CN106448697A (en) * 2016-09-28 2017-02-22 惠州Tcl移动通信有限公司 Double-microphone noise elimination implementation method and system and smart glasses
CN107896122B (en) * 2016-09-30 2020-10-20 电信科学技术研究院 Beam scanning and searching tracking method and device
US11526034B1 (en) * 2017-02-01 2022-12-13 Ram Pattikonda Eyewear with flexible audio and advanced functions
US10194225B2 (en) * 2017-03-05 2019-01-29 Facebook Technologies, Llc Strap arm of head-mounted display with integrated audio port
WO2018165201A1 (en) 2017-03-06 2018-09-13 Snap Inc. Wearable device antenna system
US10311889B2 (en) * 2017-03-20 2019-06-04 Bose Corporation Audio signal processing for noise reduction
US10424315B1 (en) 2017-03-20 2019-09-24 Bose Corporation Audio signal processing for noise reduction
US10499139B2 (en) 2017-03-20 2019-12-03 Bose Corporation Audio signal processing for noise reduction
US10366708B2 (en) 2017-03-20 2019-07-30 Bose Corporation Systems and methods of detecting speech activity of headphone user
USD828822S1 (en) 2017-05-12 2018-09-18 Oculus Vr, Llc Strap holder
CN107220021B (en) * 2017-05-16 2021-03-23 北京小鸟看看科技有限公司 Voice input recognition method and device and head-mounted equipment
US10249323B2 (en) 2017-05-31 2019-04-02 Bose Corporation Voice activity detection for communication headset
CN111133770B (en) * 2017-06-26 2022-07-26 高等工艺学校 System, audio wearable device and method for evaluating fitting quality of headphones
TWI639154B (en) 2017-06-28 2018-10-21 驊訊電子企業股份有限公司 Voice apparatus and dual-microphone voice system with noise cancellation
EP3422736B1 (en) * 2017-06-30 2020-07-29 GN Audio A/S Pop noise reduction in headsets having multiple microphones
EP3425923A1 (en) * 2017-07-06 2019-01-09 GN Audio A/S Headset with reduction of ambient noise
CN107426643B (en) * 2017-07-31 2019-08-23 歌尔股份有限公司 Uplink noise cancelling headphone
US10178457B1 (en) * 2017-08-03 2019-01-08 Facebook Technologies, Llc Audio output assembly for a head-mounted display
US11209306B2 (en) 2017-11-02 2021-12-28 Fluke Corporation Portable acoustic imaging tool with scanning and analysis capability
CN108109617B (en) * 2018-01-08 2020-12-15 深圳市声菲特科技技术有限公司 Remote pickup method
CN108419172A (en) * 2018-01-27 2018-08-17 朝阳聚声泰(信丰)科技有限公司 Wave beam forming directional microphone
DK3522568T3 (en) * 2018-01-31 2021-05-03 Oticon As HEARING AID WHICH INCLUDES A VIBRATOR TOUCHING AN EAR MUSSEL
USD864283S1 (en) * 2018-03-05 2019-10-22 Bose Corporation Audio eyeglasses
WO2019178557A1 (en) * 2018-03-15 2019-09-19 Vizzario, Inc. Modular display and sensor system for attaching to eyeglass frames and capturing physiological data
US10438605B1 (en) 2018-03-19 2019-10-08 Bose Corporation Echo control in binaural adaptive noise cancellation systems in headsets
CN110364167A (en) * 2018-04-07 2019-10-22 深圳市原素盾科技有限公司 A kind of intelligent sound identification module
CN108521872B (en) * 2018-04-12 2021-03-19 深圳市汇顶科技股份有限公司 Earphone control device and wired earphone
US10847178B2 (en) * 2018-05-18 2020-11-24 Sonos, Inc. Linear filtering for noise-suppressed speech detection
EP3811360A4 (en) 2018-06-21 2021-11-24 Magic Leap, Inc. Wearable system speech processing
US10938994B2 (en) * 2018-06-25 2021-03-02 Cypress Semiconductor Corporation Beamformer and acoustic echo canceller (AEC) system
CN109104683B (en) * 2018-07-13 2021-02-02 深圳市小瑞科技股份有限公司 Method and system for correcting phase measurement of double microphones
CN108696785B (en) * 2018-07-24 2019-10-29 歌尔股份有限公司 More wheat noise cancelling headphones and method
KR20210034661A (en) * 2018-07-24 2021-03-30 플루커 코포레이션 System and method for tagging and linking acoustic images
USD865041S1 (en) * 2018-07-31 2019-10-29 Bose Corporation Audio eyeglasses
USD865040S1 (en) * 2018-07-31 2019-10-29 Bose Corporation Audio eyeglasses
US20210044888A1 (en) * 2019-08-07 2021-02-11 Bose Corporation Microphone Placement in Open Ear Hearing Assistance Devices
CN109120790B (en) * 2018-08-30 2021-01-15 Oppo广东移动通信有限公司 Call control method and device, storage medium and wearable device
US10553196B1 (en) 2018-11-06 2020-02-04 Michael A. Stewart Directional noise-cancelling and sound detection system and method for sound targeted hearing and imaging
US11170798B2 (en) * 2018-12-12 2021-11-09 Bby Solutions, Inc. Remote audio pickup and noise cancellation system and method
KR102569365B1 (en) * 2018-12-27 2023-08-22 삼성전자주식회사 Home appliance and method for voice recognition thereof
US10789935B2 (en) 2019-01-08 2020-09-29 Cisco Technology, Inc. Mechanical touch noise control
EP3931827A4 (en) 2019-03-01 2022-11-02 Magic Leap, Inc. Determining input for speech processing engine
US11277692B2 (en) * 2019-03-27 2022-03-15 Panasonic Corporation Speech input method, recording medium, and speech input device
CN110164440B (en) * 2019-06-03 2022-08-09 交互未来(北京)科技有限公司 Voice interaction awakening electronic device, method and medium based on mouth covering action recognition
CN112071311A (en) 2019-06-10 2020-12-11 Oppo广东移动通信有限公司 Control method, control device, wearable device and storage medium
KR20230146666A (en) * 2019-06-28 2023-10-19 스냅 인코포레이티드 Dynamic beamforming to improve signal-to-noise ratio of signals captured using a head-wearable apparatus
US11197083B2 (en) 2019-08-07 2021-12-07 Bose Corporation Active noise reduction in open ear directional acoustic devices
US11328740B2 (en) * 2019-08-07 2022-05-10 Magic Leap, Inc. Voice onset detection
CN110568633A (en) * 2019-08-14 2019-12-13 歌尔股份有限公司 Intelligent head-mounted equipment
US11653144B2 (en) * 2019-08-28 2023-05-16 Bose Corporation Open audio device
WO2021101071A1 (en) * 2019-11-22 2021-05-27 주식회사 아이아이컴바인드 Slim-type smart eyewear
WO2021101072A1 (en) * 2019-11-22 2021-05-27 주식회사 아이아이컴바인드 Slim-type smart eyewear
US11200908B2 (en) * 2020-03-27 2021-12-14 Fortemedia, Inc. Method and device for improving voice quality
US11917384B2 (en) 2020-03-27 2024-02-27 Magic Leap, Inc. Method of waking a device using spoken voice commands
CN111770413B (en) * 2020-06-30 2021-08-27 浙江大华技术股份有限公司 Multi-sound-source sound mixing method and device and storage medium
WO2023280383A1 (en) * 2021-07-06 2023-01-12 Huawei Technologies Co., Ltd. Wearable apparatus comprising audio device
US11805360B2 (en) * 2021-07-21 2023-10-31 Qualcomm Incorporated Noise suppression using tandem networks

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3378649A (en) * 1964-09-04 1968-04-16 Electro Voice Pressure gradient directional microphone
US3946168A (en) * 1974-09-16 1976-03-23 Maico Hearing Instruments Inc. Directional hearing aids
US6091546A (en) * 1997-10-30 2000-07-18 The Microoptical Corporation Eyeglass interface system
US20050248717A1 (en) * 2003-10-09 2005-11-10 Howell Thomas A Eyeglasses with hearing enhanced and other audio signal-generating capabilities
US7929714B2 (en) * 2004-08-11 2011-04-19 Qualcomm Incorporated Integrated audio codec with silicon audio transducer
US20110091057A1 (en) * 2009-10-16 2011-04-21 Nxp B.V. Eyeglasses with a planar array of microphones for assisting hearing
US20120075168A1 (en) * 2010-09-14 2012-03-29 Osterhout Group, Inc. Eyepiece with uniformly illuminated reflective display
US8184983B1 (en) * 2010-11-12 2012-05-22 Google Inc. Wireless directional identification and subsequent communication between wearable electronic devices
US20120282976A1 (en) * 2011-05-03 2012-11-08 Suhami Associates Ltd Cellphone managed Hearing Eyeglasses
US20130314280A1 (en) * 2012-05-23 2013-11-28 Alexander Maltsev Multi-element antenna beam forming configurations for millimeter wave systems
US8744113B1 (en) * 2012-12-13 2014-06-03 Energy Telecom, Inc. Communication eyewear assembly with zone of safety capability

Family Cites Families (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3789163A (en) * 1972-07-31 1974-01-29 A Dunlavy Hearing aid construction
AT383428B (en) 1984-03-22 1987-07-10 Goerike Rudolf EYEGLASSES TO IMPROVE NATURAL HEARING
DE8529458U1 (en) 1985-10-16 1987-05-07 Siemens Ag, 1000 Berlin Und 8000 Muenchen, De
JPS6375362A (en) 1986-09-19 1988-04-05 Hitachi Ltd Francis turbine
US4966252A (en) * 1989-08-28 1990-10-30 Drever Leslie C Microphone windscreen and method of fabricating the same
JP3601900B2 (en) * 1996-03-18 2004-12-15 三菱電機株式会社 Transmitter for mobile phone radio
WO2000002419A1 (en) 1998-07-01 2000-01-13 Resound Corporation External microphone protective membrane
US7150526B2 (en) 2000-06-02 2006-12-19 Oakley, Inc. Wireless interactive headset
US6325507B1 (en) 2000-06-02 2001-12-04 Oakley, Inc. Eyewear retention system extending across the top of a wearer's head
US7461936B2 (en) 2000-06-02 2008-12-09 Oakley, Inc. Eyeglasses with detachable adjustable electronics module
US20120105740A1 (en) 2000-06-02 2012-05-03 Oakley, Inc. Eyewear with detachable adjustable electronics module
US7278734B2 (en) 2000-06-02 2007-10-09 Oakley, Inc. Wireless interactive headset
US8482488B2 (en) 2004-12-22 2013-07-09 Oakley, Inc. Data input management system for wearable electronically enabled interface
JP2002032212A (en) 2000-07-14 2002-01-31 Toshiba Corp Computer system and headset type display device
WO2002077972A1 (en) 2001-03-27 2002-10-03 Rast Associates, Llc Head-worn, trimodal device to increase transcription accuracy in a voice recognition system and to process unvocalized speech
CN1535555B (en) 2001-08-01 2011-05-25 Dashen Fan Acoustic devices, system and method for cardioid beam with desired null
CA2354858A1 (en) * 2001-08-08 2003-02-08 Dspfactory Ltd. Subband directional audio signal processing using an oversampled filterbank
US7313246B2 (en) 2001-10-06 2007-12-25 Stryker Corporation Information system using eyewear for communication
US7035091B2 (en) 2002-02-28 2006-04-25 Accenture Global Services Gmbh Wearable computer system and modes of operating the system
US7852369B2 (en) 2002-06-27 2010-12-14 Microsoft Corp. Integrated design for omni-directional camera and microphone array
US7494216B2 (en) 2002-07-26 2009-02-24 Oakley, Inc. Electronic eyewear with hands-free operation
US7774075B2 (en) 2002-11-06 2010-08-10 Lin Julius J Y Audio-visual three-dimensional input/output
US7174022B1 (en) 2002-11-15 2007-02-06 Fortemedia, Inc. Small array microphone for beam-forming and noise suppression
US7359504B1 (en) * 2002-12-03 2008-04-15 Plantronics, Inc. Method and apparatus for reducing echo and noise
US7162041B2 (en) * 2003-09-30 2007-01-09 Etymotic Research, Inc. Noise canceling microphone with acoustically tuned ports
US6996647B2 (en) 2003-12-17 2006-02-07 International Business Machines Corporation Token swapping for hot spot management
JP2007531029A (en) 2004-03-31 2007-11-01 Swisscom Mobile AG Method and system for acoustic communication
US7976480B2 (en) 2004-12-09 2011-07-12 Motorola Solutions, Inc. Wearable auscultation system and method
JP4532305B2 (en) 2005-02-18 2010-08-25 Audio-Technica Corporation Narrow directional microphone
US20070081123A1 (en) 2005-10-07 2007-04-12 Lewis Scott W Digital eyewear
JP2009514312A (en) 2005-11-01 2009-04-02 Koninklijke Philips Electronics N.V. Hearing aid with acoustic tracking means
US20070195968A1 (en) * 2006-02-07 2007-08-23 Jaber Associates, L.L.C. Noise suppression method and system with single microphone
US8068619B2 (en) 2006-05-09 2011-11-29 Fortemedia, Inc. Method and apparatus for noise suppression in a small array microphone system
US7798638B2 (en) 2007-01-02 2010-09-21 Hind-Sight Industries, Inc. Eyeglasses with integrated video display
US7547101B2 (en) 2007-01-02 2009-06-16 Hind-Sight Industries, Inc. Eyeglasses with integrated telescoping video display
US20080175408A1 (en) * 2007-01-20 2008-07-24 Shridhar Mukund Proximity filter
FR2915049A1 (en) * 2007-04-10 2008-10-17 Richard Chene Element for the auricular transmission of the sound of a loudspeaker and equipment provided with such an element
US20100020229A1 (en) 2007-04-30 2010-01-28 General Electric Company Wearable personal video/audio device method and system
US8767975B2 (en) * 2007-06-21 2014-07-01 Bose Corporation Sound discrimination method and apparatus
US8520860B2 (en) 2007-12-13 2013-08-27 Symbol Technologies, Inc. Modular mobile computing headset
US7959084B2 (en) 2008-07-01 2011-06-14 Symbol Technologies, Inc. Multi-functional mobile computing device utilizing a removable processor module
TW201006265A (en) 2008-07-29 2010-02-01 Neovictory Technology Co Ltd Low-background-sound bone-skin vibration microphone and glasses containing the same
JP2010034990A (en) * 2008-07-30 2010-02-12 Funai Electric Co Ltd Differential microphone unit
EP2427812A4 (en) 2009-05-08 2016-06-08 Kopin Corp Remote control of host application using motion and voice commands
US9307326B2 (en) 2009-12-22 2016-04-05 Mh Acoustics Llc Surface-mounted microphone arrays on flexible printed circuit boards
US20110214082A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
EP2539759A1 (en) 2010-02-28 2013-01-02 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20120056846A1 (en) 2010-03-01 2012-03-08 Lester F. Ludwig Touch-based user interfaces employing artificial neural networks for hdtp parameter and symbol derivation
US20110288860A1 (en) * 2010-05-20 2011-11-24 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for processing of speech signals using head-mounted microphone pair
CN202102188U (en) 2010-06-21 2012-01-04 Yang Huaqiang Glasses leg, glasses frame and glasses
BR112012031656A2 (en) * 2010-08-25 2016-11-08 Asahi Chemical Ind Device and method for separating sound sources, and program
WO2012040386A1 (en) 2010-09-21 2012-03-29 4Iiii Innovations Inc. Head-mounted peripheral vision display systems and methods
CN103168391B (en) * 2010-10-21 2016-06-15 Locata Corporation Pty Ltd Method and device for forming a remote beam
WO2012074503A1 (en) * 2010-11-29 2012-06-07 Nuance Communications, Inc. Dynamic microphone signal mixer
JP2012133250A (en) 2010-12-24 2012-07-12 Sony Corp Sound information display apparatus, method and program
US10230346B2 (en) * 2011-01-10 2019-03-12 Zhinian Jing Acoustic voice activity detection
WO2014158426A1 (en) 2013-03-13 2014-10-02 Kopin Corporation Eye glasses with microphone array
TWI624709B (en) 2013-06-25 2018-05-21 寇平公司 Eye glasses with microphone array and method of reducing noise thereof

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3378649A (en) * 1964-09-04 1968-04-16 Electro Voice Pressure gradient directional microphone
US3946168A (en) * 1974-09-16 1976-03-23 Maico Hearing Instruments Inc. Directional hearing aids
US6091546A (en) * 1997-10-30 2000-07-18 The Microoptical Corporation Eyeglass interface system
US6349001B1 (en) * 1997-10-30 2002-02-19 The Microoptical Corporation Eyeglass interface system
US20050248717A1 (en) * 2003-10-09 2005-11-10 Howell Thomas A Eyeglasses with hearing enhanced and other audio signal-generating capabilities
US7929714B2 (en) * 2004-08-11 2011-04-19 Qualcomm Incorporated Integrated audio codec with silicon audio transducer
US20110091057A1 (en) * 2009-10-16 2011-04-21 Nxp B.V. Eyeglasses with a planar array of microphones for assisting hearing
US20120075168A1 (en) * 2010-09-14 2012-03-29 Osterhout Group, Inc. Eyepiece with uniformly illuminated reflective display
US8184983B1 (en) * 2010-11-12 2012-05-22 Google Inc. Wireless directional identification and subsequent communication between wearable electronic devices
US20120282976A1 (en) * 2011-05-03 2012-11-08 Suhami Associates Ltd Cellphone managed Hearing Eyeglasses
US8543061B2 (en) * 2011-05-03 2013-09-24 Suhami Associates Ltd Cellphone managed hearing eyeglasses
US20130314280A1 (en) * 2012-05-23 2013-11-28 Alexander Maltsev Multi-element antenna beam forming configurations for millimeter wave systems
US8744113B1 (en) * 2012-12-13 2014-06-03 Energy Telecom, Inc. Communication eyewear assembly with zone of safety capability

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619201B2 (en) 2000-06-02 2017-04-11 Oakley, Inc. Eyewear with detachable adjustable electronics module
US9451068B2 (en) 2001-06-21 2016-09-20 Oakley, Inc. Eyeglasses with electronic components
US10222617B2 (en) 2004-12-22 2019-03-05 Oakley, Inc. Wearable electronically enabled interface system
US10120646B2 (en) 2005-02-11 2018-11-06 Oakley, Inc. Eyewear with detachable adjustable electronics module
US9494807B2 (en) 2006-12-14 2016-11-15 Oakley, Inc. Wearable high resolution audio visual interface
US10288886B2 (en) 2006-12-14 2019-05-14 Oakley, Inc. Wearable high resolution audio visual interface
US9720240B2 (en) 2006-12-14 2017-08-01 Oakley, Inc. Wearable high resolution audio visual interface
US10379386B2 (en) 2013-03-13 2019-08-13 Kopin Corporation Noise cancelling microphone apparatus
US10339952B2 (en) 2013-03-13 2019-07-02 Kopin Corporation Apparatuses and systems for acoustic channel auto-balancing during multi-channel signal extraction
US9753311B2 (en) 2013-03-13 2017-09-05 Kopin Corporation Eye glasses with microphone array
US10306389B2 (en) 2013-03-13 2019-05-28 Kopin Corporation Head wearable acoustic system with noise canceling microphone geometry apparatuses and methods
US9810925B2 (en) 2013-03-13 2017-11-07 Kopin Corporation Noise cancelling microphone apparatus
US9720258B2 (en) 2013-03-15 2017-08-01 Oakley, Inc. Electronic ornamentation for eyewear
US9720260B2 (en) 2013-06-12 2017-08-01 Oakley, Inc. Modular heads-up display system
US10288908B2 (en) 2013-06-12 2019-05-14 Oakley, Inc. Modular heads-up display system
US20160134958A1 (en) * 2014-11-07 2016-05-12 Microsoft Technology Licensing, Llc Sound transmission systems and devices having earpieces
US10063958B2 (en) 2014-11-07 2018-08-28 Microsoft Technology Licensing, Llc Earpiece attachment devices
US11631421B2 (en) 2015-10-18 2023-04-18 Solos Technology Limited Apparatuses and methods for enhanced speech recognition in variable environments
WO2017144269A1 (en) 2016-02-26 2017-08-31 USound GmbH Audio system with beamforming loudspeakers and spectacles with such an audio system
DE102016103477A1 (en) 2016-02-26 2017-08-31 USound GmbH Audio system with beam-forming speakers and glasses with such an audio system
US10728651B2 (en) 2016-02-26 2020-07-28 USound GmbH Audio system having beam-shaping speakers and eyewear having such an audio system
EP3757987A1 (en) 2016-02-26 2020-12-30 Usound GmbH Audio system with beamforming loudspeakers and spectacles with such an audio system
WO2017158507A1 (en) * 2016-03-16 2017-09-21 Radhear Ltd. Hearing aid
US10627633B2 (en) * 2016-06-28 2020-04-21 Hiscene Information Technology Co., Ltd Wearable smart glasses
CN106200009A (en) * 2016-07-14 2016-12-07 Shenzhen Qianhai Lingju Internet of Things Technology Co., Ltd. Audio-output smart glasses
US10567888B2 (en) 2018-02-08 2020-02-18 Nuance Hearing Ltd. Directional hearing aid
US10904667B1 (en) * 2018-03-19 2021-01-26 Amazon Technologies, Inc. Compact audio module for head-mounted wearable device
US11036052B1 (en) * 2018-05-30 2021-06-15 Facebook Technologies, Llc Head-mounted display systems with audio delivery conduits
CN113366864A (en) * 2019-01-29 2021-09-07 Facebook Technologies, LLC Generating a modified audio experience for an audio system
US10638248B1 (en) * 2019-01-29 2020-04-28 Facebook Technologies, Llc Generating a modified audio experience for an audio system
KR102506593B1 (en) 2019-03-29 2023-03-07 Snap Inc. A head-wearable device that creates binaural audio
KR20210145215A (en) * 2019-03-29 2021-12-01 Snap Inc. Head-wearable device generating binaural audio
US11765522B2 (en) 2019-07-21 2023-09-19 Nuance Hearing Ltd. Speech-tracking listening device
US11418875B2 (en) 2019-10-14 2022-08-16 VULAI Inc End-fire array microphone arrangements inside a vehicle
US11743640B2 (en) 2019-12-31 2023-08-29 Meta Platforms Technologies, Llc Privacy setting for sound leakage control
US11711645B1 (en) * 2019-12-31 2023-07-25 Meta Platforms Technologies, Llc Headset sound leakage mitigation
USD933635S1 (en) * 2020-04-17 2021-10-19 Bose Corporation Audio accessory
US11412318B2 (en) * 2020-04-28 2022-08-09 Pegatron Corporation Virtual reality head-mounted display device
US20210407513A1 (en) * 2020-06-29 2021-12-30 Innovega, Inc. Display eyewear with auditory enhancement
US11668959B2 (en) 2020-12-04 2023-06-06 USound GmbH Eyewear with parametric audio unit
DE102020132254A1 (en) 2020-12-04 2022-06-09 USound GmbH Glasses with parametric audio unit
EP4009662A1 (en) 2020-12-04 2022-06-08 USound GmbH Spectacles with parametric audio unit

Also Published As

Publication number Publication date
WO2014163794A3 (en) 2015-06-11
EP2973556A1 (en) 2016-01-20
JP6375362B2 (en) 2018-08-15
WO2014163797A1 (en) 2014-10-09
JP2016516343A (en) 2016-06-02
WO2014163794A2 (en) 2014-10-09
US9753311B2 (en) 2017-09-05
US20140270244A1 (en) 2014-09-18
WO2014158426A1 (en) 2014-10-02
TW201510990A (en) 2015-03-16
EP2973556B1 (en) 2018-07-11
US20140278385A1 (en) 2014-09-18
US9810925B2 (en) 2017-11-07
US20140268016A1 (en) 2014-09-18
WO2014163796A1 (en) 2014-10-09
US10379386B2 (en) 2019-08-13
TWI624829B (en) 2018-05-21
CN105229737B (en) 2019-05-17
CN105229737A (en) 2016-01-06
US20180045982A1 (en) 2018-02-15

Similar Documents

Publication Publication Date Title
US20140270316A1 (en) Sound Induction Ear Speaker for Eye Glasses
US9949048B2 (en) Controlling own-voice experience of talker with occluded ear
JP6675414B2 (en) Speech sensing using multiple microphones
EP3057337B1 (en) A hearing system comprising a separate microphone unit for picking up a users own voice
US10803857B2 (en) System and method for relative enhancement of vocal utterances in an acoustically cluttered environment
US20160227332A1 (en) Binaural hearing system
EP2339867A2 (en) Stand-alone ear bud for active noise reduction
US10262676B2 (en) Multi-microphone pop noise control
EP3883266A1 (en) A hearing device adapted to provide an estimate of a user's own voice
WO2004016037A1 (en) Method of increasing speech intelligibility and device therefor
US11330375B2 (en) Method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device
US11862138B2 (en) Hearing device comprising an active emission canceller
TW201523064A (en) Eyewear spectacle with audio speaker in the temple
TW201508376A (en) Sound induction ear speaker for eye glasses
EP4297436A1 (en) A hearing aid comprising an active occlusion cancellation system and corresponding method
EP4199541A1 (en) A hearing device comprising a low complexity beamformer
CA2485475A1 (en) External hearing aids

Legal Events

Date Code Title Description
AS Assignment

Owner name: KOPIN CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FAN, DASHEN;REEL/FRAME:032560/0259

Effective date: 20140325

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION