US8391501B2 - Method and apparatus for mixing priority and non-priority audio signals - Google Patents

Method and apparatus for mixing priority and non-priority audio signals

Info

Publication number
US8391501B2
Authority
US
United States
Prior art keywords
priority
audio
audio signal
signal
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/610,155
Other versions
US20080144858A1 (en)
Inventor
Charbel Khawand
Mikhail U. Yagunov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google Technology Holdings LLC
Original Assignee
Motorola Mobility LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Mobility LLC
Priority to US11/610,155
Assigned to MOTOROLA, INC. (assignment of assignors interest; assignors: KHAWAND, CHARBEL; YAGUNOV, MIKHAIL U.)
Priority to PCT/US2007/085992
Publication of US20080144858A1
Assigned to Motorola Mobility, Inc. (assignment of assignors interest; assignor: MOTOROLA, INC.)
Assigned to MOTOROLA MOBILITY LLC (assignment of assignors interest; assignor: MOTOROLA MOBILITY, INC.)
Application granted
Publication of US8391501B2
Assigned to Google Technology Holdings LLC (assignment of assignors interest; assignor: MOTOROLA MOBILITY LLC)
Legal status: Active
Expiration adjusted

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M19/00: Current supply arrangements for telephone systems
    • H04M19/02: Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone
    • H04M19/04: Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/60: Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6016: Substation equipment, e.g. for use by subscribers including speech amplifiers in the receiver circuit
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files


Abstract

An audio processor (202) receives a non-priority audio signal (302) and a priority audio signal (304). The priority audio signal occupies a frequency band (408). The audio processor filters (320) the non-priority audio signal by suppressing frequency content in the same frequency region occupied by the priority signal, creating a filtered non-priority signal (412). The filtered non-priority signal and the priority signal are combined (328) and played over an audio transducer (110).

Description

FIELD OF THE INVENTION
The invention relates generally to audio mixing, and more particularly to mixing multiple audio signals into one signal where one of the audio signals is a priority signal that needs to be heard over the other signals.
BACKGROUND OF THE INVENTION
Personal electronic devices are used for many different functions, and are very popular. Examples of such devices include personal digital assistants, palmtop computers, cellular telephones, digital media players, and so on. The functions of these devices are often combined into single, multi-purpose devices. Thus, there is a convergence of function in the marketplace with respect to the design of personal electronics. As a result, these devices may often perform several tasks at a time.
One task for which such devices are increasingly used is audio playback and video with audio playback. That is, audio and video files may be stored on the device, and played back for the user to watch, hear, or both. The audio processing involved has become sophisticated to the point that pre-processing of audio signals regularly includes audio shaping and other audio effects and enhancements. Pre-processing may emphasize certain frequency content of a signal to achieve a desired effect. Furthermore, while the device is engaged in audio playback, another function for which the device is designed may be triggered, and that function may also generate an audio signal. For example, while listening to a music file, the device may receive a wireless telephone call. To alert the user to the call, the device will cut off the music and play an audio alert, or try to play the audio alert over the music. However, if the music is playing and has been enhanced in a frequency band used by the alert, the user may not perceive the alert. There is, therefore, a problem with playing priority audio signals when non-priority audio signals are also being played, especially when pre-processing is used to enhance certain frequency content of the non-priority signals.
SUMMARY OF THE INVENTION
The present invention discloses in one embodiment a method of mixing audio signals at an audio processor. The method commences by receiving, at the audio processor, a priority audio signal and at least one non-priority audio signal. The priority audio signal occupies at least one frequency band of the audio spectrum. The audio processor commences filtering the non-priority audio signals to suppress frequency content of the non-priority audio signals in the frequency band occupied by the priority audio signal. The priority audio signal and the filtered non-priority signal are then combined, producing an output signal that is to be played over an audio transducer. The method may further include determining the location of the frequency band upon receiving the priority audio signal; alternatively, the frequency band parameters may be provided to the audio processor. In one embodiment the priority audio signal may be an alert tone for alerting a user of a mobile communication device. The non-priority audio signal may be an audio playback signal which may be derived from an audio file. The method may further include pre-processing the non-priority audio signal to enhance the audio content of the non-priority audio signal.
In another embodiment of the invention, there is provided a mobile communication device which includes an audio processor having a plurality of audio input channels. The device further includes at least one non-priority audio signal source operatively coupled to the audio processor which provides at least one non-priority audio signal. The device further includes a priority audio signal source operatively coupled to the audio processor for providing a priority audio signal. The priority audio signal occupies at least one frequency band. The audio processor filters the at least one non-priority signal by suppressing frequency content of the at least one non-priority audio signal in the at least one frequency band to provide at least one filtered non-priority audio signal. The audio processor further combines the at least one filtered non-priority audio signal and the priority audio signal to provide an output signal. The priority audio signal source may be an alert signal source. The at least one non-priority signal source may be at least one audio playback signal source. The at least one audio playback signal source may be an audio file stored on the mobile communication device. The audio processor may determine the at least one frequency band of the priority audio signal when it is received at the audio processor. Alternatively, the at least one frequency band may be characterized prior to receiving the priority audio signal at the audio processor.
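The following sketch illustrates the general idea of the claimed method in Python with NumPy/SciPy. It is not the patented implementation; the sample rate, filter order, band margin, and function names are assumptions introduced only for illustration.

```python
# Illustrative sketch only (not the patented implementation). Sample rate,
# filter order, and band margin are assumptions made for this example.
import numpy as np
from scipy.signal import butter, lfilter

FS = 48_000  # assumed sample rate in Hz


def estimate_priority_band(priority, margin_hz=150.0):
    """Estimate the frequency band occupied by a priority (alert) tone by
    locating the peak of its magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(priority))
    freqs = np.fft.rfftfreq(len(priority), d=1.0 / FS)
    peak_hz = freqs[np.argmax(spectrum)]
    return max(peak_hz - margin_hz, 1.0), min(peak_hz + margin_hz, FS / 2 - 1.0)


def mix_priority(priority, non_priority):
    """Suppress the priority band in the non-priority signal, then combine."""
    low_hz, high_hz = estimate_priority_band(priority)
    b, a = butter(4, [low_hz, high_hz], btype="bandstop", fs=FS)  # stop-band filter
    filtered = lfilter(b, a, non_priority)
    return priority + filtered  # output signal for the audio transducer
```

As the summary notes, the band could instead be supplied to the audio processor by the function generating the alert tone, in which case the estimation step is unnecessary.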
BRIEF DESCRIPTION OF THE DRAWINGS
There are shown in the drawings, embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
FIG. 1 shows a block schematic diagram of an electronic device exemplifying a communication device, in accordance with an embodiment of the invention;
FIG. 2 shows a block schematic diagram of an audio processor and associated functions for use in an electronic device, in accordance with an embodiment of the invention;
FIG. 3 shows a flow chart diagram of a method of mixing audio signals, in accordance with an embodiment of the invention; and
FIG. 4 shows a frequency chart diagram of a priority audio signal, an unfiltered non-priority audio signal, and a filtered non-priority audio signal, in accordance with the invention.
DETAILED DESCRIPTION OF THE INVENTION
While the specification concludes with claims defining features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the description in conjunction with the drawings. As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the invention.
Referring now to FIG. 1, there is shown a block schematic diagram of an electronic device 100 exemplifying a communication device, in accordance with an embodiment of the invention. The device includes a processor core 102 which includes one or more processors for performing various tasks and functions according to instruction code executed by the processor core. The relevant functions will be explained herein. Operatively coupled to the processor core is a transceiver 104. The transceiver processes signals to be transmitted and received via an antenna 106 over a radio frequency carrier. Thus, the transceiver includes components such as oscillators, mixers, filters, modulators, demodulators, and so on, as is known. The transceiver may also include a digital signal processor for performing digital filtering, voice coding and decoding, error correction, and other well known transceiver functions. The transceiver and processor core are operatively coupled to an audio processor 108. The audio processor receives digital audio signals and converts them to analog signals to be played over a speaker or headphone transducer 110. Likewise, the audio processor receives analog signals from a microphone 112, converts them to digital signals, and routes the digital signals to either the transceiver for transmission or to the processor core, or both. The audio processor may also perform audio pre-processing and mixing, and include digital signal processing elements in both hardware and software. The pre-processing may include, for example, audio shaping, sample rate conversion, gain change, filtering, and so on. It is contemplated that there may be several signal sources fed to the audio processor for playing over the speaker. Each of these signal sources is provided a channel, which is pre-processed and mixed with the other channels, if any, for playing.
The processor core and transceiver may further be coupled to a memory 114. The memory may include a variety of digital memory elements including read-only memory, reprogrammable memory, storage memory, data memory, execution memory, and volatile and non-volatile memory. Generally, the memory is used for storing instruction code to be executed by the processor core and other elements of the device, as well as data, and further provides executable memory for instantiating instruction code to establish a software operating environment, applications, and other software elements. In particular, according to one embodiment of the invention, the memory contains instruction code for controlling the mixing of audio signals in accordance with the invention. Furthermore, the memory may be used to store audio signal sources such as audio files or other audio playback sources for generating alert tones and ring tones.
The software elements stored in the memory facilitate the operation of a user interface 116 by the processor core 102. The user interface combines software elements with hardware elements for presenting information to a user and receiving information from the user. Information may be presented to the user via a graphical display 118 and associated driver circuitry. In some designs, such as communication devices that fold, there may be two displays: a main display on an inside region of the device and a smaller display on the outside that can be used to present limited information such as, for example, caller identification information upon receiving a call. The user interface may also include a keypad 120 and other buttons for entering information and commands into the device. Other interface elements 122 may include a vibratory motor for providing tactile responses for “silent” alerts, and may further include audio elements for presenting information in an audibly perceptible manner.
According to the invention, the audio processor, as may be facilitated by other elements of the device, mixes audio signals from various sources within the device. For example, the device may be receiving a voice signal over the transceiver, as when engaged in a call. The device may have audio files stored in the memory for playback, such as audio files in the well known MP3 format, commonly used for compressing music. In addition, while other audio signals are being played over the speaker, the device may need to alert the user audibly, such as to alert the user of an incoming call, or of a second incoming call while engaged in a first call. These audio alerts are examples of priority audio signals, while the other audio signals are non-priority audio signals. The audio processor filters the non-priority audio signals to suppress audio content in the frequency band corresponding to the priority audio signal's frequency content.
Referring now to FIG. 2, there is shown a block schematic diagram 200 of an audio processor 202 and associated functions for use in an electronic device, in accordance with an embodiment of the invention. The audio processor includes an audio pre-processor 204 and a combiner 206. The audio processor has a plurality of input channels for receiving audio signals from an audio abstraction layer 216. The audio abstraction layer represents sources of audio signals within the device that are to be played by the audio processor over an audio transducer. Audio signal sources may include audio playback signals received from a playback engine 214, which generates the audio playback signal from an audio file or audio parameters stored in memory, as indicated by arrow 215.
In the present example, the audio processor has four input audio channels. These channels feed into the pre-processor 204. The signals fed into each channel may be operated on by various processes. For example, there may be an audio shaping (AS) process 218, a sampling rate conversion (SRC) process 220, a gain adjustment process 222, and a three dimensional (3D) audio process 224. Such processes, and others, are known, and may be applied to either analog or digital audio signals. Each of these processes is controlled by a control process 208 which sets the parameters and dimensions for each process, as directed by an audio manager function 210. The audio manager function 210 receives information from the audio abstraction layer 216 and from other sources in the device regarding each of the audio signals being processed so that the audio manager can appropriately control the pre-processing of each audio signal.
The audio signals may further be filtered by a filter process 225. Specifically, non-priority audio signals may be selectively filtered to suppress frequency content in bands occupied by the frequency content of priority audio signals on other channels. When an audio signal is presented to the audio processor, the audio manager may be informed that the signal on a particular channel is a priority signal. In accordance with the invention, the audio manager will cause other channels to be filtered accordingly, suppressing frequency content in spectral regions used by the priority signals. Subsequent to the preprocessing, the audio signals are combined in a combiner or mixer 206, resulting in an output signal 226. The output signal may itself have multiple channels, such as in the case of stereo output.
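To make the FIG. 2 structure concrete, the sketch below models each input channel as a small pre-processing chain feeding a mixer, with the audio manager installing a priority filter on the non-priority channels. The class and attribute names are hypothetical; the patent does not prescribe this structure.

```python
# Hypothetical model of the FIG. 2 channel / pre-processor / combiner layout.
# Class names, attributes, and the single gain stage are illustrative only.
import numpy as np


class Channel:
    def __init__(self, gain=1.0, is_priority=False):
        self.gain = gain
        self.is_priority = is_priority
        self.priority_filter = None  # installed by the audio manager when a
                                     # priority signal appears on another channel

    def preprocess(self, block):
        block = self.gain * block          # gain adjustment stage (222)
        # audio shaping, sample rate conversion, and 3D stages would go here
        if self.priority_filter is not None and not self.is_priority:
            block = self.priority_filter(block)  # suppress the priority band
        return block


def combine(channels, blocks):
    """Combiner/mixer (206): pre-process each channel's block and sum them."""
    return np.sum([ch.preprocess(b) for ch, b in zip(channels, blocks)], axis=0)
```

In this sketch the "audio manager" role reduces to setting `priority_filter` on the non-priority channels while the alert channel is open, and clearing it afterward.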
Referring now to FIG. 3, there is shown a flow chart diagram 300 of a method of mixing audio signals, in accordance with an embodiment of the invention. The diagram shows two audio signal paths or channels involved in the invention, a non-priority channel 302 and a priority channel 304. Additional priority channels and non-priority channels may be provided, but for the sake of clarity, only one of each is presented here. The non-priority channel is an audio channel in which a non-priority audio signal is received and processed. The priority channel is an audio channel in which a priority audio signal is received and processed. A priority audio signal is any signal which is meant to be heard over other, non-priority audio signals presently being played for the user of the device to hear. For example, the user may be listening to music produced by playback of an audio file when the device receives an incoming call page from a communication network. To alert the user to the incoming call, the device will combine an alert tone with the music signal, giving the alert tone a priority status and the music a non-priority status to ensure the user hears the alert tone and that it isn't perceptibly lost in the music. At the same time, however, the invention doesn't simply stop the music to play the alert tone, but rather processes the signals in a way that allows the user to hear both.
Upon receiving both the priority and non-priority audio signals, the signals are typically buffered 306, 308 at the audio processor. Upon processing the non-priority audio signal, the audio processor determines if a priority filter is to be applied to the non-priority audio signal 310. That is, if a priority audio signal is present, the priority filter will be applied. The filter will preferably be in the form of one or more stop-band filters that correspond to the frequency band occupied by the priority signal. If the filter is not enabled, the channel data or signal is produced 312 without the priority filter.
When the priority audio signal is present, the audio processor must determine the frequency band or bands occupied by the priority signal 314. More than one frequency band may be occupied if, for example, the priority signal is a dual tone multi-frequency (DTMF) signal. The frequency band or bands occupied by the priority signal may be determined by the equivalent of a Fourier transform performed on the signal while it is provided to the audio processor, as indicated in block 318, or alternatively the function providing the priority signal may simply inform the audio manager of the frequency parameters. Upon receiving the priority signal, the device implements the priority filter in the non-priority signals, as indicated by box 316. When the filter is to be enabled, a stop-band filter is applied to the non-priority signal, as indicated by box 320. The stop-band of the filter corresponds to the band or bands occupied by the priority signal. The filter continues to be applied as long as the priority signal is present, as indicated by decision box 321. Once the priority signal ceases, the priority channel closes, as determined at box 322. Accordingly, the stop-band filter is faded out of the non-priority signal, as indicated at box 324. While the priority signal is present, the channel data or signal is produced by the pre-processor, as indicated at box 326. The non-priority and priority signals may be combined or summed 328, and then the resulting summed signal may be scaled 330 and buffered 332 for output to a digital to analog converter, in the case of digital signals. It should be noted that the process shown here for the non-priority signal does not include any additional processing such as audio shaping or other audio effects that may occur in the channel.
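A minimal block-by-block sketch of this flow is shown below, assuming the stop-band filter is supplied as a callable (for example, one built as in the earlier sketch). The fade length, scaling factor, and function names are assumptions, not values taken from the patent.

```python
# Sketch of the FIG. 3 per-block flow (hypothetical names and parameters).
import numpy as np

FADE_SAMPLES = 480  # assumed ~10 ms fade at 48 kHz when the filter is removed


def process_block(non_priority, priority, stopband, fading_out=False, scale=0.5):
    """Filter the non-priority block while a priority block is present,
    fade the filter out once the priority channel closes, then sum and scale."""
    if priority is not None:                       # priority channel is open
        shaped = stopband(non_priority)            # apply the stop-band filter
    elif fading_out:                               # priority channel just closed
        filtered = stopband(non_priority)
        n = min(FADE_SAMPLES, len(non_priority))
        ramp = np.concatenate([np.linspace(1.0, 0.0, n),
                               np.zeros(len(non_priority) - n)])
        shaped = ramp * filtered + (1.0 - ramp) * non_priority  # fade filter out
    else:
        shaped = non_priority                      # filter not enabled
    summed = shaped + (priority if priority is not None else 0.0)
    return np.clip(scale * summed, -1.0, 1.0)      # scale before the DAC buffer
```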
Referring now to FIG. 4, there is shown a frequency chart diagram 400 of a priority audio signal frequency graph 402, an unfiltered non-priority audio signal frequency graph 404, and a filtered non-priority audio signal frequency graph 406, in accordance with the invention. The charts indicate the frequency spectral content of the signals at a given point in time. Thus, it can be seen that the priority signal 402 occupies a frequency band 408. As noted herein, however, the priority signal may have more than one frequency band, or it may alternate frequency bands. The unfiltered non-priority signal frequency graph shows the frequency spectrum 410 occupied by the non-priority audio signal. The example shown is only representative, and not intended to show any specific signal. In that regard the curve shown characterizes an envelope of the signal more than the precise frequency content, magnitude of harmonics, and so on. It is expected that the non-priority signal occupies a much wider region of the frequency spectrum. The audio processor, upon determining the frequency band or bands occupied by the priority signal, generates a stop-band filter to apply to the non-priority signal or signals. The stop-band filter will suppress frequency content in the non-priority signal in the spectral region corresponding to the band or bands occupied by the priority signal. The stop-band filter is preferably faded in over a brief period of time to avoid transients. The result of applying the stop-band filter to the non-priority signal is illustrated in chart 406. The filtered non-priority signal 412 has a notch in the region corresponding to the band occupied by the priority signal. According to the invention, the priority signal of 402 is then combined with the filtered non-priority signal of 406 to produce the desired output signal, where the priority signal can be heard without audible interference from the non-priority signal. It is contemplated that more than one priority signal may be present at a given time. Priority audio signals are not filtered or suppressed, even when they overlap in frequency. Only non-priority audio signals are filtered.
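The notch behavior shown in FIG. 4 can be approximated with a standard IIR notch filter that is faded in over a short window to avoid audible transients. The tone frequency, Q, and fade duration below are assumptions for illustration, not values taken from the patent.

```python
# Sketch: notch out an assumed single-tone priority band and fade the notch in.
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 48_000  # assumed sample rate in Hz


def notch_with_fade(non_priority, tone_hz=1000.0, q=8.0, fade_ms=10.0):
    """Apply a notch (stop-band) at the priority tone, cross-fading from the
    unfiltered signal over fade_ms to avoid a transient at filter onset."""
    b, a = iirnotch(tone_hz, q, fs=FS)
    filtered = lfilter(b, a, non_priority)
    n_fade = min(int(FS * fade_ms / 1000.0), len(non_priority))
    ramp = np.ones(len(non_priority))
    ramp[:n_fade] = np.linspace(0.0, 1.0, n_fade)
    return ramp * filtered + (1.0 - ramp) * non_priority
```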
This invention can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Claims (16)

1. A method of mixing audio signals at an audio processor, comprising:
receiving at the audio processor a priority audio signal occupying at least one frequency band;
receiving at the audio processor at least one non-priority audio signal having a bandwidth greater than the at least one frequency band;
stop-band filtering the at least one non-priority audio signal to provide at least one filtered non-priority audio signal, wherein the at least one frequency band is suppressed in the non-priority audio signal by the stop-band filtering and frequencies outside the at least one frequency band are not suppressed by the stop-band filtering; and
combining the priority audio signal and the at least one filtered non-priority audio signal to provide an output signal.
2. A method of mixing audio signals as defined in claim 1, further comprising:
determining the at least one frequency band from the priority audio signal upon receiving the priority audio signal.
3. A method of mixing audio signals as defined in claim 1, wherein the at least one frequency band of the priority audio signal is known at the audio processor prior to receiving the priority audio signal at the audio processor.
4. A method of mixing audio signals as defined in claim 1, further comprising:
providing the output signal to an audio transducer.
5. A method of mixing audio signals as defined in claim 1, wherein the audio processor is disposed in a mobile communication device, and receiving at the audio processor the priority audio signal comprises:
receiving an alert tone for alerting a user of the mobile communication device.
6. A method of mixing audio signals as defined in claim 1, wherein receiving at the audio processor at least one non-priority audio signal comprises:
receiving at least one audio playback signal.
7. A method of mixing audio signals as defined in claim 6, wherein the at least one audio playback signal is derived from an audio file.
8. A method of mixing audio signals as defined in claim 1, further comprising:
pre-processing at least one of the non-priority audio signals to enhance the audio content of the at least one non-priority audio signal.
9. A method of mixing audio signals as defined in claim 1, wherein the priority audio signal is a first priority audio signal, the method further comprising:
receiving at the audio processor a second priority audio signal concurrently with receiving the first priority audio signal, the second priority audio signal occupying at least one frequency band; and
filtering the at least one non-priority audio signal to suppress frequency content of the at least one non-priority audio signal in the at least one frequency band of the second priority audio signal to provide at least one filtered non-priority audio signal.
10. A mobile communication device, comprising:
an audio processor having a plurality of audio input channels;
a speaker coupled to the audio processor;
at least one non-priority audio signal source operatively coupled to the audio processor for providing at least one non-priority audio signal; and
a priority audio signal source operatively coupled to the audio processor for providing a priority audio signal, the priority audio signal occupying at least one frequency band that is less than a bandwidth of the at least one non-priority audio signal;
wherein the audio processor stop-band filters the at least one non-priority audio signal to provide at least one filtered non-priority audio signal, wherein the at least one frequency band is suppressed in the non-priority audio signal by the stop-band filtering and frequencies outside the at least one frequency band are not suppressed by the stop-band filtering; and
wherein the audio processor combines the at least one filtered non-priority audio signal and the priority audio signal to provide an output signal to the speaker.
11. A mobile communication device as defined in claim 10, wherein the priority audio signal source is an alert signal source.
12. A mobile communication device as defined in claim 10, wherein the at least one non-priority audio signal source is at least one audio playback signal source.
13. A mobile communication device as defined in claim 12, wherein the at least one audio playback signal source is an audio file stored on the mobile communication device.
14. A mobile communication device as defined in claim 10, wherein the audio processor determines the at least one frequency band of the priority audio signal from the priority audio signal when it is received at the audio processor.
15. A mobile communication device as defined in claim 10, wherein the at least one frequency band is characterized prior to receiving the priority audio signal at the audio processor.
16. A mobile communication device as defined in claim 10 wherein the priority audio signal source is a first priority audio signal source, the mobile communication device further comprises:
a second priority audio signal source operatively coupled to the audio processor for providing a second priority audio signal, the priority audio signal occupying at least one frequency band;
wherein the audio processor filters the at least one non-priority audio signal by suppressing frequency content of the at least one non-priority audio signal in the at least one frequency band of the second priority audio signal to provide at least one filtered non-priority audio signal.
US11/610,155 2006-12-13 2006-12-13 Method and apparatus for mixing priority and non-priority audio signals Active 2030-06-03 US8391501B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/610,155 US8391501B2 (en) 2006-12-13 2006-12-13 Method and apparatus for mixing priority and non-priority audio signals
PCT/US2007/085992 WO2008076607A2 (en) 2006-12-13 2007-11-30 Method and apparatus for mixing priority and non-priority audio signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/610,155 US8391501B2 (en) 2006-12-13 2006-12-13 Method and apparatus for mixing priority and non-priority audio signals

Publications (2)

Publication Number Publication Date
US20080144858A1 US20080144858A1 (en) 2008-06-19
US8391501B2 true US8391501B2 (en) 2013-03-05

Family

ID=39400634

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/610,155 Active 2030-06-03 US8391501B2 (en) 2006-12-13 2006-12-13 Method and apparatus for mixing priority and non-priority audio signals

Country Status (2)

Country Link
US (1) US8391501B2 (en)
WO (1) WO2008076607A2 (en)

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110154509A1 (en) * 2006-06-13 2011-06-23 Roman Mostinski Method and device for providing a security breach indicative audio alert
US9204214B2 (en) 2007-04-13 2015-12-01 Personics Holdings, Llc Method and device for voice operated control
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9270244B2 (en) 2013-03-13 2016-02-23 Personics Holdings, Llc System and method to detect close voice sources and automatically enhance situation awareness
US9271077B2 (en) 2013-12-17 2016-02-23 Personics Holdings, Llc Method and system for directional enhancement of sound using small microphone arrays
US9363601B2 (en) 2014-02-06 2016-06-07 Sonos, Inc. Audio output balancing
US9367283B2 (en) 2014-07-22 2016-06-14 Sonos, Inc. Audio settings
US9369104B2 (en) 2014-02-06 2016-06-14 Sonos, Inc. Audio output balancing
US9419575B2 (en) 2014-03-17 2016-08-16 Sonos, Inc. Audio settings based on environment
US9456277B2 (en) 2011-12-21 2016-09-27 Sonos, Inc. Systems, methods, and apparatus to filter audio
US9519454B2 (en) 2012-08-07 2016-12-13 Sonos, Inc. Acoustic signatures
US9525931B2 (en) 2012-08-31 2016-12-20 Sonos, Inc. Playback based on received sound waves
US9524098B2 (en) 2012-05-08 2016-12-20 Sonos, Inc. Methods and systems for subwoofer calibration
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US9648422B2 (en) 2012-06-28 2017-05-09 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US9668049B2 (en) 2012-06-28 2017-05-30 Sonos, Inc. Playback device calibration user interfaces
US9690271B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US9706280B2 (en) 2007-04-13 2017-07-11 Personics Holdings, Llc Method and device for voice operated control
US9712912B2 (en) 2015-08-21 2017-07-18 Sonos, Inc. Manipulation of playback device response using an acoustic filter
US9729115B2 (en) 2012-04-27 2017-08-08 Sonos, Inc. Intelligently increasing the sound level of player
US9729118B2 (en) 2015-07-24 2017-08-08 Sonos, Inc. Loudness matching
US9734243B2 (en) 2010-10-13 2017-08-15 Sonos, Inc. Adjusting a playback device
US9736610B2 (en) 2015-08-21 2017-08-15 Sonos, Inc. Manipulation of playback device response using signal processing
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US9749760B2 (en) 2006-09-12 2017-08-29 Sonos, Inc. Updating zone configuration in a multi-zone media system
US9749763B2 (en) 2014-09-09 2017-08-29 Sonos, Inc. Playback device calibration
US9748646B2 (en) 2011-07-19 2017-08-29 Sonos, Inc. Configuration based on speaker orientation
US9756424B2 (en) 2006-09-12 2017-09-05 Sonos, Inc. Multi-channel pairing in a media system
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US9766853B2 (en) 2006-09-12 2017-09-19 Sonos, Inc. Pair volume control
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9886234B2 (en) 2016-01-28 2018-02-06 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US9930470B2 (en) 2011-12-29 2018-03-27 Sonos, Inc. Sound field calibration using listener localization
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
US9973851B2 (en) 2014-12-01 2018-05-15 Sonos, Inc. Multi-channel playback of audio content
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
USD827671S1 (en) 2016-09-30 2018-09-04 Sonos, Inc. Media playback device
USD829687S1 (en) 2013-02-25 2018-10-02 Sonos, Inc. Playback device
US10108393B2 (en) 2011-04-18 2018-10-23 Sonos, Inc. Leaving group and smart line-in processing
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
USD842271S1 (en) 2012-06-19 2019-03-05 Sonos, Inc. Playback device
US10284983B2 (en) 2015-04-24 2019-05-07 Sonos, Inc. Playback device calibration user interfaces
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US10306364B2 (en) 2012-09-28 2019-05-28 Sonos, Inc. Audio processing adjustments for playback devices based on determined characteristics of audio content
USD851057S1 (en) 2016-09-30 2019-06-11 Sonos, Inc. Speaker grill with graduated hole sizing over a transition area for a media device
USD855587S1 (en) 2015-04-25 2019-08-06 Sonos, Inc. Playback device
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10405082B2 (en) 2017-10-23 2019-09-03 Staton Techiya, Llc Automatic keyword pass-through system
US10412473B2 (en) 2016-09-30 2019-09-10 Sonos, Inc. Speaker grill with graduated hole sizing over a transition area for a media device
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10585639B2 (en) 2015-09-17 2020-03-10 Sonos, Inc. Facilitating calibration of an audio playback device
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
USD886765S1 (en) 2017-03-13 2020-06-09 Sonos, Inc. Media playback device
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device
USD906278S1 (en) 2015-04-25 2020-12-29 Sonos, Inc. Media player device
USD920278S1 (en) 2017-03-13 2021-05-25 Sonos, Inc. Media playback device with lights
USD921611S1 (en) 2015-09-17 2021-06-08 Sonos, Inc. Media player
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
US11217237B2 (en) 2008-04-14 2022-01-04 Staton Techiya, Llc Method and device for voice operated control
US11265652B2 (en) 2011-01-25 2022-03-01 Sonos, Inc. Playback device pairing
US11317202B2 (en) 2007-04-13 2022-04-26 Staton Techiya, Llc Method and device for voice operated control
US11403062B2 (en) 2015-06-11 2022-08-02 Sonos, Inc. Multiple groupings in a playback system
US11429343B2 (en) 2011-01-25 2022-08-30 Sonos, Inc. Stereo playback configuration and control
US11481182B2 (en) 2016-10-17 2022-10-25 Sonos, Inc. Room association based on name
US11610587B2 (en) 2008-09-22 2023-03-21 Staton Techiya Llc Personalized sound management and method
USD988294S1 (en) 2014-08-13 2023-06-06 Sonos, Inc. Playback device with icon

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8200169B2 (en) * 2007-02-28 2012-06-12 Ntt Docomo, Inc. Transmitter apparatus, mobile communication system, base station and communication enable signal transmitter apparatus
US8929561B2 (en) 2011-03-16 2015-01-06 Apple Inc. System and method for automated audio mix equalization and mix visualization
US10264030B2 (en) 2016-02-22 2019-04-16 Sonos, Inc. Networked microphone device control
US9965247B2 (en) 2016-02-22 2018-05-08 Sonos, Inc. Voice controlled media playback system based on user profile
US9947316B2 (en) 2016-02-22 2018-04-17 Sonos, Inc. Voice control of a media playback system
US10743101B2 (en) 2016-02-22 2020-08-11 Sonos, Inc. Content mixing
US9978390B2 (en) 2016-06-09 2018-05-22 Sonos, Inc. Dynamic player selection for audio signal processing
US10115400B2 (en) 2016-08-05 2018-10-30 Sonos, Inc. Multiple voice services
US9942678B1 (en) 2016-09-27 2018-04-10 Sonos, Inc. Audio playback settings for voice interaction
US10181323B2 (en) 2016-10-19 2019-01-15 Sonos, Inc. Arbitration-based voice recognition
US10475449B2 (en) 2017-08-07 2019-11-12 Sonos, Inc. Wake-word detection suppression
US10048930B1 (en) 2017-09-08 2018-08-14 Sonos, Inc. Dynamic computation of system response volume
US10446165B2 (en) 2017-09-27 2019-10-15 Sonos, Inc. Robust short-time fourier transform acoustic echo cancellation during audio playback
US10621981B2 (en) 2017-09-28 2020-04-14 Sonos, Inc. Tone interference cancellation
US10482868B2 (en) 2017-09-28 2019-11-19 Sonos, Inc. Multi-channel acoustic echo cancellation
US10466962B2 (en) 2017-09-29 2019-11-05 Sonos, Inc. Media playback system with voice assistance
EP3713373B1 (en) * 2018-01-11 2021-11-10 Honor Device Co., Ltd. Terminal device and DSD audio playback method
US11343614B2 (en) 2018-01-31 2022-05-24 Sonos, Inc. Device designation of playback and network microphone device arrangements
US11175880B2 (en) 2018-05-10 2021-11-16 Sonos, Inc. Systems and methods for voice-assisted media content selection
US10959029B2 (en) 2018-05-25 2021-03-23 Sonos, Inc. Determining and adapting to changes in microphone performance of playback devices
US10681460B2 (en) 2018-06-28 2020-06-09 Sonos, Inc. Systems and methods for associating playback devices with voice assistant services
US10461710B1 (en) 2018-08-28 2019-10-29 Sonos, Inc. Media playback system with maximum volume setting
US11076035B2 (en) 2018-08-28 2021-07-27 Sonos, Inc. Do not disturb feature for audio notifications
US10587430B1 (en) 2018-09-14 2020-03-10 Sonos, Inc. Networked devices, systems, and methods for associating playback devices based on sound codes
US11024331B2 (en) 2018-09-21 2021-06-01 Sonos, Inc. Voice detection optimization using sound metadata
US11100923B2 (en) 2018-09-28 2021-08-24 Sonos, Inc. Systems and methods for selective wake word detection using neural network models
US11899519B2 (en) 2018-10-23 2024-02-13 Sonos, Inc. Multiple stage network microphone device with reduced power consumption and processing load
EP3654249A1 (en) 2018-11-15 2020-05-20 Snips Dilated convolutions and gating for efficient keyword spotting
US11183183B2 (en) 2018-12-07 2021-11-23 Sonos, Inc. Systems and methods of operating media playback systems having multiple voice assistant services
US11132989B2 (en) 2018-12-13 2021-09-28 Sonos, Inc. Networked microphone devices, systems, and methods of localized arbitration
US10602268B1 (en) 2018-12-20 2020-03-24 Sonos, Inc. Optimization of network microphone devices using noise classification
US10867604B2 (en) 2019-02-08 2020-12-15 Sonos, Inc. Devices, systems, and methods for distributed voice processing
US11315556B2 (en) 2019-02-08 2022-04-26 Sonos, Inc. Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification
US11120794B2 (en) 2019-05-03 2021-09-14 Sonos, Inc. Voice assistant persistence across multiple network microphone devices
US10779105B1 (en) * 2019-05-31 2020-09-15 Apple Inc. Sending notification and multi-channel audio over channel limited link for independent gain control
US11361756B2 (en) 2019-06-12 2022-06-14 Sonos, Inc. Conditional wake word eventing based on environment
US11200894B2 (en) 2019-06-12 2021-12-14 Sonos, Inc. Network microphone device with command keyword eventing
US10586540B1 (en) 2019-06-12 2020-03-10 Sonos, Inc. Network microphone device with command keyword conditioning
US10871943B1 (en) 2019-07-31 2020-12-22 Sonos, Inc. Noise classification for event detection
US11138975B2 (en) 2019-07-31 2021-10-05 Sonos, Inc. Locally distributed keyword detection
US11189286B2 (en) 2019-10-22 2021-11-30 Sonos, Inc. VAS toggle based on device orientation
US11200900B2 (en) 2019-12-20 2021-12-14 Sonos, Inc. Offline voice control
US11562740B2 (en) 2020-01-07 2023-01-24 Sonos, Inc. Voice verification for media playback
US11308958B2 (en) 2020-02-07 2022-04-19 Sonos, Inc. Localized wakeword verification
US11308962B2 (en) 2020-05-20 2022-04-19 Sonos, Inc. Input detection windowing
US11482224B2 (en) 2020-05-20 2022-10-25 Sonos, Inc. Command keywords with input detection windowing
US11698771B2 (en) 2020-08-25 2023-07-11 Sonos, Inc. Vocal guidance engines for playback devices

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5652800A (en) 1995-11-02 1997-07-29 Peavey Electronics Corporation Automatic mixer priority circuit
US6011851A (en) 1997-06-23 2000-01-04 Cisco Technology, Inc. Spatial audio processing method and apparatus for context switching between telephony applications
US6230131B1 (en) 1998-04-29 2001-05-08 Matsushita Electric Industrial Co., Ltd. Method for generating spelling-to-pronunciation decision tree
US6662022B1 (en) 1999-04-19 2003-12-09 Sanyo Electric Co., Ltd. Portable telephone set
US20040186734A1 (en) 2002-12-28 2004-09-23 Samsung Electronics Co., Ltd. Method and apparatus for mixing audio stream and information storage medium thereof
JP2005079922A (en) 2003-08-29 2005-03-24 Sharp Corp Portable telephone set with broadcast receiving function
EP0995191B1 (en) 1998-05-18 2005-08-31 Koninklijke Philips Electronics N.V. Mixing audio streams
US20060023900A1 (en) * 2004-07-28 2006-02-02 Erhart George W Method and apparatus for priority based audio mixing
US20060094474A1 (en) * 2002-10-15 2006-05-04 Peter Zatloukal Mobile digital communication/computing device having a context sensitive audio system
JP2006186651A (en) 2004-12-27 2006-07-13 Kyocera Corp Call system, call device used for it, and speaker used for call system and call device
JP2006197625A (en) 1999-04-19 2006-07-27 Sanyo Electric Co., Ltd. Mobile phone
US7272232B1 (en) * 2001-05-30 2007-09-18 Palmsource, Inc. System and method for prioritizing and balancing simultaneous audio outputs in a handheld device
US20070218878A1 (en) 2006-03-16 2007-09-20 Charbel Khawand Method and system for prioritizing audio channels at a mixer level

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5652800A (en) 1995-11-02 1997-07-29 Peavey Electronics Corporation Automatic mixer priority circuit
US6011851A (en) 1997-06-23 2000-01-04 Cisco Technology, Inc. Spatial audio processing method and apparatus for context switching between telephony applications
US6230131B1 (en) 1998-04-29 2001-05-08 Matsushita Electric Industrial Co., Ltd. Method for generating spelling-to-pronunciation decision tree
EP0995191B1 (en) 1998-05-18 2005-08-31 Koninklijke Philips Electronics N.V. Mixing audio streams
US6662022B1 (en) 1999-04-19 2003-12-09 Sanyo Electric Co., Ltd. Portable telephone set
JP2006197625A (en) 1999-04-19 2006-07-27 Sanyo Electric Co., Ltd. Mobile phone
US7272232B1 (en) * 2001-05-30 2007-09-18 Palmsource, Inc. System and method for prioritizing and balancing simultaneous audio outputs in a handheld device
US20060094474A1 (en) * 2002-10-15 2006-05-04 Peter Zatloukal Mobile digital communication/computing device having a context sensitive audio system
US20040193430A1 (en) 2002-12-28 2004-09-30 Samsung Electronics Co., Ltd. Method and apparatus for mixing audio stream and information storage medium thereof
US20040186734A1 (en) 2002-12-28 2004-09-23 Samsung Electronics Co., Ltd. Method and apparatus for mixing audio stream and information storage medium thereof
JP2005079922A (en) 2003-08-29 2005-03-24 Sharp Corp Portable telephone set with broadcast receiving function
US20060023900A1 (en) * 2004-07-28 2006-02-02 Erhart George W Method and apparatus for priority based audio mixing
JP2006186651A (en) 2004-12-27 2006-07-13 Kyocera Corp Call system, call device used for it, and speaker used for call system and call device
US20070218878A1 (en) 2006-03-16 2007-09-20 Charbel Khawand Method and system for prioritizing audio channels at a mixer level

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Patent Cooperation Treaty, "Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration" for PCT/US2007/085992, Jun. 4, 2008, pp. 1-12.
United States Patent and Trademark Office, "Final Rejection" for U.S. Appl. No. 11/378,128, Sep. 15, 2011, 17 pages.
United States Patent and Trademark Office, "Final Rejection" for U.S. Appl. No. 11/378,128, Sep. 30, 2010, 13 pages.
United States Patent and Trademark Office, "Non-Final Rejection" for U.S. Appl. No. 11/378,128, dated Apr. 4, 2011, 13 pages.
United States Patent and Trademark Office, "Non-Final Rejection" for U.S. Appl. No. 11/378,128, Mar. 18, 2010, 19 pages.

Cited By (255)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110154509A1 (en) * 2006-06-13 2011-06-23 Roman Mostinski Method and device for providing a security breach indicative audio alert
US9094441B2 (en) * 2006-06-13 2015-07-28 Freescale Semiconductor, Inc. Method and device for providing a security breach indicative audio alert
US9781138B2 (en) 2006-06-13 2017-10-03 Nxp Usa, Inc. Method and device for providing a security breach indicative audio alert
US10469966B2 (en) 2006-09-12 2019-11-05 Sonos, Inc. Zone scene management
US10966025B2 (en) 2006-09-12 2021-03-30 Sonos, Inc. Playback device pairing
US10897679B2 (en) 2006-09-12 2021-01-19 Sonos, Inc. Zone scene management
US9928026B2 (en) 2006-09-12 2018-03-27 Sonos, Inc. Making and indicating a stereo pair
US11082770B2 (en) 2006-09-12 2021-08-03 Sonos, Inc. Multi-channel pairing in a media system
US10028056B2 (en) 2006-09-12 2018-07-17 Sonos, Inc. Multi-channel pairing in a media system
US10848885B2 (en) 2006-09-12 2020-11-24 Sonos, Inc. Zone scene management
US10448159B2 (en) 2006-09-12 2019-10-15 Sonos, Inc. Playback device pairing
US9813827B2 (en) 2006-09-12 2017-11-07 Sonos, Inc. Zone configuration based on playback selections
US11540050B2 (en) 2006-09-12 2022-12-27 Sonos, Inc. Playback device pairing
US11388532B2 (en) 2006-09-12 2022-07-12 Sonos, Inc. Zone scene activation
US10136218B2 (en) 2006-09-12 2018-11-20 Sonos, Inc. Playback device pairing
US11385858B2 (en) 2006-09-12 2022-07-12 Sonos, Inc. Predefined multi-channel listening environment
US9749760B2 (en) 2006-09-12 2017-08-29 Sonos, Inc. Updating zone configuration in a multi-zone media system
US10555082B2 (en) 2006-09-12 2020-02-04 Sonos, Inc. Playback device pairing
US9860657B2 (en) 2006-09-12 2018-01-02 Sonos, Inc. Zone configurations maintained by playback device
US10228898B2 (en) 2006-09-12 2019-03-12 Sonos, Inc. Identification of playback device and stereo pair names
US9756424B2 (en) 2006-09-12 2017-09-05 Sonos, Inc. Multi-channel pairing in a media system
US9766853B2 (en) 2006-09-12 2017-09-19 Sonos, Inc. Pair volume control
US10306365B2 (en) 2006-09-12 2019-05-28 Sonos, Inc. Playback device pairing
US10129624B2 (en) 2007-04-13 2018-11-13 Staton Techiya, Llc Method and device for voice operated control
US10382853B2 (en) 2007-04-13 2019-08-13 Staton Techiya, Llc Method and device for voice operated control
US10631087B2 (en) 2007-04-13 2020-04-21 Staton Techiya, Llc Method and device for voice operated control
US9706280B2 (en) 2007-04-13 2017-07-11 Personics Holdings, Llc Method and device for voice operated control
US9204214B2 (en) 2007-04-13 2015-12-01 Personics Holdings, Llc Method and device for voice operated control
US10051365B2 (en) 2007-04-13 2018-08-14 Staton Techiya, Llc Method and device for voice operated control
US11317202B2 (en) 2007-04-13 2022-04-26 Staton Techiya, Llc Method and device for voice operated control
US11217237B2 (en) 2008-04-14 2022-01-04 Staton Techiya, Llc Method and device for voice operated control
US11610587B2 (en) 2008-09-22 2023-03-21 Staton Techiya, Llc Personalized sound management and method
US11853184B2 (en) 2010-10-13 2023-12-26 Sonos, Inc. Adjusting a playback device
US11327864B2 (en) 2010-10-13 2022-05-10 Sonos, Inc. Adjusting a playback device
US9734243B2 (en) 2010-10-13 2017-08-15 Sonos, Inc. Adjusting a playback device
US11429502B2 (en) 2010-10-13 2022-08-30 Sonos, Inc. Adjusting a playback device
US11265652B2 (en) 2011-01-25 2022-03-01 Sonos, Inc. Playback device pairing
US11758327B2 (en) 2011-01-25 2023-09-12 Sonos, Inc. Playback device pairing
US11429343B2 (en) 2011-01-25 2022-08-30 Sonos, Inc. Stereo playback configuration and control
US10108393B2 (en) 2011-04-18 2018-10-23 Sonos, Inc. Leaving group and smart line-in processing
US11531517B2 (en) 2011-04-18 2022-12-20 Sonos, Inc. Networked playback device
US10853023B2 (en) 2011-04-18 2020-12-01 Sonos, Inc. Networked playback device
US11444375B2 (en) 2011-07-19 2022-09-13 Sonos, Inc. Frequency routing based on orientation
US9748647B2 (en) 2011-07-19 2017-08-29 Sonos, Inc. Frequency routing based on orientation
US9748646B2 (en) 2011-07-19 2017-08-29 Sonos, Inc. Configuration based on speaker orientation
US10256536B2 (en) 2011-07-19 2019-04-09 Sonos, Inc. Frequency routing based on orientation
US10965024B2 (en) 2011-07-19 2021-03-30 Sonos, Inc. Frequency routing based on orientation
US9456277B2 (en) 2011-12-21 2016-09-27 Sonos, Inc. Systems, methods, and apparatus to filter audio
US9906886B2 (en) 2011-12-21 2018-02-27 Sonos, Inc. Audio filters based on configuration
US11122382B2 (en) 2011-12-29 2021-09-14 Sonos, Inc. Playback based on acoustic signals
US11825290B2 (en) 2011-12-29 2023-11-21 Sonos, Inc. Media playback based on sensor data
US11910181B2 (en) 2011-12-29 2024-02-20 Sonos, Inc. Media playback based on sensor data
US11849299B2 (en) 2011-12-29 2023-12-19 Sonos, Inc. Media playback based on sensor data
US11290838B2 (en) 2011-12-29 2022-03-29 Sonos, Inc. Playback based on user presence detection
US11889290B2 (en) 2011-12-29 2024-01-30 Sonos, Inc. Media playback based on sensor data
US10455347B2 (en) 2011-12-29 2019-10-22 Sonos, Inc. Playback based on number of listeners
US10986460B2 (en) 2011-12-29 2021-04-20 Sonos, Inc. Grouping based on acoustic signals
US10334386B2 (en) 2011-12-29 2019-06-25 Sonos, Inc. Playback based on wireless signal
US11825289B2 (en) 2011-12-29 2023-11-21 Sonos, Inc. Media playback based on sensor data
US10945089B2 (en) 2011-12-29 2021-03-09 Sonos, Inc. Playback based on user settings
US11197117B2 (en) 2011-12-29 2021-12-07 Sonos, Inc. Media playback based on sensor data
US9930470B2 (en) 2011-12-29 2018-03-27 Sonos, Inc. Sound field calibration using listener localization
US11528578B2 (en) 2011-12-29 2022-12-13 Sonos, Inc. Media playback based on sensor data
US11153706B1 (en) 2011-12-29 2021-10-19 Sonos, Inc. Playback based on acoustic signals
US10063202B2 (en) 2012-04-27 2018-08-28 Sonos, Inc. Intelligently modifying the gain parameter of a playback device
US9729115B2 (en) 2012-04-27 2017-08-08 Sonos, Inc. Intelligently increasing the sound level of player
US10720896B2 (en) 2012-04-27 2020-07-21 Sonos, Inc. Intelligently modifying the gain parameter of a playback device
US10097942B2 (en) 2012-05-08 2018-10-09 Sonos, Inc. Playback device calibration
US11812250B2 (en) 2012-05-08 2023-11-07 Sonos, Inc. Playback device calibration
US10771911B2 (en) 2012-05-08 2020-09-08 Sonos, Inc. Playback device calibration
US9524098B2 (en) 2012-05-08 2016-12-20 Sonos, Inc. Methods and systems for subwoofer calibration
US11457327B2 (en) 2012-05-08 2022-09-27 Sonos, Inc. Playback device calibration
USD906284S1 (en) 2012-06-19 2020-12-29 Sonos, Inc. Playback device
USD842271S1 (en) 2012-06-19 2019-03-05 Sonos, Inc. Playback device
US10045138B2 (en) 2012-06-28 2018-08-07 Sonos, Inc. Hybrid test tone for space-averaged room audio calibration using a moving microphone
US11064306B2 (en) 2012-06-28 2021-07-13 Sonos, Inc. Calibration state variable
US11800305B2 (en) 2012-06-28 2023-10-24 Sonos, Inc. Calibration interface
US9668049B2 (en) 2012-06-28 2017-05-30 Sonos, Inc. Playback device calibration user interfaces
US10791405B2 (en) 2012-06-28 2020-09-29 Sonos, Inc. Calibration indicator
US9961463B2 (en) 2012-06-28 2018-05-01 Sonos, Inc. Calibration indicator
US11368803B2 (en) 2012-06-28 2022-06-21 Sonos, Inc. Calibration of playback device(s)
US10045139B2 (en) 2012-06-28 2018-08-07 Sonos, Inc. Calibration state variable
US9690271B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration
US10296282B2 (en) 2012-06-28 2019-05-21 Sonos, Inc. Speaker calibration user interface
US9648422B2 (en) 2012-06-28 2017-05-09 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US10129674B2 (en) 2012-06-28 2018-11-13 Sonos, Inc. Concurrent multi-loudspeaker calibration
US9736584B2 (en) 2012-06-28 2017-08-15 Sonos, Inc. Hybrid test tone for space-averaged room audio calibration using a moving microphone
US9913057B2 (en) 2012-06-28 2018-03-06 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US10284984B2 (en) 2012-06-28 2019-05-07 Sonos, Inc. Calibration state variable
US11516606B2 (en) 2012-06-28 2022-11-29 Sonos, Inc. Calibration interface
US9749744B2 (en) 2012-06-28 2017-08-29 Sonos, Inc. Playback device calibration
US10674293B2 (en) 2012-06-28 2020-06-02 Sonos, Inc. Concurrent multi-driver calibration
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US11516608B2 (en) 2012-06-28 2022-11-29 Sonos, Inc. Calibration state variable
US10412516B2 (en) 2012-06-28 2019-09-10 Sonos, Inc. Calibration of playback devices
US9788113B2 (en) 2012-06-28 2017-10-10 Sonos, Inc. Calibration state variable
US9820045B2 (en) 2012-06-28 2017-11-14 Sonos, Inc. Playback calibration
US11729568B2 (en) 2012-08-07 2023-08-15 Sonos, Inc. Acoustic signatures in a playback system
US10051397B2 (en) 2012-08-07 2018-08-14 Sonos, Inc. Acoustic signatures
US9519454B2 (en) 2012-08-07 2016-12-13 Sonos, Inc. Acoustic signatures
US10904685B2 (en) 2012-08-07 2021-01-26 Sonos, Inc. Acoustic signatures in a playback system
US9998841B2 (en) 2012-08-07 2018-06-12 Sonos, Inc. Acoustic signatures
US9525931B2 (en) 2012-08-31 2016-12-20 Sonos, Inc. Playback based on received sound waves
US9736572B2 (en) 2012-08-31 2017-08-15 Sonos, Inc. Playback based on received sound waves
US10306364B2 (en) 2012-09-28 2019-05-28 Sonos, Inc. Audio processing adjustments for playback devices based on determined characteristics of audio content
USD829687S1 (en) 2013-02-25 2018-10-02 Sonos, Inc. Playback device
USD991224S1 (en) 2013-02-25 2023-07-04 Sonos, Inc. Playback device
USD848399S1 (en) 2013-02-25 2019-05-14 Sonos, Inc. Playback device
US9270244B2 (en) 2013-03-13 2016-02-23 Personics Holdings, Llc System and method to detect close voice sources and automatically enhance situation awareness
US9271077B2 (en) 2013-12-17 2016-02-23 Personics Holdings, Llc Method and system for directional enhancement of sound using small microphone arrays
US9781513B2 (en) 2014-02-06 2017-10-03 Sonos, Inc. Audio output balancing
US9363601B2 (en) 2014-02-06 2016-06-07 Sonos, Inc. Audio output balancing
US9549258B2 (en) 2014-02-06 2017-01-17 Sonos, Inc. Audio output balancing
US9369104B2 (en) 2014-02-06 2016-06-14 Sonos, Inc. Audio output balancing
US9544707B2 (en) 2014-02-06 2017-01-10 Sonos, Inc. Audio output balancing
US9794707B2 (en) 2014-02-06 2017-10-17 Sonos, Inc. Audio output balancing
US9516419B2 (en) 2014-03-17 2016-12-06 Sonos, Inc. Playback device setting according to threshold(s)
US9419575B2 (en) 2014-03-17 2016-08-16 Sonos, Inc. Audio settings based on environment
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9743208B2 (en) 2014-03-17 2017-08-22 Sonos, Inc. Playback device configuration based on proximity detection
US9872119B2 (en) 2014-03-17 2018-01-16 Sonos, Inc. Audio settings of multiple speakers in a playback device
US10511924B2 (en) 2014-03-17 2019-12-17 Sonos, Inc. Playback device with multiple sensors
US10299055B2 (en) 2014-03-17 2019-05-21 Sonos, Inc. Restoration of playback device configuration
US10863295B2 (en) 2014-03-17 2020-12-08 Sonos, Inc. Indoor/outdoor playback device calibration
US9344829B2 (en) 2014-03-17 2016-05-17 Sonos, Inc. Indication of barrier detection
US11540073B2 (en) 2014-03-17 2022-12-27 Sonos, Inc. Playback device self-calibration
US9521487B2 (en) 2014-03-17 2016-12-13 Sonos, Inc. Calibration adjustment based on barrier
US9521488B2 (en) 2014-03-17 2016-12-13 Sonos, Inc. Playback device setting based on distortion
US10412517B2 (en) 2014-03-17 2019-09-10 Sonos, Inc. Calibration of playback device to target curve
US10791407B2 (en) 2014-03-17 2020-09-29 Sonos, Inc. Playback device configuration
US11696081B2 (en) 2014-03-17 2023-07-04 Sonos, Inc. Audio settings based on environment
US9439021B2 (en) 2014-03-17 2016-09-06 Sonos, Inc. Proximity detection using audio pulse
US9439022B2 (en) 2014-03-17 2016-09-06 Sonos, Inc. Playback device speaker configuration based on proximity detection
US10051399B2 (en) 2014-03-17 2018-08-14 Sonos, Inc. Playback device configuration according to distortion threshold
US10129675B2 (en) 2014-03-17 2018-11-13 Sonos, Inc. Audio settings of multiple speakers in a playback device
US10061556B2 (en) 2014-07-22 2018-08-28 Sonos, Inc. Audio settings
US11803349B2 (en) 2014-07-22 2023-10-31 Sonos, Inc. Audio settings
US9367283B2 (en) 2014-07-22 2016-06-14 Sonos, Inc. Audio settings
USD988294S1 (en) 2014-08-13 2023-06-06 Sonos, Inc. Playback device with icon
US9936318B2 (en) 2014-09-09 2018-04-03 Sonos, Inc. Playback device calibration
US9781532B2 (en) 2014-09-09 2017-10-03 Sonos, Inc. Playback device calibration
US9749763B2 (en) 2014-09-09 2017-08-29 Sonos, Inc. Playback device calibration
US10154359B2 (en) 2014-09-09 2018-12-11 Sonos, Inc. Playback device calibration
US10599386B2 (en) 2014-09-09 2020-03-24 Sonos, Inc. Audio processing algorithms
US10127008B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Audio processing algorithm database
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US10271150B2 (en) 2014-09-09 2019-04-23 Sonos, Inc. Playback device calibration
US10701501B2 (en) 2014-09-09 2020-06-30 Sonos, Inc. Playback device calibration
US11029917B2 (en) 2014-09-09 2021-06-08 Sonos, Inc. Audio processing algorithms
US11625219B2 (en) 2014-09-09 2023-04-11 Sonos, Inc. Audio processing algorithms
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
US9910634B2 (en) 2014-09-09 2018-03-06 Sonos, Inc. Microphone calibration
US9973851B2 (en) 2014-12-01 2018-05-15 Sonos, Inc. Multi-channel playback of audio content
US11818558B2 (en) 2014-12-01 2023-11-14 Sonos, Inc. Audio generation in a media playback system
US11470420B2 (en) 2014-12-01 2022-10-11 Sonos, Inc. Audio generation in a media playback system
US10863273B2 (en) 2014-12-01 2020-12-08 Sonos, Inc. Modified directional effect
US10349175B2 (en) 2014-12-01 2019-07-09 Sonos, Inc. Modified directional effect
US10284983B2 (en) 2015-04-24 2019-05-07 Sonos, Inc. Playback device calibration user interfaces
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
USD906278S1 (en) 2015-04-25 2020-12-29 Sonos, Inc. Media player device
USD855587S1 (en) 2015-04-25 2019-08-06 Sonos, Inc. Playback device
USD934199S1 (en) 2015-04-25 2021-10-26 Sonos, Inc. Playback device
US11403062B2 (en) 2015-06-11 2022-08-02 Sonos, Inc. Multiple groupings in a playback system
US9729118B2 (en) 2015-07-24 2017-08-08 Sonos, Inc. Loudness matching
US9893696B2 (en) 2015-07-24 2018-02-13 Sonos, Inc. Loudness matching
US10129679B2 (en) 2015-07-28 2018-11-13 Sonos, Inc. Calibration error conditions
US9781533B2 (en) 2015-07-28 2017-10-03 Sonos, Inc. Calibration error conditions
US10462592B2 (en) 2015-07-28 2019-10-29 Sonos, Inc. Calibration error conditions
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US10034115B2 (en) 2015-08-21 2018-07-24 Sonos, Inc. Manipulation of playback device response using signal processing
US10433092B2 (en) 2015-08-21 2019-10-01 Sonos, Inc. Manipulation of playback device response using signal processing
US9736610B2 (en) 2015-08-21 2017-08-15 Sonos, Inc. Manipulation of playback device response using signal processing
US9712912B2 (en) 2015-08-21 2017-07-18 Sonos, Inc. Manipulation of playback device response using an acoustic filter
US9942651B2 (en) 2015-08-21 2018-04-10 Sonos, Inc. Manipulation of playback device response using an acoustic filter
US11528573B2 (en) 2015-08-21 2022-12-13 Sonos, Inc. Manipulation of playback device response using signal processing
US10812922B2 (en) 2015-08-21 2020-10-20 Sonos, Inc. Manipulation of playback device response using signal processing
US10149085B1 (en) 2015-08-21 2018-12-04 Sonos, Inc. Manipulation of playback device response using signal processing
US11197112B2 (en) 2015-09-17 2021-12-07 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11706579B2 (en) 2015-09-17 2023-07-18 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
USD921611S1 (en) 2015-09-17 2021-06-08 Sonos, Inc. Media player
US11099808B2 (en) 2015-09-17 2021-08-24 Sonos, Inc. Facilitating calibration of an audio playback device
US9992597B2 (en) 2015-09-17 2018-06-05 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US10419864B2 (en) 2015-09-17 2019-09-17 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11803350B2 (en) 2015-09-17 2023-10-31 Sonos, Inc. Facilitating calibration of an audio playback device
US10585639B2 (en) 2015-09-17 2020-03-10 Sonos, Inc. Facilitating calibration of an audio playback device
US10405117B2 (en) 2016-01-18 2019-09-03 Sonos, Inc. Calibration using multiple recording devices
US11432089B2 (en) 2016-01-18 2022-08-30 Sonos, Inc. Calibration using multiple recording devices
US10063983B2 (en) 2016-01-18 2018-08-28 Sonos, Inc. Calibration using multiple recording devices
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US10841719B2 (en) 2016-01-18 2020-11-17 Sonos, Inc. Calibration using multiple recording devices
US11800306B2 (en) 2016-01-18 2023-10-24 Sonos, Inc. Calibration using multiple recording devices
US11184726B2 (en) 2016-01-25 2021-11-23 Sonos, Inc. Calibration using listener locations
US10390161B2 (en) 2016-01-25 2019-08-20 Sonos, Inc. Calibration based on audio content type
US10735879B2 (en) 2016-01-25 2020-08-04 Sonos, Inc. Calibration based on grouping
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US11516612B2 (en) 2016-01-25 2022-11-29 Sonos, Inc. Calibration based on audio content
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US11006232B2 (en) 2016-01-25 2021-05-11 Sonos, Inc. Calibration based on audio content
US9886234B2 (en) 2016-01-28 2018-02-06 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US11526326B2 (en) 2016-01-28 2022-12-13 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US10296288B2 (en) 2016-01-28 2019-05-21 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US11194541B2 (en) 2016-01-28 2021-12-07 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US10592200B2 (en) 2016-01-28 2020-03-17 Sonos, Inc. Systems and methods of distributing audio to one or more playback devices
US11379179B2 (en) 2016-04-01 2022-07-05 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US11736877B2 (en) 2016-04-01 2023-08-22 Sonos, Inc. Updating playback device configuration information based on calibration data
US10405116B2 (en) 2016-04-01 2019-09-03 Sonos, Inc. Updating playback device configuration information based on calibration data
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US10402154B2 (en) 2016-04-01 2019-09-03 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US10880664B2 (en) 2016-04-01 2020-12-29 Sonos, Inc. Updating playback device configuration information based on calibration data
US10884698B2 (en) 2016-04-01 2021-01-05 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US11212629B2 (en) 2016-04-01 2021-12-28 Sonos, Inc. Updating playback device configuration information based on calibration data
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US10299054B2 (en) 2016-04-12 2019-05-21 Sonos, Inc. Calibration of audio playback devices
US10045142B2 (en) 2016-04-12 2018-08-07 Sonos, Inc. Calibration of audio playback devices
US10750304B2 (en) 2016-04-12 2020-08-18 Sonos, Inc. Calibration of audio playback devices
US11218827B2 (en) 2016-04-12 2022-01-04 Sonos, Inc. Calibration of audio playback devices
US11889276B2 (en) 2016-04-12 2024-01-30 Sonos, Inc. Calibration of audio playback devices
US10750303B2 (en) 2016-07-15 2020-08-18 Sonos, Inc. Spatial audio correction
US11736878B2 (en) 2016-07-15 2023-08-22 Sonos, Inc. Spatial audio correction
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US10448194B2 (en) 2016-07-15 2019-10-15 Sonos, Inc. Spectral correction using spatial calibration
US11337017B2 (en) 2016-07-15 2022-05-17 Sonos, Inc. Spatial audio correction
US10129678B2 (en) 2016-07-15 2018-11-13 Sonos, Inc. Spatial audio correction
US10853022B2 (en) 2016-07-22 2020-12-01 Sonos, Inc. Calibration interface
US11531514B2 (en) 2016-07-22 2022-12-20 Sonos, Inc. Calibration assistance
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US11237792B2 (en) 2016-07-22 2022-02-01 Sonos, Inc. Calibration assistance
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10853027B2 (en) 2016-08-05 2020-12-01 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US11698770B2 (en) 2016-08-05 2023-07-11 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
USD851057S1 (en) 2016-09-30 2019-06-11 Sonos, Inc. Speaker grill with graduated hole sizing over a transition area for a media device
USD827671S1 (en) 2016-09-30 2018-09-04 Sonos, Inc. Media playback device
USD930612S1 (en) 2016-09-30 2021-09-14 Sonos, Inc. Media playback device
US10412473B2 (en) 2016-09-30 2019-09-10 Sonos, Inc. Speaker grill with graduated hole sizing over a transition area for a media device
US11481182B2 (en) 2016-10-17 2022-10-25 Sonos, Inc. Room association based on name
USD886765S1 (en) 2017-03-13 2020-06-09 Sonos, Inc. Media playback device
USD920278S1 (en) 2017-03-13 2021-05-25 Sonos, Inc. Media playback device with lights
USD1000407S1 (en) 2017-03-13 2023-10-03 Sonos, Inc. Media playback device
US11432065B2 (en) 2017-10-23 2022-08-30 Staton Techiya, Llc Automatic keyword pass-through system
US10966015B2 (en) 2017-10-23 2021-03-30 Staton Techiya, Llc Automatic keyword pass-through system
US10405082B2 (en) 2017-10-23 2019-09-03 Staton Techiya, Llc Automatic keyword pass-through system
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
US10582326B1 (en) 2018-08-28 2020-03-03 Sonos, Inc. Playback device calibration
US10848892B2 (en) 2018-08-28 2020-11-24 Sonos, Inc. Playback device calibration
US11877139B2 (en) 2018-08-28 2024-01-16 Sonos, Inc. Playback device calibration
US11350233B2 (en) 2018-08-28 2022-05-31 Sonos, Inc. Playback device calibration
US11728780B2 (en) 2019-08-12 2023-08-15 Sonos, Inc. Audio calibration of a portable playback device
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device
US11374547B2 (en) 2019-08-12 2022-06-28 Sonos, Inc. Audio calibration of a portable playback device

Also Published As

Publication number Publication date
US20080144858A1 (en) 2008-06-19
WO2008076607A3 (en) 2008-08-14
WO2008076607A2 (en) 2008-06-26

Similar Documents

Publication Publication Date Title
US8391501B2 (en) Method and apparatus for mixing priority and non-priority audio signals
KR100800725B1 (en) Automatic volume controlling method for mobile telephony audio player and apparatus therefor
CN101461258B (en) Mixing techniques for mixing audio
US9002034B2 (en) Method and system for audio level detection and control
US9281794B1 (en) System and method for digital signal processing
US20080170703A1 (en) User selectable audio mixing
US20060115090A1 (en) Stereo widening network for two loudspeakers
US9378751B2 (en) Method and system for digital gain processing in a hardware audio CODEC for audio transmission
US20050175185A1 (en) Audio bandwidth extending system and method
JP2000165483A (en) Method for adjusting audio output of digital telephone and digital telephone for adjusting audio output in accordance with individual auditory spectrum of user
US20100056050A1 (en) Method and system for audio feedback processing in an audio codec
EP1814355A1 (en) Acoustic adjustment device and acoustic adjustment method
EP1802082A1 (en) Information terminal
US20100057472A1 (en) Method and system for frequency compensation in an audio codec
CN105808198A (en) Audio file processing method and apparatus applied to android system and terminal
KR100705163B1 (en) Portable terminal and signal transmission method for background music service
US20100057475A1 (en) Method and system for digital gain control in an audio codec
US8326640B2 (en) Method and system for multi-band amplitude estimation and gain control in an audio CODEC
JP3769433B2 (en) Communication terminal
KR100362150B1 (en) Mobile terminal having function of background music
JP2010141571A (en) Volume control method, audio signal reproduction device, and program
KR100532319B1 (en) Mobile terminal capable of voice change and method therefor
CN114937456A (en) External playing device, method, program and system
EP1357733A1 (en) Audio bandwidth extending system and method
JP6484066B2 (en) Audio equipment

Legal Events

Date Code Title Description
AS Assignment

Owner name: MOTOROLA, INC., ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KHAWAND, CHARBEL;YAGUNOV, MIKHAIL U.;REEL/FRAME:018626/0105

Effective date: 20061212

AS Assignment

Owner name: MOTOROLA MOBILITY, INC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA, INC;REEL/FRAME:025673/0558

Effective date: 20100731

AS Assignment

Owner name: MOTOROLA MOBILITY LLC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA MOBILITY, INC.;REEL/FRAME:028829/0856

Effective date: 20120622

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: GOOGLE TECHNOLOGY HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA MOBILITY LLC;REEL/FRAME:034416/0001

Effective date: 20141028

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: 7.5 YR SURCHARGE - LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1555); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8