US20050203746A1 - Asynchronous speech data communication system and communication method therefor - Google Patents
- Publication number
- US20050203746A1 (application US11/058,431)
- Authority
- US
- United States
- Prior art keywords
- speech
- data
- communication terminal
- communication
- electronic devices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2416—Real-time traffic
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/36—Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/60—Substation equipment, e.g. for use by subscribers including speech amplifiers
- H04M1/6033—Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
- H04M1/6041—Portable telephones adapted for handsfree use
- H04M1/6075—Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle
- H04M1/6083—Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle by interfacing with the vehicle audio system
- H04M1/6091—Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle by interfacing with the vehicle audio system including a wireless interface
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/0231—Traffic management, e.g. flow control or congestion control based on communication conditions
- H04W28/0236—Traffic management, e.g. flow control or congestion control based on communication conditions radio quality, e.g. interference, losses or delay
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W8/00—Network data management
- H04W8/02—Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
- H04W8/04—Registration at HLR or HSS [Home Subscriber Server]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/50—Allocation or scheduling criteria for wireless resources
- H04W72/56—Allocation or scheduling criteria for wireless resources based on priority criteria
- H04W72/563—Allocation or scheduling criteria for wireless resources based on priority criteria of the wireless resources
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/02—Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
- H04W84/10—Small scale networks; Flat hierarchical networks
- H04W84/12—WLAN [Wireless Local Area Networks]
Definitions
- the present invention relates to an asynchronous speech data communication system and a communication method therefor and, more particularly, relates to a communication system for making a hands-free phone conversation in a vehicle.
- a hands-free phone conversation is generally used from the viewpoint of convenience.
- when a hands-free phone conversation is made using a mobile phone, a user uses a vehicle-mounted audio device and an input/output terminal incorporated in a navigation device, or uses a hands-free terminal in which a headphone and a microphone are installed, in order to communicate speech data in a wireless manner between the terminal and the main unit of the mobile phone.
- Bluetooth is used as a short-distance wireless data communication technology.
- transmission and reception of data such as speech is performed in a wireless manner among mobile phones, notebook computers, PDAs (Personal Digital Assistants), etc.
- the frequency band used is a 2.45-GHz radio frequency (RF), the operating range is within approximately 10 m, and the data transfer rate is approximately 1 Mbps.
- Japanese Unexamined Patent Application Publication No. 2001-136190 discloses a technique in which, in order that an AV device in a vehicle can be used in another vehicle, a wireless LAN unit is connected to a LAN system in the vehicle, and the LAN systems of the vehicles are linked via the wireless LAN unit.
- since Bluetooth has both an asynchronous data channel for data communication and a synchronous channel for speech communication, Bluetooth can be used without problems even for an application that is sensitive to speech delay, such as a hands-free phone conversation.
- since the data communication speed of Bluetooth is not very high, communication using a wireless LAN (Local Area Network) has begun to be used as an alternative technology. However, when Bluetooth is replaced with a wireless LAN, since the wireless LAN has only an asynchronous data communication system, some kind of mechanism for keeping the delay within the maximum permissible delay time becomes necessary for an application that is sensitive to speech delay, such as a hands-free phone conversation.
- when speech data is transmitted by a wireless LAN, the speech data is subjected to pulse code modulation (PCM), the coded speech data is packetized, and the packetized speech data is transmitted to an access point.
- for the access control method of a wireless LAN, CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) is used. In this method, while another terminal is communicating over the wireless LAN, the transmission of speech data must be postponed until the communication of the other terminal is completed. That is, the larger the size of one data frame (packet), the larger the maximum waiting time.
- FIG. 10 shows the format of a physical layer for use in a wireless LAN (direct-sequence spread spectrum method).
- FIG. 11 shows the relationship between each bit rate when the MPDU (data) has the maximum number of bits (65536 bits) and the transmission time at that rate.
- the frame format has a preamble for achieving synchronization among devices, a header for addresses of a destination and a transmission source and lengths thereof, and a data unit (MPDU) containing data of a variable size.
- the variable range of data is 4 to 8192 bytes (32 to 65536 bits).
- the delay time, that is, the waiting time, becomes a maximum of approximately 65 ms at the lowest bit rate of 1 Mbps.
- the waiting time in this case is 65 ms × the number of frames
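The waiting-time figures above follow from simple arithmetic. The sketch below (illustrative only; the constants are the values quoted in the text, not from any standard table) computes the airtime of one maximum-size MPDU at the lowest bit rate and scales it by the number of queued frames.

```python
# Worst-case wait before a voice packet can start on a CSMA/CA wireless
# LAN: every frame already in flight or queued ahead must finish first.

def frame_airtime_ms(mpdu_bits: int, bit_rate_bps: int) -> float:
    """Time (ms) needed to transmit one MPDU at the given bit rate."""
    return mpdu_bits / bit_rate_bps * 1000.0

MAX_MPDU_BITS = 65536        # maximum MPDU: 8192 bytes
LOWEST_RATE_BPS = 1_000_000  # lowest wireless LAN bit rate: 1 Mbps

t_max = frame_airtime_ms(MAX_MPDU_BITS, LOWEST_RATE_BPS)
print(f"one maximum frame: {t_max:.1f} ms")  # ~65.5 ms

# With n frames queued ahead, the wait grows linearly (65 ms x n):
for n in (1, 2, 3):
    print(f"{n} queued frames -> {n * t_max:.1f} ms")
```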
- the speech data V cannot be transmitted from the mobile phone to the access point while the PDA or the mobile audio is transmitting data to the access point.
- the waiting time Tmax for transmitting the speech data V is proportional to the data size from the PDA and the mobile audio, that is, the packet size, and during that time, the transmission of the speech data must be postponed.
- when the delay of the speech data reaches a fixed level or higher, the speech transmission quality deteriorates, and the other party with whom communication is performed may experience some annoyance.
- an object of the present invention is to solve the above-described conventional problems and to provide an asynchronous communication system and a communication method, which are capable of suppressing the delay time of speech data.
- Another object of the present invention is to provide an asynchronous speech data communication system and a communication method, which are capable of making a hands-free phone conversation at a high speed without causing annoyance in a vehicle.
- the present invention provides an asynchronous speech data communication system including: a speech communication terminal having a speech data communication function; and a communication control section that enables asynchronous data communication with another electronic device including the speech communication terminal and that limits the packet size of data to be communicated of the other electronic device when there is communication of speech data by the speech communication terminal.
- the present invention provides an asynchronous speech data communication system including: a speech communication terminal having a speech data communication function; and a communication control section that enables asynchronous data communication with other electronic devices including the speech communication terminal and that causes polling of the speech communication terminal to have a higher priority than that of the other electronic devices.
- the present invention provides a method for asynchronously communicating speech data between a speech communication terminal having a speech data communication function and other electronic devices, the method including: a first step of detecting the presence or absence of communication of speech data by the speech communication terminal; and a second step of limiting the packet size of data to be communicated of the other electronic devices when it is detected that there is communication of speech data.
- the present invention provides a method for asynchronously communicating speech data between a speech communication terminal having a speech data communication function and other electronic devices, the method including: a step of causing polling of the speech communication terminal to have a higher priority than that of the other electronic devices.
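As a rough illustration of the packet-size-limiting method above, the sketch below models a communication control section that advertises a smaller maximum packet size to the other terminals whenever speech data communication is detected. All class and method names, and the reduced size value, are hypothetical stand-ins, not the patent's implementation.

```python
# Hedged sketch of the claimed control: while a hands-free conversation
# is in progress, other terminals are told to cap their packet size so
# the voice terminal never waits long. Names and sizes are illustrative.

NORMAL_MAX_BITS = 65536      # normal maximum packet size (8192 bytes)
VOICE_MODE_MAX_BITS = 6554   # ~1/10 of the maximum, as in the example

class AccessPoint:
    """Stand-in for the access point: remembers the advertised limit."""
    def __init__(self) -> None:
        self.advertised_limit = NORMAL_MAX_BITS

    def broadcast_max_packet_size(self, bits: int) -> None:
        # In a real system this parameter would be sent to each terminal.
        self.advertised_limit = bits

class CommunicationControlSection:
    def __init__(self, access_point: AccessPoint) -> None:
        self.access_point = access_point

    def update(self, voice_call_active: bool) -> None:
        # First step: presence/absence of speech data communication.
        # Second step: limit (or restore) the other terminals' packet size.
        limit = VOICE_MODE_MAX_BITS if voice_call_active else NORMAL_MAX_BITS
        self.access_point.broadcast_max_packet_size(limit)

ap = AccessPoint()
ctrl = CommunicationControlSection(ap)
ctrl.update(voice_call_active=True)
print(ap.advertised_limit)   # 6554
ctrl.update(voice_call_active=False)
print(ap.advertised_limit)   # 65536
```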
- according to the asynchronous speech data communication system and the communication method in accordance with the present invention, even when there is data communication with another electronic device, the waiting time, or the delay time, of communication of speech data by the speech communication terminal can be reduced, and the speech transmission quality can be maintained at a fixed level. If the present invention is applied to a wireless LAN system in a vehicle, a hands-free system that can be used together with electronic devices mounted in the vehicle can be obtained.
- the asynchronous speech data communication system and the communication method therefor according to the present invention can be used in a network system that performs asynchronous communication by segmenting data into packets.
- the present invention can be used together with an AVN (audio visual navigation) device.
- FIG. 1 is a block diagram showing the configuration of a hands-free system according to a first embodiment of the present invention
- FIG. 2 is a block diagram showing the configuration of an AVN device
- FIG. 3 is a flowchart illustrating a hands-free operation of the first embodiment of the present invention
- FIGS. 4A and 4B show examples of the transmission of data in the hands-free operation according to the first embodiment of the present invention
- FIG. 5 is a graph showing the relationship between the amount of speech delay and speech transmission quality
- FIG. 6 is a table showing the relationship between speech transmission quality and an R-value range
- FIGS. 7A and 7B illustrate a delay time on a mobile phone side
- FIG. 8 is a flowchart illustrating a hands-free operation according to a second embodiment of the present invention.
- FIG. 9 is a flowchart illustrating a hands-free operation according to a third embodiment of the present invention.
- FIG. 10 shows the frame format of a physical layer
- FIG. 11 is a table showing the relationship between a maximum number of bits and a transmission time.
- FIG. 12 illustrates problems when speech data is communicated in an asynchronous manner in a conventional case.
- An asynchronous speech data communication system according to the present invention is preferably implemented in a hands-free system in which an in-vehicle wireless LAN is used.
- An example of an in-vehicle hands-free system will now be described below with reference to the drawings.
- FIG. 1 is a block diagram showing the configuration of a hands-free system according to an embodiment of the present invention.
- An in-vehicle hands-free system 1 includes an access point 10 of a wireless LAN; an AVN (audio visual navigation) device 20 connected to the access point 10; a speaker 30, a microphone 32, and a display unit 34 connected to the AVN device 20; and a PDA 40, a mobile audio 42, and a hands-free speech communication terminal 44, which perform data communication with the AVN device 20.
- a mobile phone is used as the hands-free speech communication terminal 44. Alternatively, any terminal having a speech data communication function can be used; for example, a hands-free terminal that has a microphone for receiving speech from a user, a speaker and/or a headphone for reproducing speech, and a wireless communication function for communicating with the access point 10 and other electronic devices may also be used.
- the speech data received by the speech communication terminal 44 from a mobile phone or an ordinary phone outside the vehicle is supplied to the AVN device 20 via the access point 10, and the speech is output from the speaker 30. Conversely, the speech received from the microphone 32 is processed as speech data by the AVN device 20, the speech data is transmitted to the hands-free speech communication terminal 44 via the access point 10, and the speech data is further transmitted from that terminal to the mobile phone or ordinary phone outside the vehicle.
- the AVN device 20 integrates a navigation device and an AV device.
- the AVN device 20 plays back music data and video data stored in a storage device, such as a DVD, a CD-ROM, or a hard disk, through the speaker 30 and on the display unit 34, and performs navigation using GPS.
- the AVN device may be configured as one unit in which an AV device and a navigation device are integrated, or may be a system in which a separate AV device and navigation device are connected to each other.
- the access point 10, together with clients 50, constitutes a wireless LAN.
- the clients 50 include the PDA 40, the mobile audio 42, and the hands-free speech communication terminal 44, each having a wireless communication function. These are only examples, and other electronic devices may also be used.
- the access point 10 performs wireless communication in accordance with standardized specifications based on, for example, IEEE 802.11a or IEEE 802.11b.
- for the frame format, the structure shown in FIG. 10 can be used, and for the access control method, CSMA/CA can be used. For modulation, CCK (Complementary Code Keying) or OFDM (Orthogonal Frequency Division Multiplexing) can be used.
- the client 50 packetizes data to be transmitted, and transmits the packetized frames to the access point 10 after confirming that no other terminal is performing a transmission.
- when the access point 10 receives the frames from the client 50, the access point 10 transmits a response acknowledgement (ACK) to the client 50.
- the frames received by the access point 10 are supplied to the AVN device 20 , whereby necessary processing is performed thereon.
- the data output from the AVN device 20 to the access point 10 is segmented into frames, and the frames are transmitted to the client 50 in a wireless manner.
- FIG. 2 shows the internal configuration of the AVN device 20 .
- the AVN device 20 includes a data input/output section 100 for performing transmission and reception of data with the access point 10 , a communication control section 102 for controlling the operation of the access point 10 , a hands-free function section 104 for processing speech data transmitted and received to and from the hands-free speech communication terminal 44 , an AV function section 106 for playing back music and video, a navigation function section 108 for performing navigation functions, a large-capacity memory 110 for storing programs, application files, a database, etc., a control section 112 for controlling each section, and a bus 114 for interconnecting the sections.
- the hands-free function section 104 causes the speech data received from the hands-free speech communication terminal 44 to be reproduced from the speaker 30, and suitably processes (for example, echo cancelling) the speech received from the microphone 32.
- the processed speech data is transmitted to the speech communication terminal 44 via the access point 10 .
- the control section 112 controls the operation of each section in accordance with a program stored in the large-capacity memory 110.
- FIG. 3 is a flowchart illustrating the operation of the hands-free system.
- first, the communication control section 102 detects the presence or absence of a hands-free phone conversation (step S101). That is, when data is transmitted via the access point 10, the communication control section 102 detects whether or not the destination address contained in the header of the frame matches the address of the speech communication terminal 44; when they match, it is determined that there is a hands-free phone conversation. On the other hand, when data is received via the access point 10, the communication control section 102 detects whether or not the transmission source address contained in the header of the frame matches the address of the speech communication terminal 44; when they match, it is determined that there is a hands-free phone conversation.
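The detection step just described can be sketched in a few lines. The header layout and address value below are simplified stand-ins (not the actual 802.11 frame format), used only to show the direction-dependent address match.

```python
# Sketch of step S101: a hands-free conversation is assumed to be in
# progress when the frame's destination (on transmit) or source (on
# receive) address matches the speech terminal's address.

from dataclasses import dataclass

SPEECH_TERMINAL_ADDR = "44:00:00:00:00:01"  # hypothetical address

@dataclass
class FrameHeader:
    src: str  # transmission source address
    dst: str  # destination address

def handsfree_call_detected(header: FrameHeader, transmitting: bool) -> bool:
    # On transmit, check the destination; on receive, check the source.
    addr = header.dst if transmitting else header.src
    return addr == SPEECH_TERMINAL_ADDR

# A frame sent toward the speech terminal -> conversation detected:
print(handsfree_call_detected(
    FrameHeader(src="ap", dst=SPEECH_TERMINAL_ADDR), transmitting=True))  # True
```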
- when there is no hands-free phone conversation, asynchronous communication is performed in a normal manner among the access point 10, the PDA 40, and the mobile audio 42 (step S103).
- the PDA 40 transmits a frame A1 to the access point 10. When the access point 10 receives the frame A1, the access point 10 transmits a response acknowledgement (ACK) to the client 50. The mobile audio 42 then transmits a frame B1 to the access point 10 after the response acknowledgement (ACK) is received.
- in this case, the frame size can be varied in the range of 32 to 65536 bits, and preferably, communication is performed at the maximum packet size (65536 bits) in order to increase the data communication efficiency.
- when there is a hands-free phone conversation, the communication control section 102 keeps the delay within the maximum permissible delay time of the hands-free phone conversation by performing the control described below in order to minimize the delay of the speech data.
- the access point 10 transmits, to the PDA 40 and the mobile audio 42, a parameter defining the maximum packet size permitted per packet (step S104).
- the PDA 40 and the mobile audio 42 receive the parameter and transmit frames in packets of the packet size defined by this parameter or smaller (step S105).
- FIG. 4B shows an example of data communication when there is a hands-free phone conversation.
- when the speech communication terminal 44 is going to transmit speech data V1 (the timing of the broken line in FIG. 4B), if the frame A1 is being transmitted by the PDA 40, the speech communication terminal 44 must wait for this transmission to be completed.
- after the access point 10 transmits, to the client 50, a response acknowledgement (ACK) confirming the reception of the frame A1 from the PDA 40, the speech communication terminal 44 transmits the speech data V1 to the access point 10.
- if the packet size of the frame A1 from the PDA 40 is, for example, 1/10 of the maximum packet size (65536 bits), the maximum delay time Tmax of the speech data V1 can be held to approximately 6.5 ms.
- upon receiving the speech data V1, the access point 10 transmits a response acknowledgement (ACK) to the client 50. If the mobile audio 42 is going to transmit the frame B1, the frame B1 is transmitted to the access point 10 after this acknowledgement is received. While the transmission of the frame B1 is in progress, the speech communication terminal 44 cannot transmit speech data V2.
- the waiting time T1 of the speech data V2 depends on the packet size of the frame B1. If the packet size of the frame B1 is smaller than that of the frame A1, T1 < Tmax holds.
- in this manner, the waiting time of the speech data is shortened as much as possible, so that the speech data can be communicated at a high speed while suppressing the deterioration of the speech transmission quality.
- FIG. 5 is a graph showing the relationship between the amount of speech delay and speech transmission quality. This graph is disclosed in FIG. 2 of Jan Janssen et al., "Delay and Distortion Bounds for Packetized Voice Calls of Traditional PSTN Quality", Proceedings of the 1st IP-Telephony Workshop, GMD Report 95, pp. 105-110, Berlin, Germany, 12-13 April 2000.
- the horizontal axis indicates the delay time (ms) from the mouth to the ear, and the vertical axis indicates a rating R indicating the speech transmission quality.
- EL denotes echo loss, that is, the attenuation (dB) of the returned speech when speech output from the speaker is picked up and sent back to the other party.
- FIG. 6 shows the R-value ranges described in Table 1 of the above reference document. It is reported in this reference document that, when the R-value is lower than or equal to 60, the speech transmission quality is very poor, and that the R-value is preferably 70 or higher in the case of a phone conversation using a public network.
- the echo loss when the mobile phone is used in a hands-free manner in a vehicle is approximately 40 dB. It is understood from this fact that the delay time needs to be 200 ms or less (see the curve of FIG. 5 for which the EL is 41 dB).
- when the mobile phone is used in the hands-free system 1 of FIG. 1, that is, when, as shown in FIG. 7A, a phone conversation is made with another phone 240 from a mobile phone 200 (the hands-free speech communication terminal 44 of FIG. 1) via a mobile phone base station 220, it is necessary to limit the total of the delay time on the wireless LAN side and the delay time on the mobile phone side to within 200 ms.
- the delay time on the transmitting side of the mobile phone 200 is approximately 20 ms required for a speech coding section 204 to process the speech data received by a wireless LAN module 202, plus 20 ms required for an interleave section 206 to process the coded speech data, for a total of 40 ms.
- similarly, on the receiving side, the time required for a deinterleave section 210 to process the speech data received from a wireless section 208 is 20 ms, and the time required for a speech decoding section 212 to process the speech data is 20 ms, for a total of 40 ms.
- the delay time at the mobile phone base station 220 is 20 ms required for a deinterleave section 224 to process the speech data from a wireless section 222, plus 20 ms required for a speech decoding section 226 to process the speech data, for a total of 40 ms. Similarly, it takes a total of 40 ms for a speech coding section 230 and an interleave section 232 to process the speech data received from a public network interface 228.
- the delay time from the mobile phone base station 220 through the public switched telephone network (PSTN) to another phone 240 becomes approximately 20 ms.
- the delay time required for a phone conversation from the mobile phone 200 to the other phone 240 becomes a total of approximately 100 ms.
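The delay budget above is easy to check: summing the one-way processing stages on the mobile-phone side shows how much of the 200 ms mouth-to-ear allowance is left for the wireless LAN. The figures are those quoted in the text.

```python
# One-way delay budget from the text: handset processing, base-station
# processing, and the PSTN hop together consume about 100 ms, leaving
# roughly 100 ms of the 200 ms mouth-to-ear bound for the wireless LAN.

MOUTH_TO_EAR_LIMIT_MS = 200

handset_ms      = 20 + 20  # speech coding + interleaving
base_station_ms = 20 + 20  # deinterleaving + speech decoding
pstn_ms         = 20       # base station -> other phone via the PSTN

mobile_side_ms = handset_ms + base_station_ms + pstn_ms
print(mobile_side_ms)                          # 100
print(MOUTH_TO_EAR_LIMIT_MS - mobile_side_ms)  # headroom for the LAN: 100
```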
- the maximum waiting time in the conventional wireless LAN is approximately 65 ms, and this is the waiting time of one data frame or packet. When there is continuous data communication, a delay time of 65 ms × the number of frames occurs.
- next, a second embodiment is described; FIG. 8 shows a flowchart illustrating the hands-free operation thereof.
- the access point 10 sends, to the hands-free speech communication terminal 44, a transmission request asking whether or not there is data to be transmitted, with a priority higher than those of the PDA 40 and the mobile audio 42 (step S201).
- the speech communication terminal 44 transmits a response to the transmission request to the access point 10 (step S 202 ).
- the communication control section 102 checks the response from the speech communication terminal 44 in order to determine whether or not the speech communication terminal 44 has transmission data (step S 203 ).
- when the speech communication terminal 44 has transmission data, the communication control section 102 transmits a response acknowledgement from the access point 10 to the speech communication terminal 44 (step S204).
- the speech communication terminal 44, receiving the response acknowledgement, transmits the speech data to the access point 10 (step S205). Since the other terminals have not received a response acknowledgement from the access point 10, they cannot transmit data.
- when the speech communication terminal 44 has no transmission data, the communication control section 102 sends a transmission request to another terminal (the PDA 40 or the mobile audio 42) (step S206). Then, the communication control section 102 receives a response to the transmission request from the other terminal (step S207) and checks whether or not the other terminal has transmission data (step S208). When the other terminal has transmission data, the access point 10 transmits a response acknowledgement to the other terminal (step S209), and the other terminal transmits data to the access point 10 (step S210).
- the above steps are performed on each of the PDA 40 and the mobile audio 42; which of them is handled with the higher priority needs to be determined in advance.
- the communication of speech data by the speech communication terminal 44 is performed with a higher priority, and the waiting time can be shortened.
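The polling order of the second embodiment can be sketched as follows. The terminal and access-point objects are illustrative stand-ins for whatever the access point actually manages, not the patent's implementation.

```python
# Sketch of the second embodiment: in each polling cycle the access
# point queries the speech terminal first (steps S201-S205) and only
# then the other terminals in a pre-determined order (S206-S210).

class Terminal:
    def __init__(self, name, queue=None):
        self.name = name
        self.queue = list(queue or [])

    def has_data(self):          # response to the transmission request
        return bool(self.queue)

    def take(self):              # transmit the next pending frame
        return self.queue.pop(0)

def poll_cycle(speech_terminal, other_terminals):
    """Return the frames received this cycle, speech data first."""
    received = []
    for t in [speech_terminal] + list(other_terminals):  # fixed priority
        if t.has_data():                 # S203 / S208: check the response
            received.append(t.take())    # S204-S205 / S209-S210: ACK + data
    return received

phone = Terminal("speech", ["V1"])
pda = Terminal("PDA", ["A1"])
audio = Terminal("audio", ["B1"])
print(poll_cycle(phone, [pda, audio]))   # ['V1', 'A1', 'B1']
```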
- in a third embodiment, the communication control section 102 detects a hands-free phone conversation by the speech communication terminal 44, similarly to the first embodiment (step S301).
- when there is a hands-free phone conversation, polling of the speech communication terminal 44 takes a higher priority than that of the other terminals (step S303). For example, polling of the speech communication terminal 44 is performed more frequently than polling of the other terminals.
- when there is no hands-free phone conversation, polling of all the terminals is performed at fixed periods (step S304). Also in this case, similarly to the second embodiment, polling of the speech communication terminal 44 can be performed with a higher priority.
- the waiting time of the speech data can be shortened.
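One way to realize "polled more frequently", as the third embodiment suggests, is to interleave the speech terminal between the other terminals within each polling period. The scheduling policy below is only one illustrative possibility, not the patent's prescribed algorithm.

```python
# Illustrative schedule for the third embodiment: during a hands-free
# conversation the speech terminal is polled between every other
# terminal; otherwise all terminals share one plain round-robin cycle.

def polling_order(speech_terminal, others, call_active):
    if not call_active:
        return [speech_terminal] + list(others)   # S304: fixed-period polling
    order = []
    for t in others:                              # S303: prioritized polling
        order.append(speech_terminal)
        order.append(t)
    return order

print(polling_order("phone", ["PDA", "audio"], call_active=True))
# ['phone', 'PDA', 'phone', 'audio']
print(polling_order("phone", ["PDA", "audio"], call_active=False))
# ['phone', 'PDA', 'audio']
```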
- the second and third embodiments can be combined with the first embodiment in which the maximum packet size is limited. That is, polling of a speech communication terminal may take a higher priority, and the packet size of data of other terminals may be limited.
- in the above embodiments, a hands-free system in which a wireless LAN is used is described. However, the present invention is not limited to this example and may also be applied to a hands-free system in which a wired LAN is used.
- in that case, the speech communication terminal 44 is connected to the AVN device 20 through a LAN cable, a vehicle-mounted bus (a CAN (Controller Area Network) bus, etc.), or the like.
- although an in-vehicle hands-free system is described, the present invention is not limited to this system. For example, a hands-free system may be used in a call center in a corporation.
- although the hands-free function is installed in the AVN device in the above embodiments, an electronic device such as a personal computer may alternatively be used.
- although an example of communication using a mobile phone is described, speech communication using, for example, a TV phone or an IP phone is also possible.
Abstract
Description
- 1. Field of the Invention
- The present invention relates to an asynchronous speech data communication system and a communication method therefor and, more particularly, relates to a communication system for making a hands-free phone conversation in a vehicle.
- 2. Description of the Related Art
- When speech transmission is performed in a vehicle, a hands-free phone conversation is generally used from the viewpoint of convenience. For example, when a hands-free phone conversation is made using a mobile phone, a user uses a vehicle-mounted audio device and an input/output terminal incorporated in a navigation device or uses a hands-free terminal in which a headphone and a microphone are installed in order to communicate speech data in a wireless manner between the terminal and the main unit of the mobile phone.
- In wireless communication, BlueTooth is used as a short distance wireless data communication technology. In BlueTooth, transmission and reception of data such as speech is performed in a wireless manner among mobile phones, notebook computers, PDAs (Personal Digital Assistants), etc. The frequency band used is a 2.45-GHz radio frequency (RF), the operating range is within approximately 10 m, and the data transfer rate is approximately 1 Mbps.
- Since the data communication speed of BlueTooth is not very high, as an alternative technology, communication using a wireless LAN (Local Area Network) has begun to be used.
- For example, Japanese Unexamined Patent Application Publication No. 2001-136190 discloses a technique in which, in order that an AV device in a vehicle can be used in another vehicle, a wireless LAN unit is connected to a LAN system in the vehicle, and the LAN systems of the vehicles are linked via the wireless LAN unit.
- However, the conventional in-vehicle hands-free phone conversation using wireless technology has the following problems. Since BlueTooth has both an asynchronous data channel for data communication and a synchronous channel for speech communication, BlueTooth can be used without problems even for an application that is sensitive to speech delay, such as a hands-free phone conversation. However, when this is to be replaced with a wireless LAN, since the wireless LAN has only an asynchronous data communication system, some kind of mechanism for permitting the maximum delay time in an application that is sensitive to speech delay, such as a hands-free phone conversation, becomes necessary.
- When speech data is transmitted by a wireless LAN, the speech data is subjected to pulse code modulation (PCM), the coded speech data is packetized, and this packetized speech data is transmitted to an access point. For the access control system for a wireless LAN, CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) is used. In this method, when another terminal is communicating with the wireless LAN, the transmission of speech data must be postponed until the communication of the other terminal is completed. That is, the larger the size of one data frame (packet), the larger the maximum waiting time.
-
FIG. 10 shows the format of a physical layer for use in a wireless LAN (direct-sequence method). FIG. 11 shows the relationship between each bit rate and the transmission time when the MPDU (data) has the maximum number of bits (65536 bits). - The frame format has a preamble for achieving synchronization among devices, a header containing the destination and transmission-source addresses and the data length, and a data unit (MPDU) containing data of a variable size. The variable range of the data is 4 to 8192 bytes (32 to 65536 bits). When the bit rate of the wireless LAN is 1 Mbps to 11 Mbps (54 Mbps has also been used in practice), the delay time, that is, the waiting time, becomes a maximum of approximately 65 ms at the lowest bit rate of 1 Mbps.
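As a rough sanity check on these figures, the single-frame waiting time can be computed from the MPDU size and the bit rate. The sketch below is a simplification that ignores the preamble, header, and inter-frame spacing; the function name is illustrative and not part of the disclosure.

```python
def mpdu_air_time_ms(payload_bits: int, bit_rate_bps: int) -> float:
    """Time one MPDU occupies the channel, counting the payload only.

    Preamble, header, and inter-frame spacing are ignored, so this is a
    lower bound on the real air time; the payload term dominates here.
    """
    return payload_bits / bit_rate_bps * 1000.0

# A maximum-size MPDU (65536 bits) at the lowest rate of 1 Mbps occupies
# the channel for roughly 65 ms -- the single-frame waiting time above.
worst_case = mpdu_air_time_ms(65536, 1_000_000)    # ~65.5 ms
best_case = mpdu_air_time_ms(65536, 11_000_000)    # under 6 ms at 11 Mbps
```

At the lowest bit rate, a station that senses the channel busy just as a maximum-size frame begins must therefore wait on the order of 65 ms before it can even contend for the medium.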
- Furthermore, since a situation is assumed in which two or more terminals are waiting to transmit (the waiting time in this case is 65 ms × the number of frames), it is not ensured that the speech data can be transmitted within the waiting time of a single frame from one terminal. For example, as shown in
FIG. 12 , when there are two or more terminals that perform data communication with the access point of the in-vehicle wireless LAN, for example, when a PDA, a mobile audio, and a mobile phone exist, speech data V cannot be transmitted from the mobile phone to the access point while the PDA or the mobile audio is transmitting data to the access point. As described above, the waiting time Tmax for transmitting the speech data V is proportional to the data size from the PDA and the mobile audio, that is, the packet size, and during that time, the transmission of the speech data must be postponed. When the delay of the speech data reaches a fixed level or higher, the speech transmission quality deteriorates, and the other party with whom communication is performed may experience some annoyance. - Accordingly, an object of the present invention is to solve the above-described conventional problems and to provide an asynchronous communication system and a communication method, which are capable of suppressing the delay time of speech data.
- Another object of the present invention is to provide an asynchronous speech data communication system and a communication method, which are capable of making a hands-free phone conversation at a high speed without causing annoyance in a vehicle.
- To achieve the above-mentioned objects, in one aspect, the present invention provides an asynchronous speech data communication system including: a speech communication terminal having a speech data communication function; and a communication control section that enables asynchronous data communication with another electronic device including the speech communication terminal and that limits the packet size of data to be communicated of the other electronic device when there is communication of speech data by the speech communication terminal.
- In another aspect, the present invention provides an asynchronous speech data communication system including: a speech communication terminal having a speech data communication function; and a communication control section that enables asynchronous data communication with other electronic devices including the speech communication terminal and that causes polling of the speech communication terminal to have a higher priority than that of the other electronic devices.
- In another aspect, the present invention provides a method for asynchronously communicating speech data between a speech communication terminal having a speech data communication function and other electronic devices, the method including: a first step of detecting the presence or absence of communication of speech data by the speech communication terminal; and a second step of limiting the packet size of data to be communicated of the other electronic devices when it is detected that there is communication of speech data.
- In another aspect, the present invention provides a method for asynchronously communicating speech data between a speech communication terminal having a speech data communication function and other electronic devices, the method including: a step of causing polling of the speech communication terminal to have a higher priority than that of the other electronic devices.
- According to the asynchronous speech data communication system and the communication method in accordance with the present invention, even when there is data communication with another electronic device, the waiting time or the delay time of communication of speech data by the speech communication terminal can be reduced, and the speech transmission quality can be maintained at a fixed level. If the present invention is applied to a wireless LAN system in a vehicle, a hands-free system that can be used together with electronic devices mounted in the vehicle can be obtained.
- The asynchronous speech data communication system and the communication method therefor according to the present invention can be used in a network system that performs asynchronous communication by segmenting data into packets. Preferably, in an in-vehicle hands-free system incorporating a wireless LAN, the present invention can be used together with an AVN (audio visual navigation) device.
-
FIG. 1 is a block diagram showing the configuration of a hands-free system according to a first embodiment of the present invention; -
FIG. 2 is a block diagram showing the configuration of an AVN device; -
FIG. 3 is a flowchart illustrating a hands-free operation of the first embodiment of the present invention; -
FIGS. 4A and 4B show examples of the transmission of data in the hands-free operation according to the first embodiment of the present invention; -
FIG. 5 is a graph showing the relationship between the amount of speech delay and speech transmission quality; -
FIG. 6 is a table showing the relationship between speech transmission quality and an R-value range; -
FIGS. 7A and 7B illustrate a delay time on a mobile phone side; -
FIG. 8 is a flowchart illustrating a hands-free operation according to a second embodiment of the present invention; -
FIG. 9 is a flowchart illustrating a hands-free operation according to a third embodiment of the present invention; -
FIG. 10 shows the frame format of a physical layer; -
FIG. 11 is a table showing the relationship between a maximum number of bits and a transmission time; and -
FIG. 12 illustrates problems when speech data is communicated in an asynchronous manner in a conventional case. - An asynchronous speech data communication system is preferably implemented in a hands-free system in which an in-vehicle wireless LAN is used. An example of an in-vehicle hands-free system is described below with reference to the drawings.
-
FIG. 1 is a block diagram showing the configuration of a hands-free system according to an embodiment of the present invention. An in-vehicle hands-free system 1 according to this embodiment includes an access point 10 of a wireless LAN; an AVN (audio visual navigation) device 20 connected to the access point 10; a speaker 30, a microphone 32, and a display unit 34, which are connected to the access point 10; a PDA 40 for performing data communication with the AVN device 20; a mobile audio 42; and a hands-free speech communication terminal 44. - For the hands-free
speech communication terminal 44, preferably, a mobile phone is used. However, in addition to a mobile phone, any terminal having a speech data communication function can be used. For example, a hands-free terminal having a microphone for receiving speech from a user, a speaker and/or a headphone for reproducing speech, and a wireless communication function for performing wireless communication with the access point 10 and other electronic devices may also be used. - When the mobile phone is used as the hands-free
speech communication terminal 44, the speech data received by the speech communication terminal 44 from a mobile phone outside the vehicle or an ordinary phone is supplied to the AVN device 20 via the access point 10, and the speech is output from the speaker 30. Conversely, the speech received by the microphone 32 is processed into speech data by the AVN device 20, the speech data is transmitted to the hands-free speech communication terminal 44 via the access point 10, and it is further transmitted from that terminal to a mobile phone outside the vehicle or an ordinary phone. - The
AVN device 20 integrates a navigation device and an AV device. The AVN device 20 plays back music data and video data stored in a storage device, such as a DVD, a CD-ROM, or a hard disk, through the speaker 30 and on the display unit 34, and performs navigation using GPS. The AVN device may be configured as a single unit combining an AV device and a navigation device, or as a system in which an AV device and a navigation device are connected to each other. - The
access point 10, together with a client 50, constitutes a wireless LAN. The client 50 includes the PDA 40, the mobile audio 42, and the hands-free speech communication terminal 44, each having a wireless communication function. These are only examples, and other electronic devices may also be used. The access point 10 performs wireless communication in accordance with standardized specifications based on, for example, IEEE 802.11a or IEEE 802.11b. - For the frame format, for example, the structure shown in
FIG. 10 can be used, and for the access control method, CSMA/CA can be used. For the modulation method, CCK (Complementary Code Keying) or OFDM (Orthogonal Frequency Division Multiplexing) is used, and communication is performed using a radio wave of a 2.4 GHz band. - The
client 50 packetizes data to be transmitted, and transmits the packetized frames to the access point 10 after confirming that no other terminal is transmitting. When the access point 10 receives the frames from the client 50, the access point 10 transmits a response acknowledgement (ACK) to the client 50. The frames received by the access point 10 are supplied to the AVN device 20, where the necessary processing is performed on them. Conversely, the data output from the AVN device 20 to the access point 10 is segmented into frames, and the frames are transmitted to the client 50 in a wireless manner. -
FIG. 2 shows the internal configuration of the AVN device 20. The AVN device 20 includes a data input/output section 100 for performing transmission and reception of data with the access point 10, a communication control section 102 for controlling the operation of the access point 10, a hands-free function section 104 for processing speech data transmitted to and received from the hands-free speech communication terminal 44, an AV function section 106 for playing back music and video, a navigation function section 108 for performing navigation functions, a large-capacity memory 110 for storing programs, application files, a database, etc., a control section 112 for controlling each section, and a bus 114 for interconnecting these sections. - The hands-free function section 104 reproduces, from the speaker, the speech data received from the hands-free speech communication terminal 44, and suitably processes (for example, echo-cancels) the speech received by the microphone 32. The processed speech data is transmitted to the speech communication terminal 44 via the access point 10. - Next, a description is given of the operation of the vehicle-installed hands-free system according to this embodiment. For the operation of the hands-free system, preferably, the
control section 112 controls the operation of each section in accordance with a program stored in the large-capacity memory 110. -
FIG. 3 is a flowchart illustrating the operation of the hands-free system. Under the control of the control section 112, the communication control section 102 detects the presence or absence of a hands-free phone conversation (step S101). That is, when data is transmitted via the access point 10, the communication control section 102 detects whether or not the destination address contained in the header of the frame matches the speech communication terminal 44. When they match, it is determined that there is a hands-free phone conversation. On the other hand, when data is received via the access point 10, the communication control section 102 detects whether or not the transmission-source address contained in the header of the frame matches the speech communication terminal 44. When they match, it is determined that there is a hands-free phone conversation. - When it is detected that there is no hands-free phone conversation (step S102), asynchronous communication is performed in a normal manner among the
access point 10, the PDA 40, and the mobile audio 42 (step S103). For example, as shown in FIG. 4A, after confirming that no other terminal is transmitting, the PDA 40 transmits a frame A1 to the access point 10. When the access point 10 receives the frame A1, the access point 10 transmits a response acknowledgement (ACK) to the client 50. Similarly, the mobile audio 42 transmits a frame B1 to the access point 10 after the response acknowledgement (ACK) is received. In this communication, the frame size can be varied in the range of 32 to 65536 bits, and preferably, communication is performed at the maximum packet size (65536 bits) in order to increase the data communication efficiency. - On the other hand, when it is detected that there is a hands-free phone conversation (step S102), the
communication control section 102 performs the control described below in order to keep the delay of the speech data within the maximum permissible delay time of the hands-free phone conversation and to minimize that delay. - As described in the conventional technology, when the packet size (the data frame length) is at its maximum, the maximum waiting time of the
speech communication terminal 44 is approximately 65 ms. To avoid this wait, the packet size with which each terminal can transmit continuously is limited, and the maximum waiting time of the hands-free phone conversation is thereby shortened. Under the control of the communication control section 102, the access point 10 transmits, to the PDA 40 and the mobile audio 42, a parameter defining the maximum packet size permitted per transmission (step S104). - The
PDA 40 and the mobile audio 42 receive the parameter and transmit frames in packets of the packet size defined by this parameter or smaller (step S105). -
FIG. 4B shows an example of data communication when there is a hands-free phone conversation. When the speech communication terminal 44 is about to transmit speech data V1 (the timing of the broken line in FIG. 4B), if the PDA 40 is transmitting the frame A1, the speech communication terminal 44 must wait for that transmission to be completed. When the access point 10 transmits, to the client 50, a response acknowledgement (ACK) confirming the reception of the frame A1 from the PDA 40, the speech communication terminal 44 transmits the speech data V1 to the access point 10. At this time, if the packet size of the frame A1 from the PDA 40 is, for example, 1/10 of the maximum packet size (65536 bits), the maximum delay time Tmax of the speech data V1 can be held to approximately 6.5 ms. - Upon receiving the speech data V1, the
access point 10 transmits a response acknowledgement (ACK) to the client 50. If the mobile audio 42 is going to transmit the frame B1, the frame B1 is transmitted to the access point 10 after this acknowledgement is received. While the transmission of the frame B1 is in progress, the speech communication terminal 44 cannot transmit speech data V2. The waiting time T1 of the speech data V2 depends on the packet size of the frame B1. If the packet size of the frame B1 is smaller than that of the frame A1, T1&lt;Tmax holds. - In the manner described above, when there is a hands-free phone conversation, limiting the packet size of the other terminals shortens the waiting time of the speech data as much as possible, so that the speech data can be communicated at a high speed while suppressing the deterioration of the speech transmission quality.
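The packet-size limitation of steps S104-S105 and the resulting waiting time can be sketched as follows. The function and constant names are illustrative, and the 1 Mbps default matches the worst-case rate discussed earlier, not a value mandated by the system.

```python
MAX_MPDU_BITS = 65536  # maximum MPDU size of the wireless LAN frame

def fragment(payload_bits: int, limit_bits: int) -> list[int]:
    """Split a terminal's payload into packets no larger than the limit
    advertised by the access point (the parameter of step S104)."""
    full, rest = divmod(payload_bits, limit_bits)
    return [limit_bits] * full + ([rest] if rest else [])

def max_wait_ms(limit_bits: int, bit_rate_bps: int = 1_000_000) -> float:
    """Worst-case wait for a pending voice packet: the air time of one
    maximally sized packet from another terminal."""
    return limit_bits / bit_rate_bps * 1000.0

# Limiting other terminals to 1/10 of the maximum MPDU shrinks the
# worst-case wait from ~65 ms to ~6.5 ms, at the cost of more packets.
limit = MAX_MPDU_BITS // 10                    # 6553 bits
packets = fragment(MAX_MPDU_BITS, limit)       # 11 packets instead of 1
wait = max_wait_ms(limit)                      # ~6.5 ms
```

The trade-off is visible in the fragment count: the same bulk payload now takes eleven channel acquisitions instead of one, but between any two of them the voice terminal gets a chance to transmit.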
- Next, a description is given of the delay time of the speech data, which is permitted in a hands-free phone conversation.
FIG. 5 is a graph showing the relationship between the amount of speech delay and speech transmission quality. This graph is disclosed in FIG. 2 of Jan Janssen et al., "DELAY AND DISTORTION BOUNDS FOR PACKETIZED VOICE CALLS OF TRADITIONAL PSTN QUALITY", Proceedings of the 1st IP-Telephony Workshop, GMD Report 95, pp. 105-110, Berlin, Germany, 12-13 April 2000. The horizontal axis indicates the delay time (ms) from mouth to ear, and the vertical axis indicates a rating R indicating the speech transmission quality. EL denotes the echo loss, the attenuation (dB) of the echo that returns when speech is output from the speaker to the other party. -
FIG. 6 shows the R-value ranges described in Table 1 of the above reference document. It is reported there that when the R value is 60 or lower, the speech transmission quality is very poor, and that for a phone conversation over a public network the R value is preferably at least 70. The echo loss when a mobile phone is used hands-free in a vehicle is approximately 40 dB. It follows that the delay time needs to be 200 ms or less (see the curve of FIG. 5 in which the EL is 41 dB). - When the mobile phone is used in the hands-free system 1 of FIG. 1, that is, when, as shown in FIG. 7A, a phone conversation is made with another phone 240 from a mobile phone 200 (the hands-free speech communication terminal 44 of FIG. 1) via a mobile phone base station 220, it is necessary to limit the total of the delay time on the wireless LAN side and the delay time on the mobile phone side to within 200 ms. - As shown in
FIG. 7B, the delay time of the mobile phone 200 comprises approximately 20 ms required for a speech coding section 204 to process the speech data received by a wireless LAN module and approximately 20 ms required for an interleave section 206 to process the coded speech data, for a total of 40 ms. Similarly, the time required for a deinterleave section 210 to process the speech data received from a wireless section 208 is 20 ms and the time required for a speech decoding section 212 to process that speech data is 20 ms, for a total of 40 ms. - As shown in
FIG. 7B, the delay time due to the mobile phone base station 220 comprises 20 ms required for a deinterleave section 224 to process the speech data from a wireless section and 20 ms required for a speech decoding section 226 to process that speech data, for a total of 40 ms. Similarly, it takes a total of 40 ms for a speech coding section 230 and an interleave section 232 to process the speech data received from a public network interface 228. - The delay time from the mobile
phone base station 220 through the public switched telephone network (PSTN) to the other phone 240 is approximately 20 ms. As a result, the delay time for a phone conversation from the mobile phone 200 to the other phone 240 totals approximately 100 ms. - For the delay time on the wireless LAN side, the process performed by the echo canceller for canceling the echo of input speech in the hands-free-installed
AVN device 20 shown in FIG. 1 takes approximately 30 ms. Therefore, the delay time permitted for the wireless LAN communication is approximately 70 ms (70 ms = 200 ms − 100 ms − 30 ms). The maximum waiting time in the conventional wireless LAN is approximately 65 ms, and this is the waiting time for one data frame or packet; when there is continuous data communication, a delay of 65 ms × the number of frames occurs. In comparison, in this embodiment, when there is a hands-free phone conversation, by limiting the permissible maximum packet size to approximately 1/10, speech data can almost always be communicated within the permitted delay time of 70 ms even when continuous frames are transmitted, and the deterioration of the speech transmission quality can be suppressed. - Next, a description is given of the operation of a second embodiment of the present invention. The second embodiment combines a polling function with the hands-free system incorporating a wireless LAN.
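The mouth-to-ear delay budget worked out in the preceding paragraphs reduces to simple arithmetic. The sketch below restates it with illustrative variable names; the per-stage figures are the approximate values quoted in the text.

```python
# Mouth-to-ear bound from the R-value discussion: with roughly 40 dB of
# echo loss, the one-way delay must stay at or below about 200 ms.
MOUTH_TO_EAR_LIMIT_MS = 200

handset_ms = 40         # speech coding + interleaving in the mobile phone
base_station_ms = 40    # deinterleaving + decoding in the base station
pstn_ms = 20            # base station -> PSTN -> other phone
echo_canceller_ms = 30  # AVN-side echo cancellation

mobile_side_ms = handset_ms + base_station_ms + pstn_ms   # 100 ms total
wlan_budget_ms = MOUTH_TO_EAR_LIMIT_MS - mobile_side_ms - echo_canceller_ms
# wlan_budget_ms == 70: the slack left for wireless LAN queuing, which a
# single conventional 65 ms frame wait would almost entirely consume.
```

Seen this way, the 1/10 packet-size limit buys roughly ten chances to transmit a voice packet inside the 70 ms of slack, instead of at most one.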
FIG. 8 shows a flowchart illustrating the operation thereof. - The
access point 10 sends a transmission request, asking whether there is data to be transmitted, to the hands-free speech communication terminal 44 with a priority higher than those of the PDA 40 and the mobile audio 42 (step S201). - The
speech communication terminal 44 transmits a response to the transmission request to the access point 10 (step S202). The communication control section 102 checks the response from the speech communication terminal 44 to determine whether or not the speech communication terminal 44 has data to transmit (step S203). When the speech communication terminal 44 has speech data to be transmitted, the communication control section 102 transmits a response acknowledgement from the access point 10 to the speech communication terminal 44 (step S204). On receiving the response acknowledgement, the speech communication terminal 44 transmits the speech data to the access point 10 (step S205). Since the other terminals have not received a response acknowledgement from the access point 10, they cannot transmit data. - On the other hand, when the
speech communication terminal 44 does not have data to transmit (step S203), the communication control section 102 sends a transmission request to another terminal (the PDA 40 or the mobile audio 42) (step S206). Then, the communication control section 102 receives a response to the transmission request from the other terminal (step S207) and checks whether or not the other terminal has data to transmit (step S208). When the other terminal has data to transmit, the access point 10 transmits a response acknowledgement to the other terminal (step S209), and the other terminal transmits its data to the access point 10 (step S210). The above steps are performed on each of the PDA 40 and the mobile audio 42; which of them is served with higher priority needs to be determined in advance. - According to the second embodiment, as a result of performing polling of the
speech communication terminal 44 with a higher priority than that of the other terminals, the communication of speech data by the speech communication terminal 44 is performed with a higher priority, and the waiting time can be shortened. - Next, a description will be given, with reference to the flowchart in
FIG. 9, of the operation according to a third embodiment of the present invention. The communication control section 102 detects a hands-free phone conversation by the speech communication terminal 44, as in the first embodiment (step S301). When it is detected that there is a hands-free phone conversation (step S302), polling of the speech communication terminal 44 takes a higher priority than that of the other terminals (step S303). For example, polling of the speech communication terminal 44 is performed more frequently than polling of the other terminals. - When a hands-free phone conversation is not detected, all the terminals are polled at a fixed period (step S304). Also, in this case, similarly to the second embodiment, polling of the
speech communication terminal 44 can be performed with a higher priority. - According to the third embodiment, when there is a hands-free phone conversation, as a result of causing polling of the
speech communication terminal 44 to have a higher priority than that of the other terminals, the waiting time of the speech data can be shortened. - The second and third embodiments can be combined with the first embodiment in which the maximum packet size is limited. That is, polling of a speech communication terminal may take a higher priority, and the packet size of data of other terminals may be limited.
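The combined scheme described above (priority polling from the second and third embodiments plus the packet-size limit of the first) can be sketched as a single polling round. The `pending` map and the return convention are illustrative stand-ins for the access point's transmission request/response exchange, not part of the disclosure.

```python
def poll_round(pending, speech_terminal, other_terminals, limit_bits=6553):
    """One polling round: ask the speech terminal first (steps S201-S205);
    only if it has nothing to send, poll the other terminals in turn
    (steps S206-S210), truncating their packets to the advertised limit.

    `pending` maps terminal -> number of bits waiting to be sent.
    Returns (terminal granted the medium, bits it may transmit now).
    """
    bits = pending.get(speech_terminal, 0)
    if bits > 0:
        return speech_terminal, bits          # voice is never size-limited
    for terminal in other_terminals:
        bits = pending.get(terminal, 0)
        if bits > 0:
            return terminal, min(bits, limit_bits)
    return None, 0

# The phone wins the round whenever it has speech data queued:
poll_round({"phone": 512, "pda": 65536}, "phone", ["pda", "audio"])
# -> ('phone', 512)
# Otherwise the PDA is served, but only up to the packet-size limit:
poll_round({"pda": 65536}, "phone", ["pda", "audio"])
# -> ('pda', 6553)
```

Because every round begins with the speech terminal, a queued voice packet waits at most one limited-size data packet, which is exactly the bound the first embodiment establishes.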
- In the above-described embodiments, a hands-free system in which a wireless LAN is used is described. However, the present invention is not limited to this example, and may be applied to a hands-free system in which a wired LAN is used. In this case, the
speech communication terminal 44 is connected to the AVN device 20 through a LAN cable and a vehicle-mounted bus (CAN-BUS (Controller Area Network-BUS), etc.). - Although, in the above-described embodiments, an in-vehicle hands-free system is described, the present invention is not limited to this system. For example, a hands-free system may be used in a call center in a corporation. Although, in the above-described embodiments, a hands-free function is installed in the AVN device, alternatively, an electronic device, such as a personal computer, may be used. Although an example of communication using a mobile phone is described, alternatively, speech communication using a TV phone or an IP phone, for example, is also possible.
Claims (20)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004044907A JP4562402B2 (en) | 2004-02-20 | 2004-02-20 | Asynchronous communication system for voice data and communication method therefor |
JP2004-044907 | 2004-02-20 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050203746A1 true US20050203746A1 (en) | 2005-09-15 |
US7839837B2 US7839837B2 (en) | 2010-11-23 |
Family
ID=34824491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/058,431 Active 2027-07-27 US7839837B2 (en) | 2004-02-20 | 2005-02-15 | Asynchronous speech data communication system and communication method therefor |
Country Status (4)
Country | Link |
---|---|
US (1) | US7839837B2 (en) |
EP (1) | EP1575225B1 (en) |
JP (1) | JP4562402B2 (en) |
DE (1) | DE602005002577T2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050144013A1 (en) * | 2003-11-20 | 2005-06-30 | Jun Fujimoto | Conversation control apparatus, conversation control method, and programs therefor |
US7877501B2 (en) | 2002-09-30 | 2011-01-25 | Avaya Inc. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US7978827B1 (en) | 2004-06-30 | 2011-07-12 | Avaya Inc. | Automatic configuration of call handling based on end-user needs and characteristics |
US8218751B2 (en) | 2008-09-29 | 2012-07-10 | Avaya Inc. | Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences |
US20130077771A1 (en) * | 2005-01-05 | 2013-03-28 | At&T Intellectual Property Ii, L.P. | System and Method of Dialog Trajectory Analysis |
US8593959B2 (en) | 2002-09-30 | 2013-11-26 | Avaya Inc. | VoIP endpoint call admission |
US20170171734A1 (en) * | 2015-12-09 | 2017-06-15 | Hyundai Motor Company | Method of operating avn, avn, and vehicle including the same |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102005008285B3 (en) * | 2005-02-23 | 2006-09-28 | Bury Sp.Z.O.O | Navigation device for vehicles |
WO2006133547A1 (en) | 2005-06-13 | 2006-12-21 | E-Lane Systems Inc. | Vehicle immersive communication system |
US8102878B2 (en) * | 2005-09-29 | 2012-01-24 | Qualcomm Incorporated | Video packet shaping for video telephony |
US8406309B2 (en) | 2005-10-21 | 2013-03-26 | Qualcomm Incorporated | Video rate adaptation to reverse link conditions |
US8842555B2 (en) | 2005-10-21 | 2014-09-23 | Qualcomm Incorporated | Methods and systems for adaptive encoding of real-time information in packet-switched wireless communication systems |
US8514711B2 (en) | 2005-10-21 | 2013-08-20 | Qualcomm Incorporated | Reverse link lower layer assisted video error control |
US8548048B2 (en) | 2005-10-27 | 2013-10-01 | Qualcomm Incorporated | Video source rate control for video telephony |
US8015010B2 (en) | 2006-06-13 | 2011-09-06 | E-Lane Systems Inc. | Vehicle communication system with news subscription service |
US9976865B2 (en) | 2006-07-28 | 2018-05-22 | Ridetones, Inc. | Vehicle communication system with navigation |
KR101370478B1 (en) | 2007-01-10 | 2014-03-06 | 퀄컴 인코포레이티드 | Content-and link-dependent coding adaptation for multimedia telephony |
US8797850B2 (en) | 2008-01-10 | 2014-08-05 | Qualcomm Incorporated | System and method to adapt to network congestion |
WO2009111884A1 (en) | 2008-03-12 | 2009-09-17 | E-Lane Systems Inc. | Speech understanding method and system |
CA2719301C (en) | 2008-03-25 | 2016-10-04 | E-Lane Systems Inc. | Multi-participant, mixed-initiative voice interaction system |
CA2727951A1 (en) | 2008-06-19 | 2009-12-23 | E-Lane Systems Inc. | Communication system with voice mail access and call by spelling functionality |
US9652023B2 (en) | 2008-07-24 | 2017-05-16 | Intelligent Mechatronic Systems Inc. | Power management system |
WO2010135837A1 (en) | 2009-05-28 | 2010-12-02 | Intelligent Mechatronic Systems Inc | Communication system with personal information management and remote vehicle monitoring and control features |
US9667726B2 (en) | 2009-06-27 | 2017-05-30 | Ridetones, Inc. | Vehicle internet radio interface |
US9978272B2 (en) | 2009-11-25 | 2018-05-22 | Ridetones, Inc | Vehicle to vehicle chatting and communication system |
US9078125B2 (en) * | 2013-03-13 | 2015-07-07 | GM Global Technology Operations LLC | Vehicle communications system and method |
US9665707B2 (en) | 2015-01-09 | 2017-05-30 | GM Global Technology Operations LLC | Systems and methods for cyber security of intra-vehicular peripherals powered by wire |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4975906A (en) * | 1988-02-15 | 1990-12-04 | Hitachi, Ltd. | Network system |
US6163532A (en) * | 1996-03-08 | 2000-12-19 | Ntt Mobile Communications Networks, Inc. | Packet data transferring method for mobile radio data communication system |
US6362730B2 (en) * | 1999-06-14 | 2002-03-26 | Sun Microsystems, Inc. | System and method for collecting vehicle information |
US6370163B1 (en) * | 1998-03-11 | 2002-04-09 | Siemens Information And Communications Network, Inc. | Apparatus and method for speech transport with adaptive packet size |
US6542495B1 (en) * | 1998-03-17 | 2003-04-01 | Sony Corporation | Wireless communicating method, wireless communicating system, communicating station, and controlling station |
US20030110286A1 (en) * | 2001-12-12 | 2003-06-12 | Csaba Antal | Method and apparatus for segmenting a data packet |
US20040022203A1 (en) * | 2002-07-30 | 2004-02-05 | Michelson Steven M. | Method of sizing packets for routing over a communication network for VoIP calls on a per call basis |
US20040156350A1 (en) * | 2003-01-22 | 2004-08-12 | Waverider Communications Inc. | Hybrid polling/contention MAC layer with multiple grades of service |
US20050135409A1 (en) * | 2003-12-19 | 2005-06-23 | Intel Corporation | Polling in wireless networks |
US20050222933A1 (en) * | 2002-05-21 | 2005-10-06 | Wesby Philip B | System and method for monitoring and control of wireless modules linked to assets |
US7046794B2 (en) * | 2003-12-12 | 2006-05-16 | Motorola, Inc. | Double talk activity detector and method for an echo canceler circuit |
US7046643B1 (en) * | 1997-07-30 | 2006-05-16 | Bellsouth Intellectual Property Corporation | Method for dynamic multi-level pricing for wireless communications according to quality of service |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2836603B2 (en) * | 1996-09-27 | 1998-12-14 | 日本電気株式会社 | Local area network |
JP2001136190A (en) * | 1999-11-10 | 2001-05-18 | Sony Corp | On-vehicle lan method and on-vehicle lan equipment |
2004
- 2004-02-20 JP JP2004044907A patent/JP4562402B2/en not_active Expired - Fee Related
2005
- 2005-01-21 EP EP20050250312 patent/EP1575225B1/en not_active Expired - Fee Related
- 2005-01-21 DE DE200560002577 patent/DE602005002577T2/en active Active
- 2005-02-15 US US11/058,431 patent/US7839837B2/en active Active
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8593959B2 (en) | 2002-09-30 | 2013-11-26 | Avaya Inc. | VoIP endpoint call admission |
US7877501B2 (en) | 2002-09-30 | 2011-01-25 | Avaya Inc. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US7877500B2 (en) | 2002-09-30 | 2011-01-25 | Avaya Inc. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US8015309B2 (en) | 2002-09-30 | 2011-09-06 | Avaya Inc. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US8370515B2 (en) | 2002-09-30 | 2013-02-05 | Avaya Inc. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US20050144013A1 (en) * | 2003-11-20 | 2005-06-30 | Jun Fujimoto | Conversation control apparatus, conversation control method, and programs therefor |
US7676369B2 (en) * | 2003-11-20 | 2010-03-09 | Universal Entertainment Corporation | Conversation control apparatus, conversation control method, and programs therefor |
US7978827B1 (en) | 2004-06-30 | 2011-07-12 | Avaya Inc. | Automatic configuration of call handling based on end-user needs and characteristics |
US20130077771A1 (en) * | 2005-01-05 | 2013-03-28 | At&T Intellectual Property Ii, L.P. | System and Method of Dialog Trajectory Analysis |
US8949131B2 (en) * | 2005-01-05 | 2015-02-03 | At&T Intellectual Property Ii, L.P. | System and method of dialog trajectory analysis |
US8218751B2 (en) | 2008-09-29 | 2012-07-10 | Avaya Inc. | Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences |
US20170171734A1 (en) * | 2015-12-09 | 2017-06-15 | Hyundai Motor Company | Method of operating avn, avn, and vehicle including the same |
CN107054249A (en) * | 2015-12-09 | 2017-08-18 | 现代自动车株式会社 | Run AVN method, AVN and the vehicle including AVN |
US9973911B2 (en) * | 2015-12-09 | 2018-05-15 | Hyundai Motor Company | Method of operating AVN, AVN, and vehicle including the same |
Also Published As
Publication number | Publication date |
---|---|
US7839837B2 (en) | 2010-11-23 |
JP2005236783A (en) | 2005-09-02 |
EP1575225A2 (en) | 2005-09-14 |
DE602005002577T2 (en) | 2008-06-19 |
JP4562402B2 (en) | 2010-10-13 |
DE602005002577D1 (en) | 2007-11-08 |
EP1575225A3 (en) | 2005-10-05 |
EP1575225B1 (en) | 2007-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7839837B2 (en) | Asynchronous speech data communication system and communication method therefor | |
US10477474B2 (en) | Arbitrating a low power mode for multiple applications running on a device | |
US7778172B2 (en) | Home networking system and admission control method thereof | |
EP3273620B1 (en) | Controlling connected multimedia devices | |
US8977202B2 (en) | Communication apparatus having a unit to determine whether a profile is operating | |
JP5074192B2 (en) | Wireless communication gateway and wireless communication terminal | |
US7349685B2 (en) | Method and apparatus for generating service billing records for a wireless client | |
US7751420B2 (en) | Network switching method and apparatus, wireless access apparatus, and wireless network | |
US20070275756A1 (en) | Apparatus and method for transmitting/receiving data using multi-channel of wireless LAN in a mobile communication terminal | |
MX2008000317A (en) | System and method for resolving conflicts in multiple simultaneous communications in a wireless system. | |
US20100172335A1 (en) | Data transmission method and apparatus based on Wi-Fi multimedia | |
WO2022161403A1 (en) | Station-announcing method and system, electronic device, and broadcast device | |
US20060120312A1 (en) | Communications method, communications system, relay apparatus, and recording medium | |
US20030067922A1 (en) | Communication method, communication device, and communication terminal | |
JP5086176B2 (en) | Mobile communication terminal and wireless communication method | |
US20110032916A1 (en) | Wireless communication apparatus and method using the same | |
US20070070984A1 (en) | Method for dynamically handling the data path of a wireless IP phone | |
JP2006191523A (en) | Communication method, communication system, relay device, and computer program | |
JPH08251229A (en) | Radio communication system | |
EP1295443B1 (en) | Method to assess the quality of a voice communication over packet networks | |
US20030123430A1 (en) | Voice over packet network phone | |
US7912495B2 (en) | Fixed bit rate wireless communications apparatus and method | |
KR100715981B1 (en) | Bridge connecting wibro protocol and most protocol and system including the same | |
CN1246998A (en) | Mobile telecommunications unit and system and method relating thereof | |
WO2008067388A1 (en) | Adaptive buffer management for wireless connectivity system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ALPINE ELECTRONICS, INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OBATA, KIJURO;REEL/FRAME:016564/0128 Effective date: 20050412 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |