CN103729059A - Interactive method and device - Google Patents

Interactive method and device

Info

Publication number
CN103729059A
CN103729059A
Authority
CN
China
Prior art keywords
content
user
sub
syllables
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310740414.3A
Other languages
Chinese (zh)
Inventor
Yu Kuifei (于魁飞)
Zhang Hongjiang (张宏江)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201310740414.3A
Publication of CN103729059A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides an interactive method and device. The method includes: determining the region of displayed content that a user is paying attention to; determining, according to the user's spelling information, the sub-content in that region that the user is interested in; and providing related information about the sub-content to the user. With this human-computer interaction scheme, the user's point of focus can be located accurately on the basis of the spelling information, interference with the user is reduced, and related information is triggered more naturally.

Description

Interaction method and device
Technical field
Embodiments of the present invention relate to the field of human-computer interaction, and in particular to an interaction method and device.
Background technology
As exchanges between people of different cultures and languages deepen, instant translation has become an indispensable function. Translation software usually captures mouse movement patterns and provides the corresponding translation result in real time; for instance, with an on-screen word-capture function enabled, the word that the mouse points at is captured and its translation is displayed immediately. On touchscreen devices there is usually no such mouse-movement operation, and sliding to select text often triggers other system-defined context menus, causing display conflicts, as shown in Fig. 1.
Summary of the invention
In view of this, an object of the embodiments of the present invention is to provide a human-computer interaction scheme.
To achieve the above object, according to one aspect of the embodiments of the present invention, an interaction method is provided, comprising:
determining the region of displayed content that a user is paying attention to;
determining, according to the user's spelling information, the sub-content in the region that the user is interested in;
providing related information about the sub-content to the user.
To achieve the above object, according to another aspect of the embodiments of the present invention, an interactive device is provided, comprising:
a first determination module, configured to determine the region of displayed content that a user is paying attention to;
a second determination module, configured to determine, according to the user's spelling information, the sub-content in the region that the user is interested in;
a providing module, configured to provide related information about the sub-content to the user.
At least one of the technical schemes above has the following beneficial effect:
by determining the region of displayed content that the user is paying attention to, determining the sub-content of interest according to the user's spelling information, and providing related information about that sub-content to the user, a human-computer interaction scheme is provided; moreover, based on the spelling information, the user's point of focus can be located more accurately, interference with the user is reduced, and related information is triggered more naturally.
Brief description of the drawings
Fig. 1 is a schematic diagram of an application scenario of instant translation;
Fig. 2 is a flowchart of Embodiment 1 of an interaction method provided by the present invention;
Fig. 3 is a structural diagram of Embodiment 1 of an interactive device provided by the present invention;
Fig. 4 is a structural diagram of Embodiment 2 of an interactive device provided by the present invention.
Detailed description
Specific embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples are used to illustrate the present invention, but not to limit its scope.
Fig. 2 is a flowchart of Embodiment 1 of the interaction method provided by the present invention. As shown in Fig. 2, this embodiment comprises:
101. Determine the region of displayed content that the user is paying attention to.
For instance, Embodiment 1, i.e. steps 101-103, can be carried out by the interactive device provided by the embodiments of the present invention. Specifically, the interactive device may be arranged in user equipment, or may itself be the user equipment, the user equipment being a device that interacts with the user and has a display interface, such as a mobile phone or a computer.
The displayed content may be content presented in a user interface, in forms including but not limited to documents and pictures. Further, the region that the user is paying attention to may contain part of the displayed content, such as a few lines of a document, or the content of a certain figure within a picture.
There are typically several ways to determine the region of displayed content that the user is paying attention to. In one optional implementation, determining the region comprises:
determining, according to features of the user's eyes, the region of displayed content that the user is paying attention to.
For instance, the eye features may include the eyes' focus position or a fundus image. The focus position of the eyes can be detected in several ways; three optional implementations are given below:
a) Determine the focus position of the eyes according to the optical parameters of the light path between the image-capture position and the eyes at the moment a clear image of the fundus is collected.
b) Track the sight-line directions of both eyes and obtain the focus position of the eyes from the intersection of the two sight lines.
c) Track the sight-line direction of the eyes and obtain the focus position of the eyes from the intersection of the sight line with the display plane of the displayed content.
In this embodiment, the focus position obtained above usually lies on the display plane of the displayed content; accordingly, a region within a certain range around the focus position in the displayed content is taken as the region the user is paying attention to.
Specifically, the centre of the human retina is the macula, which lies at the optical centre of the eye, at the projection point of the visual axis. The depression at the centre of the macula is called the fovea, the place of sharpest vision; the object the eyes fixate on is projected onto the fovea. Therefore, by collecting the part of a fundus image corresponding to the fovea, the object the user is fixating on, and hence the region of displayed content the user is paying attention to, can be determined.
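Implementation b) above, the intersection of two tracked sight lines, can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it assumes each eye's 3D position and sight-line direction are already available from some eye-tracking step, and, since two measured sight lines are rarely exactly concurrent, it returns the midpoint of the shortest segment connecting the two rays.

```python
import numpy as np

def gaze_point_from_sightlines(origin_l, dir_l, origin_r, dir_r):
    """Estimate the eyes' focus position from two sight-line rays.

    Each sight line is a ray origin + t * direction in 3D. Measured rays
    are usually skew, so the midpoint of the shortest connecting segment
    is used as the focus position; None is returned for parallel rays.
    """
    o1, d1 = np.asarray(origin_l, float), np.asarray(dir_l, float)
    o2, d2 = np.asarray(origin_r, float), np.asarray(dir_r, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:  # parallel sight lines: no usable intersection
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2
```

The region the user is paying attention to would then be a window around the projection of this point onto the display plane.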
In another optional implementation, determining the region of displayed content that the user is paying attention to comprises: determining the region according to the user's gesture.
Specifically, the region the user is paying attention to may be the region of the displayed content selected by the user's gesture.
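As an illustration of the gesture-based alternative (a sketch under the assumption that the gesture arrives as sampled touch coordinates; these names are not from the patent), the selected region can simply be the bounding box of the touch points:

```python
def region_from_gesture(touch_points):
    """Bounding box (x_min, y_min, x_max, y_max) of a touch gesture.

    touch_points: sequence of (x, y) screen coordinates sampled while
    the user circles or swipes over the content of interest.
    """
    xs = [x for x, _ in touch_points]
    ys = [y for _, y in touch_points]
    return (min(xs), min(ys), max(xs), max(ys))
```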
102. According to the user's spelling information, determine the sub-content in the region that the user is interested in.
The spelling information is information produced by the act of sounding content out, including but not limited to at least one of the following: pronunciation, mouth shape, and spelling-related bioelectric features. The spelling-related bioelectric features are bioelectric features produced by the user's body while sounding content out, including bioelectric parameters, their variation characteristics, and so on.
Preferably, the sub-content comprises at least one of the following: a character, a letter, a symbol, a word, a phrase, a sentence. For instance, the sub-content may be "I", "we", "figure out", "Ω", and so on.
In one optional implementation, the spelling information comprises pronunciation, and determining, according to the user's spelling information, the sub-content in the region that the user is interested in comprises:
among the content contained in the region, determining the sub-content that matches the user's pronunciation as the sub-content the user is interested in.
Specifically, the matching can be done in several ways. For instance, the user's pronunciation can first be converted into text, and the sub-content that agrees with the text is then searched for among the content contained in the region; or, according to reference pronunciations of the content contained in the region, the sub-content whose reference pronunciation agrees with the user's pronunciation is searched for; or, the sub-content whose reference pronunciation has the highest similarity to the user's pronunciation is taken as the sub-content matching the user's pronunciation. The reference pronunciations and their corresponding sub-content may be kept in a reference-pronunciation library. This embodiment does not limit the manner of matching.
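The first matching manner above (convert the pronunciation to text, then search the region for agreeing sub-content) can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the speech-to-text step is taken as given, and `difflib` stands in for a real similarity measure over a reference-pronunciation library.

```python
import difflib

def match_subcontent_by_pronunciation(recognized_text, region_words, cutoff=0.6):
    """Find the sub-content in the region that best agrees with the
    speech-recognised text; return None if nothing is similar enough."""
    lowered = [w.lower() for w in region_words]
    hits = difflib.get_close_matches(recognized_text.lower(), lowered,
                                     n=1, cutoff=cutoff)
    if not hits:
        return None
    # map back to the original-cased sub-content shown in the region
    return region_words[lowered.index(hits[0])]
```

A slightly misrecognised "topografical", for example, still resolves to "Topographical" among the words shown in the region, while unrelated speech matches nothing.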
In another optional implementation, the spelling information comprises mouth shape, and determining, according to the user's spelling information, the sub-content in the region that the user is interested in comprises:
among the content contained in the region, determining the sub-content that matches the user's mouth shape as the sub-content the user is interested in.
Specifically, the sub-content matching the user's mouth shape can be determined according to the reference mouth shapes corresponding to the content contained in the region when it is sounded out. Further, the sub-content whose reference mouth shape agrees with the user's mouth shape can be searched for; or, the sub-content whose reference mouth shape has the highest similarity to the user's mouth shape is taken as the sub-content matching the user's mouth shape. The reference mouth shapes and their corresponding sub-content may be kept in a reference mouth-shape library. This embodiment does not limit the manner of matching.
The above implementation is preferably applied in scenes where the user reads silently, sounding content out without voice.
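The highest-similarity variant of mouth-shape matching can be sketched as follows. Everything here is an assumption for illustration: mouth shapes are encoded as viseme label sequences, and the reference mouth-shape library is a plain dictionary; the patent does not prescribe any particular representation.

```python
import difflib

# Hypothetical reference mouth-shape library: sub-content -> viseme sequence.
REFERENCE_VISEMES = {
    "apple":  ["A", "P", "L"],
    "orange": ["O", "R", "N", "JH"],
}

def match_subcontent_by_mouth_shape(observed, library=REFERENCE_VISEMES):
    """Return (sub_content, similarity) for the entry whose reference
    viseme sequence is most similar to the observed sequence."""
    def sim(ref):
        return difflib.SequenceMatcher(None, observed, ref).ratio()
    best = max(library, key=lambda sub: sim(library[sub]))
    return best, sim(library[best])
```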
In another optional implementation, the spelling information comprises pronunciation and mouth shape, and determining, according to the user's spelling information, the sub-content in the region that the user is interested in comprises: among the content contained in the region, determining the sub-content that matches both the user's pronunciation and the user's mouth shape as the sub-content the user is interested in.
In another optional implementation, the spelling information comprises spelling-related bioelectric features, and determining, according to the user's spelling information, the sub-content in the region that the user is interested in comprises: among the content contained in the region, determining the sub-content that matches the spelling-related bioelectric features as the sub-content the user is interested in.
In each of the above implementations, the method preferably further comprises: acquiring the user's spelling information. Specifically, each class of spelling information can be acquired in several corresponding ways; the mouth shape, for instance, can be acquired by image recognition or by bioelectric detection.
It should be noted that capturing the user's spelling information may be performed before, after, or while performing step 101. Preferably, capturing the user's spelling information is a continuous process that intersects or overlaps in time with the process of performing step 101.
103. Provide related information about the sub-content to the user.
Preferably, the related information includes but is not limited to at least one of the following: an attribute, an explanation, a translation, a reference pronunciation, an advertisement. For instance, if the sub-content is "apple", related information such as "a kind of fruit" or "a U.S. high-tech company whose core business is consumer electronics" can be provided to the user. If the sub-content is "topographical", related information such as "topography; topographic" can be provided.
Preferably, the reference pronunciation of the sub-content may be provided to the user when the similarity between the user's pronunciation and the reference pronunciation of the sub-content is below a predetermined threshold.
Specifically, the related information can be provided in several ways. For instance, related information in textual form can be displayed, e.g. the translation of the sub-content is shown on the display interface; related information in multimedia form can be played, e.g. the reference pronunciation of the sub-content; and for related information of any form, a link to the related information can be shown.
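The preferred behaviour above (always display textual information; play the reference pronunciation only when the user's own pronunciation matched it poorly) can be sketched as a small decision step. The 0.8 default threshold and the action-tuple shape are illustrative assumptions only:

```python
def related_info_actions(translation, pron_similarity,
                         threshold=0.8, ref_pronunciation=None):
    """Decide which related information to offer, as (action, payload) pairs.

    The translation is always displayed; the reference pronunciation is
    played only when the similarity between the user's pronunciation and
    the reference fell below the predetermined threshold.
    """
    actions = [("display", translation)]
    if ref_pronunciation is not None and pron_similarity < threshold:
        actions.append(("play", ref_pronunciation))
    return actions
```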
In the embodiment of the present invention, by determining the region of displayed content that the user is paying attention to, determining the sub-content of interest according to the user's spelling information, and providing related information about that sub-content to the user, a human-computer interaction scheme is provided; moreover, based on the spelling information, the user's point of focus can be located more accurately, interference with the user is reduced, and related information is triggered more naturally.
Fig. 3 is a structural diagram of Embodiment 1 of an interactive device provided by the present invention. As shown in Fig. 3, the interactive device 200 comprises:
a first determination module 21, configured to determine the region of displayed content that the user is paying attention to;
a second determination module 22, configured to determine, according to the user's spelling information, the sub-content in the region that the user is interested in;
a providing module 23, configured to provide related information about the sub-content to the user.
The displayed content may be content presented in a user interface, in forms including but not limited to documents and pictures. Further, the region that the user is paying attention to may contain part of the displayed content, such as a few lines of a document, or the content of a certain figure within a picture.
Typically, the first determination module 21 can determine the region of displayed content that the user is paying attention to in several ways. In one optional implementation, the first determination module 21 is specifically configured to determine, according to features of the user's eyes, the region of displayed content that the user is paying attention to.
For instance, the eye features may include the eyes' focus position or a fundus image. The focus position of the eyes can be detected in several ways; three optional implementations are given below:
a) Determine the focus position of the eyes according to the optical parameters of the light path between the image-capture position and the eyes at the moment a clear image of the fundus is collected.
b) Track the sight-line directions of both eyes and obtain the focus position of the eyes from the intersection of the two sight lines.
c) Track the sight-line direction of the eyes and obtain the focus position of the eyes from the intersection of the sight line with the display plane of the displayed content.
In this embodiment, the focus position obtained above usually lies on the display plane of the displayed content; accordingly, the first determination module 21 takes a region within a certain range around the focus position in the displayed content as the region the user is paying attention to.
Specifically, the centre of the human retina is the macula, which lies at the optical centre of the eye, at the projection point of the visual axis. The depression at the centre of the macula is called the fovea, the place of sharpest vision; the object the eyes fixate on is projected onto the fovea. Therefore, by collecting the part of a fundus image corresponding to the fovea, the object the user is fixating on, and hence the region of displayed content the user is paying attention to, can be determined.
In another optional implementation, the first determination module 21 determines the region of displayed content that the user is paying attention to according to the user's gesture. Specifically, the region may be the region of the displayed content selected by the user's gesture.
The spelling information is information produced by the act of sounding content out, including but not limited to at least one of the following: pronunciation, mouth shape, and spelling-related bioelectric features. The spelling-related bioelectric features are bioelectric features produced by the user's body while sounding content out, including bioelectric parameters, their variation characteristics, and so on.
Preferably, the sub-content comprises at least one of the following: a character, a letter, a symbol, a word, a phrase, a sentence. For instance, the sub-content may be "I", "we", "figure out", "Ω", and so on.
In one optional implementation, the spelling information comprises pronunciation, and the second determination module 22 is specifically configured to:
among the content contained in the region, determine the sub-content that matches the user's pronunciation as the sub-content the user is interested in.
Specifically, the matching can be done in several ways. For instance, the second determination module 22 can first convert the user's pronunciation into text and then search, among the content contained in the region, for the sub-content that agrees with the text; or, according to reference pronunciations of the content contained in the region, search for the sub-content whose reference pronunciation agrees with the user's pronunciation; or take the sub-content whose reference pronunciation has the highest similarity to the user's pronunciation as the sub-content matching the user's pronunciation. The reference pronunciations and their corresponding sub-content may be kept in a reference-pronunciation library. This embodiment does not limit the matching manner of the second determination module 22.
In another optional implementation, the spelling information comprises mouth shape, and the second determination module 22 is specifically configured to: among the content contained in the region, determine the sub-content that matches the user's mouth shape as the sub-content the user is interested in.
Specifically, the second determination module 22 can determine the sub-content matching the user's mouth shape according to the reference mouth shapes corresponding to the content contained in the region when it is sounded out. Further, the second determination module 22 can search for the sub-content whose reference mouth shape agrees with the user's mouth shape; or take the sub-content whose reference mouth shape has the highest similarity to the user's mouth shape as the sub-content matching the user's mouth shape. The reference mouth shapes and their corresponding sub-content may be kept in a reference mouth-shape library. This embodiment does not limit the matching manner of the second determination module 22.
The above implementation is preferably applied in scenes where the user reads silently, sounding content out without voice.
In another optional implementation, the spelling information comprises pronunciation and mouth shape, and the second determination module 22 is specifically configured to: determine the sub-content that matches both the user's pronunciation and the user's mouth shape as the sub-content the user is interested in.
In another optional implementation, the spelling information comprises spelling-related bioelectric features, and the second determination module 22 is specifically configured to:
among the content contained in the region, determine the sub-content that matches the spelling-related bioelectric features as the sub-content the user is interested in.
In each of the above implementations, the interactive device 200 preferably further comprises: an information acquisition module, configured to acquire the user's spelling information. Specifically, for each class of spelling information, the information acquisition module can have several corresponding acquisition manners; the mouth shape, for instance, can be acquired by image recognition or by bioelectric detection.
It should be noted that the information acquisition module may acquire the user's spelling information before, after, or while the first determination module 21 determines the region of displayed content that the user is paying attention to. Preferably, the acquisition of the user's spelling information by the information acquisition module is a continuous process that intersects or overlaps in time with the process in which the first determination module 21 determines the region.
Preferably, the related information includes but is not limited to at least one of the following: an attribute, an explanation, a translation, a reference pronunciation, an advertisement. For instance, if the sub-content is "apple", related information such as "a kind of fruit" or "a U.S. high-tech company whose core business is consumer electronics" can be provided to the user. If the sub-content is "topographical", related information such as "topography; topographic" can be provided.
Preferably, the providing module 23 provides the reference pronunciation of the sub-content to the user when the similarity between the user's pronunciation and the reference pronunciation of the sub-content is below a predetermined threshold.
Specifically, the providing module 23 can provide the related information in several ways. For instance, related information in textual form can be displayed, e.g. the providing module 23 shows the translation of the sub-content on the display interface; related information in multimedia form can be played, e.g. the providing module 23 plays the reference pronunciation of the sub-content; and for related information of any form, the providing module 23 can show a link to the related information.
It should be noted that the interactive device 200 can be implemented in software, in hardware, or in a combination of software and hardware. Specifically, the interactive device 200 may be arranged in user equipment, or may itself be the user equipment, the user equipment being a device that interacts with the user and has a display interface, such as a mobile phone or a computer.
In the embodiment of the present invention, by determining the region of displayed content that the user is paying attention to, determining the sub-content of interest according to the user's spelling information, and providing related information about that sub-content to the user, a human-computer interaction scheme is provided; moreover, based on the spelling information, the user's point of focus can be located more accurately, interference with the user is reduced, and related information is triggered more naturally.
Fig. 4 is a structural diagram of Embodiment 2 of an interactive device provided by the present invention. As shown in Fig. 4, the interactive device 300 comprises:
a processor 31, a communications interface 32, a memory 33 and a communication bus 34, wherein:
the processor 31, the communications interface 32 and the memory 33 communicate with one another via the communication bus 34;
the communications interface 32 is used to communicate with external devices such as bioelectric detection equipment.
Further, the interactive device 300 may also comprise a camera module, a microphone module, etc., not shown.
The processor 31 is configured to execute a program 332, and can specifically carry out the relevant steps of method Embodiment 1 above.
Specifically, the program 332 may comprise program code, the program code comprising computer operating instructions.
The processor 31 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 33 is configured to store the program 332. The memory 33 may comprise high-speed RAM, and may also comprise non-volatile memory, for example at least one disk memory. The program 332 may specifically cause the interactive device 300 to perform the following steps:
determine the region of displayed content that the user is paying attention to;
according to the user's spelling information, determine the sub-content in the region that the user is interested in;
provide related information about the sub-content to the user.
For the specific implementation of each step in the program 332, reference can be made to the corresponding descriptions of the corresponding steps and units in the interaction method Embodiment 1 above, which are not repeated here. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the equipment and modules described above can be found in the corresponding process of the foregoing method embodiment.
Those of ordinary skill in the art will recognise that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical scheme. Skilled persons may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present invention.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical scheme of the present invention that in essence contributes to the prior art, or a part of the technical scheme, can be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are provided only to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention; therefore, all equivalent technical solutions also fall within the scope of the present invention, and the patent protection scope of the present invention shall be defined by the claims.
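The patent does not disclose a concrete algorithm for matching the user's spelling information against the content in the attended region. The following is a minimal illustrative sketch under stated assumptions: the gaze region's text and a recognized pronunciation (e.g. from a speech-recognition front end) are already available as strings, sub-contents are whitespace-delimited words, and the function and variable names (`select_sub_content`, `related_info`, `region`) are hypothetical, not taken from the patent.

```python
# Illustrative sketch only; the patent does not specify an implementation.
# Assumes an eye tracker has already yielded the region's text and a
# speech/lip-reading front end has yielded the user's pronunciation as text.

def select_sub_content(region_text, spelling_info):
    """Return the sub-content (here: a word) in the attended region that
    matches the user's spelling information, or None if nothing matches."""
    candidates = region_text.split()  # sub-content granularity: words
    spoken = spelling_info.strip().lower()
    for word in candidates:
        # strip trailing punctuation before comparing, case-insensitively
        if word.strip(".,;:!?").lower() == spoken:
            return word.strip(".,;:!?")
    return None  # no match: do not disturb the user

def related_info(sub_content, lookup):
    """Return related information of the sub-content (e.g. an explanation
    or translation), looked up in a hypothetical dictionary."""
    return lookup.get(sub_content)

# Usage under the stated assumptions:
region = "The quick brown fox jumps over the lazy dog."
hit = select_sub_content(region, "fox")
info = related_info(hit, {"fox": "a small wild canine; translation: renard"})
```

Returning `None` on a failed match reflects the abstract's goal of reducing interference with the user: related information is triggered only when the spelling information actually matches something in the attended region.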

Claims (16)

1. An interaction method, characterized in that the method comprises:
determining a region of displayed content to which a user pays attention;
determining, according to spelling information of the user, a sub-content in the region in which the user is interested; and
providing related information of the sub-content to the user.
2. The method according to claim 1, characterized in that the spelling information comprises a pronunciation; and the determining, according to the spelling information of the user, the sub-content in the region in which the user is interested comprises:
determining, among the content contained in the region, a sub-content matching the pronunciation of the user as the sub-content in which the user is interested.
3. The method according to claim 1, characterized in that the spelling information comprises a mouth shape; and the determining, according to the spelling information of the user, the sub-content in the region in which the user is interested comprises:
determining, among the content contained in the region, a sub-content matching the mouth shape of the user as the sub-content in which the user is interested.
4. The method according to claim 1, characterized in that the spelling information comprises a pronunciation and a mouth shape; and the determining, according to the spelling information of the user, the sub-content in the region in which the user is interested comprises:
determining, among the content contained in the region, a sub-content matching both the pronunciation and the mouth shape of the user as the sub-content in which the user is interested.
5. The method according to claim 1, characterized in that the spelling information comprises bioelectrical features related to spelling; and the determining, according to the spelling information of the user, the sub-content in the region in which the user is interested comprises:
determining, among the content contained in the region, a sub-content matching the bioelectrical features related to spelling as the sub-content in which the user is interested.
6. The method according to claim 1, characterized in that the determining the region of the displayed content to which the user pays attention comprises:
determining, according to a feature of the user's eyes, the region of the displayed content to which the user pays attention.
7. The method according to any one of claims 1 to 6, characterized in that the sub-content comprises at least one of the following: a character, a letter, a symbol, a word, a phrase, or a sentence.
8. The method according to any one of claims 1 to 7, characterized in that the related information comprises at least one of the following: an attribute, an explanation, a translation, a reference pronunciation, or an advertisement.
9. An interaction device, characterized in that the device comprises:
a first determination module, configured to determine a region of displayed content to which a user pays attention;
a second determination module, configured to determine, according to spelling information of the user, a sub-content in the region in which the user is interested; and
a providing module, configured to provide related information of the sub-content to the user.
10. The device according to claim 9, characterized in that the spelling information comprises a pronunciation, and the second determination module is specifically configured to:
determine, among the content contained in the region, a sub-content matching the pronunciation of the user as the sub-content in which the user is interested.
11. The device according to claim 9, characterized in that the spelling information comprises a mouth shape, and the second determination module is specifically configured to:
determine, among the content contained in the region, a sub-content matching the mouth shape of the user as the sub-content in which the user is interested.
12. The device according to claim 9, characterized in that the spelling information comprises a pronunciation and a mouth shape, and the second determination module is specifically configured to:
determine a sub-content matching both the pronunciation and the mouth shape of the user as the sub-content in which the user is interested.
13. The device according to claim 9, characterized in that the spelling information comprises bioelectrical features related to spelling, and the second determination module is specifically configured to:
determine, among the content contained in the region, a sub-content matching the bioelectrical features related to spelling as the sub-content in which the user is interested.
14. The device according to claim 9, characterized in that the first determination module is specifically configured to:
determine, according to a feature of the user's eyes, the region of the displayed content to which the user pays attention.
15. The device according to any one of claims 9 to 14, characterized in that the sub-content comprises at least one of the following: a character, a letter, a symbol, a word, a phrase, or a sentence.
16. The device according to any one of claims 9 to 15, characterized in that the related information comprises at least one of the following: an attribute, an explanation, a translation, a reference pronunciation, or an advertisement.
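The three-module decomposition of device claims 9 to 16 can be sketched as follows. This is a hypothetical illustration only: the class and method names are invented, and it assumes gaze coordinates, region bounding boxes, and the user's spelling information have already been produced by sensors and recognition front ends that the patent leaves unspecified.

```python
# Hypothetical sketch mirroring the module structure of claims 9-16;
# names and interfaces are invented for illustration, not from the patent.

class FirstDeterminationModule:
    """Determines the region of displayed content the user attends to
    (per claim 14, e.g. from a feature of the user's eyes such as gaze)."""
    def determine_region(self, gaze_xy, regions):
        # regions: {region_name: (x0, y0, x1, y1)} bounding boxes
        x, y = gaze_xy
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

class SecondDeterminationModule:
    """Matches the user's spelling information against the sub-contents
    (here: words) contained in the attended region."""
    def determine_sub_content(self, region_words, spelling_info):
        spoken = spelling_info.lower()
        return next((w for w in region_words if w.lower() == spoken), None)

class ProvidingModule:
    """Provides related information of the sub-content to the user
    (per claim 16: explanation, translation, reference pronunciation...)."""
    def provide(self, sub_content, lookup):
        return lookup.get(sub_content, "no information available")
```

Under this decomposition the modules are independent and pipelined, so the spelling-based second stage only runs over the (small) attended region returned by the first stage, which is what lets the scheme locate the user's focus without scanning the whole display.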
CN201310740414.3A 2013-12-27 2013-12-27 Interactive method and device Pending CN103729059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310740414.3A CN103729059A (en) 2013-12-27 2013-12-27 Interactive method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310740414.3A CN103729059A (en) 2013-12-27 2013-12-27 Interactive method and device

Publications (1)

Publication Number Publication Date
CN103729059A true CN103729059A (en) 2014-04-16

Family

ID=50453167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310740414.3A Pending CN103729059A (en) 2013-12-27 2013-12-27 Interactive method and device

Country Status (1)

Country Link
CN (1) CN103729059A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030061607A1 (en) * 2001-02-12 2003-03-27 Hunter Charles Eric Systems and methods for providing consumers with entertainment content and associated periodically updated advertising
CN1449558A (en) * 2000-09-20 2003-10-15 国际商业机器公司 Eye gaze for contextual speech recognition
CN1969249A (en) * 2004-06-18 2007-05-23 托比技术有限公司 Arrangement, method and computer program for controlling a computer apparatus based on eye-tracking
CN102841746A (en) * 2012-07-11 2012-12-26 广东欧珀移动通信有限公司 Mobile phone webpage interaction method
CN103123578A (en) * 2011-12-07 2013-05-29 微软公司 Displaying virtual data as printed content

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106779826A (en) * 2016-12-05 2017-05-31 深圳艺特珑信息科技有限公司 A kind of method and system that advertisement is optimized based on gyroscope and thermal map analysis
WO2018103070A1 (en) * 2016-12-05 2018-06-14 深圳艺特珑信息科技有限公司 Gyroscope and heat map analysis-based advertisement optimization method and system
WO2018103620A1 (en) * 2016-12-06 2018-06-14 腾讯科技(深圳)有限公司 Notification method in virtual scene, related device and computer storage medium
US10786735B2 (en) 2016-12-06 2020-09-29 Tencent Technology (Shenzhen) Company Limited Prompt method and related apparatus in virtual scene, and computer storage medium
CN109032467A (en) * 2018-06-28 2018-12-18 成都西可科技有限公司 Data positioning method, device, electronic equipment and computer readable storage medium
CN109036416A (en) * 2018-07-02 2018-12-18 腾讯科技(深圳)有限公司 simultaneous interpretation method and system, storage medium and electronic device
CN109036416B (en) * 2018-07-02 2022-12-20 腾讯科技(深圳)有限公司 Simultaneous interpretation method and system, storage medium and electronic device
CN109815409A (en) * 2019-02-02 2019-05-28 北京七鑫易维信息技术有限公司 A kind of method for pushing of information, device, wearable device and storage medium
CN109815409B (en) * 2019-02-02 2021-01-01 北京七鑫易维信息技术有限公司 Information pushing method and device, wearable device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140416