WO2010100503A2 - User interface for an electronic device having a touch-sensitive surface - Google Patents

User interface for an electronic device having a touch-sensitive surface

Info

Publication number
WO2010100503A2
Authority
WO
WIPO (PCT)
Prior art keywords
movement
gesture
touch
action
sensitive surface
Prior art date
Application number
PCT/GB2010/050385
Other languages
French (fr)
Other versions
WO2010100503A3 (en)
Inventor
Khalil Arafat
Original Assignee
Khalil Arafat
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Khalil Arafat filed Critical Khalil Arafat
Publication of WO2010100503A2 publication Critical patent/WO2010100503A2/en
Publication of WO2010100503A3 publication Critical patent/WO2010100503A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • the present invention relates to a user interface for an electronic device and a method of interfacing with an electronic device, specifically in which the electronic device has a touch-sensitive surface.
  • Electronic devices such as mobile communication devices (e.g. telephones), personal digital assistants, mp3 players or similar hand-held devices, GPS or satellite navigation units, personal computers, laptops, etc., have traditionally been controlled by input devices such as keyboards (physical and virtual), mice, jog-dials, joysticks, tracker balls and speech recognition, and most recently by touch-sensitive input devices.
  • a user interface for use with an electronic device having a touch-sensitive surface, configured for: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not comprise the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
  • a method for a user to interface with an electronic device having a touch-sensitive surface comprising: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
  • a computer program product comprising a plurality of program code portions for carrying out the above method.
  • an electronic device comprising: a touch-sensitive surface; and a processor configured for: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
  • the present invention makes additional functionality available to a user by interpreting the performance of a linked combination of individual movements by a user, in a single, complete gesture, across a touch-sensitive surface differently from the performance of the individual movements separately.
  • the single, complete gesture comprises a combination of the first and second gesture such that user contact with the touch-sensitive surface is maintained.
  • Operating systems and applications therefore function more quickly and effectively when their actions are mapped to specific linked combinations of individual movements, rather than a user having to locate specific areas of a graphical user interface (e.g. a linked combination of individual movements provides a short-cut that obviates the need to navigate through a plurality of menus and sub-menus).
  • the first gesture could be sweeping a finger to the right (causing a pointer to move right on a display), the second gesture could be sweeping a finger to the left (causing a pointer to move left on the display), but a combination of the two sweeps linked together could cause a portion of text or a portion of an image to be removed from the display along the path of the pointer.
  • this embodiment allows for interpretation of two apparently mutually contradictory commands as one single command to perform some other (potentially completely different) function.
  • a linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability for a mobile communication device to launch its phone application and dial a specific contact without the need for the user to launch the phone application, enter various menu systems, search for the specific contact or go to a specific area or areas within an application in order to identify the contact in a favourites section.
  • a linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability to control a mobile device that features GPS capability to execute a navigation application, instructing the device to locate itself and provide guidance to a predetermined destination residing within the said application.
  • a linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability to control a mobile device that features an audio playback application; performance of the movements at any point or time could cause the application to mute or stop the audio being played, without the user having to locate the application directly and search for specific virtual buttons within an interface related to the application.
  • a linked combination of relatively few individual movements (that are performed in succession) on a touch-sensitive surface could be used as a password for accessing an electronic device.
  • additional functionality over the known functionality may be provided.
  • the additional functionality may be provided while blocking the original functionality.
  • the third gesture may comprise the second movement being performed within a predetermined time period of the first movement.
  • the interface will be able to distinguish between two movements spaced in time that are intended to be interpreted as distinct movements, and those that are intended to be interpreted as a single gesture whilst the user contact with the touch-sensitive surface is maintained.
  • Repetition of the third gesture may cause an additional different action to be performed. For instance: if the first gesture is performed, the first action is performed in response; if the second gesture is performed, the second action is performed in response; if the first and second gestures are performed (i.e. if the third gesture is performed), the third action is performed; if the third gesture is performed, followed by the first gesture being performed, a different action is performed; and if the third gesture is performed twice, a further different action is performed; and so on.
  • the net effect may be the tracing out of the first path, but the addition of the backwards-forwards second path - first path combination may be interpreted differently by the user interface.
  • the first movement and the second movement may be performed at any absolute position on the touch-sensitive surface in order for the first, second and third actions to be performed.
  • the first, second and third gestures according to this embodiment of the invention are location independent.
  • the gestures may be identified and distinguished by means of their general shape, orientation, direction of performance, rotational sense, spatial parity and/or size (e.g. minimum size or over a range of sizes), but not due to the specific location of their start and end points on the touch-sensitive surface.
  • Performing the third action may comprise: receiving a potential predetermined gesture in response to a user performing the third gesture comprising the first movement linked to the second movement across the touch-sensitive surface; determining whether the potential predetermined gesture substantially resembles one of a plurality of predetermined gestures, in response to receiving the potential predetermined gesture; and performing the third action in response to determining that the potential predetermined gesture substantially resembles one of the plurality of predetermined gestures, wherein the third action is associated with said one of the plurality of predetermined gestures.
  • the potential predetermined gesture may be compared to a plurality of predetermined gestures stored in a library (a virtual or physical memory), and if a match is found, performing an action corresponding to the matched predetermined gesture.
  • This act of performing may be used in addition to prior art methods (for instance, the first and second gestures may be interpreted by existing means, with an additional comparison with the predetermined gestures also being made). Alternatively, the first and second gestures may also match predetermined gestures stored in the library. Thus, the user interface may be required to determine which of the first, second and third actions take precedence, based on some predetermined hierarchy.
  • Performing at least one of the first and second actions may comprise performing no action. That is, the user interface may not perform any action in response to a user performing merely the first or the second gesture. Rather, this may be left to some unit external to the user interface. Such an arrangement would allow for the presently claimed user interface to be applied as an add-on to existing technology.
  • Figures 1-4 show various methods according to the present invention.
  • Figure 5 shows an embodiment of a user interface according to the present invention.
  • Figure 6 shows a further embodiment of a user interface according to the present invention.
  • Figure 7 shows various non-limiting representations of gestures that could be implemented using the present invention.
  • Figure 8 shows the interaction of a user's hand with a touch-sensitive surface.
  • a user interacts with an electronic device via a touch-sensitive surface (such as a touch-sensitive screen) of a mobile communication device (e.g. a telephone), a personal digital assistant, an mp3 player or similar hand-held device, a GPS or satellite navigation unit, a personal computer, a laptop, etc.
  • This interaction is detected by the touch-sensitive surface circuitry, as is known by the person skilled in the art, in the form of a first gesture, and an indication that the first gesture has been performed is received (step 11).
  • At decision step 13, it is determined whether a second gesture has been performed.
  • Step 13 requires that the second gesture also be performed with some other condition being satisfied; that is, the second gesture is performed without loss of contact by the user after performing the first gesture, so that the combined first and second gestures comprise a single gesture. A further condition may be applied; for example, the second gesture may be required to be performed within a specified time period, or some other equivalent parameter. If the second gesture is deemed performed, then the third action is executed at process step 15; if not, the first action is executed at process step 17. At step 19, a new indication that a gesture has been performed is awaited. Execution of the third action could optionally include execution of the first and second actions as well.
  • Figure 2 shows an alternative embodiment to that shown in figure 1, in which an indication that the first gesture has been performed is received, step 21.
  • At process step 27, the first action is then executed.
  • At decision step 23, it is determined whether the second gesture has been performed. Again, decision step 23 requires that the second gesture also be performed with some other condition satisfied, as above. If the second gesture is deemed performed, then the third action is executed at process step 25, followed at step 29 by awaiting a new indication that a gesture has been performed; if not, step 29 is proceeded to directly.
  • Execution of the third action could optionally include execution of the second action, in addition to the execution of the first action at process step 27 and in addition to the additional functionality provided by the third action.
  • Figure 3 shows a further alternative embodiment, in which an indication that the first and second gestures have been performed substantially concurrently is received, step 31.
  • The third action is then executed at process step 35, followed at step 39 by awaiting a new indication that a gesture has been performed. Execution of the third action could optionally include execution of the first and second actions.
  • Figure 4 shows a yet further alternative embodiment, in which an indication that a potential predetermined gesture has been performed is received, step 41, in response to a user performing a gesture.
  • At decision step 43, it is determined whether the potential predetermined gesture substantially resembles one of a plurality of predetermined gestures, in this case corresponding to the third gesture. If there is a match, then the third action is executed at process step 45, followed at step 49 by awaiting a new indication that a gesture has been performed; if not, step 49 is proceeded to directly.
  • the process of figure 4 is intended to operate in parallel with known prior art methods for user interaction; that is, while the method in figure 4 is being performed, a conventional system could be processing the component gestures of the third gesture (i.e. the first and second gestures). Further, the third action (performed at process step 45) could include perturbing the operation of the conventional system by, for instance, preventing the execution of the first and second actions.
  • FIG. 5 shows an embodiment of a user interface 51 according to the present invention.
  • the user interface 51 includes an input device 52, a gesture detector 53, a gesture comparator 54, a library 55, a gesture execution unit 56 and an output device 57.
  • the input device 52 is communicatively coupled to the gesture detector 53, which is communicatively coupled to the gesture comparator 54.
  • the gesture comparator 54 is communicatively coupled with the library 55, as well as being communicatively coupled to the gesture execution unit 56, which is communicatively coupled to the output device 57.
  • the input device 52 is configured to receive an input from a user in the form of gestures performed on a touch-sensitive surface.
  • the gesture detector 53 detects the gesture performed by the user and passes this to the gesture comparator 54, which compares the gesture performed by the user to a list of predetermined gestures held in the library 55. Depending on the result of this comparison, the gesture execution unit 56 performs an appropriate action and outputs the result to the output device 57.
  • Figure 6 shows another embodiment of a user interface 61 according to the present invention.
  • the user interface 61 includes an input device 62, a gesture detector 63, a gesture comparator 64, a library 65, a gesture execution unit 66 and an output device 67.
  • in addition, the user interface 61 includes a conventional gesture interpretation system 68.
  • the input device 62 is communicatively coupled to the gesture detector 63, which is communicatively coupled to the gesture comparator 64.
  • the gesture comparator 64 is communicatively coupled with the library 65, as well as being communicatively coupled to the gesture execution unit 66, which is communicatively coupled to the output device 67.
  • the input device 62, the gesture execution unit 66 and the output device 67 are each communicatively coupled to the conventional system 68.
  • the conventional gesture interpretation system 68 acts in parallel with the user interface 51 of figure 5.
  • the user interface 61 acts in substantially the same manner as the user interface 51, with the exception that the library 65 merely contains a list of predetermined gestures that differ from those conventionally used (which in turn are processed in the conventional system 68).
  • the gestures contained in the library 65 include compound gestures; i.e. those gestures formed from a plurality of individual gestures.
  • the user interface 61 will behave like a conventional user interface 68 unless a compound gesture is detected by the gesture detector 63.
  • the gesture execution unit 66 could be configured for perturbing the operation of the conventional system 68 by, for instance, preventing the execution of certain actions.
  • Figure 7(a)-(g) shows various representations of gestures that could be implemented using the present invention. These gestures could be used to launch an application on a device, dial a specific contact on a mobile communication device, execute a navigation application on a GPS unit, control an audio playback application on an mp3 player (e.g. control volume or pause/skip playback), access an electronic device (i.e. by constituting a password), control a zooming function on a screen, or 'rub out' (i.e. delete) a portion of text/image.
  • the I-gesture 71 comprises a first movement tracing out a straight path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying length or orientation, and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the I-gesture 71, the user interface could be configured to remove portions of text/imagery corresponding to the path traced out by the I-gesture 71. This would be particularly useful if a touch-sensitive screen is used as the touch-sensitive surface. Optionally, this deletion could reveal additional text/imagery 'behind' the removed text/imagery.
  • the U-gesture 72 comprises a first movement tracing out a horseshoe-shaped path and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or rotational sense, and perform varying actions accordingly.
  • the user interface could be configured to move to the 'next' or 'previous' page in a sequence of pages in a document, depending on the orientation of the U-gesture 72.
  • the V-gesture 73 comprises a first movement tracing out a straight path in a first direction followed by a straight path in a second direction at an angle to the first direction, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or sense, and perform varying actions accordingly.
  • the user interface could be configured to go to the first or last page (depending on the direction of the V-gesture 73) in a sequence of pages of a document.
  • the Z-gesture 74 comprises a first movement tracing out a Z-shaped path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or parity (i.e. a backwards Z shape), and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the Z-gesture 74, the user interface could be configured to automatically dial a predetermined telephone number, depending on the direction of performance, and sense of the Z-gesture 74.
  • the O-gesture 75 comprises a first movement tracing out a circular path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation of the start point about the circle or rotational sense, and perform varying actions accordingly.
  • the user interface could be configured to navigate a web browser to the browser's home page, or another page saved in the browser's favourites/bookmarks folder, depending on the start point around the perimeter of the circular path and/or the sense of first motion (i.e. clockwise followed by anticlockwise, or anticlockwise followed by clockwise).
  • the O-gesture 76 is similar to the O-gesture 75, but the path described by the first and second movements is not circular but substantially elliptical, with the start point lying on the major axis of the ellipse.
  • the O-gesture 77 is similar to the O-gesture 75, but the path described by the first movement is not circular but substantially elliptical, with the start point lying on the minor axis of the ellipse.
  • the user interface could be configured to launch a calculator application.
  • Figure 8 shows a non-limiting example of the interaction of a user's hand 81 with a touch-sensitive surface 83.
  • the user contacts the surface with a finger 85 and performs the I-gesture 71 by moving the hand 81 from its start position 81a directly to its intermediate position 81b, then directly back to its start position 81a.
  • the length of the motion is about 4cm and at an orientation of about 120° to the touch-sensitive surface's primary up direction.

Abstract

A user interface and method, comprising performing a first action in response to a user performing a first movement across a touch-sensitive surface, performing a second action in response to a user performing a second movement across the touch-sensitive surface, and performing a third action in response to a user performing the first movement linked to the second movement across the touch-sensitive surface, wherein the third action does not comprise the first action and the second action.

Description

USER INTERFACE FOR AN ELECTRONIC DEVICE HAVING A TOUCH-SENSITIVE SURFACE
Field of the Invention
The present invention relates to a user interface for an electronic device and a method of interfacing with an electronic device, specifically in which the electronic device has a touch-sensitive surface.
Background of the Invention
Electronic devices, such as mobile communication devices (e.g. telephones), personal digital assistants, mp3 players or similar hand-held devices, GPS or satellite navigation units, personal computers, laptops, etc., have traditionally been controlled by input devices such as keyboards (physical and virtual), mice, jog-dials, joysticks, tracker balls and speech recognition, and most recently by touch-sensitive input devices. These input devices interact with the underlying operating system and/or applications (directly or indirectly associated with the operating system) that reside on these electronic devices.
Current touch-sensitive input devices contain no physical keys or moving parts to allow for extra and alternative functionality; these touch-sensitive devices' main function is to interact with various virtual interfaces as a replacement for the conventional physical keyboards, mice and so on. As a result, the user interface must be complicated in order to compensate for the limited range of actions a user can carry out when contacting the touch-sensitive surface, while still providing complete functionality.
Furthermore, there are certain limitations to the actual amount of interaction and physical instruction a human user can perform at any given time when using a touch-sensitive input device.
Summary of the Invention
According to a first aspect of the present invention, there is provided a user interface, for use with an electronic device having a touch-sensitive surface, configured for: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not comprise the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
According to a second aspect of the present invention, there is provided a method for a user to interface with an electronic device having a touch-sensitive surface, comprising: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
According to a third aspect of the present invention, there is provided a computer program product comprising a plurality of program code portions for carrying out the above method.
According to a fourth aspect of the present invention, there is provided an electronic device, comprising: a touch-sensitive surface; and a processor configured for: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
Thus, the present invention makes additional functionality available to a user by interpreting the performance of a linked combination of individual movements by a user, in a single, complete gesture, across a touch-sensitive surface differently from the performance of the individual movements separately. The single, complete gesture comprises a combination of the first and second gesture such that user contact with the touch-sensitive surface is maintained. Operating systems and applications therefore function more quickly and effectively when their actions are mapped to specific linked combinations of individual movements, rather than a user having to locate specific areas of a graphical user interface (e.g. a linked combination of individual movements provides a short-cut that obviates the need to navigate through a plurality of menus and sub-menus). This therefore provides for a faster and more effective means of controlling an electronic device, its associated operating system and any directly or indirectly associated applications. Thus, the efficiency and speed of the device is increased, and in turn the user has the capability to control the device in a faster and more effective manner with fewer processes needing to be performed; hence decreasing the drain on the device's processor and power supply. Moreover, the present invention provides for more commands to be made available, which leads to increased functionality.
According to a first embodiment, the first gesture could be sweeping a finger to the right (causing a pointer to move right on a display), the second gesture could be sweeping a finger to the left (causing a pointer to move left on the display), but a combination of the two sweeps linked together could cause a portion of text or a portion of an image to be removed from the display along the path of the pointer. In effect, this embodiment allows for interpretation of two apparently mutually contradictory commands as one single command to perform some other (potentially completely different) function.
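By way of a non-limiting illustration only, the following sketch (in Python) shows one way such a three-way mapping could be expressed; the gesture representation, the helper names and the action strings are assumptions made for this example and are not taken from the disclosure.

```python
# A minimal sketch of the three-way mapping described above: a right sweep, a
# left sweep, and the two sweeps linked into one compound gesture. The Gesture
# class and helper names are assumptions made for this illustration.

from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Gesture:
    path: List[Point]          # points traced on the touch-sensitive surface
    contact_maintained: bool   # True if the finger never lifted mid-gesture

def is_sweep_right(path: List[Point]) -> bool:
    return path[-1][0] - path[0][0] > 0

def is_sweep_left(path: List[Point]) -> bool:
    return path[-1][0] - path[0][0] < 0

def dispatch(gesture: Gesture) -> str:
    path = gesture.path
    # Split the stroke at its right-most point to look for an out-and-back shape.
    turn = max(range(len(path)), key=lambda i: path[i][0])
    out, back = path[:turn + 1], path[turn:]
    if (gesture.contact_maintained
            and len(out) > 1 and len(back) > 1
            and is_sweep_right(out) and is_sweep_left(back)):
        return "delete text along the pointer path"   # third action
    if is_sweep_right(path):
        return "move pointer right"                   # first action
    if is_sweep_left(path):
        return "move pointer left"                    # second action
    return "no action"

# Example: a right sweep immediately retraced to the left, without lifting.
linked = Gesture(path=[(0, 0), (40, 0), (80, 0), (40, 0), (0, 0)],
                 contact_maintained=True)
print(dispatch(linked))   # -> "delete text along the pointer path"
```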
A linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability for a mobile communication device to launch its phone application and dial a specific contact without the need for the user to launch the phone application, enter various menu systems, search for the specific contact or go to a specific area or areas within an application in order to identify the contact in a favourites section. Similarly, a linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability to control a mobile device that features GPS capability to execute a navigation application, instructing the device to locate itself and provide guidance to a predetermined destination residing within the said application. Similarly, a linked combination of relatively few individual movements performed on a touch-sensitive input device could provide the capability to control a mobile device that features an audio playback application; performance of the movements at any point or time could cause the application to mute or stop the audio being played, without the user having to locate the application directly and search for specific virtual buttons within an interface related to the application. Similarly, a linked combination of relatively few individual movements (that are performed in succession) on a touch-sensitive surface could be used as a password for accessing an electronic device.
According to a first embodiment of the present invention, additional functionality over the known functionality may be provided. Alternatively, the additional functionality may be provided while blocking the original functionality.
The third gesture may comprise the second movement being performed within a predetermined time period of the first movement. Thus, the interface will be able to distinguish between two movements spaced in time that are intended to be interpreted as distinct movements, and those that are intended to be interpreted as a single gesture whilst the user contact with the touch-sensitive surface is maintained. Repetition of the third gesture (in whole or in part) may cause an additional different action to be performed. For instance: if the first gesture is performed, the first action is performed in response; if the second gesture is performed, the second action is performed in response; if the first and second gestures are performed (i.e. if the third gesture is performed), the third action is performed; if the third gesture is performed, followed by the first gesture being performed, a different action is performed; and if the third gesture is performed twice, a further different action is performed; and so on.
Thus, the net effect may be the tracing out of the first path, but the addition of the backwards-forwards second path - first path combination may be interpreted differently by the user interface.
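The following non-limiting sketch illustrates how a timing window between the two movements and simple repetition counting might be handled; the 0.5 second limit, the class name and the action labels are assumptions made for this example.

```python
# A minimal sketch of the timing window between the two movements and of
# repetition counting. The 0.5 second limit, the class name and the action
# labels are assumptions made for this illustration.

import time
from typing import Optional, Tuple

MAX_GAP_S = 0.5  # assumed predetermined time period between the two movements

class CompoundGestureTracker:
    def __init__(self) -> None:
        self.last_movement: Optional[Tuple[str, float]] = None
        self.compound_count = 0   # how many times the third gesture has run

    def on_movement(self, kind: str, contact_maintained: bool,
                    now: Optional[float] = None) -> str:
        now = time.monotonic() if now is None else now
        prev = self.last_movement
        self.last_movement = (kind, now)
        if (kind == "second" and prev is not None and prev[0] == "first"
                and contact_maintained and now - prev[1] <= MAX_GAP_S):
            self.compound_count += 1
            # Repeating the compound (third) gesture maps to a further action.
            return "third action" if self.compound_count == 1 else "further action"
        return "first action" if kind == "first" else "second action"

tracker = CompoundGestureTracker()
print(tracker.on_movement("first", True, now=0.0))    # -> first action
print(tracker.on_movement("second", True, now=0.3))   # -> third action
print(tracker.on_movement("first", True, now=1.0))    # -> first action
print(tracker.on_movement("second", True, now=1.2))   # -> further action
```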
According to one particular embodiment, the first movement and the second movement may be performed at any absolute position on the touch-sensitive surface in order for the first, second and third actions to be performed. Thus, the first, second and third gestures according to this embodiment of the invention are location independent. In certain embodiments, the gestures may be identified and distinguished by means of their general shape, orientation, direction of performance, rotational sense, spatial parity and/or size (e.g. minimum size or over a range of sizes), but not due to the specific location of their start and end points on the touch-sensitive surface.
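A non-limiting sketch of this location independence is given below: the traced path is translated so that its start point lies at the origin, so that only shape, size, orientation and rotational sense contribute to the extracted features. The feature set and the coordinate convention are assumptions made for this example.

```python
# A minimal sketch of location-independent gesture features: the path is
# translated so its start point sits at the origin, so absolute screen position
# plays no part. The feature set and conventions are illustrative assumptions.

import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def normalise(path: List[Point]) -> List[Point]:
    x0, y0 = path[0]
    return [(x - x0, y - y0) for x, y in path]

def features(path: List[Point]) -> Dict[str, object]:
    p = normalise(path)
    length = sum(math.dist(p[i], p[i + 1]) for i in range(len(p) - 1))
    dx, dy = p[-1][0] - p[0][0], p[-1][1] - p[0][1]
    orientation = math.degrees(math.atan2(dy, dx))
    # Signed (shoelace) area gives the rotational sense of the stroke.
    area = 0.5 * sum(p[i][0] * p[i + 1][1] - p[i + 1][0] * p[i][1]
                     for i in range(len(p) - 1))
    return {"length": length, "orientation": orientation,
            "rotational_sense": "positive" if area > 0 else "negative"}

# The same stroke performed anywhere on the surface yields the same features.
stroke_a = [(10, 10), (50, 10), (50, 50), (10, 50), (10, 10)]
stroke_b = [(210, 310), (250, 310), (250, 350), (210, 350), (210, 310)]
print(features(stroke_a) == features(stroke_b))   # -> True
```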
Performing the third action may comprise: receiving a potential predetermined gesture in response to a user performing the third gesture comprising the first movement linked to the second movement across the touch-sensitive surface; determining whether the potential predetermined gesture substantially resembles one of a plurality of predetermined gestures, in response to receiving the potential predetermined gesture; and performing the third action in response to determining that the potential predetermined gesture substantially resembles one of the plurality of predetermined gestures, wherein the third action is associated with said one of the plurality of predetermined gestures. In this way, the potential predetermined gesture may be compared to a plurality of predetermined gestures stored in a library (a virtual or physical memory), and if a match is found, performing an action corresponding to the matched predetermined gesture. This act of performing may be used in addition to prior art methods (for instance, the first and second gestures may be interpreted by existing means, with an additional comparison with the predetermined gestures also being made). Alternatively, the first and second gestures may also match predetermined gestures stored in the library. Thus, the user interface may be required to determine which of the first, second and third actions take precedence, based on some predetermined hierarchy.
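By way of a non-limiting illustration, the sketch below shows one way a performed gesture might be compared against a library of predetermined gestures, with the compound match taking precedence over the component actions; the resemblance measure, the threshold and the template data are assumptions made for this example.

```python
# A minimal sketch of comparing a performed gesture against a library of
# predetermined gestures, with the compound match taking precedence over the
# component actions. The resemblance measure, the threshold and the template
# data are illustrative assumptions.

import math
from typing import Callable, Dict, List, Optional, Tuple

Point = Tuple[float, float]

def resample(path: List[Point], n: int = 32) -> List[Point]:
    # Crude index-based resampling so paths of different lengths can be compared.
    idx = [round(i * (len(path) - 1) / (n - 1)) for i in range(n)]
    return [path[i] for i in idx]

def resemblance(a: List[Point], b: List[Point]) -> float:
    a, b = resample(a), resample(b)
    return -sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)  # higher is better

class GestureLibrary:
    def __init__(self, threshold: float = -15.0) -> None:
        self.templates: Dict[str, Tuple[List[Point], Callable[[], None]]] = {}
        self.threshold = threshold

    def add(self, name: str, template: List[Point],
            action: Callable[[], None]) -> None:
        self.templates[name] = (template, action)

    def match(self, candidate: List[Point]) -> Optional[str]:
        best_name, best_score = None, self.threshold
        for name, (template, _) in self.templates.items():
            score = resemblance(candidate, template)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

lib = GestureLibrary()
lib.add("I-compound", [(0, 0), (40, 0), (0, 0)], lambda: print("delete text"))
performed = [(2, 1), (41, 0), (1, 0)]
name = lib.match(performed)
if name is not None:
    lib.templates[name][1]()          # compound match takes precedence
else:
    print("fall back to the first and second actions")
```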
Performing at least one of the first and second actions may comprise performing no action. That is, the user interface may not perform any action in response to a user performing merely the first or the second gesture. Rather, this may be left to some unit external to the user interface. Such an arrangement would allow for the presently claimed user interface to be applied as an add-on to existing technology.
It would be understood by those skilled in the art that the present invention could be applied to technologies other than touch-sensitive surfaces. For instance, tracking sensors placed on gloves or within stickers placed on a user's fingers could be used to determine user interaction gestures.
Further aspects of the present invention may be found in the appended independent claims, to which reference should now be made. Embodiments of the present invention may be found in the appended dependent claims.
Brief Description of the Drawings
For a better understanding of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the following drawings, in which:
Figures 1-4 show various methods according to the present invention.
Figure 5 shows an embodiment of a user interface according to the present invention.
Figure 6 shows a further embodiment of a user interface according to the present invention. Figure 7 shows various non-limiting representations of gestures that could be implemented using the present invention.
Figure 8 shows the interaction of a user's hand with a touch-sensitive surface.
Detailed Description of Specific Embodiments of the Invention
Figures 1-4 show various methods according to the present invention for a user to interface with an electronic device.
Referring to figure 1, a user interacts with an electronic device via a touch-sensitive surface (such as a touch-sensitive screen) of a mobile communication device (e.g. a telephone), a personal digital assistant, an mp3 player or similar hand-held device, a GPS or satellite navigation unit, a personal computer, a laptop, etc. This interaction is detected by the touch-sensitive surface circuitry, as is known by the person skilled in the art, in the form of a first gesture, and an indication that the first gesture has been performed is received (step 11). At decision step 13, it is determined whether a second gesture has been performed. Decision step 13 requires that the second gesture also be performed with some other condition being satisfied; that is, the second gesture is performed without loss of contact by the user after performing the first gesture, so that the combined first and second gestures comprise a single gesture. A further condition may be applied; for example, the second gesture may be required to be performed within a specified time period, or some other equivalent parameter. If the second gesture is deemed performed, then the third action is executed at process step 15; if not, the first action is executed at process step 17. At step 19, a new indication that a gesture has been performed is awaited. Execution of the third action could optionally include execution of the first and second actions as well.
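A non-limiting sketch of the decision flow of figure 1 is given below; the event structure, the time limit and the returned labels are assumptions made for this example.

```python
# A minimal sketch of the decision flow of figure 1. The event structure, the
# time limit and the returned labels are assumptions made for this illustration.

from dataclasses import dataclass
from typing import Optional

TIME_LIMIT_S = 0.5

@dataclass
class GestureEvent:
    name: str                  # "first" or "second"
    timestamp: float
    contact_maintained: bool   # contact kept since the previous movement

def handle(first: GestureEvent, following: Optional[GestureEvent]) -> str:
    # Step 11: an indication of the first gesture has been received.
    # Step 13: was a linked second gesture performed under the extra conditions?
    if (following is not None
            and following.name == "second"
            and following.contact_maintained
            and following.timestamp - first.timestamp <= TIME_LIMIT_S):
        return "execute third action (step 15)"
    return "execute first action (step 17)"
    # Step 19: the caller then awaits the next gesture indication.

print(handle(GestureEvent("first", 0.0, True),
             GestureEvent("second", 0.2, True)))       # -> third action
print(handle(GestureEvent("first", 0.0, True), None))  # -> first action
```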
Figure 2 shows an alternative embodiment to that shown in figure 1, in which an indication that the first gesture has been performed is received, step 21. At process step 27, the first action is then executed. At decision step 23, it is determined whether the second gesture has been performed. Again, decision step 23 requires that the second gesture also be performed with some other condition satisfied, as above. If the second gesture is deemed performed, then the third action is executed at process step 25, followed at step 29 by awaiting a new indication that a gesture has been performed; if not, step 29 is proceeded to directly. Execution of the third action could optionally include execution of the second action, in addition to the execution of the first action at process step 27 and in addition to the additional functionality provided by the third action.
Figure 3 shows a further alternative embodiment, in which an indication that the first and second gestures have been performed substantially concurrently is received, step 31. The third action is then executed at process step 35, followed at step 39 by awaiting a new indication that a gesture has been performed. Execution of the third action could optionally include execution of the first and second actions.
Figure 4 shows a yet further alternative embodiment, in which an indication that a potential predetermined gesture has been performed is received, step 41, in response to a user performing a gesture. At decision step 43, it is determined whether the potential predetermined gesture substantially resembles one of a plurality of predetermined gestures, in this case corresponding to the third gesture. If there is a match, then the third action is executed at process step 45, followed at step 49 by awaiting a new indication that a gesture has been performed; if not, step 49 is proceeded to directly. The process of figure 4 is intended to operate in parallel with known prior art methods for user interaction; that is, while the method in figure 4 is being performed, a conventional system could be processing the component gestures of the third gesture (i.e. the first and second gestures). Further, the third action (performed at process step 45) could include perturbing the operation of the conventional system by, for instance, preventing the execution of the first and second actions.
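By way of a non-limiting illustration, the sketch below shows one way the method of figure 4 might run alongside a conventional gesture handler and suppress the conventional first and second actions when a compound gesture is recognised; all class and gesture names are assumptions made for this example.

```python
# A minimal sketch of the figure 4 arrangement: a compound-gesture matcher runs
# alongside a conventional handler and, when a compound gesture is recognised,
# suppresses the conventional first and second actions. All names are
# illustrative assumptions.

class ConventionalSystem:
    def handle(self, gesture_name: str) -> str:
        return f"conventional action for {gesture_name}"

class CompoundMatcher:
    def __init__(self, known=("I-compound", "U-compound")) -> None:
        self.known = set(known)

    def match(self, gesture_name: str) -> bool:
        return gesture_name in self.known

def process(gesture_name: str, conventional: ConventionalSystem,
            matcher: CompoundMatcher) -> str:
    if matcher.match(gesture_name):
        # Step 45: perform the third action and block the conventional response.
        return f"third action for {gesture_name} (conventional actions suppressed)"
    # Otherwise the conventional system handles the component gesture as normal.
    return conventional.handle(gesture_name)

conv, comp = ConventionalSystem(), CompoundMatcher()
print(process("sweep-right", conv, comp))   # conventional path
print(process("I-compound", conv, comp))    # compound path, process step 45
```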
Figure 5 shows an embodiment of a user interface 51 according to the present invention. The user interface 51 includes an input device 52, a gesture detector 53, a gesture comparator 54, a library 55, a gesture execution unit 56 and an output device 57.
The input device 52 is communicatively coupled to the gesture detector 53, which is communicatively coupled to the gesture comparator 54. The gesture comparator 54 is communicatively coupled with the library 55, as well as being communicatively coupled to the gesture execution unit 56, which is communicatively coupled to the output device 57.
The input device 52 is configured to receive an input from a user in the form of gestures performed on a touch-sensitive surface. The gesture detector 53 detects the gesture performed by the user and passes this to the gesture comparator 54, which compares the gesture performed by the user to a list of predetermined gestures held in the library 55. Depending on the result of this comparison, the gesture execution unit 56 performs an appropriate action and outputs the result to the output device 57.
Figure 6 shows another embodiment of a user interface 61 according to the present invention. Like the user interface 51, the user interface 61 includes an input device 62, a gesture detector 63, a gesture comparator 64, a library 65, a gesture execution unit 66 and an output device 67. In addition, the user interface 61 includes a conventional gesture interpretation system 68.
The input device 62 is communicatively coupled to the gesture detector 63, which is communicatively coupled to the gesture comparator 64. The gesture comparator 64 is communicatively coupled with the library 65, as well as being communicatively coupled to the gesture execution unit 66, which is communicatively coupled to the output device 67. In addition, the input device 62, the gesture execution unit 66 and the output device 67 are each communicatively coupled to the conventional system 68.
The conventional gesture interpretation system 68 acts in parallel with the user interface 51 of figure 5. The user interface 61 acts in substantially the same manner as the user interface 51, with the exception that the library 65 merely contains a list of predetermined gestures that differ from those conventionally used (which in turn are processed in the conventional system 68). The gestures contained in the library 65 include compound gestures; i.e. those gestures formed from a plurality of individual gestures. Thus, the user interface 61 will behave like a conventional user interface 68 unless a compound gesture is detected by the gesture detector 63. The gesture execution unit 66 could be configured for perturbing the operation of the conventional system 68 by, for instance, preventing the execution of certain actions.
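A non-limiting sketch of how the communicatively coupled units of figures 5 and 6 might be composed is given below; the unit names mirror the description, but their internal behaviour (in particular the placeholder classification in the detector) is an assumption made for this example.

```python
# A minimal sketch of the component pipeline of figures 5 and 6: input device ->
# gesture detector -> gesture comparator (backed by a library) -> gesture
# execution unit -> output device. The internal behaviour of each unit is an
# illustrative assumption.

from typing import Callable, Dict, List, Optional, Tuple

Point = Tuple[float, float]

class Library:
    def __init__(self) -> None:
        self.entries: Dict[str, Callable[[], str]] = {}

    def register(self, name: str, action: Callable[[], str]) -> None:
        self.entries[name] = action

class GestureDetector:
    def detect(self, raw_points: List[Point]) -> str:
        # Placeholder classification: out-and-back strokes are treated as compound.
        return "I-compound" if raw_points[0] == raw_points[-1] else "sweep"

class GestureComparator:
    def __init__(self, library: Library) -> None:
        self.library = library

    def compare(self, name: str) -> Optional[Callable[[], str]]:
        return self.library.entries.get(name)

class GestureExecutionUnit:
    def execute(self, action: Optional[Callable[[], str]]) -> str:
        return action() if action else "no compound match; defer to conventional system"

class UserInterface:
    def __init__(self) -> None:
        self.library = Library()
        self.detector = GestureDetector()
        self.comparator = GestureComparator(self.library)
        self.executor = GestureExecutionUnit()

    def on_input(self, raw_points: List[Point]) -> str:
        name = self.detector.detect(raw_points)
        action = self.comparator.compare(name)
        return self.executor.execute(action)   # result is sent to the output device

ui = UserInterface()
ui.library.register("I-compound", lambda: "delete text along the traced path")
print(ui.on_input([(0, 0), (40, 0), (0, 0)]))   # compound -> library action
print(ui.on_input([(0, 0), (40, 0)]))           # simple sweep -> conventional path
```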
Figure 7(a)-(g) shows various representations of gestures that could be implemented using the present invention. These gestures could be used to launch an application on a device, dial a specific contact on a mobile communication device, execute a navigation application on a GPS unit, control an audio playback application on an mp3 player (e.g. control volume or pause/skip playback), access an electronic device (i.e. by constituting a password), control a zooming function on a screen, or 'rub out' (i.e. delete) a portion of text/image.
The I-gesture 71 comprises a first movement tracing out a straight path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying length or orientation, and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the I-gesture 71, the user interface could be configured to remove portions of text/imagery corresponding to the path traced out by the I-gesture 71. This would be particularly useful if a touch-sensitive screen is used as the touch-sensitive surface. Optionally, this deletion could reveal additional text/imagery 'behind' the removed text/imagery.
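By way of a non-limiting illustration, the sketch below shows one way an I-gesture might be recognised as a roughly straight stroke that is retraced to its start point; the closure and straightness tolerances are assumptions made for this example.

```python
# A minimal sketch of recognising the I-gesture: a roughly straight stroke that
# is retraced back to its start point while contact is maintained. The closure
# and straightness tolerances are illustrative assumptions.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def is_i_gesture(path: List[Point], closure_tol: float = 5.0,
                 straightness_tol: float = 0.15) -> bool:
    if len(path) < 3:
        return False
    # Out-and-back: the stroke must end close to where it started.
    if math.dist(path[0], path[-1]) > closure_tol:
        return False
    # Straightness: every point must lie near the line through the start point
    # and the turning point (the point farthest from the start).
    turn = max(path, key=lambda p: math.dist(path[0], p))
    half_length = math.dist(path[0], turn)
    if half_length == 0:
        return False
    (x0, y0), (x1, y1) = path[0], turn
    for (x, y) in path:
        # Perpendicular distance from (x, y) to the start-turn line.
        d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / half_length
        if d > straightness_tol * half_length:
            return False
    return True

print(is_i_gesture([(0, 0), (20, 1), (40, 0), (21, 0), (1, 1)]))    # -> True
print(is_i_gesture([(0, 0), (20, 20), (40, 0), (20, -20), (0, 0)])) # -> False
```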
The U-gesture 72 comprises a first movement tracing out a horseshoe-shaped path and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or rotational sense, and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the U-gesture 72, the user interface could be configured to move to the 'next' or 'previous' page in a sequence of pages in a document, depending on the orientation of the U-gesture 72.
The V-gesture 73 comprises a first movement tracing out a straight path in a first direction followed by a straight path in a second direction at an angle to the first direction, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or sense, and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the V-gesture 73, the user interface could be configured to go to the first or last page (depending on the direction of the V-gesture 73) in a sequence of pages of a document.
The Z-gesture 74 comprises a first movement tracing out a Z-shaped path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation or parity (i.e. a backwards Z shape), and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the Z-gesture 74, the user interface could be configured to automatically dial a predetermined telephone number, depending on the direction of performance, and sense of the Z-gesture 74.
The O-gesture 75 comprises a first movement tracing out a circular path, and a second movement re-tracing the first path back to its start point. It may be that the absolute position of the start point on the touch-sensitive surface is irrelevant; that is, performing a given gesture on any portion of the touch-sensitive surface will produce the same action. However, this is not necessarily the case. Having said that, the user interface may be configured to distinguish between gestures based on the above gesture, but of varying size, orientation of the start point about the circle or rotational sense, and perform varying actions accordingly. As a specific (and non-limiting) example of the use of the O-gesture 75, the user interface could be configured to navigate a web browser to the browser's home page, or another page saved in the browser's favourites/bookmarks folder, depending on the start point around the perimeter of the circular path and/or the sense of first motion (i.e. clockwise followed by anticlockwise, or anticlockwise followed by clockwise).
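A non-limiting sketch of distinguishing the rotational sense of the first circular movement of the O-gesture 75, using the signed area of the traced loop, is given below; the mapping of each sense to a browser destination is an assumption made for this example.

```python
# A minimal sketch of determining the rotational sense of the first circular
# movement of the O-gesture from the signed (shoelace) area of the traced loop.
# Mapping each sense to a browser destination is an illustrative assumption.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def rotational_sense(loop: List[Point]) -> str:
    # Positive signed area means anticlockwise in an x-right / y-up convention.
    area = 0.5 * sum(loop[i][0] * loop[(i + 1) % len(loop)][1]
                     - loop[(i + 1) % len(loop)][0] * loop[i][1]
                     for i in range(len(loop)))
    return "anticlockwise" if area > 0 else "clockwise"

def o_gesture_action(first_loop: List[Point]) -> str:
    sense = rotational_sense(first_loop)
    return "go to home page" if sense == "clockwise" else "go to bookmarked page"

# A circle traced anticlockwise, starting at the right-most point.
acw = [(math.cos(t) * 50, math.sin(t) * 50)
       for t in (i * 2 * math.pi / 36 for i in range(36))]
print(o_gesture_action(acw))          # -> go to bookmarked page
print(o_gesture_action(acw[::-1]))    # reversed loop -> go to home page
```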
The O-gesture 76 is similar to the O-gesture 75, but the path described by the first and second movements is not circular but substantially elliptical, with the start point lying on the major axis of the ellipse. The O-gesture 77 is similar to the O-gesture 75, but the path described by the first movement is not circular but substantially elliptical, with the start point lying on the minor axis of the ellipse. As a specific (and non-limiting) example of the use of the O-gesture 76, the user interface could be configured to launch a calculator application.
Figure 8 shows a non-limiting example of the interaction of a user's hand 81 with a touch-sensitive surface 83. The user contacts the surface with a finger 85 and performs the I-gesture 71 by moving the hand 81 from its start position 81a directly to its intermediate position 81b, then directly back to its start position 81a. In this example, the length of the motion is about 4 cm and at an orientation of about 120° to the touch-sensitive surface's primary up direction.
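By way of a non-limiting illustration, the sketch below shows how the length and the orientation of the stroke of figure 8 might be computed from raw touch samples; the points-per-centimetre factor and the sample coordinates are assumptions chosen so that the result reproduces roughly 4 cm and 120°.

```python
# A minimal sketch of measuring the length and orientation of the stroke of
# figure 8 from raw touch samples. The points-per-centimetre factor and the
# sample coordinates are assumptions chosen to reproduce roughly 4 cm and
# 120 degrees.

import math
from typing import List, Tuple

Point = Tuple[float, float]
POINTS_PER_CM = 10.0   # assumed digitiser resolution

def stroke_length_cm(path: List[Point]) -> float:
    return sum(math.dist(path[i], path[i + 1])
               for i in range(len(path) - 1)) / POINTS_PER_CM

def orientation_deg(path: List[Point]) -> float:
    # Angle of the outward movement measured from the surface's primary "up"
    # direction (assumed to be the +y axis), clockwise positive.
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    return math.degrees(math.atan2(dx, dy)) % 360

# Outward half of the I-gesture: position 81a to 81b.
outward = [(0.0, 0.0), (17.3, -10.0), (34.6, -20.0)]
print(round(stroke_length_cm(outward), 1))   # -> 4.0
print(round(orientation_deg(outward)))       # -> 120
```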
Although embodiments of the present invention have been illustrated in the accompanying drawings and described in the foregoing detailed description, it will be understood that the invention is not limited to the embodiments disclosed, but is capable of numerous modifications without departing from the scope of the invention as set out in the following claims.

Claims

1. A user interface, for use with an electronic device having a touch-sensitive surface, configured for: performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface; performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement; wherein the third action does not comprise the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
2. The user interface of claim 1, wherein the third gesture comprises the second movement being performed within a predetermined time period of the first movement.
3. The user interface of any preceding claim, further configured for performing a fourth action in response to a user performing a fourth gesture, the fourth gesture comprising the third gesture followed by the first gesture.
4. The user interface of any preceding claim, further configured for performing a fifth action in response to a user performing a fifth gesture, the fifth gesture comprising the third gesture being performed a plurality of times, each being performed subsequent to the last.
5. The user interface of any preceding claim, further configured for performing a sixth action in response to a user performing a sixth gesture, the sixth gesture comprising the third gesture being performed a plurality of times, each being performed subsequent to the last, followed by the first gesture.
6. The user interface of any preceding claim, wherein the first movement and the second movement may be performed at any absolute position on the touch-sensitive surface in order for the first, second and third actions to be performed.
7. The user interface of claim 6, wherein the respective gestures may be identified and distinguished by means of their general shape, orientation, direction of performance, rotational sense, spatial parity and/or size.
8. The user interface of any preceding claim, wherein performing the third action comprises:
receiving a potential predetermined gesture in response to a user performing the third gesture comprising the first movement and the second movement across the touch-sensitive surface;
determining whether the potential predetermined gesture substantially resembles one of a plurality of predetermined gestures, in response to receiving the potential predetermined gesture; and
performing the third action in response to determining that the potential predetermined gesture substantially resembles one of the plurality of predetermined gestures, wherein the third action is associated with said one of the plurality of predetermined gestures.
9. The user interface of any preceding claim, wherein performing at least one of the first and second actions comprises performing no action.
10. A method for a user to interface with an electronic device having a touch-sensitive surface, comprising:
performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface;
performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and
performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement;
wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
11. A computer program product comprising a plurality of program code portions for carrying out the method of claim 10.
12. An electronic device, comprising:
a touch-sensitive surface; and
a processor configured for:
performing a first action in response to a user performing a first gesture comprising a first movement across the touch-sensitive surface, the first movement comprising tracing out a first path on the touch-sensitive surface;
performing a second action in response to a user performing a second gesture comprising a second movement across the touch-sensitive surface, the second movement comprising tracing out a second path on the touch-sensitive surface, the second path being substantially a reverse of the first path; and
performing a third action in response to a user performing a third gesture comprising the first movement linked to the second movement across the touch-sensitive surface, the second movement being performed subsequent to the first movement;
wherein the third action does not correspond to the first action and the second action, and wherein user contact with the touch-sensitive surface is maintained between the first movement and the second movement.
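As an illustration of the recognition step recited in claim 8 above — receiving a potential gesture, determining whether it substantially resembles one of a plurality of predetermined gestures, and performing the associated action — a simple template-matching dispatcher is sketched below. This sketch is not part of the claims; the class, its threshold and the normalisation scheme are assumptions of the example, and traces are expected to have been resampled to a common length beforehand (for instance with the `_resample` helper sketched earlier).

```python
import math

def _normalise(points):
    """Translate a trace to its centroid and scale it to unit size.
    Traces are assumed to already share a common, fixed number of samples."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    pts = [(x - cx, y - cy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

class GestureDispatcher:
    """Matches an incoming trace against stored template gestures and runs
    the action associated with the closest, sufficiently similar template."""

    def __init__(self, resemblance_threshold=0.25):
        self.templates = []                 # list of (normalised points, action)
        self.threshold = resemblance_threshold

    def register(self, template_points, action):
        """Associate a predetermined gesture (as a resampled trace) with an action."""
        self.templates.append((_normalise(template_points), action))

    def on_gesture(self, points):
        """Receive a potential gesture; if it substantially resembles one of
        the registered templates, perform that template's action."""
        candidate = _normalise(points)
        best_err, best_action = None, None
        for template, action in self.templates:
            err = sum(math.hypot(ax - bx, ay - by)
                      for (ax, ay), (bx, by) in zip(candidate, template)) / len(candidate)
            if best_err is None or err < best_err:
                best_err, best_action = err, action
        if best_action is not None and best_err <= self.threshold:
            best_action()                   # the action associated with the match
            return True
        return False                        # nothing resembled closely enough
```

A caller would register each predetermined gesture once (for example, a hypothetical `dispatcher.register(o_gesture_template, go_home)`) and then pass every completed trace to `on_gesture`.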
PCT/GB2010/050385 2009-03-06 2010-03-05 User interface for an electronic device having a touch-sensitive surface WO2010100503A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0903896.9 2009-03-06
GB0903896A GB2459345C (en) 2009-03-06 2009-03-06 User interface for an electronic device having a touch-sensitive surface.

Publications (2)

Publication Number Publication Date
WO2010100503A2 true WO2010100503A2 (en) 2010-09-10
WO2010100503A3 WO2010100503A3 (en) 2011-02-03

Family

ID=40600623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2010/050385 WO2010100503A2 (en) 2009-03-06 2010-03-05 User interface for an electronic device having a touch-sensitive surface

Country Status (2)

Country Link
GB (1) GB2459345C (en)
WO (1) WO2010100503A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140071171A1 (en) * 2012-09-12 2014-03-13 Alcatel-Lucent Usa Inc. Pinch-and-zoom, zoom-and-pinch gesture control
US20140223382A1 (en) * 2013-02-01 2014-08-07 Barnesandnoble.Com Llc Z-shaped gesture for touch sensitive ui undo, delete, and clear functions
EP4332731A1 (en) * 2022-09-03 2024-03-06 Kingfar International Inc. Human-computer interaction movement track detection method, apparatus, device, and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3546337B2 (en) * 1993-12-21 2004-07-28 ゼロックス コーポレイション User interface device for computing system and method of using graphic keyboard
US7840912B2 (en) * 2006-01-30 2010-11-23 Apple Inc. Multi-touch gesture dictionary
US7932895B2 (en) * 2005-05-24 2011-04-26 Nokia Corporation Control of an electronic device using a gesture as an input
US20070262951A1 (en) * 2006-05-09 2007-11-15 Synaptics Incorporated Proximity sensor device and method with improved indication of adjustment
DE202007018940U1 (en) * 2006-08-15 2009-12-10 N-Trig Ltd. Motion detection for a digitizer
US7770136B2 (en) * 2007-01-24 2010-08-03 Microsoft Corporation Gesture recognition interactive feedback
US8059101B2 (en) * 2007-06-22 2011-11-15 Apple Inc. Swipe gestures for touch screen keyboards

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Also Published As

Publication number Publication date
GB0903896D0 (en) 2009-04-22
GB2459345B (en) 2010-09-08
GB2459345A (en) 2009-10-28
GB2459345C (en) 2010-11-10
WO2010100503A3 (en) 2011-02-03

Similar Documents

Publication Publication Date Title
US11256396B2 (en) Pinch gesture to navigate application layers
US8456433B2 (en) Signal processing apparatus, signal processing method and selection method of user interface icon for multi-touch panel
TWI569171B (en) Gesture recognition
WO2013094371A1 (en) Display control device, display control method, and computer program
US8878787B2 (en) Multi-touch user input based on multiple quick-point controllers
KR101930225B1 (en) Method and apparatus for controlling touch screen operation mode
US20140306897A1 (en) Virtual keyboard swipe gestures for cursor movement
US20100207892A1 (en) Touch-Control Module
US20100083108A1 (en) Touch-screen device having soft escape key
MX2008014057A (en) Multi-function key with scrolling.
WO2011157527A1 (en) Contextual hierarchical menu system on touch screens
JP5449630B1 (en) Programmable display and its screen operation processing program
US20130106707A1 (en) Method and device for gesture determination
WO2020214500A1 (en) Devices, methods, and systems for performing content manipulation operations
EP2169521A1 (en) Touch-screen device having soft escape key
WO2010100503A2 (en) User interface for an electronic device having a touch-sensitive surface
EP3371686B1 (en) Improved method for selecting an element of a graphical user interface
CN111142775A (en) Gesture interaction method and device
EP3371685B1 (en) Improved method for selecting an element of a graphical user interface
JP6284459B2 (en) Terminal device
KR101482867B1 (en) Method and apparatus for input and pointing using edge touch
US11914789B2 (en) Method for inputting letters, host, and computer readable storage medium
WO2023072406A1 (en) Layout change of a virtual input device
CN117289847A (en) Equipment control method and device and electronic equipment
WO2014016845A1 (en) Computer device and method for converting gesture

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10707655

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10707655

Country of ref document: EP

Kind code of ref document: A2