US20130212541A1 - Method, a device and a system for receiving user input - Google Patents
- Publication number: US20130212541A1 (application US 13/701,367)
- Authority
- US
- United States
- Prior art keywords
- user interface
- event
- gesture
- touch
- information
- Prior art date
- Legal status: Abandoned (the status is an assumption, not a legal conclusion)
Classifications
- G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F2203/04808 — Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen
Definitions
- user interface events are first formed from low-level events generated by a user interface input device such as a touch screen.
- the user interface events may be modified by forming information on a modifier for the user interface events such as time and coordinate information.
- the user interface events and their modifiers are sent to a gesture recognition engine, where gesture information is formed from the user interface events and possibly their modifiers.
- the gesture information is then used as user input to the apparatus.
- the gestures may not be formed directly from the low-level events of the input device. Instead, higher-level events, i.e. user interface events, are formed from the low-level events, and gestures are then recognized from these user interface events.
- a method for receiving user input comprising receiving a low-level event from a user interface input device, forming a user interface event using said low-level event, forming information on a modifier for said user interface event, forming gesture information from said user interface event and said modifier, and using said gesture information as user input to an apparatus.
- the method further comprises forwarding said user interface event and said modifier to a gesture recognizer, and forming said gesture information by said gesture recognizer.
- the method further comprises receiving a plurality of user interface events from a user interface input device, forwarding said user interface events to a plurality of gesture recognizers, and forming at least two gestures by said gesture recognizers.
- the user interface event is one of the group of touch, release, move and hold.
- the method further comprises forming said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information.
- the method further comprises forming a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and using said hold event in forming said gesture information.
- the method further comprises receiving at least two distinct user interface events from a multi-touch touch input device, and using said at least two distinct user interface events for forming a multi-touch gesture.
- the user interface input device comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit.
- the user interface event is one of the group of touch down, release, hold and move.
- an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to receive a low-level event from a user interface input module, form a user interface event using said low-level event, form information on a modifier for said user interface event, form gesture information from said user interface event and said modifier, and use said gesture information as user input to an apparatus.
- the apparatus further comprises computer program code configured to cause the apparatus to forward said user interface event and said modifier to a gesture recognizer, and form said gesture information by said gesture recognizer.
- the apparatus further comprises computer program code configured to cause the apparatus to receive a plurality of user interface events from a user interface input device, forward said user interface events to a plurality of gesture recognizers, and form at least two gestures by said gesture recognizers.
- the user interface event is one of the group of touch, release, move and hold.
- the apparatus further comprises computer program code configured to cause the apparatus to form said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information.
- the apparatus further comprises computer program code configured to cause the apparatus to form a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and use said hold event in forming said gesture information.
- the apparatus further comprises computer program code configured to cause the apparatus to receive at least two distinct user interface events from a multi-touch touch input device, and use said at least two distinct user interface events for forming a multi-touch gesture.
- the user interface module comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit.
- the apparatus is one of a computer, a portable communication device, a home appliance, an entertainment device such as a television, a transportation device such as a car, a ship or an aircraft, or an intelligent building.
- a system comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the system to receive a low-level event from a user interface input module, form a user interface event using said low-level event, form information on a modifier for said user interface event, form gesture information from said user interface event and said modifier, and use said gesture information as user input to an apparatus.
- the system comprises at least two apparatuses arranged in communication connection to each other, wherein a first apparatus of said at least two apparatuses is arranged to receive said low-level event and a second apparatus of said at least two apparatuses is arranged to form said gesture information in response to receiving a user interface event from said first apparatus.
- an apparatus comprising, processing means, memory means, and means for receiving a low-level event from a user interface input means, means for forming a user interface event using said low-level event, means for forming information on a modifier for said user interface event, means for forming gesture information from said user interface event and said modifier, and means for using said gesture information as user input to an apparatus.
- a computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising a computer program code section for receiving a low-level event from a user interface input device, forming a user interface event using said low-level event, a computer program code section for forming information on a modifier for said user interface event, a computer program code section for forming gesture information from said user interface event and said modifier, and a computer program code section for using said gesture information as user input to an apparatus.
- the computer program product is an operating system.
- FIG. 1 shows a method for gesture based user input according to an example embodiment
- FIG. 2 shows devices and a system arranged to receive gesture based user input according to an example embodiment
- FIGS. 3a and 3b show different example gestures composed of touch user interface events
- FIG. 4a shows a state diagram of a low-level input system according to an example embodiment
- FIG. 4b shows a state diagram of a user interface event system generating user interface events and comprising a hold state according to an example embodiment
- FIGS. 5a, 5b and 5c show examples of hardware touch signals such as micro-drag signals during a hold user interface event
- FIG. 6 shows a block diagram of levels of abstraction of a user interface system and a computer program product according to an example embodiment
- FIG. 7a shows a diagram of a gesture recognition engine according to an example embodiment
- FIG. 7b shows a gesture recognition engine in operation according to an example embodiment
- FIGS. 8a and 8b show generation of a hold user interface event according to an example embodiment
- FIG. 9 shows a method for gesture based user input according to an example embodiment
- FIGS. 10a-10g show state and event diagrams for producing user interface events according to an example embodiment.
- the devices employing the different embodiments may comprise a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove or a data suit. Also, three-dimensional input systems e.g. based on haptics may use the invention.
- FIG. 1 shows a method for gesture based user input according to an example embodiment.
- a low-level event is received.
- the low-level events may be generated by the operating system of the computer as a response to a person using an input device such as a touch screen or a mouse.
- the user interface events may also be generated directly by specific user input hardware, or by the operating system as a response to hardware events.
- At stage 120 at least one user interface event is formed or generated.
- the user interface events may be generated from the low-level events e.g. by averaging, combining, thresholding, by using timer windows or by using filtering, or by any other means. For example, two low-level events in sequence may be interpreted as a user interface event.
- User interface events may also be generated programmatically for example from other user interface events or as a response to a trigger in the program.
- the user interface events may be generated locally by using user input hardware or remotely e.g. so that the low-level events are received from a remote computer acting as a terminal device.
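The averaging and timer-window filtering mentioned above could look like the following in practice; the window length and the function name are assumptions for illustration, not from the text:

```python
def average_window(samples, window=0.05):
    """Form one user interface position per timer window by averaging
    noisy low-level drag samples.

    samples: list of (x, y, t) low-level samples, t in seconds.
    Returns a list of averaged (x, y) positions, one per window slice.
    """
    if not samples:
        return []
    out, bucket, start = [], [], samples[0][2]
    for x, y, t in samples:
        # When the current window elapses, emit the average of the
        # samples collected so far and open a new window.
        if t - start >= window and bucket:
            out.append((sum(p[0] for p in bucket) / len(bucket),
                        sum(p[1] for p in bucket) / len(bucket)))
            bucket, start = [], t
        bucket.append((x, y))
    if bucket:
        out.append((sum(p[0] for p in bucket) / len(bucket),
                    sum(p[1] for p in bucket) / len(bucket)))
    return out
```

Each averaged position could then back a single Move user interface event, so several low-level events collapse into one higher-level event.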
- At stage 130 at least one user interface event is received.
- the user interface events may be received from the same device e.g. the operating system, or the user interface events may be received from another device e.g. over a wired or wireless communication connection.
- Such another device may be a computer acting as a terminal device to a service, or an input device connected to a computer, such as a touch pad or touch screen.
- modifier information for the user interface event is formed.
- the modifier information may be formed by the operating system from the hardware events and/or signals or other low-level events and data, or it may be formed by the hardware directly.
- the modifier information may be formed at the same time with the user interface event, or it may be formed before or after the user interface event.
- the modifier information may be formed by using a plurality of lower-level events or other events.
- the modifier information may be common to a number of user interface events or it may be different for different user interface events.
- the modifier information may comprise position information such as a point or area on the user interface that was touched or clicked, e.g. in the form of 2-dimensional or 3-dimensional coordinates.
- the modifier information may comprise direction information, e.g. the direction of a movement on the user interface.
- the modifier information may comprise pressure data e.g. from a touch screen, and it may comprise information on the area that was touched, e.g. so that it can be identified whether the touch was made by a finger or by a pointing device.
- the modifier information may comprise proximity data e.g. as an indication of how close a pointer device or a finger is from a touch input device.
- the modifier information may comprise timing data e.g. the time a touch lasted, or the time between consecutive clicks or touches, or clock event information or other time related data.
- gesture information is formed from at least one user interface event and the respective modifier data.
- the gesture information may be formed by combining a number of user interface events.
- the user interface event or events and the respective modifier data are analyzed by a gesture recognizer that outputs a gesture signal whenever a predetermined gesture is recognized.
- the gesture recognizer may be a state machine, or it may be based on pattern recognition of other kind, or it may be a program module.
- a gesture recognizer may be implemented to recognize a single gesture or it may be implemented to recognize multiple gestures. There may be one or more gesture recognizers operating simultaneously, in a chain or partly simultaneously and partly in chain.
- the gesture may be, for example, a touch gesture such as a combination of touch/tap, move/drag and/or hold events, and it may require a certain timing (e.g. speed of double-tap) or range or speed of movement in order to be recognized.
- the gesture may also be relative in nature, that is, it may not require any absolute timings or ranges or speeds, but may depend on the relative timings, ranges and speeds of the parts of the gesture.
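A minimal state-machine gesture recognizer of the kind described could consume user interface events one at a time and output a gesture signal when a predetermined sequence is recognized. The timing threshold and names are assumed values for illustration:

```python
class TapRecognizer:
    """Recognizes 'tap' and 'long_tap' from user interface events."""

    def __init__(self, max_tap_time=0.3):
        self.max_tap_time = max_tap_time  # assumed threshold, seconds
        self.state = "idle"
        self.touch_time = None

    def feed(self, kind, t):
        """kind: 'touch', 'release', 'move' or 'hold'; t: timestamp (s).
        Returns a gesture name, or None if no gesture completed."""
        if self.state == "idle" and kind == "touch":
            self.state, self.touch_time = "touched", t
        elif self.state == "touched":
            if kind == "release" and t - self.touch_time <= self.max_tap_time:
                self.state = "idle"
                return "tap"
            if kind == "hold":
                self.state = "idle"
                return "long_tap"
            # A move, or a release that came too late: not a tap.
            self.state = "idle"
        return None
```

Several such recognizers could be fed the same event stream simultaneously, each watching for its own gesture.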
- the gesture information is used as user input.
- a menu option may be triggered when a gesture is detected, or a change in the mode or behavior of the program may be actuated.
- the user input may be received by one or more programs or by the operating system, or by both.
- the behavior after receiving the gesture may be specific to the receiving program.
- the receiving of the gesture by the program may start even before the gesture has been completed so that the program can prepare for action or start the action as a response to the gesture even before the gesture has been completed.
- one or more gestures may be formed and used by the programs and/or the operating system, and the control of the programs and/or the operating system may happen in a multi-gesture manner.
- the forming of the gestures may take place simultaneously or it may take place in a chain so that first, one or more gestures are recognized, and after that other gestures are recognized.
- the gestures may comprise single-touch or multi-touch gestures, that is, they may comprise a single point of touch or click, or they may comprise multiple points of touch or click.
- the gestures may be single gestures or multi-gestures. In multi-gestures, two or more essentially simultaneous or sequential gestures are used as user input. In multi-gestures, the underlying gestures may be single-touch or multi-touch gestures.
- FIG. 2 shows devices and a system arranged to receive gesture based user input according to an example embodiment.
- the different devices may be connected via a fixed network 210 such as the Internet or a local area network; or a mobile communication network 220 such as the Global System for Mobile communications (GSM) network, 3rd Generation (3G) network, 3.5th Generation (3.5G) network, 4th Generation (4G) network, Wireless Local Area Network (WLAN), Bluetooth®, or other contemporary and future networks.
- the networks comprise network elements such as routers and switches to handle data (not shown), and communication interfaces such as the base stations 230 and 231 in order to provide the different devices access to the network; the base stations 230, 231 are themselves connected to the mobile network 220 via a fixed connection 276 or a wireless connection 277.
- a server 240 for offering a network service requiring user input and connected to the fixed network 210
- a server 241 for processing user input received from another device in the network and connected to the fixed network 210
- a server 242 for offering a network service requiring user input and for processing user input received from another device and connected to the mobile network 220 .
- Some of the above devices, for example the computers 240 , 241 , 242 may be such that they make up the Internet with the communication elements residing in the fixed network 210 .
- the various devices may be connected to the networks 210 and 220 via communication connections such as a fixed connection 270, 271, 272 and 280 to the internet, a wireless connection 273 to the internet 210, a fixed connection 275 to the mobile network 220, and a wireless connection 278, 279 and 282 to the mobile network 220.
- the connections 271-282 are implemented by means of communication interfaces at the respective ends of the communication connection.
- FIG. 2b shows devices for receiving user input according to an example embodiment.
- the server 240 contains memory 245, one or more processors 246, 247, and computer program code 248 residing in the memory 245 for implementing, for example, gesture recognition.
- the different servers 241, 242, 290 may contain at least these same elements for employing functionality relevant to each server.
- the end-user device 251 contains memory 252, at least one processor 253 and 256, and computer program code 254 residing in the memory 252 for implementing, for example, gesture recognition.
- the end-user device may also have at least one camera 255 for taking pictures.
- the end-user device may also contain one, two or more microphones 257 and 258 for capturing sound.
- the different end-user devices 250, 260 may contain at least these same elements for employing functionality relevant to each device.
- Some end-user devices may be equipped with a digital camera enabling taking digital pictures, and one or more microphones enabling audio recording during, before, or after taking a picture.
- receiving the low-level events, forming the user interface events, receiving the user interface events, forming the modifier information and recognizing gestures may be carried out entirely in one user device like 250, 251 or 260; entirely in one server device 240, 241, 242 or 290; or distributed across multiple user devices 250, 251, 260, across multiple network devices 240, 241, 242, 290, or across both user devices and network devices.
- low-level events may be received in one device, the user interface events and the modifier information may be formed in another device and the gesture recognition may be carried out in a third device.
- the low-level events may be received in one device, and formed into user interface events together with the modifier information, and the user interface events and the modifier information may be used in a second device to form the gestures and using the gestures as input.
- Receiving the low-level events, forming the user interface events, receiving the user interface events, forming the modifier information and recognizing gestures may be implemented as a software component residing on one device or distributed across several devices, as mentioned above, for example so that the devices form a so-called cloud.
- Gesture recognition may also be a service where the user device accesses the service through an interface.
- forming modifier information, processing user interface events and using the gesture information as input may be implemented with the various devices in the system.
- the different embodiments may be implemented as software running on mobile devices and optionally on services.
- the mobile phones may be equipped at least with a memory, processor, display, keypad, motion detector hardware, and communication means such as 2G, 3G, WLAN, or other.
- the different devices may have hardware like a touch screen (single-touch or multi-touch) and means for positioning like network positioning or a global positioning system (GPS) module.
- There may be various applications on the devices such as a calendar application, a contacts application, a map application, a messaging application, a browser application, and various other applications for office and/or private use.
- FIGS. 3 a and 3 b show different examples of gestures composed of touch user interface events.
- column 301 shows the name of the gesture
- column 303 shows the composition of the gesture as user interface events
- column 305 displays the behavior or use of the gesture in an application or by the operating system
- column 307 indicates a possible symbol for the event.
- Touch down user interface event 310 is a basic interaction element, whose default behaviour is to indicate which object has been touched, and possibly a visible, haptic, or audio feedback is provided.
- Touch release event 312 is another basic interaction element that by default performs the default action for the object, for example activates a button.
- Move event 314 is a further basic interaction element that by default makes the touched object or the whole canvas follow the movement.
- a gesture is a composite of user interface events.
- a Tap gesture 320 is a combination of a Touch down and Release events. The Touch down and Release events in the Tap gesture may have default behaviour, and the Tap gesture 320 may in addition have special behaviour in an application or in the operating system. For example, while the canvas or the content is moving, a Tap gesture 320 may stop ongoing movement.
- a Long Tap gesture 322 is a combination of Touch down and Hold events (see the description of the Hold event in connection with FIGS. 8a and 8b). The Touch down event inside the Long Tap gesture 322 may have default behavior, and the Hold event inside the Long Tap gesture 322 may have specific additional behavior.
- a Double Tap gesture 324 is a combination of two consecutive Touch down and Release events essentially at the same location within a set time limit.
- a Double Tap gesture may e.g. be used as a zoom toggle (zoom in/zoom out) or actuating the zoom in other ways, or as a trigger for some other specific behaviour. Again, the use of the gesture may be specific to the application.
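The Double Tap rule above (two taps essentially at the same location within a set time limit) can be checked as below; the distance and time thresholds are assumptions for illustration:

```python
import math

def is_double_tap(tap1, tap2, max_interval=0.4, max_distance=20.0):
    """Each tap is ((x, y), t) with t in seconds.
    Returns True if the two taps form a Double Tap gesture, i.e. they
    are close enough in both space and time (thresholds are assumed).
    """
    (x1, y1), t1 = tap1
    (x2, y2), t2 = tap2
    close = math.hypot(x2 - x1, y2 - y1) <= max_distance
    quick = 0 < t2 - t1 <= max_interval
    return close and quick
```

An application could then map a recognized double tap to its own behaviour, such as toggling zoom.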
- a Drag gesture 330 is a combination of Touch down and Move events.
- the touch down and move events may have default behaviour, while the Drag gesture as a whole may have specific behaviour. For example, by default, the content, a control handle or the whole canvas may follow the movement of the Drag gesture.
- Speed scrolling may be implemented by controlling the speed of the scrolling by finger movement.
- a mode to organize user interface elements may be implemented so that the object selected with touch down follows the movement, and the possible drop location is indicated by moving objects accordingly or by some other indication.
- a Drop gesture 332 is a combination of the user interface events that make up dragging, followed by a Release.
- a Flick gesture 334 is a combination of Touch down, Move and Touch Release. After Release, the content continues its movement with the direction and speed that it had at the moment of touch release. The content may be stopped manually or when it reaches a snap point or end of content, or it may slow down to stop on its own.
- Dragging (panning) and flicking gestures may be used as default navigation strokes in lists, grids and content views.
- the user may manipulate the content or canvas to make it follow the direction of move.
- Such manipulation may make scrollbars unnecessary as active navigation elements, which frees more space in the user interface. Consequently, a scrolling indication may be used to indicate that more items are available, e.g. with graphical effects like a dynamic gradient or haze, or a thin scroll bar appearing only while scrolling is ongoing (an indication only, not an active control).
- An index (for sorted lists) may be shown when the scrolling speed is too fast for the user to follow the content visually.
- Flick scrolling may continue at the end of the flick gesture, and the speed may be determined according to the speed at the end of the flick. Deceleration or inertia may not be applied at all, whereby the movement continues without friction until the end of the canvas or until stopped manually with a touch down. Alternatively, deceleration or inertia may be applied in relation to the length of the scrollable area, until a certain defined speed is reached. Deceleration may be applied smoothly before the end of the scrollable area is reached. A touch down after flick scrolling may stop the scrolling.
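The decelerating flick behaviour described above can be sketched as a per-frame friction model; the friction constant, frame time and stop threshold below are illustrative assumptions:

```python
def flick_positions(start_pos, release_velocity, friction=0.95,
                    dt=1 / 60, min_speed=1.0):
    """Yield successive scroll positions after a flick release.

    release_velocity is the content speed at touch release (pixels/s);
    each frame the position advances and the velocity is reduced by
    the friction factor, until the speed drops below min_speed.
    """
    pos, v = start_pos, release_velocity
    while abs(v) >= min_speed:
        pos += v * dt
        v *= friction          # apply deceleration each frame
        yield pos
```

Setting `friction=1.0` would give the frictionless variant, where movement continues until the end of the canvas or a manual touch down.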
- Drag and Hold gestures at the edge of the scroll area may activate speed scrolling.
- Speed of the scroll may be controlled by moving the finger between the edge and centre of the scroll area.
- Content zoom animation may be used to indicate the increasing/decreasing scrolling speed. Scrolling may be stopped by lifting the finger (touch release) or by dragging the finger into the middle of the scrolling area.
- FIG. 4 a shows a state diagram of a low-level input system according to an example embodiment.
- Such an input system may be used e.g. to receive hardware events from a touch screen or another kind of touch device, or from some other input means manipulated by a user.
- The down event 410 is triggered by the hardware or by the driver software of the hardware when the input device is being touched.
- An up event 420 is triggered when the touch is lifted, i.e. the device is no longer touched.
- The up event 420 may also be triggered when there is no movement even though the device is being touched.
- Such up events may be filtered out by using a timer.
- A drag event 430 may be generated when, after a down event, the point of touch is moved.
- The possible state transitions are indicated by arrows in FIG. 4 a and they are: down-up, up-down, down-drag, drag-drag and drag-up.
- The hardware events may be modified. For example, noisy events may be averaged or filtered in another way.
- The touch point may be moved towards the fingertip, depending on the orientation and type of the device.
- FIG. 4 b shows a state diagram of a user input system generating user interface events and comprising a hold state according to an example embodiment.
- A Touch Down state or user interface event 450 occurs when a user touches a touch screen, or for example presses a mouse key down.
- The system has determined that the user has activated a point or an area, and the event or state may be supplemented by modifier information such as the duration or pressure of the touch.
- When the touch is lifted, a Release event 460 occurs; the Release event may be supplemented e.g. by a modifier indicative of the time from the Touch Down event.
- After a Release, a Touch Down event or state 450 may occur again.
- When the point of touch is moved, a Move event or state 480 occurs.
- A plurality of Move events may be triggered if the moving of the point of touch spans a long enough time.
- The Move event 480 (or the plurality of Move events) may be supplemented by modifier information indicative of the direction and the speed of the move.
- The Move event 480 may be terminated by lifting the touch, whereby a Release event 460 occurs.
- The Move event may also be terminated by stopping the move without lifting the touch, in which case a Hold event 470 may occur, if the touch spans a long enough time without moving.
- A Hold event or state 470 may be generated when a Touch Down or Move event or state continues for a long enough time.
- The generation of the Hold event may be done e.g. so that a timer is started at some point in the Touch Down or Move state, and when the timer advances to a large enough value, a Hold event is generated, in case the state is still Touch Down or Move and the point of touch has not moved significantly.
- A Hold event or state 470 may be terminated by lifting the touch, causing a Release event 460 to be triggered, or by moving the point of activation, causing a Move event 480 to be triggered.
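The timer-based Hold generation described above can be sketched as follows. This is an illustrative sketch only: the sample format, the time threshold and the movement radius are assumptions, not values from the patent.

```python
# A minimal sketch of Hold event generation with a timer, assuming
# timestamped (t, x, y) touch samples; the thresholds are illustrative.

HOLD_TIME = 0.8      # seconds the touch must stay in place
HOLD_RADIUS = 5.0    # maximum movement still counted as "in place"

def detect_hold(samples):
    """Return the time at which a HOLD event would be generated, or None.
    samples is a list of (t, x, y) tuples for an ongoing touch."""
    if not samples:
        return None
    t0, x0, y0 = samples[0]
    for t, x, y in samples[1:]:
        if abs(x - x0) > HOLD_RADIUS or abs(y - y0) > HOLD_RADIUS:
            # significant movement: restart the timer at the new point
            t0, x0, y0 = t, x, y
        elif t - t0 >= HOLD_TIME:
            return t    # timer elapsed without significant movement
    return None
```

Small movements inside the radius (micro-drags) do not restart the timer, which mirrors the tolerance for micro movements discussed below.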
- The existence of the Hold state or event may bring benefits in addition to just having a Touch Down event in the system, for example by allowing an easier and more reliable detection of gestures.
- There may be noise in the hardware signals generated by the user input device, e.g. due to the large area of the finger, due to the characteristics of the touch screen, or both.
- The noise may be so-called white noise, pink noise or noise of another kind.
- The different noise types may be generated by different types of error sources in the system. Filtering may be used to remove errors and noise.
- The filtering may happen directly in the touch screen or other user input device, or it may happen later in the processing chain, e.g. in the driver software or the operating system.
- The filter here may be a kind of an average or mean filter, where the coordinates of a number of consecutive points (in time or in space) are averaged by an un-weighted or weighted average, or another similar kind of processing or filter where the coordinate values of the points are processed to yield a single set of output coordinates.
- The noise may be significantly reduced, e.g. in the case of white noise, by a factor of the square root of N, where N is the number of points being averaged.
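The averaging filter described above can be sketched as follows; the function name and default weighting are illustrative, not from the patent.

```python
# A sketch of the (optionally weighted) averaging filter described above:
# N consecutive touch points are combined into one output coordinate,
# reducing white noise roughly by a factor of sqrt(N).

def average_points(points, weights=None):
    """Average a window of (x, y) touch samples into a single point."""
    n = len(points)
    if weights is None:
        weights = [1.0] * n          # un-weighted (plain mean)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, points)) / total
    y = sum(w * p[1] for w, p in zip(weights, points)) / total
    return (x, y)
```

A weighted variant (e.g. weighting recent samples more heavily) trades noise reduction for lower latency in following the finger.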
- FIGS. 5 a , 5 b and 5 c show examples of hardware touch signals such as micro-drag signals during a generation of a hold user interface event.
- A hold user interface event is generated when the user holds the finger on a touch screen, or holds a mouse button pressed down, for at least a predetermined time.
- These phenomena cause a degree of uncertainty to the generated low-level events.
- The same hand and the same hardware can lead to a different low-level event xy-pattern depending on how the user approaches the device. This is illustrated in FIG. 5 a, where a number of low-level touch down events 510-517 are generated near each other.
- In FIGS. 5 b and 5 c, two different sequences formed from the same low-level touch down and move events 510-517 are shown.
- In the first sequence, the first event to be received is the event 510 and the second is the event 511.
- The sequence continues to events 514, 512, 513, 516, 515 and 517, and after that the move continues towards the lower left corner.
- The different move vectors between the events are indicated by arrows 520, 521, 522, 523 and so on.
- In the second sequence, the order of the events is different.
- Buttons may benefit from a common implementation of the touch down user interface event, where a driver or the layer above the driver converts the set of low-level or hardware events to a single Touch Down event.
- A Hold event may be detected in a similar manner as a Touch Down, thereby making it more reliable to detect and interpret gestures like Long Tap, Panning and Scrolling.
- The low-level events may be generated e.g. by sampling with a certain time interval such as 10 milliseconds.
- A timer may be started, and the events from the hardware are followed; if they stay within a certain area, a touch down user interface event may be generated.
- If the events (touch down or drag) migrate outside the area, a touch down user interface event followed by a move user interface event are generated.
- At first, the area may be larger in order to allow a "sloppy touch", wherein the user touches the input device carelessly.
- The accepted area may then later be reduced to be smaller so that the move user interface event may be generated accurately.
- The area may be determined to be an ellipse, a circle, a square, a rectangle or any other shape.
- The area may be positioned according to the first touch down event or as an average of the positions of a few events. If the touch down or move hardware events continue to be generated for a longer time, a hold user interface event may be generated.
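The touch-area logic above can be sketched as follows. This is a simplified illustration: the radius value, the event representation and the function name are assumptions, and the gradual shrinking of the accepted area is omitted.

```python
# A sketch of the touch-area logic described above: low-level drag events
# inside an accepted radius around the first down event are treated as
# micro-drags and filtered out, while movement outside the area produces a
# TOUCH followed by MOVE user interface events.

TOUCH_RADIUS = 8.0   # accepted area around the first down event

def classify_events(events):
    """events: list of ('down'|'drag', x, y) low-level events.
    Returns the user interface events generated so far."""
    ui_events = []
    origin = None
    for kind, x, y in events:
        if kind == 'down':
            origin = (x, y)                       # centre of the accepted area
        elif kind == 'drag' and origin is not None:
            dist = ((x - origin[0]) ** 2 + (y - origin[1]) ** 2) ** 0.5
            if dist > TOUCH_RADIUS:
                if 'TOUCH' not in ui_events:
                    ui_events.append('TOUCH')     # left the area: touch down
                ui_events.append('MOVE')          # followed by a move
            # drags inside the radius are micro-drags and are filtered out
    return ui_events
```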
- FIG. 6 shows a block diagram of levels of abstraction of a user interface system and a computer program product according to an example embodiment.
- The user interface hardware may generate hardware events or signals or driver events 610, for example Up, Down and Drag driver or low-level events. The implementation of these events may be hardware-dependent, or they may function more or less similarly on every hardware.
- The driver events 610 may be processed by the window manager (or the operating system) to generate processed low-level events 620.
- The low-level events may be used to form user interface events 630 such as Touch Down, Release, Move and Hold, as explained earlier.
- A gesture engine 640 may operate to specify rules on how gesture recognizers 650 may take and lose control of events.
- Gesture recognizers 650 process User Interface Events 630 with their respective modifiers in order to recognize the beginning of a gesture and/or the whole gesture. The recognized gestures are then forwarded to applications 660 and the operating system to be used for user input.
- FIG. 7 a shows a diagram of a gesture recognition engine according to an example embodiment.
- User interface events 710 such as Touch, Release, Move and Hold are sent to gesture recognizers 720 , 721 , . . . , 727 , . . . 729 .
- The user interface events 710 may comprise modifier information to give more data to the recognizers, e.g. the direction or speed of the movement.
- The gesture recognizers operate on the user interface events and the modifier information, and generate gesture signals as output when a gesture is recognized.
- This gesture signal and associated data on the specific gesture may then be sent to an application 730 for use as user input.
- The gesture engine and/or the gesture recognizers may also be configured/used to "filter" the gestures that are forwarded to applications.
- The gesture engine may be configured to capture the gestures that are meant to be handled by certain applications, instead of the individual applications on the screen capturing the gestures. This may bring the advantage that e.g. in a browser application, gestures like panning behave the same way even if the Web page contains a Flash area or is implemented entirely as a Flash program.
- FIG. 7 b shows a gesture recognition engine in operation according to an example embodiment.
- The Flick Stop recognizer 720 is disabled, since there is no Flick ongoing, and therefore stopping a Flick gesture is irrelevant.
- When a Touch user interface event 712 is sent to the recognizers, none of them may react to it, or they may react merely by sending an indication that a gesture may be starting.
- The gesture recognizer 721 is not activated, but the gesture recognizer 722 for Panning is activated, and the recognizer informs an application 730 that panning is to be started.
- The gesture recognizer 722 may also give information on the speed and direction of the panning.
- The input user interface event 714 is consumed and does not reach the subsequent recognizers, e.g. the recognizer 723.
- Here, the user interface event is passed to the different recognizers in a certain order, but the event could also be passed to the recognizers simultaneously.
- The recognizer 723 for the Flick gesture will then be activated.
- The Panning recognizer 722 may send an indication that Panning is ending, and the Flick recognizer 723 may send information on the Flick gesture starting to the application 730, along with information on the speed and direction of the flick.
- The recognizer 720 for Flick Stop is now enabled.
- A Release user interface event 716 is received when the user releases the press, and the Flick gesture remains active (and Flick Stop remains enabled).
- Next, a Touch user interface event 717 is received. This event is captured by the Flick Stop recognizer 720, which notifies the application 730 that the Flick is to be stopped.
- The recognizer 720 for Flick Stop also disables itself, since there is no Flick gesture ongoing any more.
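The chain behaviour described above, where recognizers receive events in order, may consume events so they do not reach later recognizers, and may enable or disable themselves, can be sketched as follows. The class and method names are illustrative only.

```python
# A sketch of a gesture engine passing user interface events through an
# ordered chain of recognizers, where a recognizer may consume an event
# so that it does not reach later recognizers (as with the Panning
# recognizer consuming event 714 above).

class Recognizer:
    def __init__(self, name, consumes):
        self.name = name
        self.consumes = consumes      # event types this recognizer handles
        self.enabled = True           # e.g. Flick Stop starts disabled

    def handle(self, event):
        """Return True to consume the event, False to pass it on."""
        return self.enabled and event in self.consumes

class GestureEngine:
    def __init__(self, chain):
        self.chain = chain            # ordered list of recognizers

    def dispatch(self, event):
        for rec in self.chain:
            if rec.handle(event):
                return rec.name       # consumed: later recognizers are skipped
        return None                   # no recognizer reacted
```

Placing Flick Stop first in the chain and toggling its `enabled` flag reproduces the enable/disable behaviour of recognizer 720.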
- The gesture engine and/or the individual gesture recognizers may reside in an application, in a program library used by the applications, in the operating system, in a module closely linked with the operating system, or in any combination of these and other meaningful locations.
- The gesture engine and the recognizers may also be distributed across several devices.
- The gesture engine may be arranged to reside in or close to the operating system, and applications may register with the gesture engine the gestures they wish to receive.
- An application or the operating system may also modify the operation of the gesture engine and the parameters (such as timers) of individual gestures. For example, the order of the gestures to be recognized in a gesture chain may be defined and/or altered, and gestures may be enabled and disabled.
- The state of an application, the operating system or the device may cause a corresponding set or chain of gesture recognizers to be selected, so that a change in the state of the application causes a change in how the gestures are recognized.
- The order of the gesture recognizers may have an effect on the functionality of the gesture engine: e.g. Flick Stop may come first in a chain, and in single-touch operation, gestures that are location specific may come earlier than generic gestures. Also, multi-touch gestures may be recognized first, and the left-over events may then be used by the single-touch gesture recognizers.
- When a recognizer attached to the gesture engine has recognized a gesture, information on the gesture needs to be sent to the appropriate application and/or process. For this, it needs to be known which gesture was recognized, and where the recognition started, ended or took place. Using the location information and information on the gesture, the gesture engine may send the gesture information to the appropriate application or window.
- A gesture such as a move or a double tap may be initiated in one window and end in another window, in which case the gesture recognizer may, depending on the situation, send the gesture information to the first window, the second window or both windows.
- A gesture recognizer may also choose which event stream or streams to use. For this purpose, the gesture recognizer may be told how many input streams there are.
- For example, a long tap gesture may be recognized simultaneously with a drag gesture.
- The recognizers may be arranged to operate simultaneously, or so that they operate in a chain.
- The multi-gesture recognition may happen after a multi-touch recognition and operate on the events not used by the multi-touch recognition.
- The gestures recognized in a multi-gesture may be wholly or partly simultaneous, or they may be sequential, or both.
- The gesture recognizers may be arranged to communicate with each other, or the gesture engine may detect that a multi-gesture was recognized. Alternatively, the application may use multiple gestures from the gesture engine as a multi-gesture.
- FIGS. 8 a and 8 b show generation of a hold user interface event according to an example embodiment.
- The arrow up 812 indicates a driver up or release event.
- The arrow down 813 indicates a driver down event or touch user interface event.
- The arrow right 814 indicates a drag or move user interface event (in any direction).
- The open arrow down 815 indicates the generated hold user interface event.
- Other events 816 are marked with a circle.
- The sequence begins with a driver down event 813.
- At this point, at least one timer may be started to detect how long the touch or down state lasts.
- Then, a sequence of driver drag events is generated. These events may be a series of micro-drag events, as explained earlier.
- A Touch user interface event is generated at 820.
- A Hold user interface event is generated at 822. It needs to be noted that the Hold event may be generated without generating the Touch event.
- FIG. 9 shows a method for gesture based user input according to an example embodiment.
- Hardware events and signals such as down or drag are received.
- The events and signals may be filtered or otherwise processed at stage 920, for example by applying filtering as explained earlier.
- Low-level driver data is received, for example indicative of hardware events.
- These low-level data or events may be formed into user interface events at stage 940 , and the respective modifiers at stage 945 , as has been explained earlier.
- In other words, the lower-level signals and events are "collected" into user interface events and their modifiers.
- Further, new events such as hold events may be formed from either low-level data or other user interface events, or both. It needs to be noted that the order of the above steps may vary; for example, filtering may happen later in the process and hold events may be formed earlier in the process.
- The user interface events with their respective modifiers may then be forwarded to gesture recognizers, possibly by or through a gesture engine.
- At stages 951, 952 and so on, the start of a gesture may be recognized by the respective gesture recognizer.
- The different gesture recognizers may be arranged to operate so that only one gesture may be recognized at a time, or so that multiple gestures may be detected simultaneously. This may bring about the benefit that multi-gesture input may also be used in applications.
- The completed gestures recognized by the respective gesture recognizers are detected.
- The detected/recognized gestures are sent to the applications, and possibly the operating system, so that they can be used for input.
- Both the start of the gestures and the complete gestures may be forwarded to applications. This may have the benefit that applications may react earlier to gestures if they do not have to wait for the gesture to end.
- The gestures are then used for input by the applications.
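The overall flow of the method of FIG. 9 can be condensed into the following sketch. All stage functions are illustrative stand-ins, and only the stage numbers named in the text are referenced in the comments.

```python
# A condensed sketch of the method of FIG. 9: hardware events are
# filtered, formed into user interface events with modifiers, and fed
# to a chain of gesture recognizers whose output is delivered to an
# application. The callables passed in are hypothetical stand-ins.

def run_pipeline(hw_events, filter_fn, form_ui_event, recognizers, app):
    recognized = []
    for hw in filter_fn(hw_events):              # filtering (cf. stage 920)
        ui_event, modifier = form_ui_event(hw)   # UI events and modifiers (cf. stages 940, 945)
        for rec in recognizers:                  # chain of gesture recognizers
            gesture = rec(ui_event, modifier)
            if gesture is not None:
                app(gesture)                     # deliver the gesture to the application
                recognized.append(gesture)
                break                            # event consumed by this recognizer
    return recognized
```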
- Gesture recognition may operate as follows.
- The gesture engine may receive all or essentially all user interface events in a given screen area, or even the entire screen.
- The operating system may give each application a window (screen area), and the application uses this area for user input and output.
- The user interface events may be given to the gesture engine so that the gesture recognizers are in a specific order, such that certain gestures will activate themselves first and others later, if there are user interface events left.
- Gestures that are to be recognized across the entire screen area may be placed before the ones that are more specific.
- The gesture engine may be configured to receive the user interface events of a collection of windows.
- The gesture recognizers for gestures that are to be recognized by the browser, e.g. panning, pinch zooming etc., receive user interface events before e.g. Flash applications do, even if the user interface events originated in the Flash window.
- Another example is double-tap; in the case of the browser, the sequence of taps may not fall within the same window where the first tap originated. Since the gesture engine receives all taps, it may recognize the double tap in this case, too.
- Yet another example is the drag; the movement may extend beyond the original window where the drag started. Since the gesture engine receives the user interface events from a plurality of windows or even the whole user interface area, it may be able to detect gestures spanning the window area of multiple applications.
- FIGS. 10 a - 10 g show examples of state and event diagrams for producing user interface events according to an example embodiment.
- The Init state is the state where the state machine resides before anything has happened, and to which it returns after completing all operations emanating from a user input.
- The individual input streams start from the Init state.
- The Dispatch state is a general state of the state machine if no touch, hold or suppress timers are running.
- The InTouchTime state is a state where the state machine resides after the user has touched the input device, and it is ended by lifting the touch, moving away from the touch area or by holding for a long enough time in place. The state also filters some accidental up and down events away.
- The InTouchArea state is a state that filters away events that stay in the touch area (events from micro movements).
- The InHoldTime_U state is a state that monitors the holding down of the touch, and produces a HOLD event if the hold stays for a long enough time.
- A purpose of this state is to filter away micro movements to see if a Hold user interface event is to be generated.
- The InHoldTime_D state is used for handling up-down events during hold.
- The Suppress_D state is used to filter accidental up and down sequences away. The functionality of the Suppress_D state may be advantageous in the context of resistive touch panels, where such accidental up/down events may easily happen.
- In FIG. 10 a, the state machine is in the Init state. When a down hardware event is received, the event is consumed (i.e. not passed further or allowed to be used later) and timers are initialized (consumption of an event is marked with a box with a dotted circumference, as illustrated in FIG. 10 a). If no timers are in use, a TOUCH user interface event is produced (production of an event is marked with a box having a horizontal line on top, as illustrated in FIG. 10 a). After this, if the Hold Timer >0, the state machine goes into the InHoldTime_U state (a state transition is marked with a box having a vertical line on the left side). If the Touch Area >0, the state machine goes into the InTouchArea state to determine whether the touch stays inside the original area. Otherwise, the state machine goes into the Dispatch state. Events other than down are erroneous and may be ignored.
- In FIG. 10 b, the state machine is in the Dispatch state. If an up hardware event is received, the event is consumed. For a capacitive touch device, a RELEASE user interface event is produced; for a resistive touch device, a RELEASE is produced if there is no suppress timer active. After producing the RELEASE, the state machine goes into the Init state. For a resistive touch device, if there is an active suppress timer, the timer is initialized and the state machine goes into the Suppress_D state. If a drag hardware event is received, a MOVE user interface event is produced. If the criteria for a HOLD user interface event are not matched, the state machine goes back into the Dispatch state. If the criteria for a HOLD are matched, the hold timer is initialized and the state machine goes into the InHoldTime_U state.
- In FIG. 10 c, the filtering of hardware events in the InTouchTime state is shown. If a drag hardware event is received inside the (initial) touch area, the event is consumed and the state machine stays in the InTouchTime state. If a drag event, or an up event in a capacitive device, is received outside the predetermined touch area, all timers are cleared and a TOUCH user interface event is produced. The state machine then goes into the Dispatch state. If a TOUCH timeout event or an up event from a resistive touch device is received, the TOUCH timer is cleared and a TOUCH event is produced. If the HOLD timer >0, the state machine goes into the InHoldTime_U state.
- the state machine of FIG. 10 c may have the advantage of eliminating sporadic up/down events during HOLD detection.
- In FIG. 10 d, the filtering of hardware events in the InTouchArea state is shown. If a drag hardware event is received inside the touch area, the event is consumed and the state machine stays in the InTouchArea state. In other words, if drag events that are sufficiently close to the original down event are received, the state machine filters out these events as micro-drag events, as described earlier. If a drag event is received outside the area, or an up event is received, the state machine goes into the Dispatch state.
- In FIG. 10 e, the filtering of accidental up and down hardware events in the Suppress_D state is shown. If a down hardware event is received, the suppress timer is cleared and the event is renamed as a drag hardware event. The state machine then goes into the Dispatch state. If a suppress timeout event is received, the suppress timer is cleared and a RELEASE user interface event is produced. The state machine then goes into the Init state. In other words, the state machine replaces an accidental up event followed by a down event with a drag event. A RELEASE is produced if no down event is detected during the timeout.
- The Suppress_D state may be used for resistive input devices.
- In FIG. 10 f, the filtering of hardware events during hold in the InHoldTime_U state is shown. If a down hardware event is received, the state machine goes into the InHoldTime_D state. If a drag event is received inside the hold area, the event is consumed and the state machine stays in the InHoldTime_U state. If a drag event outside the hold area or a capacitive up event is received, the hold timer is cleared and the state machine goes into the Dispatch state. If an up event from a resistive input device is received, the event is consumed, the suppress timer is initialized, and the state machine goes into the InHoldTime_D state.
- If a HOLD timeout is received, a HOLD user interface event is produced and the HOLD timer is restarted.
- The state machine then stays in the InHoldTime_U state.
- In other words, a HOLD user interface event is produced when the HOLD timer produces a timeout, and HOLD detection is aborted if a drag event is received outside the hold area or a valid up event is received.
- In FIG. 10 g, the filtering of hardware events during hold in the InHoldTime_D state is shown. If an up hardware event is received, the state machine goes into the InHoldTime_U state. If a timeout is received, a RELEASE user interface event is produced, the timers are cleared and the state machine goes into the Init state. If a down hardware event is received, the event is consumed and the suppress timer is cleared. If the event was received inside the hold area, the state machine goes into the InHoldTime_U state. If the event was received outside the hold area, a MOVE user interface event is produced, the hold timer is cleared and the state machine goes into the Dispatch state.
- The InHoldTime_D state is entered if an up event was previously received (in the InHoldTime_U state).
- The state waits for a down event for a specified time, and if a timeout is produced, the state produces a RELEASE user interface event. If a down event is received, the state machine returns to the previous state if the event was received inside the hold area; if the event was received outside the hold area, a MOVE event is produced.
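A much-simplified subset of the state machines of FIGS. 10 a-10 g can be sketched as follows. This covers only the Init, InTouchArea and Dispatch states for a capacitive device; all timers and the hold and suppress states are omitted, and the names are illustrative.

```python
# A much-simplified sketch of the state machines of FIGS. 10 a-10 g,
# covering only the Init, InTouchArea and Dispatch states for a
# capacitive device; timers and the hold/suppress states are omitted.

def step(state, event, inside_area):
    """Return (next_state, produced_ui_event) for one hardware event.
    event is 'down', 'drag' or 'up'; inside_area tells whether the
    event lies within the accepted touch area."""
    if state == 'Init':
        if event == 'down':
            return ('InTouchArea', 'TOUCH')   # consume down, produce TOUCH
        return ('Init', None)                 # other events are erroneous
    if state == 'InTouchArea':
        if event == 'drag' and inside_area:
            return ('InTouchArea', None)      # micro-drag, filtered out
        return ('Dispatch', None)             # left the area or released
    if state == 'Dispatch':
        if event == 'drag':
            return ('Dispatch', 'MOVE')
        if event == 'up':
            return ('Init', 'RELEASE')
    return (state, None)
```

Running a sequence of hardware events through `step` shows how micro-drags are absorbed in InTouchArea while real movement produces MOVE events in Dispatch.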
- The invention may provide advantages through the abstraction of the hardware or low-level events into higher-level user interface events.
- For example, a resistive touch screen may produce phantom events when the user changes the direction of the movement or stops the movement.
- Such low-level phantom events may not reach the gesture recognizers, since the system first generates higher-level user interface events from the low-level events.
- The phantom events are filtered out through the use of timers and other means, as explained earlier.
- The higher-level user interface events may be simpler to use in programming applications for the platform where embodiments of the invention are used.
- The invention may also allow a simpler implementation of multi-gesture recognition. Furthermore, switching from one gesture to another may be simpler to detect.
- The generation of a Hold user interface event may make it unnecessary for the recognizer of Panning or other gestures to detect the end of the movement, since another gesture recognizer takes care of that. Since the user interface events are generated consistently from the low-level events, the invention may also provide predictability and ease of testing for applications. Generally, the different embodiments may simplify the programming and use of applications on a platform where the invention is applied.
- A terminal device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the terminal device to carry out the features of an embodiment.
- Similarly, a network device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
Abstract
The invention relates to a method, a device and a system for receiving user input. User interface events are first formed from low-level events generated by a user interface input device such as a touch screen. The user interface events are modified by forming information on a modifier for the user interface events, such as time and coordinate information. The events and their modifiers are sent to a gesture recognition engine, where gesture information is formed from the user interface events and their modifiers. The gesture information is then used as user input to the apparatus. In other words, the gestures may not be formed directly from the low-level events of the input device. Instead, user interface events are formed from the low-level events, and gestures are then recognized from these user interface events.
Description
- Advances in computer technology have made it possible to manufacture devices that are powerful in terms of computing speed and yet easily movable or even pocket-sized, like the contemporary mobile communication devices and multimedia devices. There are also ever more advanced features and software applications in familiar home appliances, vehicles for personal transportation and even houses. These advanced devices and software applications require input methods and devices that are capable enough for controlling them. Perhaps for this reason, touch input in the form of touch screens and touch pads has recently become more popular. Currently, such devices are able to replace more conventional input means like the mouse and the keyboard. However, implementing the input needs of the most advanced software applications and user input systems may require more than just a replacement of the conventional input means.
- There is, therefore, a need for solutions that improve the usability and versatility of user input means such as touch screens and touch pads.
- Now there has been invented an improved method and technical equipment implementing the method, by which the above problems may be at least alleviated. Various aspects of the invention include a method, an apparatus, a server, a client and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments of the invention are disclosed in the dependent claims.
- In one example embodiment, user interface events (higher-level events) are first formed from low-level events generated by a user interface input device such as a touch screen. The user interface events may be modified by forming information on a modifier for the user interface events such as time and coordinate information. The user interface events and their modifiers are sent to a gesture recognition engine, where gesture information is formed from the user interface events and possibly their modifiers. The gesture information is then used as user input to the apparatus. In other words, according to one example embodiment the gestures may not be formed directly from the low-level events of the input device. Instead, higher-level events i.e. user interface events are formed from the low-level events, and gestures are then recognized from these user interface events.
- According to a first aspect, there is provided a method for receiving user input, comprising receiving a low-level event from a user interface input device, forming a user interface event using said low-level event, forming information on a modifier for said user interface event, forming gesture information from said user interface event and said modifier, and using said gesture information as user input to an apparatus.
- According to an embodiment, the method further comprises forwarding said user interface event and said modifier to a gesture recognizer, and forming said gesture information by said gesture recognizer. According to an embodiment, the method further comprises receiving a plurality of user interface events from a user interface input device, forwarding said user interface events to a plurality of gesture recognizers, and forming at least two gestures by said gesture recognizers. According to an embodiment, the user interface event is one of the group of touch, release, move and hold. According to an embodiment, the method further comprises forming said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information. According to an embodiment, the method further comprises forming a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and using said hold event in forming said gesture information. According to an embodiment, the method further comprises receiving at least two distinct user interface events from a multi-touch touch input device, and using said at least two distinct user interface events for forming a multi-touch gesture. According to an embodiment, the user interface input device comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit. According to an embodiment, the user interface event is one of the group of touch down, release, hold and move.
- According to a second aspect, there is provided an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to receive a low-level event from a user interface input module, form a user interface event using said low-level event, form information on a modifier for said user interface event, form gesture information from said user interface event and said modifier, and use said gesture information as user input to an apparatus.
- According to an embodiment, the apparatus further comprises computer program code configured to cause the apparatus to forward said user interface event and said modifier to a gesture recognizer, and form said gesture information by said gesture recognizer. According to an embodiment, the apparatus further comprises computer program code configured to cause the apparatus to receive a plurality of user interface events from a user interface input device, forward said user interface events to a plurality of gesture recognizers, and form at least two gestures by said gesture recognizers. According to an embodiment, the user interface event is one of the group of touch, release, move and hold. According to an embodiment, the apparatus further comprises computer program code configured to cause the apparatus to form said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information. According to an embodiment, the apparatus further comprises computer program code configured to cause the apparatus to form a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and use said hold event in forming said gesture information. According to an embodiment, the apparatus further comprises computer program code configured to cause the apparatus to receive at least two distinct user interface events from a multi-touch touch input device, and use said at least two distinct user interface events for forming a multi-touch gesture. According to an embodiment, the user interface module comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit. 
According to an embodiment, the apparatus is one of a computer, a portable communication device, a home appliance, an entertainment device such as a television, a transportation device such as a car, a ship or an aircraft, or an intelligent building.
- According to a third aspect, there is provided a system comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the system to receive a low-level event from a user interface input module, form a user interface event using said low-level event, form information on a modifier for said user interface event, form gesture information from said user interface event and said modifier, and use said gesture information as user input to an apparatus. According to an embodiment, the system comprises at least two apparatuses arranged in communication connection to each other, wherein a first apparatus of said at least two apparatuses is arranged to receive said low-level event and a second apparatus of said at least two apparatuses is arranged to form said gesture information in response to receiving a user interface event from said first apparatus.
- According to a fourth aspect, there is provided an apparatus comprising processing means, memory means, means for receiving a low-level event from a user interface input means, means for forming a user interface event using said low-level event, means for forming information on a modifier for said user interface event, means for forming gesture information from said user interface event and said modifier, and means for using said gesture information as user input to an apparatus.
- According to a fifth aspect, there is provided a computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising a computer program code section for receiving a low-level event from a user interface input device and forming a user interface event using said low-level event, a computer program code section for forming information on a modifier for said user interface event, a computer program code section for forming gesture information from said user interface event and said modifier, and a computer program code section for using said gesture information as user input to an apparatus. According to an embodiment, the computer program product is an operating system.
- In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which
-
FIG. 1 shows a method for gesture based user input according to an example embodiment; -
FIG. 2 shows devices and a system arranged to receive gesture based user input according to an example embodiment; -
FIGS. 3 a and 3 b show different example gestures composed of touch user interface events; -
FIG. 4 a shows a state diagram of a low-level input system according to an example embodiment; -
FIG. 4 b shows a state diagram of a user interface event system generating user interface events and comprising a hold state according to an example embodiment; -
FIGS. 5 a, 5 b and 5 c show examples of hardware touch signals such as micro-drag signals during a hold user interface event; -
FIG. 6 shows a block diagram of levels of abstraction of a user interface system and a computer program product according to an example embodiment; -
FIG. 7 a shows a diagram of a gesture recognition engine according to an example embodiment; -
FIG. 7 b shows a gesture recognition engine in operation according to an example embodiment; -
FIGS. 8 a and 8 b show generation of a hold user interface event according to an example embodiment; -
FIG. 9 shows a method for gesture based user input according to an example embodiment; and -
FIGS. 10 a-10 g show state and event diagrams for producing user interface events according to an example embodiment. - In the following, several embodiments of the invention will be described in the context of a touch user interface and methods and devices for the same. It is to be noted, however, that the invention is not limited to touch user interfaces. In fact, the different embodiments have applications widely in any environment where improvements to user interface operations are required. For example, devices with a large touch screen such as e-books and digital newspapers, or personal computers and multimedia devices such as tablets and tables, may benefit from the use of the invention. Likewise, user interface systems such as navigation interfaces of various vehicles, ships and aircraft may benefit from the invention. Computers, portable communication devices, home appliances, entertainment devices such as televisions, and intelligent buildings may also benefit from the use of the different embodiments. The devices employing the different embodiments may comprise a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove or a data suit. Also, three-dimensional input systems e.g. based on haptics may use the invention.
-
FIG. 1 shows a method for gesture based user input according to an example embodiment. At stage 110, a low-level event is received. The low-level events may be generated by the operating system of the computer as a response to a person using an input device such as a touch screen or a mouse. The low-level events may also be generated directly by specific user input hardware, or by the operating system as a response to hardware events. - At
stage 120, at least one user interface event is formed or generated. The user interface events may be generated from the low-level events e.g. by averaging, combining, thresholding, by using timer windows or by using filtering, or by any other means. For example, two low-level events in sequence may be interpreted as a user interface event. User interface events may also be generated programmatically for example from other user interface events or as a response to a trigger in the program. The user interface events may be generated locally by using user input hardware or remotely e.g. so that the low-level events are received from a remote computer acting as a terminal device. - At
stage 130, at least one user interface event is received. There may be a plurality of user interface events received, and user interface events may be combined together, split and grouped together, and/or used as such, as individual user interface events. The user interface events may be received from the same device, e.g. from the operating system, or the user interface events may be received from another device, e.g. over a wired or wireless communication connection. Such another device may be a computer acting as a terminal device to a service, or an input device connected to a computer, such as a touch pad or touch screen. - At
stage 140, modifier information for the user interface event is formed. The modifier information may be formed by the operating system from the hardware events and/or signals or other low-level events and data, or it may be formed by the hardware directly. The modifier information may be formed at the same time with the user interface event, or it may be formed before or after the user interface event. The modifier information may be formed by using a plurality of lower-level events or other events. The modifier information may be common to a number of user interface events or it may be different for different user interface events. The modifier information may comprise position information such as a point or area on the user interface that was touched or clicked, e.g. in the form of 2-dimensional or 3-dimensional coordinates. The modifier information may comprise direction information e.g. on the direction of movement, drag or change of the point of touch or click, and the modifier may also comprise information on speed of this movement or change. The modifier information may comprise pressure data e.g. from a touch screen, and it may comprise information on the area that was touched, e.g. so that it can be identified whether the touch was made by a finger or by a pointing device. The modifier information may comprise proximity data e.g. as an indication of how close a pointer device or a finger is from a touch input device. The modifier information may comprise timing data e.g. the time a touch lasted, or the time between consecutive clicks or touches, or clock event information or other time related data. - At
stage 150, gesture information is formed from at least one user interface event and the respective modifier data. The gesture information may be formed by combining a number of user interface events. The user interface event or events and the respective modifier data are analyzed by a gesture recognizer that outputs a gesture signal whenever a predetermined gesture is recognized. The gesture recognizer may be a state machine, or it may be based on pattern recognition of another kind, or it may be a program module. A gesture recognizer may be implemented to recognize a single gesture or multiple gestures. There may be one or more gesture recognizers operating simultaneously, in a chain, or partly simultaneously and partly in a chain. The gesture may be, for example, a touch gesture such as a combination of touch/tap, move/drag and/or hold events, and it may require a certain timing (e.g. the speed of a double-tap) or range or speed of movement in order to be recognized. The gesture may also be relative in nature, that is, it may not require any absolute timings or ranges or speeds, but may depend on the relative timings, ranges and speeds of the parts of the gesture. - At
stage 160, the gesture information is used as user input. For example, a menu option may be triggered when a gesture is detected, or a change in the mode or behavior of the program may be actuated. The user input may be received by one or more programs or by the operating system, or by both. The behavior after receiving the gesture may be specific to the receiving program. The receiving of the gesture by the program may start even before the gesture has been completed, so that the program can prepare for the action or start the action as a response to the gesture. At the same time, one or more gestures may be formed and used by the programs and/or the operating system, and the control of the programs and/or the operating system may happen in a multi-gesture manner. The forming of the gestures may take place simultaneously, or it may take place in a chain so that first one or more gestures are recognized, and after that other gestures are recognized. The gestures may comprise single-touch or multi-touch gestures, that is, they may comprise a single point of touch or click, or multiple points of touch or click. The gestures may be single gestures or multi-gestures. In multi-gestures, two or more essentially simultaneous or sequential gestures are used as user input. In multi-gestures, the underlying gestures may be single-touch or multi-touch gestures. -
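As an illustration of stages 110-160, the sketch below forms user interface events with modifiers and feeds them to a simple gesture recognizer implemented as a state machine. All class names, field names and the 0.4-second double-tap time limit are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class UIEvent:
    """A user interface event (stage 120) carrying modifier information (stage 140)."""
    kind: str          # "touch", "move", "hold" or "release"
    x: float = 0.0
    y: float = 0.0
    modifiers: dict = field(default_factory=dict)  # e.g. time, speed, direction, pressure

class DoubleTapRecognizer:
    """Stage 150: a state machine that emits a gesture signal when two
    touch/release pairs occur within a time limit (0.4 s here)."""
    TIME_LIMIT = 0.4

    def __init__(self):
        self.first_tap_time = None

    def feed(self, event):
        if event.kind != "release":
            return None
        t = event.modifiers.get("time", 0.0)
        if self.first_tap_time is not None and t - self.first_tap_time <= self.TIME_LIMIT:
            self.first_tap_time = None
            return "double-tap"   # the gesture signal used as user input (stage 160)
        self.first_tap_time = t
        return None

recognizer = DoubleTapRecognizer()
events = [
    UIEvent("touch", 10, 10, {"time": 0.00}),
    UIEvent("release", 10, 10, {"time": 0.05}),
    UIEvent("touch", 11, 10, {"time": 0.20}),
    UIEvent("release", 11, 10, {"time": 0.25}),
]
gestures = [g for e in events if (g := recognizer.feed(e))]
print(gestures)  # ['double-tap']
```

A recognizer for a different gesture would follow the same feed-events-and-emit-signal shape, which is what allows several recognizers to observe the same event stream.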
FIG. 2 shows devices and a system arranged to receive gesture based user input according to an example embodiment. The different devices may be connected via a fixed network 210 such as the Internet or a local area network, or a mobile communication network 220 such as the Global System for Mobile communications (GSM) network, 3rd Generation (3G) network, 3.5th Generation (3.5G) network, 4th Generation (4G) network, Wireless Local Area Network (WLAN), Bluetooth®, or other contemporary and future networks. Different networks are connected to each other by means of a communication interface 280. The networks comprise network elements such as routers and switches to handle data (not shown), and communication interfaces such as the base stations, which give the different devices access to the network and are themselves connected to the mobile network 220 via a fixed connection 276 or a wireless connection 277. - There may be a number of servers connected to the network, and in the example of
FIG. 2 a, there are shown a server 240 for offering a network service requiring user input, connected to the fixed network 210; a server 241 for processing user input received from another device in the network, connected to the fixed network 210; and a server 242 for offering a network service requiring user input and for processing user input received from another device, connected to the mobile network 220. Some of the above devices, for example the computers, may reside in the fixed network 210. - There are also a number of end-user devices such as mobile phones and
smart phones 251, Internet access devices (Internet tablets) 250 and personal computers 260 of various sizes and formats. These devices are connected to the networks via communication connections such as a fixed connection or a wireless connection 273 to the internet 210, a fixed connection 275 to the mobile network 220, and a wireless connection to the mobile network 220. The connections 271-282 are implemented by means of communication interfaces at the respective ends of the communication connection. -
FIG. 2 b shows devices for receiving user input according to an example embodiment. As shown in FIG. 2 b, the server 240 contains memory 245, one or more processors, and computer program code 248 residing in the memory 245 for implementing, for example, gesture recognition. The different servers may contain at least these same elements. Similarly, the end-user device 251 contains memory 252, at least one processor, and computer program code 254 residing in the memory 252 for implementing, for example, gesture recognition. The end-user device may also have at least one camera 255 for taking pictures, and it may contain one, two or more microphones for capturing sound. The other end-user devices may comprise similar elements. - It needs to be understood that different embodiments allow different parts to be carried out in different elements. For example, receiving the low-level events, forming the user interface events, receiving the user interface events, forming the modifier information and recognizing gestures may be carried out entirely in one user device like 250, 251 or 260, or these steps may be entirely carried out in one
server device, or the processing may be distributed across multiple user devices, across multiple network devices, or across both user devices and network devices. - The different embodiments may be implemented as software running on mobile devices and optionally on services. The mobile phones may be equipped at least with a memory, processor, display, keypad, motion detector hardware, and communication means such as 2G, 3G, WLAN, or other. The different devices may have hardware like a touch screen (single-touch or multi-touch) and means for positioning like network positioning or a global positioning system (GPS) module. There may be various applications on the devices such as a calendar application, a contacts application, a map application, a messaging application, a browser application, and various other applications for office and/or private use.
-
FIGS. 3 a and 3 b show different examples of gestures composed of touch user interface events. In the figure, column 301 shows the name of the gesture, column 303 shows the composition of the gesture as user interface events, column 305 displays the behavior or use of the gesture in an application or by the operating system, and column 307 indicates a possible symbol for the event. In the example of FIG. 3 a, the Touch down user interface event 310 is a basic interaction element, whose default behaviour is to indicate which object has been touched; possibly a visible, haptic, or audio feedback is provided. The Touch release event 312 is another basic interaction element that by default performs the default action for the object, for example activates a button. The Move event 314 is a further basic interaction element that by default makes the touched object or the whole canvas follow the movement. - According to one example embodiment, a gesture is a composite of user interface events. A
Tap gesture 320 is a combination of Touch down and Release events. The Touch down and Release events in the Tap gesture may have default behaviour, and the Tap gesture 320 may in addition have special behaviour in an application or in the operating system. For example, while the canvas or the content is moving, a Tap gesture 320 may stop the ongoing movement. A Long Tap gesture 322 is a combination of Touch down and Hold events (see the description of the Hold event later in connection with FIGS. 8 a and 8 b). The Touch down event inside the Long Tap gesture 322 may have default behavior, and the Hold event inside the Long Tap gesture 322 may have specific additional behavior. For example, an indication (visible, haptic, audio) that something is appearing may be given, and after a predefined timeout, a specific menu for the touched object may be opened, or an editing mode in (text) viewers may be activated and a cursor may be brought visible into the touched position. A Double Tap gesture 324 is a combination of two consecutive touch down and release events essentially at the same location within a set time limit. A Double Tap gesture may e.g. be used as a zoom toggle (zoom in/zoom out) or to actuate the zoom in other ways, or as a trigger for some other specific behaviour. Again, the use of the gesture may be specific to the application. - In
FIG. 3 b, a Drag gesture 330 is a combination of Touch down and Move events. The touch down and move events may have default behaviour, while the Drag gesture as a whole may have specific behaviour. For example, by default, the content, a control handle or the whole canvas may follow the movement of the Drag gesture. Speed scrolling may be implemented by controlling the speed of the scrolling by finger movement. A mode to organize user interface elements may be implemented so that the object selected with touch down follows the movement, and the possible drop location is indicated by moving objects accordingly or by some other indication. A Drop gesture 332 is a combination of the user interface events that make up dragging, followed by a Release. At the Release, no default action may be performed for the touched object after the whole content has been moved by dragging, and the Release may cancel the action when dragged outside of the allowed content area before the Drop. In speed scrolling, a Drop may stop the scrolling, and in the organise mode, the dragged object may be placed into its indicated location. A Flick gesture 334 is a combination of Touch down, Move and Touch Release. After the Release, the content continues its movement with the direction and speed that it had at the moment of touch release. The content may be stopped manually or when it reaches a snap point or the end of content, or it may slow down to stop on its own. - Dragging (panning) and flicking gestures may be used as default navigation strokes in lists, grids and content views. The user may manipulate the content or canvas to make it follow the direction of the move. Such a way of manipulation may make scrollbars unnecessary as active navigation elements, which brings more space to the user interface. Consequently, a scrolling indication may be used to indicate that more items are available, e.g.
with graphical effects like a dynamic gradient, haze etc., or a thin scroll bar appearing when scrolling is ongoing (an indication only, not active). An index (for sorted lists) may be shown when the scrolling speed is too fast for the user to follow the content visually.
- Flick scrolling may continue at the end of the flick gesture, and the speed may be determined according to the speed at the end of the flick. Deceleration or inertia may not be applied at all, whereby the movement continues frictionless until the end of the canvas or until stopped manually with a touch down. Alternatively, deceleration or inertia may be applied in relation to the length of the scrollable area, until a certain defined speed is reached. Deceleration may be applied smoothly before the end of the scrollable area is reached. A touch down after Flick scrolling may stop the scrolling.
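The deceleration variant described above may be sketched as follows; the frame interval, deceleration factor and minimum speed are illustrative assumptions, not values from the patent:

```python
def flick_positions(speed, direction, decel=0.95, min_speed=5.0, dt=0.016):
    """Simulate flick scrolling after release: the content keeps the speed it
    had at the moment of release and decelerates every frame until a defined
    minimum speed is reached."""
    x, positions = 0.0, []
    while speed > min_speed:
        x += direction * speed * dt
        speed *= decel          # apply inertia/deceleration per frame
        positions.append(round(x, 2))
    return positions

path = flick_positions(speed=300.0, direction=1)
print(path[0], path[-1])  # first and final scroll offsets
```

Setting `decel` to 1.0 would model the frictionless variant, where the content moves until the end of the canvas or until stopped manually.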
- Drag and Hold gestures at the edge of the scroll area may activate speed scrolling. The speed of the scrolling may be controlled by moving the finger between the edge and the centre of the scroll area. A content zoom animation may be used to indicate the increasing/decreasing scrolling speed. Scrolling may be stopped by lifting the finger (touch release) or by dragging the finger into the middle of the scrolling area.
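The speed control described above, where the scroll rate depends on the finger position between the centre and the edge of the scroll area, could be sketched as follows (all coordinates and the maximum rate are illustrative assumptions):

```python
def speed_scroll_rate(finger_pos, centre, edge, max_rate=1000.0):
    """Speed scrolling sketch: the scroll rate grows as the finger moves from
    the centre of the scroll area towards its edge; at the centre scrolling
    stops, at the edge it runs at max_rate."""
    span = abs(edge - centre)
    fraction = min(abs(finger_pos - centre) / span, 1.0)
    return fraction * max_rate

print(speed_scroll_rate(finger_pos=480, centre=240, edge=480))  # 1000.0, at the edge
print(speed_scroll_rate(finger_pos=240, centre=240, edge=480))  # 0.0, at the centre
```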
-
FIG. 4 a shows a state diagram of a low-level input system according to an example embodiment. Such an input system may be used e.g. to receive hardware events from a touch screen or another kind of a touch device, or some other input means manipulated by a user. The down event 410 is triggered from the hardware or from the driver software of the hardware when the input device is being touched. An up event 420 is triggered when the touch is lifted, i.e. the device is no longer touched. The up event 420 may also be triggered when there is no movement even though the device is being touched. Such up events may be filtered out by using a timer. A drag event 430 may be generated when, after a down event, the point of touch is being moved. The possible state transitions are indicated by arrows in FIG. 4 a and they are: down-up, up-down, down-drag, drag-drag and drag-up. Before utilizing the hardware events, e.g. for creating user interface events, the hardware events may be modified. For example, noisy events may be averaged or filtered in another way. Furthermore, the touch point may be moved towards the finger tip, depending on the orientation and type of the device. -
FIG. 4 b shows a state diagram of a user input system generating user interface events and comprising a hold state according to an example embodiment. A Touch Down state or user interface event 450 occurs when a user touches a touch screen, or for example presses a mouse key down. In this Touch Down state, the system has determined that the user has activated a point or an area, and the event or state may be supplemented by modifier information such as the duration or pressure of the touch. From the Touch Down state 450 it is possible to change to the Release state or event 460 when the user releases the button or lifts the touch from the touch screen. The Release event may be supplemented e.g. by a modifier indicative of the time from the Touch Down event. After the Release state, a Touch Down event or state 450 may occur again. - If the point of touch or click is moved after the Touch Down user interface event (without lifting the touch), a Move event or
state 480 occurs. A plurality of Move events may be triggered if the moving of the point of touch spans a long enough time. The Move event 480 (or plurality of move events) may be supplemented by modifier information indicative of the direction of the move and the speed of the move. The Move event 480 may be terminated by lifting the touch, whereby a Release event 460 occurs. The Move event may also be terminated by stopping the move without lifting the touch, in which case a Hold event 470 may occur if the touch spans a long enough time without moving. - A Hold event or
state 470 may be generated when a Touch Down or Move event or state continues for a long enough time. The generation of the Hold event may be done e.g. so that a timer is started at some point in the Touch Down or Move state, and when the timer advances to a large enough value, a Hold event is generated, in case the state is still Touch Down or Move and the point of touch has not moved significantly. A Hold event or state 470 may be terminated by lifting the touch, causing a Release event 460 to be triggered, or by moving the point of activation, causing a Move event 480 to be triggered. The existence of the Hold state or event may bring benefits in addition to just having a Touch Down event in the system, for example by allowing an easier and more reliable detection of gestures. - There may be noise in the hardware signals generated by the user input device, e.g. due to the large area of the finger, due to the characteristics of the touch screen, or both. There may be many kinds of noise imposed on top of the baseline path. This noise may be so-called white noise, pink noise or noise of another kind. The different noise types may be generated by different types of error sources in the system. Filtering may be used to remove errors and noise.
- The filtering may happen directly in the touch screen or other user input device, or it may happen later in the processing chain, e.g. in the driver software or the operating system. The filter here may be a kind of an average or mean filter, where the coordinates of a number of consecutive points (in time or in space) are averaged by an un-weighted or weighted average or another like kind of processing or filter where the coordinate values of the points are processed to yield a single set of output coordinates. As a result, the noise may be significantly reduced, e.g. in the case of white noise, by a factor of square root of N, where N is the number of points being averaged.
-
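A minimal un-weighted moving-average filter of the kind described above could look like this; the window size and sample coordinates are illustrative assumptions:

```python
def smooth(points, n=4):
    """Average the coordinates of up to the last n consecutive touch points;
    for white noise this reduces the noise roughly by a factor of sqrt(n)."""
    out = []
    for i in range(len(points)):
        window = points[max(0, i - n + 1): i + 1]
        xs = sum(p[0] for p in window) / len(window)
        ys = sum(p[1] for p in window) / len(window)
        out.append((xs, ys))
    return out

raw = [(10, 10), (14, 9), (11, 12), (13, 11)]
print(smooth(raw)[-1])  # average of the last 4 raw points: (12.0, 10.5)
```

A weighted average would follow the same shape, with per-point weights replacing the plain sums.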
FIGS. 5 a, 5 b and 5 c show examples of hardware touch signals such as micro-drag signals during the generation of a hold user interface event. A hold user interface event is generated by the user holding the finger on a touch screen, or holding a mouse pressed down, for at least a predetermined time. A finger presses on a fairly large area of the touch screen, and a mouse may make small movements when pressed down. These phenomena cause a degree of uncertainty in the generated low-level events. For example, the same hand and the same hardware can lead to different low-level event xy-patterns depending on how the user approaches the device. This is illustrated in FIG. 5 a, where a number of low-level touch down events 510-517 are generated near each other. - In
FIGS. 5 b and 5 c, two different sequences from the same low-level touch down and move events 510-517 are shown. In FIG. 5 b, the first event to be received is the event 510, and the second is the event 511. The sequence then continues through the remaining events, the move between consecutive points being indicated by the arrows. In FIG. 5 c, the sequence is different. It starts from the event 511, and continues to 512, 513, 515, 516, 514, and 517 and ends at 510. After the end point, the move continues towards the upper right corner. The resulting move vectors are quite different from those of FIG. 5 b. This causes a situation where any software that needed to process the driver events during Touch Down as such (without further processing) would behave more or less randomly, or at least in a hardware-dependent way. This would make the interpretation of gestures more difficult. The example embodiments of the invention may alleviate this newly recognized problem. Even user interface controls like buttons may benefit from a common implementation of the touch down user interface event, where a driver or the layer above the driver converts the set of low-level or hardware events to a single Touch Down event. A Hold event may be detected in a like manner as Touch down, thereby making it more reliable to detect and interpret gestures like Long Tap, Panning and Scrolling. - The low-level events may be generated e.g. by sampling with a certain time interval such as 10 milliseconds. When the first touch down event is received from the hardware, a timer may be started. During a predetermined time, the events from the hardware are followed, and if they stay within a certain area, a touch down event may be generated. On the other hand, if the events (touch down or drag) migrate outside the area, a touch down user interface event followed by a move user interface event are generated. When the first touch down event from the hardware is received, the area may be larger in order to allow a “sloppy touch”, wherein the user touches the input device carelessly.
The accepted area may later be reduced so that the move user interface event may be generated accurately. The area may be determined to be an ellipse, a circle, a square, a rectangle or any other shape. The area may be positioned according to the first touch down event or according to an average of the positions of a few events. If the touch down or move hardware events continue to be generated for a longer time, a hold user interface event may be generated.
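The timer-and-area logic of the last two paragraphs can be sketched as follows; the time window, hold timeout and area radii are illustrative assumptions, not values from the patent:

```python
def classify(hw_events, touch_window=0.1, hold_time=0.8, big_r=20.0, small_r=8.0):
    """Turn sampled (time, x, y) hardware events into user interface events.
    A timer starts at the first hardware touch; while the points stay within
    an acceptance area only a Touch Down is generated, a migration outside
    the area yields a Move, and a long enough stationary touch yields Hold."""
    (t0, x0, y0) = hw_events[0]
    ui = ["touch down"]
    for (t, x, y) in hw_events[1:]:
        # a larger acceptance area right after touch down tolerates a "sloppy touch"
        radius = big_r if t - t0 < touch_window else small_r
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > radius:
            ui.append("move")
            x0, y0, t0 = x, y, t        # movement restarts the hold timer
        elif t - t0 >= hold_time and ui[-1] != "hold":
            ui.append("hold")
    return ui

# a stationary touch with micro-drags, sampled every 10 ms for about a second
samples = [(i * 0.01, 100 + (i % 3), 100) for i in range(100)]
print(classify(samples))  # ['touch down', 'hold']
```

The micro-drags in the sample data stay inside the acceptance area, so they are absorbed into a single Touch Down followed by a Hold, mirroring the FIG. 5 discussion.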
-
FIG. 6 shows a block diagram of the levels of abstraction of a user interface system and a computer program product according to an example embodiment. The user interface hardware may generate hardware events, signals or driver events 610, for example Up, Down and Drag driver or low-level events. The implementation of these events may be hardware-dependent, or they may function more or less similarly on every hardware. The driver events 610 may be processed by the window manager (or the operating system) to generate processed low-level events 620. According to an example embodiment, the low-level events may be used to form user interface events 630 such as Touch Down, Release, Move and Hold, as explained earlier. These user interface events 630, with modifiers, may be forwarded to a gesture engine 640 that may operate to specify rules on how gesture recognizers 650 may take and lose control of events. Gesture recognizers 650 process user interface events 630 with their respective modifiers in order to recognize the beginning of a gesture and/or the whole gesture. The recognized gestures are then forwarded to applications 660 and the operating system to be used for user input. -
FIG. 7 a shows a diagram of a gesture recognition engine according to an example embodiment. User interface events 710 such as Touch, Release, Move and Hold are sent to the gesture recognizers. The user interface events 710 may comprise modifier information to give more data to the recognizers, e.g. the direction or speed of the movement. The gesture recognizers operate on the user interface events and the modifier information, and generate gesture signals as output when a gesture is recognized. This gesture signal and associated data on the specific gesture may then be sent to an application 730 for use as user input. The gesture engine and/or gesture recognizers may also be configured to "filter" the gestures that are forwarded to applications. Consider two applications, a Window manager and the Browser. In both cases, the gesture engine may be configured to capture the gestures that are meant to be handled by these applications, instead of the individual applications on the screen capturing the gestures. This may bring the advantage that, e.g. in a browser application, gestures like panning behave the same way even if the Web page contains a Flash area or is implemented as a Flash program entirely. -
FIG. 7 b shows a gesture recognition engine in operation according to an example embodiment. In the example, there are four gesture recognizers, for Flick Stop 720, Tap 721, Panning 722 and Flick 723. In the initial state, the Flick Stop recognizer 720 is disabled, since there is no Flick ongoing, and therefore stopping a Flick gesture is irrelevant. When a Touch user interface event 712 is sent to the recognizers, none of them may react to it, or they may react merely by sending an indication that a gesture may be starting. When the Touch 712 is followed by a Move user interface event 714, the gesture recognizer 721 is not activated, but the gesture recognizer 722 for Panning is activated, and the recognizer informs an application 730 that panning is to be started. The gesture recognizer 722 may also give information on the speed and direction of panning. After the gesture recognizer 722 recognizes Panning, the input user interface event 714 is consumed and does not reach other recognizers, i.e. the recognizer 723. Here, the user interface event is passed to the different recognizers in a certain order, but the event could also be passed to the recognizers simultaneously. - In case the user interface event Move is a
fast Move 715, the event will not be caught by the recognizer 722 for Panning. Instead, the recognizer 723 for the Flick gesture will be activated. As a result, the Panning recognizer 722 may send an indication that Panning is ending, and the Flick recognizer 723 may send information on the Flick gesture starting to the application 730, along with information on the speed and direction of the flick. Furthermore, since the gesture Flick is now ongoing, the recognizer 720 for Flick Stop is enabled. After the Move user interface event 715, a Release user interface event 716 is received when the user releases the press, and the Flick gesture remains active (and Flick Stop remains enabled). When the user now touches the screen, a Touch user interface event 717 is received. This event is captured by the Flick Stop recognizer 720, which notifies the application 730 that Flick is to be stopped. The recognizer 720 for Flick Stop also disables itself, since there is now no Flick gesture ongoing any more. - The gesture engine and/or the individual gesture recognizers may reside in an application, in a program library used by the applications, in the operating system, or in a module closely linked with the operating system, or in any combination of these and other meaningful locations. The gesture engine and the recognizers may also be distributed across several devices.
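The chained consumption of events described for FIGS. 7 a and 7 b can be sketched as follows. This is a hypothetical sketch, not the patent's implementation: the recognizer classes, the speed threshold, and the gesture names are assumptions chosen for illustration.

```python
# Sketch of a recognizer chain: each recognizer may consume a user
# interface event so that it never reaches recognizers later in the
# chain, as in the Panning/Flick example above.
class PanRecognizer:
    def feed(self, event):
        # A slow Move is treated as panning (threshold is illustrative).
        if event["kind"] == "Move" and event.get("speed", 0) < 50:
            return "pan-start"       # event consumed
        return None                   # pass to the next recognizer

class FlickRecognizer:
    def feed(self, event):
        # A fast Move, not caught by Panning, is treated as a flick.
        if event["kind"] == "Move" and event.get("speed", 0) >= 50:
            return "flick-start"
        return None

def dispatch(chain, event):
    """Pass the event through the chain until a recognizer consumes it."""
    for recognizer in chain:
        gesture = recognizer.feed(event)
        if gesture is not None:
            return gesture            # consumed; later recognizers never see it
    return None                       # no recognizer reacted (e.g. a lone Touch)
```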
- The gesture engine may be arranged to reside in or close to the operating system, and applications may register the gestures they wish to receive with the gesture engine. There may be gestures and gesture chains available in the gesture engine or in a library, or an application may provide and/or define them. An application or the operating system may also modify the operation of the gesture engine and the parameters (such as timers) of individual gestures. For example, the order of the gestures to be recognized in a gesture chain may be defined and/or altered, and gestures may be enabled and disabled. Also, the state of an application or the operating system or the device may cause a corresponding set or chain of gesture recognizers to be selected so that a change in the state of the application causes a change in how the gestures are recognized. The order of gesture recognizers may have an effect on the functionality of the gesture engine: e.g. flick stop may be first in a chain, and in single-touch operation, gestures that are location specific may come earlier than generic gestures. Also, multi-touch gestures may be recognized first, and the left-over events may then be used by the single-touch gesture recognizers.
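The registration and ordering described above can be sketched in code. This is an assumption-laden illustration: the `GestureEngine` class, its `register` method, and the priority values are invented for this sketch and are not the patent's API.

```python
# Hypothetical sketch of applications registering gestures with a gesture
# engine, with an explicit ordering: e.g. flick stop first, then
# location-specific gestures, then generic ones, as suggested in the text.
class GestureEngine:
    def __init__(self):
        self._chain = []              # (priority, gesture name) pairs

    def register(self, name, priority):
        # Lower priority value = earlier in the recognition chain.
        self._chain.append((priority, name))
        self._chain.sort(key=lambda pair: pair[0])

    def order(self):
        """Return gesture names in the order they will see events."""
        return [name for _, name in self._chain]

engine = GestureEngine()
engine.register("generic-pan", 30)
engine.register("flick-stop", 0)
engine.register("location-tap", 20)
```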
- When a recognizer attached to the gesture engine has recognized a gesture, information on the gesture needs to be sent to an appropriate application and/or the appropriate process. For this, it needs to be known which gesture was recognized, and where the recognition started, ended or took place. Using the location information and information on the gesture, the gesture engine may send the gesture information to the appropriate application or window. A gesture such as move or double tap may be initiated in one window and end in another window, in which case the gesture recognizer may, depending on the situation, send the gesture information to the first window, the second window or both windows. In the case there are multiple touch points on the screen, a gesture recognizer may also choose which event stream or which event streams to use. For this purpose, the gesture recognizer may be told how many input streams there are.
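The routing of recognized gestures by location can be sketched as below. The window model and the hit test are assumptions made for illustration; the patent text only requires that the gesture engine use location information to pick the appropriate window.

```python
# Hypothetical sketch of routing a recognized gesture to the window under
# the position where recognition started, as discussed above.
def route_gesture(gesture, start_pos, windows):
    """windows: list of (name, (x0, y0, x1, y1)) screen rectangles.
    Returns the name of the window under start_pos, or None."""
    x, y = start_pos
    for name, (x0, y0, x1, y1) in windows:
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None   # no window under the gesture; could fall back to the OS
```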
- Multiple simultaneous gestures may also be recognized. For example, a long tap gesture may be recognized simultaneously with a drag gesture. For multi-gesture recognition, the recognizers may be arranged to operate simultaneously, or so that they operate in a chain. For example, the multi-gesture recognition may happen after a multi-touch recognition and operate on the events not used by the multi-touch recognition. The gestures recognized in a multi-gesture may be wholly or partly simultaneous, or they may be sequential, or both. The gesture recognizers may be arranged to communicate with each other, or the gesture engine may detect that a multi-gesture was recognized. Alternatively, the application may use multiple gestures from the gesture engine as a multi-gesture.
-
FIGS. 8 a and 8 b show generation of a hold user interface event according to an example embodiment. In FIG. 8 a, the low-level events or driver events used as input for generating the hold event are explained. The arrow up 812 indicates a driver up or release event. The arrow down 813 indicates a driver down event or touch user interface event. The arrow right 814 indicates a drag or move user interface event (in any direction). The open arrow down 815 indicates the generated hold user interface event. Other events 816 are marked with a circle. - In
FIG. 8 b, the sequence begins with a driver down event 813. At this point, at least one timer may be started to detect the time the touch or down state lasts. While the user holds the touch or mouse down or drags it, a sequence of driver drag events is generated. These events may be a series of micro-drag events, as explained earlier. After a predetermined time has elapsed and this has been detected e.g. by a timer, a Touch user interface event is generated at 820. If the drag or move continues for a longer time and stays within a certain area or a certain distance from the first touch, a Hold user interface event is generated at 822. It needs to be noted that the Hold event may be generated without generating the Touch event. During the Hold event timing, there may be a sequence of driver drag, up and down events that are so small in distance or so close together in time that they do not generate a user interface event on their own, but instead contribute to the Hold user interface event. -
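The timer-and-area logic of FIGS. 8 a-8 b can be sketched as follows. The threshold values and the simplification that a Touch always precedes a Hold are assumptions of this sketch (the text notes a Hold may also occur without a Touch); a real implementation would run timers over a live event stream.

```python
# Sketch of hold-event generation: a Hold user interface event is produced
# when the press stays within a small area for longer than a threshold.
# All three constants are illustrative, not values from the patent.
TOUCH_TIME = 0.10   # seconds before a Touch UI event (assumed)
HOLD_TIME = 0.50    # seconds before a Hold UI event (assumed)
HOLD_AREA = 5       # max distance in pixels from the first down (assumed)

def classify_press(duration, max_distance):
    """Return the UI events generated by one press, oldest first."""
    events = []
    if duration >= TOUCH_TIME:
        events.append("Touch")
    # Micro-drags within HOLD_AREA do not cancel the hold; they merely
    # contribute to it, as described in the text.
    if duration >= HOLD_TIME and max_distance <= HOLD_AREA:
        events.append("Hold")
    return events
```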
FIG. 9 shows a method for gesture based user input according to an example embodiment. At stage 910, hardware events and signals such as down or drag are received. The events and signals may be filtered or otherwise processed at stage 920, for example by applying filtering as explained earlier. At stage 930, low-level driver data is received, for example indicative of hardware events. These low-level data or events may be formed into user interface events at stage 940, and the respective modifiers at stage 945, as has been explained earlier. In other words, the lower level signals and events are “collected” into user interface events and their modifiers. At stage 948, new events such as hold events may be formed from either low-level data or other user interface events, or both. It needs to be noted that the order of the above steps may vary; for example, filtering may happen later in the process and hold events may be formed earlier in the process. - The user interface events with respective modifiers may then be forwarded to gesture recognizers, possibly by or through a gesture engine. At
stage 970, the detected/recognized gestures are sent to applications and possibly the operating system so that they can be used for input. It needs to be noted that both the start of a gesture and the complete gesture may be forwarded to applications. This may have the benefit that applications may react earlier to gestures if they do not have to wait for the gesture to end. At stage 980, the gestures are then used for input by the applications. - As an example, gesture recognition may operate as follows. The gesture engine may receive all or essentially all user interface events in a given screen area, or even the entire screen. In other words, the operating system may give each application a window (screen area) and the application uses this area for user input and output. The user interface events may be given to the gesture engine so that the gesture recognizers are in a specific order, such that certain gestures will activate themselves first and others later, if there are user interface events left. Gestures that are to be recognized across the entire screen area may be placed before the ones that are more specific. In other words, the gesture engine is configured to receive the user interface events of a collection of windows. Using a browser application as an example, the gesture recognizers for gestures that are to be recognized by the browser (e.g. panning, pinch zooming, etc.) receive user interface events before e.g. Flash applications, even if the user interface events originated in the Flash window. Another example is the double-tap; in the case of the browser, the second tap may not fall within the same window where the first tap originated. Since the gesture engine receives all taps, it may recognize the double tap in this case, too. Yet another example is the drag; the movement may extend beyond the original window where the drag started. 
Since the gesture engine receives the user interface events from a plurality of windows or even the whole user interface area, it may be able to detect gestures spanning the window area of multiple applications.
-
FIGS. 10 a-10 g show examples of state and event diagrams for producing user interface events according to an example embodiment. - It needs to be understood that different implementations of the states and their functionality may exist, and the different functionality may reside in various states. In this example embodiment, the different states may be described as follows. The Init state is the state where the state machine resides before anything has happened, and to which it returns after completing all operations emanating from a user input. The individual input streams start from the Init state. The Dispatch state is a general state of the state machine if no touch, hold or suppress timers are running. The InTouchTime state is a state where the state machine resides after the user has touched the input device, and is ended by lifting the touch, moving away from the touch area or by holding for a long enough time in place. The state also filters some accidental up and down events away. A purpose of the state is to allow settling of the touch input before generating a user interface event (the fingertip may be moving slightly, a stylus may jump a bit or other similar micro movement may happen). The InTouchArea state is a state that filters away events that stay in the touch area (events from micro movements). The InHoldTime_U state is a state that monitors the holding down of the touch, and produces a HOLD event if the hold lasts for a long enough time. A purpose of this state is to filter away micro movements to see if a Hold user interface event is to be generated. The InHoldTime_D state is used for handling up-down events during hold. The Suppress_D state is used to filter accidental up and down sequences away. The Suppress_D state functionality may be advantageous in the context of resistive touch panels where such accidental up/down events may easily happen.
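The states just listed can be written down as a minimal skeleton. This sketch covers only the state names from the text and one decision path of FIG. 10 a; the function name and timer parameters are assumptions for illustration.

```python
# Skeleton of the state machine of FIGS. 10a-10g: the seven named states,
# plus the next-state decision after a down event in Init (FIG. 10a).
from enum import Enum, auto

class State(Enum):
    INIT = auto()
    DISPATCH = auto()
    IN_TOUCH_TIME = auto()
    IN_TOUCH_AREA = auto()
    IN_HOLD_TIME_U = auto()
    IN_HOLD_TIME_D = auto()
    SUPPRESS_D = auto()

def on_down_in_init(hold_timer, touch_area):
    """After a consumed down event in Init, choose the next state:
    a running hold timer takes precedence, then a touch area check."""
    if hold_timer > 0:
        return State.IN_HOLD_TIME_U
    if touch_area > 0:
        return State.IN_TOUCH_AREA
    return State.DISPATCH
```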
- In the example of
FIG. 10 a, the state machine is in the Init state. When a touch down hardware event is received, the event is consumed (i.e. not passed further or allowed to be used later) and timers are initialized (consumption of an event is marked with a box with a dotted circumference as illustrated in FIG. 10 a). If no timers are in use, a TOUCH user interface event is produced (production of an event is marked with a box having a horizontal line on top as illustrated in FIG. 10 a). After this, if the Hold timer >0, the state machine goes into the InHoldTime_U state (a state transition is marked with a box having a vertical line on the left side). If the Touch Area >0, the state machine goes into the InTouchArea state to determine whether the touch stays inside the original area. Otherwise, the state machine goes into the Dispatch state. Events other than down are erroneous and may be ignored. - In the example of
FIG. 10 b, the state machine is in the Dispatch state. If an up hardware event is received, the event is consumed. For a capacitive touch device, a RELEASE user interface event is produced, and for a resistive touch device, a RELEASE is produced if there is no suppress timer active. After producing the RELEASE, the state machine goes into the Init state. For a resistive touch device, if there is an active suppress timer, the timer is initialized and the state machine goes into the Suppress_D state. If a drag hardware event is received, a MOVE user interface event is produced. If the criteria for a HOLD user interface event are not matched, the state machine goes into the Dispatch state. If the criteria for a HOLD are matched, the hold timer is initialized and the state machine goes into the InHoldTime_U state. - In the example of
FIG. 10 c, the filtering of hardware events in the InTouchTime state is shown. If a drag hardware event is received inside the (initial) touch area, the event is consumed and the state machine stays in the InTouchTime state. If a drag event, or an up event on a capacitive device, is received outside the predetermined touch area, all timers are cleared and a TOUCH user interface event is produced. The state machine then goes into the Dispatch state. If a TOUCH timeout event or an up event from a resistive touch device is received, the TOUCH timer is cleared and a TOUCH event is produced. If the HOLD timer >0, the state machine goes into the InHoldTime_U state. If there is no active HOLD timer and a TOUCH timeout was received, the state machine goes into the InTouchArea state. If a resistive up event was received and there is no active HOLD timer, the state machine goes into the Dispatch state. The state machine of FIG. 10 c may have the advantage of eliminating sporadic up/down events during HOLD detection. - In the example of
FIG. 10 d, the filtering of hardware events in the InTouchArea state is shown. If a drag hardware event is received inside the touch area, the event is consumed and the state machine stays in the InTouchArea state. In other words, if drag events that are sufficiently close to the original down event are received, the state machine filters out these events as micro-drag events, as described earlier. If a drag event is received outside the area, or an up event is received, the state machine goes into the Dispatch state. - In the example of
FIG. 10 e, the filtering of accidental up and down hardware events in the Suppress_D state is shown. If a down hardware event is received, the suppress timer is cleared and the event is renamed as a drag hardware event. The state machine then goes into the Dispatch state. If a suppress timeout event is received, the suppress timer is cleared and a RELEASE user interface event is produced. The state machine then goes into the Init state. In other words, the state machine replaces an accidental up event followed by a down event with a drag event. RELEASE is produced if no down event is detected during a timeout. The Suppress_D state may be used for resistive input devices. - In the example of
FIG. 10 f, the filtering of hardware events during hold in the InHoldTime_U state is shown. If a down hardware event is received, the state machine goes into the InHoldTime_D state. If a drag event is received inside the hold area, the event is consumed and the state machine stays in the InHoldTime_U state. If a drag event outside the hold area or a capacitive up event is received, the hold timer is cleared and the state machine goes into the Dispatch state. If an up event from a resistive input device is received, the event is consumed, the suppress timer is initialized, and the state machine goes into the InHoldTime_D state. If a HOLD timeout is received, a HOLD user interface event is produced, and the HOLD timer is restarted. The state machine stays in the InHoldTime_U state. In other words, a HOLD user interface event is produced when the HOLD timer produces a timeout, and HOLD detection is aborted if a drag event is received outside the hold area, or a valid up event is received. - In the example of
FIG. 10 g, the filtering of hardware events during hold in the InHoldTime_D state is shown. If an up hardware event is received, the state machine goes into the InHoldTime_U state. If a timeout is received, a RELEASE user interface event is produced, timers are cleared and the state machine goes into the Init state. If a down hardware event is received, the event is consumed, and the suppress timer is cleared. If the event was received inside the hold area, the state machine goes into the InHoldTime_U state. If the event was received outside the hold area, a MOVE user interface event is produced, the hold timer is cleared and the state machine goes into the Dispatch state. In other words, the InHoldTime_D state is entered if an up event was previously received (in InHoldTime_U). The state waits for a down event for a specified time, and if a timeout is produced, the state produces a RELEASE user interface event. If a down event is received, the state machine returns to the previous state if the event was received inside the hold area, and if the event was received outside the hold area, a MOVE event is produced. - The invention may provide advantages through the abstraction of the hardware or low-level events into higher-level user interface events.
- For example, a resistive touch screen may produce phantom events when the user changes direction of the movement or stops the movement. According to an example embodiment, such low-level phantom events may not reach the gesture recognizers, since the system first generates higher-level user interface events from the low-level events. In this process of generating the user interface events, the phantom events are filtered out through the use of timers and other means as explained earlier. Along the same lines, the higher-level user interface events may be simpler to use in programming applications for the platform where embodiments of the invention are used. The invention may also allow simpler implementation of multi-gesture recognition. Furthermore, switching from one gesture to another may be simpler to detect. For example, the generation of a Hold user interface event may make it unnecessary for the recognizer of Panning or other gestures to detect the end of the movement, since another gesture recognizer takes care of that. Since the user interface events are generated consistently from the low-level events, the invention may also provide predictability and ease of testing for applications. Generally, the different embodiments may simplify the programming and use of applications on a platform where the invention is applied.
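The suppression of accidental up/down pairs on resistive panels (the Suppress_D behaviour of FIG. 10 e) can be sketched over a recorded event stream. This is an offline illustration under stated assumptions: the timeout value is invented, and a real implementation would use a running suppress timer rather than comparing timestamps after the fact.

```python
# Sketch of Suppress_D for resistive panels: an accidental up immediately
# followed by a down is replaced by a drag; a RELEASE results only when no
# down arrives before the suppress timeout. Timeout value is illustrative.
def suppress(events, timeout_between=0.05):
    """events: list of (kind, timestamp) tuples. Returns the cleaned stream."""
    out = []
    i = 0
    while i < len(events):
        kind, t = events[i]
        nxt = events[i + 1] if i + 1 < len(events) else None
        if kind == "up" and nxt and nxt[0] == "down" and nxt[1] - t < timeout_between:
            out.append(("drag", nxt[1]))   # rename the down as a drag event
            i += 2                          # the accidental up/down pair is consumed
        else:
            out.append((kind, t))
            i += 1
    return out
```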
- The various embodiments of the invention can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. For example, a terminal device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the terminal device to carry out the features of an embodiment. Yet further, a network device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
- It is obvious that the present invention is not limited solely to the above-presented embodiments, but it can be modified within the scope of the appended claims.
Claims (23)
1. A method for receiving user input, comprising:
receiving a low-level event from a user interface input device,
forming a user interface event using said low-level event,
forming information on a modifier for said user interface event,
forming gesture information from said user interface event and said modifier, and
using said gesture information as user input to an apparatus.
2. A method according to claim 1 , further comprising:
forwarding said user interface event and said modifier to a gesture recognizer, and
forming said gesture information by said gesture recognizer.
3. A method according to claim 1 , further comprising:
receiving a plurality of user interface events from a user interface input device,
forwarding said user interface events to a plurality of gesture recognizers, and
forming at least two gestures by said gesture recognizers.
4. A method according to claim 1 , wherein the user interface event is one of the group of touch, release, move and hold.
5. A method according to claim 1 , further comprising:
forming said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information.
6. A method according to claim 1 , further comprising:
forming a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and
using said hold event in forming said gesture information.
7. A method according to claim 1 , further comprising:
receiving at least two distinct user interface events from a multi-touch touch input device, and
using said at least two distinct user interface events for forming a multi-touch gesture.
8. A method according to claim 1 , wherein said user interface input device comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit.
9. A method according to claim 1 , wherein said user interface event is one of the group of touch down, release, hold and move.
10. An apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least:
receive a low-level event from a user interface input module,
form a user interface event using said low-level event,
form information on a modifier for said user interface event,
form gesture information from said user interface event and said modifier, and
use said gesture information as user input to an apparatus.
11. An apparatus according to claim 10 , further comprising computer program code configured to, with the at least one processor, cause the apparatus to at least:
forward said user interface event and said modifier to a gesture recognizer, and
form said gesture information by said gesture recognizer.
12. An apparatus according to claim 10 , further comprising computer program code configured to, with the processor, cause the apparatus to at least:
receive a plurality of user interface events from a user interface input device,
forward said user interface events to a plurality of gesture recognizers, and
form at least two gestures by said gesture recognizers.
13. An apparatus according to claim 10 , wherein the user interface event is one of the group of touch, release, move and hold.
14. An apparatus according to claim 10 , further comprising computer program code configured to, with the processor, cause the apparatus to at least:
form said modifier from at least one of the group of time information, area information, direction information, speed information, and pressure information.
15. An apparatus according to claim 10 , further comprising computer program code configured to, with the processor, cause the apparatus to at least:
form a hold user interface event in response to a touch input or key press input being held in place for a predetermined time, and
use said hold event in forming said gesture information.
16. An apparatus according to claim 10 , further comprising computer program code configured to, with the processor, cause the apparatus to at least:
receive at least two distinct user interface events from a multi-touch touch input device, and
use said at least two distinct user interface events for forming a multi-touch gesture.
17. An apparatus according to claim 10 , wherein the user interface module comprises at least one of the group of a touch screen, a touch pad, a pen, a mouse, a haptic input device, a data glove and a data suit.
18. An apparatus according to claim 10 , wherein the apparatus is one of a computer, portable communication device, a home appliance, an entertainment device such as a television, a transportation device such as a car, ship or an aircraft, or an intelligent building.
19. A system comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the system to at least:
receive a low-level event from a user interface input module,
form a user interface event using said low-level event,
form information on a modifier for said user interface event,
form gesture information from said user interface event and said modifier, and
use said gesture information as user input to an apparatus.
20. A system according to claim 19 , wherein the system comprises at least two apparatuses arranged in communication connection to each other, and wherein a first apparatus of said at least two apparatuses is arranged to receive said low-level event and a second apparatus of said at least two apparatuses is arranged to form said gesture information in response to receiving a user interface event from said first apparatus.
21. An apparatus comprising processing means, memory means, and
means for receiving a low-level event from a user interface input means,
means for forming a user interface event using said low-level event,
means for forming information on a modifier for said user interface event,
means for forming gesture information from said user interface event and said modifier, and
means for using said gesture information as user input to an apparatus.
22. A computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising:
a computer program code section for receiving a low-level event from a user interface input device,
a computer program code section for forming a user interface event using said low-level event,
a computer program code section for forming information on a modifier for said user interface event,
a computer program code section for forming gesture information from said user interface event and said modifier, and
a computer program code section for using said gesture information as user input to an apparatus.
23. A computer program product according to claim 22 , wherein the computer program product is an operating system.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/FI2010/050445 WO2011151501A1 (en) | 2010-06-01 | 2010-06-01 | A method, a device and a system for receiving user input |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130212541A1 true US20130212541A1 (en) | 2013-08-15 |
Family
ID=45066227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/701,367 Abandoned US20130212541A1 (en) | 2010-06-01 | 2010-06-01 | Method, a device and a system for receiving user input |
Country Status (5)
Country | Link |
---|---|
US (1) | US20130212541A1 (en) |
EP (1) | EP2577436A4 (en) |
CN (1) | CN102939578A (en) |
AP (1) | AP2012006600A0 (en) |
WO (1) | WO2011151501A1 (en) |
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120054671A1 (en) * | 2010-08-30 | 2012-03-01 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US20120053836A1 (en) * | 2010-08-25 | 2012-03-01 | Elektrobit Automotive Gmbh | Technique for screen-based route manipulation |
US20130201161A1 (en) * | 2012-02-03 | 2013-08-08 | John E. Dolan | Methods, Systems and Apparatus for Digital-Marking-Surface Content-Unit Manipulation |
US20140071171A1 (en) * | 2012-09-12 | 2014-03-13 | Alcatel-Lucent Usa Inc. | Pinch-and-zoom, zoom-and-pinch gesture control |
CN103702152A (en) * | 2013-11-29 | 2014-04-02 | 康佳集团股份有限公司 | Method and system for touch screen sharing of set top box and mobile terminal |
US20140324755A1 (en) * | 2013-04-26 | 2014-10-30 | Samsung Electronics Co., Ltd. | Information processing apparatus and control method thereof |
US20140354583A1 (en) * | 2013-05-30 | 2014-12-04 | Sony Corporation | Method and apparatus for outputting display data based on a touch operation on a touch panel |
US20160266775A1 (en) * | 2015-03-12 | 2016-09-15 | Naver Corporation | Interface providing systems and methods for enabling efficient screen control |
EP3103526A1 (en) * | 2015-06-12 | 2016-12-14 | Nintendo Co., Ltd. | Information processing apparatus, information processing system, information processing method, and information processing program |
WO2017027632A1 (en) * | 2015-08-10 | 2017-02-16 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US9602729B2 (en) | 2015-06-07 | 2017-03-21 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9612741B2 (en) | 2012-05-09 | 2017-04-04 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US9619076B2 (en) | 2012-05-09 | 2017-04-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US9632664B2 (en) | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9639184B2 (en) | 2015-03-19 | 2017-05-02 | Apple Inc. | Touch input cursor manipulation |
US9645732B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US9753639B2 (en) | 2012-05-09 | 2017-09-05 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US9778771B2 (en) | 2012-12-29 | 2017-10-03 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US9785305B2 (en) | 2015-03-19 | 2017-10-10 | Apple Inc. | Touch input cursor manipulation |
US9830048B2 (en) | 2015-06-07 | 2017-11-28 | Apple Inc. | Devices and methods for processing touch inputs with instructions in a web page |
US9880735B2 (en) | 2015-08-10 | 2018-01-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9886184B2 (en) | 2012-05-09 | 2018-02-06 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US9891811B2 (en) | 2015-06-07 | 2018-02-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US20180111711A1 (en) * | 2015-05-26 | 2018-04-26 | Ishida Co., Ltd. | Production line configuration apparatus |
US9959025B2 (en) | 2012-12-29 | 2018-05-01 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US20180121000A1 (en) * | 2016-10-27 | 2018-05-03 | Microsoft Technology Licensing, Llc | Using pressure to direct user input |
US9990121B2 (en) | 2012-05-09 | 2018-06-05 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US9990107B2 (en) | 2015-03-08 | 2018-06-05 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US9996231B2 (en) | 2012-05-09 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10037138B2 (en) | 2012-12-29 | 2018-07-31 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US10042542B2 (en) | 2012-05-09 | 2018-08-07 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US10048757B2 (en) | 2015-03-08 | 2018-08-14 | Apple Inc. | Devices and methods for controlling media presentation |
US10067653B2 (en) | 2015-04-01 | 2018-09-04 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10073615B2 (en) | 2012-05-09 | 2018-09-11 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10078442B2 (en) | 2012-12-29 | 2018-09-18 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity threshold |
US10095396B2 (en) | 2015-03-08 | 2018-10-09 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10095391B2 (en) | 2012-05-09 | 2018-10-09 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US10126930B2 (en) | 2012-05-09 | 2018-11-13 | Apple Inc. | Device, method, and graphical user interface for scrolling nested regions |
US10175864B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity |
US10175757B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface |
US10200598B2 (en) | 2015-06-07 | 2019-02-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10235035B2 (en) | 2015-08-10 | 2019-03-19 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US10248308B2 (en) | 2015-08-10 | 2019-04-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures |
US10275087B1 (en) | 2011-08-05 | 2019-04-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10346030B2 (en) | 2015-06-07 | 2019-07-09 | Apple Inc. | Devices and methods for navigating between user interfaces |
US20190265882A1 (en) * | 2016-11-10 | 2019-08-29 | Cygames, Inc. | Information processing program, information processing method, and information processing device |
US10416800B2 (en) | 2015-08-10 | 2019-09-17 | Apple Inc. | Devices, methods, and graphical user interfaces for adjusting user interface objects |
US10437333B2 (en) | 2012-12-29 | 2019-10-08 | Apple Inc. | Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture |
US10496260B2 (en) | 2012-05-09 | 2019-12-03 | Apple Inc. | Device, method, and graphical user interface for pressure-based alteration of controls in a user interface |
US20190369864A1 (en) * | 2018-06-03 | 2019-12-05 | Apple Inc. | Devices and Methods for Processing Inputs Using Gesture Recognizers |
US10620781B2 (en) | 2012-12-29 | 2020-04-14 | Apple Inc. | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics |
US20200151226A1 (en) * | 2018-11-14 | 2020-05-14 | Wix.Com Ltd. | System and method for creation and handling of configurable applications for website building systems |
US10664652B2 (en) | 2013-06-15 | 2020-05-26 | Microsoft Technology Licensing, Llc | Seamless grid and canvas integration in a spreadsheet application |
US10732825B2 (en) * | 2011-01-07 | 2020-08-04 | Microsoft Technology Licensing, Llc | Natural input for spreadsheet actions |
US11061558B2 (en) | 2017-09-11 | 2021-07-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Touch operation response method and device |
US11086442B2 (en) | 2017-09-11 | 2021-08-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for responding to touch operation, mobile terminal, and storage medium |
US20210303473A1 (en) * | 2020-03-27 | 2021-09-30 | Datto, Inc. | Method and system of copying data to a clipboard |
US11157691B2 (en) | 2013-06-14 | 2021-10-26 | Microsoft Technology Licensing, Llc | Natural quick function gestures |
US11194425B2 (en) * | 2017-09-11 | 2021-12-07 | Shenzhen Heytap Technology Corp., Ltd. | Method for responding to touch operation, mobile terminal, and storage medium |
US11269499B2 (en) * | 2019-12-10 | 2022-03-08 | Canon Kabushiki Kaisha | Electronic apparatus and control method for fine item movement adjustment |
US20220357842A1 (en) * | 2019-07-03 | 2022-11-10 | Zte Corporation | Gesture recognition method and device, and computer-readable storage medium |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102662576B (en) * | 2012-03-29 | 2015-04-29 | 华为终端有限公司 | Method and device for sending out information based on touch |
US9886794B2 (en) | 2012-06-05 | 2018-02-06 | Apple Inc. | Problem reporting in maps |
US8965696B2 (en) | 2012-06-05 | 2015-02-24 | Apple Inc. | Providing navigation instructions while operating navigation application in background |
US10156455B2 (en) | 2012-06-05 | 2018-12-18 | Apple Inc. | Context-aware voice guidance |
US9182243B2 (en) * | 2012-06-05 | 2015-11-10 | Apple Inc. | Navigation application |
US9482296B2 (en) | 2012-06-05 | 2016-11-01 | Apple Inc. | Rendering road signs during navigation |
US9159153B2 (en) | 2012-06-05 | 2015-10-13 | Apple Inc. | Method, system and apparatus for providing visual feedback of a map view change |
US9418672B2 (en) | 2012-06-05 | 2016-08-16 | Apple Inc. | Navigation application with adaptive instruction text |
US9997069B2 (en) | 2012-06-05 | 2018-06-12 | Apple Inc. | Context-aware voice guidance |
US8983778B2 (en) | 2012-06-05 | 2015-03-17 | Apple Inc. | Generation of intersection information by a mapping service |
US10176633B2 (en) | 2012-06-05 | 2019-01-08 | Apple Inc. | Integrated mapping and navigation application |
US8880336B2 (en) | 2012-06-05 | 2014-11-04 | Apple Inc. | 3D navigation |
CN103529976B (en) | 2012-07-02 | 2017-09-12 | 英特尔公司 | Interference in gesture recognition system is eliminated |
US9785338B2 (en) * | 2012-07-02 | 2017-10-10 | Mosaiqq, Inc. | System and method for providing a user interaction interface using a multi-touch gesture recognition engine |
CN102830818A (en) * | 2012-08-17 | 2012-12-19 | 深圳市茁壮网络股份有限公司 | Method, device and system for signal processing |
JP5700020B2 (en) * | 2012-10-10 | 2015-04-15 | コニカミノルタ株式会社 | Image processing apparatus, program, and operation event determination method |
DE102013216746A1 (en) * | 2013-08-23 | 2015-02-26 | Robert Bosch Gmbh | Method and visualization device for gesture-based data retrieval and data visualization for an automation system |
KR102508833B1 (en) * | 2015-08-05 | 2023-03-10 | 삼성전자주식회사 | Electronic apparatus and text input method for the electronic apparatus |
CN112000247A (en) * | 2020-08-27 | 2020-11-27 | 努比亚技术有限公司 | Touch signal processing method and device and computer readable storage medium |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5798758A (en) * | 1995-04-14 | 1998-08-25 | Canon Kabushiki Kaisha | Gesture-based data processing method and apparatus |
US5809267A (en) * | 1993-12-30 | 1998-09-15 | Xerox Corporation | Apparatus and method for executing multiple-concatenated command gestures in a gesture based input system |
US5812697A (en) * | 1994-06-10 | 1998-09-22 | Nippon Steel Corporation | Method and apparatus for recognizing hand-written characters using a weighting dictionary |
US6249606B1 (en) * | 1998-02-19 | 2001-06-19 | Mindmaker, Inc. | Method and system for gesture category recognition and training using a feature vector |
US6304674B1 (en) * | 1998-08-03 | 2001-10-16 | Xerox Corporation | System and method for recognizing user-specified pen-based gestures using hidden markov models |
US6389586B1 (en) * | 1998-01-05 | 2002-05-14 | Synplicity, Inc. | Method and apparatus for invalid state detection |
US20030046658A1 (en) * | 2001-05-02 | 2003-03-06 | Vijaya Raghavan | Event-based temporal logic |
US20030055644A1 (en) * | 2001-08-17 | 2003-03-20 | At&T Corp. | Systems and methods for aggregating related inputs using finite-state devices and extracting meaning from multimodal inputs using aggregation |
US7000200B1 (en) * | 2000-09-15 | 2006-02-14 | Intel Corporation | Gesture recognition system recognizing gestures within a specified timing |
US20060224924A1 (en) * | 2005-03-31 | 2006-10-05 | Microsoft Corporation | Generating finite state machines for software systems with asynchronous callbacks |
US20060235548A1 (en) * | 2005-04-19 | 2006-10-19 | The Mathworks, Inc. | Graphical state machine based programming for a graphical user interface |
US20070177803A1 (en) * | 2006-01-30 | 2007-08-02 | Apple Computer, Inc | Multi-touch gesture dictionary |
US20090006292A1 (en) * | 2007-06-27 | 2009-01-01 | Microsoft Corporation | Recognizing input gestures |
US20090051671A1 (en) * | 2007-08-22 | 2009-02-26 | Jason Antony Konstas | Recognizing the motion of two or more touches on a touch-sensing surface |
US20090273571A1 (en) * | 2008-05-01 | 2009-11-05 | Alan Bowens | Gesture Recognition |
US20100005203A1 (en) * | 2008-07-07 | 2010-01-07 | International Business Machines Corporation | Method of Merging and Incremental Construction of Minimal Finite State Machines |
US20100031202A1 (en) * | 2008-08-04 | 2010-02-04 | Microsoft Corporation | User-defined gesture set for surface computing |
US20110066984A1 (en) * | 2009-09-16 | 2011-03-17 | Google Inc. | Gesture Recognition on Computing Device |
US20120131513A1 (en) * | 2010-11-19 | 2012-05-24 | Microsoft Corporation | Gesture Recognition Training |
US20120225719A1 (en) * | 2011-03-04 | 2012-09-06 | Microsoft Corporation | Gesture Detection and Recognition |
US8436821B1 (en) * | 2009-11-20 | 2013-05-07 | Adobe Systems Incorporated | System and method for developing and classifying touch gestures |
US20130141375A1 (en) * | 2011-12-06 | 2013-06-06 | Lester F. Ludwig | Gesteme (gesture primitive) recognition for advanced touch user interfaces |
US9218064B1 (en) * | 2012-09-18 | 2015-12-22 | Google Inc. | Authoring multi-finger interactions through demonstration and composition |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63172325A (en) * | 1987-01-10 | 1988-07-16 | Pioneer Electronic Corp | Touch panel controller |
US5612719A (en) * | 1992-12-03 | 1997-03-18 | Apple Computer, Inc. | Gesture sensitive buttons for graphical user interfaces |
JP2001195187A (en) * | 2000-01-11 | 2001-07-19 | Sharp Corp | Information processor |
US7030861B1 (en) * | 2001-02-10 | 2006-04-18 | Wayne Carl Westerman | System and method for packing multi-touch gestures onto a hand |
KR100720335B1 (en) * | 2006-12-20 | 2007-05-23 | 최경순 | Apparatus for inputting a text corresponding to relative coordinates values generated by movement of a touch position and method thereof |
US9311528B2 (en) * | 2007-01-03 | 2016-04-12 | Apple Inc. | Gesture learning |
US20080165148A1 (en) * | 2007-01-07 | 2008-07-10 | Richard Williamson | Portable Electronic Device, Method, and Graphical User Interface for Displaying Inline Multimedia Content |
WO2009018314A2 (en) * | 2007-07-30 | 2009-02-05 | Perceptive Pixel, Inc. | Graphical user interface for large-scale, multi-user, multi-touch systems |
US8390577B2 (en) * | 2008-07-25 | 2013-03-05 | Intuilab | Continuous recognition of multi-touch gestures |
US8264381B2 (en) * | 2008-08-22 | 2012-09-11 | Microsoft Corporation | Continuous automatic key control |
US20100321319A1 (en) * | 2009-06-17 | 2010-12-23 | Hefti Thierry | Method for displaying and updating a view of a graphical scene in response to commands via a touch-sensitive device |
2010
- 2010-06-01 EP EP10852457.0A patent/EP2577436A4/en not_active Withdrawn
- 2010-06-01 CN CN2010800672009A patent/CN102939578A/en active Pending
- 2010-06-01 AP AP2012006600A patent/AP2012006600A0/en unknown
- 2010-06-01 US US13/701,367 patent/US20130212541A1/en not_active Abandoned
- 2010-06-01 WO PCT/FI2010/050445 patent/WO2011151501A1/en active Application Filing
Non-Patent Citations (7)
Title |
---|
Ashbrook et al., "MAGIC: A Motion Gesture Design Tool," April 10-15, 2010, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2159-2168, http://dl.acm.org/citation.cfm?id=1753653 *
Cryan, "Probabilistic Finite State Machines and Hidden Markov Models," November 2004, http://www.inf.ed.ac.uk/teaching/courses/inf1/cl/notes/Comp7.pdf * |
Dey et al., "a CAPpella: Programming by Demonstration of Context-Aware Applications," 2004, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 33-40, http://dl.acm.org/citation.cfm?id=985697 * |
Hartmann et al., "Authoring Sensor-based Interactions by Demonstration with Direct Manipulation and Pattern Recognition," 2007, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 145-154, http://dl.acm.org/citation.cfm?id=1240646 *
Rubine, "Specifying Gestures by Example," Computer Graphics, Vol. 25, No. 4, July 1991, pages 329-337, http://dl.acm.org/citation.cfm?id=122753 * |
Rubine, "The Automatic Recognition of Gestures," CMU-CS-91-202, December 1991, Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at Carnegie Mellon University, 284 pp., https://www.cs.cmu.edu/~music/papers/dean_rubine_thesis.pdf * |
Wright, "Finite State Machines," 2005, http://www4.ncsu.edu/~drwrigh3/docs/courses/csc216/fsm-notes.pdf * |
Cited By (148)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9020759B2 (en) * | 2010-08-25 | 2015-04-28 | Elektrobit Automotive Gmbh | Technique for screen-based route manipulation |
US20120053836A1 (en) * | 2010-08-25 | 2012-03-01 | Elektrobit Automotive Gmbh | Technique for screen-based route manipulation |
US9639186B2 (en) | 2010-08-30 | 2017-05-02 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US20120054671A1 (en) * | 2010-08-30 | 2012-03-01 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US9465457B2 (en) * | 2010-08-30 | 2016-10-11 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US10732825B2 (en) * | 2011-01-07 | 2020-08-04 | Microsoft Technology Licensing, Llc | Natural input for spreadsheet actions |
US10664097B1 (en) | 2011-08-05 | 2020-05-26 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10656752B1 (en) | 2011-08-05 | 2020-05-19 | P4tents1, LLC | Gesture-equipped touch screen system, method, and computer program product |
US10386960B1 (en) | 2011-08-05 | 2019-08-20 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10275087B1 (en) | 2011-08-05 | 2019-04-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10540039B1 (en) | 2011-08-05 | 2020-01-21 | P4tents1, LLC | Devices and methods for navigating between user interface |
US10649571B1 (en) | 2011-08-05 | 2020-05-12 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10365758B1 (en) | 2011-08-05 | 2019-07-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10338736B1 (en) | 2011-08-05 | 2019-07-02 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10345961B1 (en) | 2011-08-05 | 2019-07-09 | P4tents1, LLC | Devices and methods for navigating between user interfaces |
US20130201161A1 (en) * | 2012-02-03 | 2013-08-08 | John E. Dolan | Methods, Systems and Apparatus for Digital-Marking-Surface Content-Unit Manipulation |
US9823839B2 (en) | 2012-05-09 | 2017-11-21 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US10042542B2 (en) | 2012-05-09 | 2018-08-07 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US9612741B2 (en) | 2012-05-09 | 2017-04-04 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US11221675B2 (en) * | 2012-05-09 | 2022-01-11 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US11314407B2 (en) | 2012-05-09 | 2022-04-26 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US20220129076A1 (en) * | 2012-05-09 | 2022-04-28 | Apple Inc. | Device, Method, and Graphical User Interface for Providing Tactile Feedback for Operations Performed in a User Interface |
US11354033B2 (en) | 2012-05-09 | 2022-06-07 | Apple Inc. | Device, method, and graphical user interface for managing icons in a user interface region |
US9753639B2 (en) | 2012-05-09 | 2017-09-05 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US11947724B2 (en) * | 2012-05-09 | 2024-04-02 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US11068153B2 (en) | 2012-05-09 | 2021-07-20 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10114546B2 (en) | 2012-05-09 | 2018-10-30 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US11023116B2 (en) | 2012-05-09 | 2021-06-01 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US11010027B2 (en) | 2012-05-09 | 2021-05-18 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10592041B2 (en) | 2012-05-09 | 2020-03-17 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10996788B2 (en) | 2012-05-09 | 2021-05-04 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US9886184B2 (en) | 2012-05-09 | 2018-02-06 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10969945B2 (en) | 2012-05-09 | 2021-04-06 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US10942570B2 (en) | 2012-05-09 | 2021-03-09 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US10496260B2 (en) | 2012-05-09 | 2019-12-03 | Apple Inc. | Device, method, and graphical user interface for pressure-based alteration of controls in a user interface |
US10126930B2 (en) | 2012-05-09 | 2018-11-13 | Apple Inc. | Device, method, and graphical user interface for scrolling nested regions |
US10908808B2 (en) | 2012-05-09 | 2021-02-02 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US10775994B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US9971499B2 (en) | 2012-05-09 | 2018-05-15 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US9990121B2 (en) | 2012-05-09 | 2018-06-05 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US10775999B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US9996231B2 (en) | 2012-05-09 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10191627B2 (en) | 2012-05-09 | 2019-01-29 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10481690B2 (en) | 2012-05-09 | 2019-11-19 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for media adjustment operations performed in a user interface |
US9619076B2 (en) | 2012-05-09 | 2017-04-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10175757B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface |
US10175864B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity |
US10884591B2 (en) | 2012-05-09 | 2021-01-05 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects |
US10073615B2 (en) | 2012-05-09 | 2018-09-11 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10168826B2 (en) | 2012-05-09 | 2019-01-01 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10782871B2 (en) | 2012-05-09 | 2020-09-22 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10095391B2 (en) | 2012-05-09 | 2018-10-09 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US20140071171A1 (en) * | 2012-09-12 | 2014-03-13 | Alcatel-Lucent Usa Inc. | Pinch-and-zoom, zoom-and-pinch gesture control |
US10037138B2 (en) | 2012-12-29 | 2018-07-31 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US9996233B2 (en) | 2012-12-29 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US10437333B2 (en) | 2012-12-29 | 2019-10-08 | Apple Inc. | Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture |
US10915243B2 (en) | 2012-12-29 | 2021-02-09 | Apple Inc. | Device, method, and graphical user interface for adjusting content selection |
US10078442B2 (en) | 2012-12-29 | 2018-09-18 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity threshold |
US10175879B2 (en) | 2012-12-29 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for zooming a user interface while performing a drag operation |
US9778771B2 (en) | 2012-12-29 | 2017-10-03 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US9959025B2 (en) | 2012-12-29 | 2018-05-01 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US10101887B2 (en) | 2012-12-29 | 2018-10-16 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US10185491B2 (en) | 2012-12-29 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or enlarge content |
US9857897B2 (en) | 2012-12-29 | 2018-01-02 | Apple Inc. | Device and method for assigning respective portions of an aggregate intensity to a plurality of contacts |
US10620781B2 (en) | 2012-12-29 | 2020-04-14 | Apple Inc. | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics |
US9965074B2 (en) | 2012-12-29 | 2018-05-08 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US20140324755A1 (en) * | 2013-04-26 | 2014-10-30 | Samsung Electronics Co., Ltd. | Information processing apparatus and control method thereof |
US9377943B2 (en) * | 2013-05-30 | 2016-06-28 | Sony Corporation | Method and apparatus for outputting display data based on a touch operation on a touch panel |
US20140354583A1 (en) * | 2013-05-30 | 2014-12-04 | Sony Corporation | Method and apparatus for outputting display data based on a touch operation on a touch panel |
US11157691B2 (en) | 2013-06-14 | 2021-10-26 | Microsoft Technology Licensing, Llc | Natural quick function gestures |
US10664652B2 (en) | 2013-06-15 | 2020-05-26 | Microsoft Technology Licensing, Llc | Seamless grid and canvas integration in a spreadsheet application |
CN103702152A (en) * | 2013-11-29 | 2014-04-02 | 康佳集团股份有限公司 | Method and system for touch screen sharing of set top box and mobile terminal |
US10180772B2 (en) | 2015-03-08 | 2019-01-15 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10613634B2 (en) | 2015-03-08 | 2020-04-07 | Apple Inc. | Devices and methods for controlling media presentation |
US9645732B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10338772B2 (en) | 2015-03-08 | 2019-07-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9645709B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9990107B2 (en) | 2015-03-08 | 2018-06-05 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10268342B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11112957B2 (en) | 2015-03-08 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US9632664B2 (en) | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10048757B2 (en) | 2015-03-08 | 2018-08-14 | Apple Inc. | Devices and methods for controlling media presentation |
US10387029B2 (en) | 2015-03-08 | 2019-08-20 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10067645B2 (en) | 2015-03-08 | 2018-09-04 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10402073B2 (en) | 2015-03-08 | 2019-09-03 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10095396B2 (en) | 2015-03-08 | 2018-10-09 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10860177B2 (en) | 2015-03-08 | 2020-12-08 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10268341B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10318127B2 (en) * | 2015-03-12 | 2019-06-11 | Line Corporation | Interface providing systems and methods for enabling efficient screen control |
US20160266775A1 (en) * | 2015-03-12 | 2016-09-15 | Naver Corporation | Interface providing systems and methods for enabling efficient screen control |
US10222980B2 (en) | 2015-03-19 | 2019-03-05 | Apple Inc. | Touch input cursor manipulation |
US11054990B2 (en) | 2015-03-19 | 2021-07-06 | Apple Inc. | Touch input cursor manipulation |
US10599331B2 (en) | 2015-03-19 | 2020-03-24 | Apple Inc. | Touch input cursor manipulation |
US9785305B2 (en) | 2015-03-19 | 2017-10-10 | Apple Inc. | Touch input cursor manipulation |
US11550471B2 (en) | 2015-03-19 | 2023-01-10 | Apple Inc. | Touch input cursor manipulation |
US9639184B2 (en) | 2015-03-19 | 2017-05-02 | Apple Inc. | Touch input cursor manipulation |
US10067653B2 (en) | 2015-04-01 | 2018-09-04 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10152208B2 (en) | 2015-04-01 | 2018-12-11 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US20180111711A1 (en) * | 2015-05-26 | 2018-04-26 | Ishida Co., Ltd. | Production line configuration apparatus |
US10455146B2 (en) | 2015-06-07 | 2019-10-22 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10841484B2 (en) | 2015-06-07 | 2020-11-17 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11240424B2 (en) | 2015-06-07 | 2022-02-01 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10705718B2 (en) | 2015-06-07 | 2020-07-07 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9860451B2 (en) | 2015-06-07 | 2018-01-02 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11231831B2 (en) | 2015-06-07 | 2022-01-25 | Apple Inc. | Devices and methods for content preview based on touch input intensity |
US9602729B2 (en) | 2015-06-07 | 2017-03-21 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10200598B2 (en) | 2015-06-07 | 2019-02-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9891811B2 (en) | 2015-06-07 | 2018-02-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9830048B2 (en) | 2015-06-07 | 2017-11-28 | Apple Inc. | Devices and methods for processing touch inputs with instructions in a web page |
US10303354B2 (en) | 2015-06-07 | 2019-05-28 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9706127B2 (en) | 2015-06-07 | 2017-07-11 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10346030B2 (en) | 2015-06-07 | 2019-07-09 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9916080B2 (en) | 2015-06-07 | 2018-03-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US11681429B2 (en) | 2015-06-07 | 2023-06-20 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11835985B2 (en) | 2015-06-07 | 2023-12-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10353573B2 (en) * | 2015-06-12 | 2019-07-16 | Nintendo Co., Ltd. | Information processing apparatus, information processing system, information processing method, and non-transitory computer-readable storage medium for skipping information processing in execution |
US20160364133A1 (en) * | 2015-06-12 | 2016-12-15 | Nintendo Co., Ltd. | Information processing apparatus, information processing system, information processing method, and non-transitory computer-readable storage medium storing information processing program |
EP3103526A1 (en) * | 2015-06-12 | 2016-12-14 | Nintendo Co., Ltd. | Information processing apparatus, information processing system, information processing method, and information processing program |
US10209884B2 (en) | 2015-08-10 | 2019-02-19 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11182017B2 (en) | 2015-08-10 | 2021-11-23 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US11327648B2 (en) | 2015-08-10 | 2022-05-10 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10963158B2 (en) | 2015-08-10 | 2021-03-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10235035B2 (en) | 2015-08-10 | 2019-03-19 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
WO2017027632A1 (en) * | 2015-08-10 | 2017-02-16 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10884608B2 (en) | 2015-08-10 | 2021-01-05 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US11740785B2 (en) | 2015-08-10 | 2023-08-29 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9880735B2 (en) | 2015-08-10 | 2018-01-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10416800B2 (en) | 2015-08-10 | 2019-09-17 | Apple Inc. | Devices, methods, and graphical user interfaces for adjusting user interface objects |
US10162452B2 (en) | 2015-08-10 | 2018-12-25 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10248308B2 (en) | 2015-08-10 | 2019-04-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures |
US10698598B2 (en) | 2015-08-10 | 2020-06-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10203868B2 (en) | 2015-08-10 | 2019-02-12 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10754542B2 (en) | 2015-08-10 | 2020-08-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US20180121000A1 (en) * | 2016-10-27 | 2018-05-03 | Microsoft Technology Licensing, Llc | Using pressure to direct user input |
US20190265882A1 (en) * | 2016-11-10 | 2019-08-29 | Cygames, Inc. | Information processing program, information processing method, and information processing device |
US10990274B2 (en) * | 2016-11-10 | 2021-04-27 | Cygames, Inc. | Information processing program, information processing method, and information processing device |
US11194425B2 (en) * | 2017-09-11 | 2021-12-07 | Shenzhen Heytap Technology Corp., Ltd. | Method for responding to touch operation, mobile terminal, and storage medium |
US11086442B2 (en) | 2017-09-11 | 2021-08-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for responding to touch operation, mobile terminal, and storage medium |
US11061558B2 (en) | 2017-09-11 | 2021-07-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Touch operation response method and device |
US10877660B2 (en) * | 2018-06-03 | 2020-12-29 | Apple Inc. | Devices and methods for processing inputs using gesture recognizers |
US11567658B2 (en) | 2018-06-03 | 2023-01-31 | Apple Inc. | Devices and methods for processing inputs using gesture recognizers |
US11294564B2 (en) | 2018-06-03 | 2022-04-05 | Apple Inc. | Devices and methods for processing inputs using gesture recognizers |
US20190369864A1 (en) * | 2018-06-03 | 2019-12-05 | Apple Inc. | Devices and Methods for Processing Inputs Using Gesture Recognizers |
US20200151226A1 (en) * | 2018-11-14 | 2020-05-14 | Wix.Com Ltd. | System and method for creation and handling of configurable applications for website building systems |
US11698944B2 (en) * | 2018-11-14 | 2023-07-11 | Wix.Com Ltd. | System and method for creation and handling of configurable applications for website building systems |
US20220357842A1 (en) * | 2019-07-03 | 2022-11-10 | ZTE Corporation | Gesture recognition method and device, and computer-readable storage medium |
US11269499B2 (en) * | 2019-12-10 | 2022-03-08 | Canon Kabushiki Kaisha | Electronic apparatus and control method for fine item movement adjustment |
US20210303473A1 (en) * | 2020-03-27 | 2021-09-30 | Datto, Inc. | Method and system of copying data to a clipboard |
Also Published As
Publication number | Publication date |
---|---|
WO2011151501A1 (en) | 2011-12-08 |
AP2012006600A0 (en) | 2012-12-31 |
EP2577436A1 (en) | 2013-04-10 |
EP2577436A4 (en) | 2016-03-30 |
CN102939578A (en) | 2013-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130212541A1 (en) | Method, a device and a system for receiving user input | |
US11836296B2 (en) | Devices, methods, and graphical user interfaces for providing a home button replacement | |
US11550447B2 (en) | Application menu for video system | |
US20210019028A1 (en) | Method, device, and graphical user interface for tabbed and private browsing | |
AU2018204236B2 (en) | Device, method, and graphical user interface for selecting user interface objects | |
US11086368B2 (en) | Devices and methods for processing and disambiguating touch inputs using intensity thresholds based on prior input intensity | |
RU2582854C2 (en) | Method and device for fast access to device functions | |
US10831337B2 (en) | Device, method, and graphical user interface for a radial menu system | |
EP2511812B1 (en) | Continuous recognition method of multi-touch gestures from at least two multi-touch input devices | |
EP3087456B1 (en) | Remote multi-touch control | |
US10353550B2 (en) | Device, method, and graphical user interface for media playback in an accessibility mode | |
US9465470B2 (en) | Controlling primary and secondary displays from a single touchscreen | |
US20160299657A1 (en) | Gesture Controlled Display of Content Items | |
AU2017100980A4 (en) | Devices and methods for processing and disambiguating touch inputs using intensity thresholds based on prior input intensity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NOKIA CORPORATION, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DOLENC, ANDRE;RIEKKOLA, ERKKI;REEL/FRAME:030287/0657 Effective date: 20130419 |
|
AS | Assignment |
Owner name: NOKIA TECHNOLOGIES OY, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOKIA CORPORATION;REEL/FRAME:035500/0928 Effective date: 20150116 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |