WO2013188893A2 - Method and mechanism for human computer interaction - Google Patents

Method and mechanism for human computer interaction

Info

Publication number
WO2013188893A2
Authority
WO
WIPO (PCT)
Prior art keywords
interaction
space
objects
virtual
processor
Prior art date
Application number
PCT/ZA2013/000042
Other languages
French (fr)
Other versions
WO2013188893A3 (en)
Inventor
Willem Morkel Van Der Westhuizen
Filippus Lourens Andries Du Plessis
Hendrik Frans Verwoerd BOSHOFF
Jan POOL
Original Assignee
Willem Morkel Van Der Westhuizen
Priority date
Filing date
Publication date
Application filed by Willem Morkel Van Der Westhuizen filed Critical Willem Morkel Van Der Westhuizen
Priority to US14/407,917 priority Critical patent/US20150169156A1/en
Priority to AU2013273974A priority patent/AU2013273974A1/en
Priority to EP13753509.2A priority patent/EP2862043A2/en
Publication of WO2013188893A2 publication Critical patent/WO2013188893A2/en
Publication of WO2013188893A3 publication Critical patent/WO2013188893A3/en
Priority to ZA2015/00171A priority patent/ZA201500171B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]

Definitions

  • This invention relates to human-computer interaction.
  • HCI human-computer interaction
  • Action involves objects, including the human body or limbs used in carrying out the action. Objects without action may be uninteresting, but action without objects seems impossible. Actions take time and objects occupy space, so both time and space enter into interaction.
  • Fitts refers to "the entire receptor-neural-effector system” [6]
  • experimental controls have been devised to tease apart their effects.
  • Fitts had his subject "make rapid and uniform responses that have been highly overlearned” while he held “all relevant stimulus conditions constant,” to create a situation where it was "reasonable to assume that performance was limited primarily by the capacity of the motor system” [6].
  • Hick had his subject's fingers resting on ten Morse keys while awaiting a stimulus, in order to minimise the required movement for indicating any particular response [8].
  • the interaction between human and computer takes the form of a repeating cycle of reciprocal actions on both sides, constituting the main human-computer interaction loop.
  • Figure 2 shows this view, where Norman's "world” is narrowed to the computer, while visual perception is emphasized.
  • the human action has been analysed into the three stages or low-level actions look-decide-move, with the computer action mirroring that to some extent with track-interpret-display. Although each stage feeds information to the next in the direction indicated by the arrows, all six low-level actions proceed simultaneously and usually without interruption.
  • the stages linked together as shown comprise a closed feedback loop, which forms the main conduit for information flow between human and computer.
  • the human may see the mouse while moving it or change the way of looking based on a decision, thereby creating other feedback channels inside this loop, but such channels will be regarded as secondary.
  • the given main HCI loop proceeds inside a wider context, not shown in Figure 2.
  • the stage labelled decide is also informed by a different loop involving his or her intentions, while that loop has further interaction with other influences, including people and the physical environment.
  • the stage labelled interpret is also informed by a further loop involving computer content, while that loop in its turn may have interaction with storage, networks, sensors, other people, etc. Even when shown separately as in Figure 2, the main interaction loop should therefore never be thought of as an isolated or closed system. In this context, closed loop is not the same as closed system.
  • the human action may be regarded as control of the computer, using some form of movement, while the computer provides visual feedback of its response, enabling the human to decide on further action.
  • the cycle repeats at a display rate (about 30 to 120 Hz), which is high enough to create the human illusion of being able to directly and continuously control the movement of objects on the screen.
  • the computer may be programmed to suspend its part of the interaction when the tracking of human movement yields a null result for long enough, but otherwise the loop continues indefinitely.
  • the input and output devices are physical objects, while the processing is determined by data and software.
  • Input devices may range from keyboard, mouse, joystick and stylus to microphone and touchpad or pick-up for eyegaze and electrical signals generated by neurons.
  • Output devices usually target vision, hearing or touch, but may also be directed to other senses like smell and heat.
  • Visual display devices have long dominated what most users consider as computer output.
  • Coomans & Timmermans [12] shown in Figure 4.
  • the interface includes most parts of the computer accessible to the casual user, in particular the input and output devices, but also other more abstract parts, as will be explained below. It excludes all computer subsystems not directly related to human interaction.
  • This objectification of the interface actually implies the introduction of something that may more properly be called an extended interface object, in this case an interface computer or an interface engine.
  • This specification will mostly continue to refer to the object in the middle as the interface, even though it creates a certain paradox, in that two new interfaces inevitably appear, one between the human and the interface (object) and another between the interface (object) and the computer.
  • each of the three extended objects of interest straddles at least two different spaces.
  • the (digital) computer's second space is an abstract and discrete data space, while the cognitive space of the human is also tentatively taken to be discrete.
  • Information transfer or communication between two extended objects takes place in a space shared by both, while intra-object information or messages flow between different parts (sub-objects) of the extended object, where the parts may function in different spaces.
  • Figure 6 shows the same major spaces as Figure 5, but populated with objects that form part of the three extended objects. This is meant to fill in some details of the model, but also to convey a better sense of how the spaces are conceived.
  • the human objects shown, for example, are the mind in cognitive space, and the brain, hand and eye in physical space.
  • the position, orientation, size and abilities of a human body create its associated motor space.
  • This space is the bounded part of physical space in which human movement can take place, e.g. in order to touch or move an object.
  • a visual space is associated with the human eyes and direction of gaze.
  • the motor and perceptual spaces may be called private, as they belong to, move with and may be partially controlled by a particular individual. Physical space, in contrast, is public. By its nature, motor space is usually much smaller than the perceptual spaces.
  • the position, orientation, size and abilities of a computer input device create its associated control space. It is the bounded part of physical space in which the computer can derive information from the human body by tracking some human movement or its effect.
  • the limited physical area of the computer display device constitutes display space, where the human can in turn derive information from the computer by observing the display.
  • a special graphical pointer or cursor in display space is often used to represent a single point of human focus.
  • the pointer forms one of the four pillars of the classic WIMP graphical user interface (GUI), the others being windows, icons and menus.
  • GUI graphical user interface
  • a physical pointing device in control space may be used to track human movement, which the computer then maps to pointer movement in display space. Doing something in one space and expecting a result in another space at a different physical location is an example of indirection; for instance moving a mouse (horizontally) in control space on the table and observing pointer movement (vertically) in display space on the screen. Another example is the use of a switch or a remote control, which achieves indirect action at a distance. Perhaps more natural is the arrangement found in touch sensitive displays, where the computer's control and display spaces are physically joined together at the same surface. One drawback of this is the occlusion of the display by the fingers, incidentally highlighting an advantage of spatial indirection.
  • The C-D function: The HCI architect can try to teach and seduce, but does not control the human, and therefore only gets to design the computer side. Thus, of the four spaces, only the computer's control and display spaces are up for manipulation. With computer hardware given, even these are mostly fixed. So the software architect is constrained to designing the way in which the computer's display output will change in response to its control input. This response is identical to the stage labelled "interpret" in Figure 2, and is characterized by a relation variously called the input-output, control-display or C-D function.
  • When defining the C-D function, the computer is often treated as a black box, completely described from the outside by the relation between its outputs and its inputs. Realization of the C-D function is achieved inside the computer by processing of the input data derived from tracking in the context of the computer's internal state.
  • Some C-D functions, for example, create pointer acceleration effects on the display which are not present in control space, but which depend on pointing device speed or total distance moved.
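  • As an illustration only (not part of this specification), a simple C-D function with speed-dependent pointer acceleration might be sketched as follows; the gain curve and all parameter values are assumptions chosen for the example:

```python
# Illustrative sketch of a C-D (control-display) function with pointer
# acceleration: display movement depends non-linearly on control-space speed.
# The gain curve and its parameters are hypothetical, not taken from the patent.

def cd_gain(speed: float, base: float = 1.0, boost: float = 2.5,
            threshold: float = 0.05) -> float:
    """Return a display gain that grows with control-space speed."""
    if speed <= threshold:
        return base                                 # slow, precise movement: no acceleration
    return base + boost * (speed - threshold)       # faster movement: amplified

def apply_cd_function(dx: float, dy: float, dt: float):
    """Map a control-space displacement (dx, dy) over time dt to display space."""
    speed = ((dx ** 2 + dy ** 2) ** 0.5) / dt
    g = cd_gain(speed)
    return g * dx, g * dy

if __name__ == "__main__":
    print(apply_cd_function(0.01, 0.0, dt=0.5))     # slow 1 cm movement: gain 1.0
    print(apply_cd_function(0.01, 0.0, dt=0.05))    # fast 1 cm movement: gain > 1.0
```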
  • Figure 7 contains a schematic model of the classic GUI, which shows a simplified concept of what happens inside the black box, when assuming the abovementioned separation between the interface and the computer beyond it.
  • the input data derived from control space is stored inside the machine in an input or control buffer.
  • a display buffer is a special part of memory that stores a bitmap of what is displayed on the screen. Any non-linear effect of the input transducers is usually counteracted by an early stage of processing.
  • the mapping between the physical control space and its control or input buffer is therefore shown as an isomorphism. The same goes for the mapping between the display buffer and the physical display space.
  • the GUI processing of interaction effects are taken to include the C-D function and two other elements called here the Visualizer and the Interpreter.
  • the Visualizer is responsible for creating visualizations of abstract data, e.g. in the form of icons, pictures or text, while the Interpreter generates commands to be processed by the computer beyond the interface.
  • Input processing in this scheme is neatly separated from interaction processing, but an overlap exists between interaction processing and display processing.
  • the locus of this overlap is the display buffer, which contains an isomorphic representation of what appears on the screen. This approach was probably adopted to save memory during the early days of GUI development in the 1980s.
  • the overlap currently creates some constraints on interaction processing, especially in terms of resolution.
  • the display space is the ultimate reference for all objects and actions performed by either human or computer in any space that eventually maps to display space.
  • a model for a generic game engine from the current perspective is shown in Figure 8.
  • a game engine provides a reusable software middleware framework, which may be platform independent, and which simplifies the construction of computer based games.
  • a game engine framework is typically built around a component-based architecture, where developers may have the option to replace or extend individual components. Typical components may include high-level abstractions to input devices, graphics, audio, haptic feedback, physics simulation, artificial intelligence, network communication, scripting, parallel computing and user interaction.
  • a game engine is responsible for creating the game world (game state) from a description of the game and game object models. The game engine dynamically updates the game state based on the game rules, player interaction and the response of real opponents and numerous simulators (e.g. physics simulator and artificial intelligence).
  • GUI elements for interaction in parts of the game (e.g. configuration and control panels), but the majority of games rely on well-defined game data and objects, custom interactions in reaction to player input, actions of opponents (real or artificial) and the current game state.
  • the Apple Dock allows interaction based on a one-dimensional fish-eye distortion.
  • the distortion visually magnifies some icons close to the pointer. This has some perceptual advantages, but no motor or Fitts advantage [14].
  • the cursor movement is augmented by movement of the magnified icon in the opposite direction. Therefore this method provides no motor advantage to a user apart from that of a visual aid.
  • the Apple Dock can thus be classified as a visualising tool.
  • PCT/FI2006/050054 describes a GUI selector tool, which divides up an area about a central point into sectors in a pie menu configuration. Some or all of the sectors are scaled in relation to their relative distance to a pointer. Distance is presumably measured by means of an angle and the tool allows circumferential scrolling.
  • the scaling can be either enlarging or shrinking the sector. The whole enlarged area seems to be selectable and therefore provides a motor advantage to the user.
  • This invention appears aimed at solving the problem of increasing the number of selectable objects on a small screen, such as that of a handheld device.
  • Figure 1 shows Norman's seven stages of human action
  • Figure 2 shows a new analysis of the main Human-Computer Interaction loop, for the purposes of the invention
  • Figure 3 shows the standard ACM model for Human-Computer Interaction in context
  • Figure 4 shows the Coomans & Timmermans model of Human-Computer interaction, as developed for virtual reality interfaces
  • FIG. 5 shows diagrammatically the spatial context of human-computer interaction (HCI), in accordance with the invention
  • Figure 6 shows diagrammatically the Spaces of HCI populated with objects, according to the invention
  • Figure 7 shows diagrammatically a model of the well-known GUI, as viewed from the current perspective
  • Figure 8 shows diagrammatically a model of a generic games engine, as viewed from the current perspective
  • FIG. 9 shows diagrammatically the proposed new model of HCI, according to the invention.
  • FIG 10 shows diagrammatically details of the proposed new interaction engine, according to the invention
  • Figure 11 shows diagrammatically the Virtual Interaction Space (vIS), according to the invention
  • Figure 12 shows diagrammatically details of the new interaction engine, expanded with more processors and adaptors, according to the invention
  • Figures 13.1 to 13.3 show diagrammatically a first example of the invention
  • Figures 14.1 to 14.4 show diagrammatically a second example of the invention
  • Figures 15.1 to 15.2 show diagrammatically a third example of the invention
  • Figures 16.1 to 16.3 show diagrammatically a fourth example of the invention
  • Figures 17.1 to 17.4 show diagrammatically a fifth example of the invention.
  • Figures 18.1 to 18.6 show diagrammatically a sixth example of the invention.
  • FIG 9 shows in context the proposed new interaction engine that is based on a new model of HCI called space-time interaction (STi).
  • STi space-time interaction
  • vIS virtual Interaction Space
  • HCI Space-time Interaction Engine
  • an engine for processing human-computer interaction on a GUI, which engine includes:
  • Figure 9 shows the context of both the physical control space (the block labelled “C”) and the control buffer or virtual control space (the block labelled “C buffer”) in the new space-time model for human-computer interaction.
  • the position and/or movement of a user's body or part of it relative to and/or with an input device is tracked in the physical control space and the tracking may be represented or stored as a real vector function of time in the control buffer as user input data.
  • the sampling rate in time and space of the tracking may preferably be so high that the tracking appears to be continuous.
  • More than one part of the user's body or input device may be tracked in the physical control space and all the tracks may be stored as user input data in the control buffer.
  • the user input data may be stored over time in the control buffer.
  • the tracking may be in one or more dimensions.
  • An input device may also be configured to provide inputs other than movement.
  • such an input may be a discrete input, such as a mouse click, for example.
  • These inputs should preferably relate to the virtual objects with which there is interaction and more preferably to virtual objects which are prioritised. Further examples of such an input may be the touch area or pressure of a person's finger on a touch-sensitive pad or screen.
  • Where movement is used to describe what is tracked by an input device, it will be understood to also include tracking of indirect movement derived from sound or changes in electrical currents in neurons, as in the case of a Brain Computer Interface.
  • VIRTUAL INTERACTION SPACE (vIS)
  • Figure 11 shows a more detailed schematic representation of the virtual interaction space (vIS) and its contents.
  • the virtual interaction space may be equipped with geometry and a topology.
  • the geometry may preferably be Euclidean and the topology may preferably be the standard topology of Euclidean space.
  • the virtual interaction space may have more than one dimension.
  • a coordinate or reference system may be established in the virtual interaction space, comprising a reference point as the origin, an axis for every dimension and a metric to determine distances between points, preferably based on real numbers. More than one such coordinate system can be created.
  • the objects in the virtual interaction space are virtual data objects and may typically be WIM type objects (window, icon, menu) or other interactive objects.
  • Each object may be referenced at a point in time in terms of a coordinate system, determining its coordinates.
  • Each object may be configured with an identity and a state, the state representing its coordinates, function, behaviour, and other characteristics.
  • a focal point may be established in the virtual interaction space in relation to the user input data in the control buffer.
  • the focal point may be an object and may be referenced at a point in time in terms of a coordinate system, determining its coordinates.
  • the focal point may be configured with a state, representing its coordinates, function, behaviour, and other characteristics.
  • the focal point state may determine the interaction with the objects in the interaction space.
  • the focal point state may be changed in response to user input data.
  • More than one focal point may be established and referenced in the virtual interaction space, in which case each focal point may be configured with an identity.
  • the states of the objects in the virtual interaction space may be changed in response to a change in the state of a focal point and/or object state of other objects in the interaction space.
  • a scalar or vector field may be defined in the virtual interaction space.
  • the field may, for example, be a force field or a potential field that may contribute to the interaction between objects and focal points in the virtual interaction space.
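  • A minimal data-structure sketch of such a virtual interaction space is given below; the class and field names (identity, coords, state) and the example scalar field are assumptions made for illustration:

```python
# Illustrative sketch of virtual interaction space (vIS) contents: objects and
# focal points referenced by real-valued coordinates, each carrying a state,
# plus an example scalar field. Names and fields are assumptions.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    identity: str
    coords: tuple                                   # position in the vIS coordinate system
    state: dict = field(default_factory=dict)       # function, behaviour, other characteristics

@dataclass
class FocalPoint:
    identity: str
    coords: tuple
    state: dict = field(default_factory=dict)

@dataclass
class VirtualInteractionSpace:
    dimensions: int
    objects: list = field(default_factory=list)
    focal_points: list = field(default_factory=list)

    def distance(self, a: tuple, b: tuple) -> float:
        """Euclidean metric of the (assumed Euclidean) vIS geometry."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def potential(self, point: tuple) -> float:
        """Example scalar field: a potential that decays with distance to the objects."""
        return sum(1.0 / (1.0 + self.distance(point, o.coords)) for o in self.objects)
```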
  • Figure 9 shows the context of both the physical sensory feedback space (the block labelled “F”) and the feedback buffer, or virtual feedback space (the block labelled “F buffer”) in the new space-time model for human-computer interaction.
  • An example of a feedback space may be a display device or screen.
  • the content in the virtual interaction space to be observed may be mapped into the display buffer and from there be mapped to the physical display device.
  • the display device may be configured to display feedback in three dimensions.
  • Another example of a feedback space may be a sound reproduction system.
  • the computer may be configured with one or more physical processors, whose processing power may be used to run many processes, either simultaneously in a parallel processing setup, or sequentially in a time-slice setup.
  • An operating system schedules processing power in such a way that processes appear to run concurrently in both these cases, according to some scheme of priority.
  • Where the term processor is used in the following, it may include a virtual processor, whose function is performed either by some dedicated physical processor, or by a physical processor shared in the way described above.
  • Figure 10 shows, by way of example, the Space-time interaction engine, containing a number of processors, which may be virtual processors, and which are discussed below.
  • HiC PROCESSOR Human interaction Control Processor and Control functions
  • the step of establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space may be effected by a processor that executes one or more Control functions or algorithms, named a Human interaction Control or HiC processor.
  • the HiC processor may take user input data from the virtual control space to give effect to the reference of the focal point in the interaction space.
  • the HiC processor may further be configured to also use other inputs such as a discrete input, a mouse click for example, which can also be used as a variable by a function to interact with objects in the interaction space or to change the characteristics of the focal point.
  • Interaction functions: The function or functions and/or algorithms which determine the interaction of the focal point and objects in the interaction space, and possibly the effect of a field in the interaction space on the objects, will be called Interaction functions and may be executed by an Interaction processor or Ip processor.
  • One or more Interaction functions or algorithms may include interaction between objects in the interaction space.
  • the interaction may preferably be bi-directional, i.e. the focal point may interact with an object and the object may interact with the focal point.
  • the interaction between the focal point and the objects in the interaction space may preferably be nonlinear.
  • the mathematical function or algorithm that determines the interaction between the focal point and the objects in the interaction space may be configured for navigation between objects to allow navigation through the space between objects.
  • the interaction between the focal point and objects relates to spatial interaction.
  • an object in the form of an icon may transform to a window and vice versa, for example, in relation to a focal point, whereas in the known GUI these objects are distinct until the object is aligned with the pointer and clicked. This embodiment will be useful for navigation to an object and to determine actions to be performed on the object during navigation to that object.
  • the mathematical function or algorithm which determines the interaction between the focal point and the objects in the interaction space may be specified so that the interaction of the focal point with the objects is in the form of interacting with all the objects or a predetermined collection of objects according to a degree of selection and/or a degree of interaction.
  • the degree of selection or interaction may, for example, be in relation to the relative distance of the focal point to each of the objects in the interaction space.
  • the degree of selection may preferably be in terms of a number between 0 and 1. The inventors wish to call this Fuzzy Selection.
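  • A minimal sketch of such a fuzzy degree of selection is given below; the inverse-distance weighting used here is an assumption, since the specification does not prescribe a particular function:

```python
# Illustrative fuzzy selection: every object receives a degree of selection
# between 0 and 1 based on its distance to the focal point. The normalised
# inverse-distance weighting is an assumption made for illustration.

def fuzzy_selection(focal, objects):
    """focal: (x, y); objects: {name: (x, y)}. Returns {name: degree in [0, 1]}."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    weights = {name: 1.0 / (1e-9 + dist(focal, pos)) for name, pos in objects.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}    # degrees sum to 1

if __name__ == "__main__":
    degrees = fuzzy_selection((0.2, 0.0), {"icon_a": (0.3, 0.0), "icon_b": (1.0, 0.0)})
    print(degrees)    # icon_a receives a much higher degree of selection than icon_b
```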
  • the mathematical function or algorithm to determine the content of the interaction space to be observed is called the Feedback function and may be executed by the Human interaction Feedback or HiF processor.
  • the Feedback function may be adapted to map or convert the contents to be displayed in a virtual display space or display buffer in which the coordinates are integers. There may be a one-to-one mapping of bits in the display buffer and the pixels on the physical display.
  • the Feedback function may also be adapted to include a scaling function to determine the number of objects or the collection of objects in the interaction space to be displayed.
  • the scaling function may be user configurable. It will be appreciated that the Feedback function is, in effect, an output function or algorithm and the function or algorithm may be configured to also effect outputs other than visual outputs, such as sound, vibrations and the like.
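  • By way of illustration only, such a Feedback function might map real-valued interaction-space coordinates to integer pixel coordinates of the display buffer, clipping to a window; the window extents and resolution below are hypothetical:

```python
# Illustrative Feedback (output) function: real-valued vIS coordinates are mapped
# to integer pixel coordinates in the display buffer. The window extents and the
# display resolution are hypothetical values chosen for the example.

def feedback_map(vis_point, vis_window=(-1.0, -1.0, 1.0, 1.0), resolution=(1920, 1080)):
    """Map a vIS point to a pixel, or return None if it falls outside the shown window."""
    x0, y0, x1, y1 = vis_window
    x, y = vis_point
    if not (x0 <= x <= x1 and y0 <= y <= y1):
        return None                                  # object not displayed (scaling/clipping)
    px = round((x - x0) / (x1 - x0) * (resolution[0] - 1))
    py = round((y - y0) / (y1 - y0) * (resolution[1] - 1))
    return px, py

if __name__ == "__main__":
    print(feedback_map((0.0, 0.0)))    # centre of the window maps near the centre pixel
    print(feedback_map((2.0, 0.0)))    # outside the window: not shown
```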
  • a mathematical function or algorithm which determines the selection and use of data stored in memory to establish and compose the virtual interaction space and/or objects in it can be called the Response function and may be executed by the Computer interaction Response or CiR processor.
  • CiC PROCESSOR Computer interaction Command processor and Command functions
  • A mathematical function or algorithm that determines the data to be stored in memory and/or the commands to be executed can be called the Command function and may be executed by the Computer interaction Command or CiC processor.
  • ADAPTORS
  • An adaptor will be understood to mean a processor configured to change or affect any one or more of the parameters, functional form, algorithms, application domain, etc. of another processor, thereby dynamically redefining the functioning of the other processor.
  • HiCa Human interaction Control adaptor
  • vIS virtual interaction space
  • the HiCa may change the Control function to determine or define the position, size or functionality of the control space in relation to the position of the focal point in the interaction space and/or in relation to the position or dimensions of objects in the interaction space.
  • the determination or definition of the control space may be continuous or discrete.
  • CiR ADAPTOR (CiRa)
  • Another adaptor, which will be called the Computer interaction Response adaptor (CiRa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the CiR processor.
  • the CiRa is a feedback type processor.
  • HiF ADAPTOR HiFa
  • HiFa Human interaction Feedback adaptor
  • vIS virtual interaction space
  • CiC ADAPTOR CiCa
  • CiCa Computer interaction Command adaptor
  • vIS virtual interaction space
  • Ipa Interaction Processor adaptor
  • vIS virtual interaction space
  • space separation can be conceptually achieved by assigning two separate coordinates or positions to each object; an interaction position and a display position. Typically one would be a stationary reference coordinate or position and the other would be a dynamic coordinate that changes according to the interaction of the focal point or pointer with each object. Both coordinates may be of a typical Feedback buffer format and the mathematical function or algorithm that determines the interaction between the focal point or pointer and the objects may use the coordinates from there.
  • the focal point may be provided with two coordinates, which may be in a Control buffer format or a Feedback buffer format.
  • the method may include providing for the virtual interaction and display spaces to overlap in the way described above, and the step of establishing two separate states for every object, namely an interaction state and a display state.
  • object states may include the object position, sizes, colours and other attributes.
  • the method may include providing for the virtual interaction and virtual display spaces to overlap and thereby establishing a separate display position for each object based on interaction with a focal point or tracked pointer.
  • the display position can also be established based on interaction between a dynamic focal point and a static reference focal point.
  • the method may include providing for the virtual interaction and virtual display spaces to overlap and to use the relative distances between objects and one or more focal points to establish object positions and/or states.
  • This method may include the use of time derivatives.
  • One embodiment may include applying one or more mathematical functions or algorithms to determine distant interaction between a focal point and the virtual objects in the interaction space, which interaction at/from a distance may include absence of contact, for example between the focal point and any object with which it is interacting.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object positions from virtual interaction space to display space. Mapping in this context is taken to be the calculation of the display position coordinates based on the known interaction position coordinates. In one embodiment, the method may include a non-isomorphic function or algorithm that uses focal point positions and object point positions to determine the mapping of object sizes from virtual interaction space to display space.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from virtual interaction space to display space.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object state from virtual interaction space to display space.
  • the method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object positions in the virtual interaction space.
  • the method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object sizes in the virtual interaction space.
  • the method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object positions and sizes in the virtual interaction space.
  • the method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object states in the virtual interaction space.
  • the method may include a non-isomorphic function or algorithm that uses a focal point position and object positions to determine the mapping of object positions from virtual interaction space to display space as well as to update object positions in the virtual interaction space.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object sizes from virtual interaction space to display space.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from virtual interaction space to display space.
  • the method may include a non-isomorphic function or algorithm that determines the mapping of object state from virtual interaction space to display space.
  • the method may include using the position of a focal point in relation to the position of the boundary of one or more objects in the virtual interaction space to effect crossing-based interaction.
  • An example of this may be where object selection is automatically effected by the system when the focal point crosses the boundary of the object, instead of requiring the user to perform, for example, a mouse click for selection.
  • the method may include the calculation and use of time derivatives of the user input data in the control buffer to create augmented user input data.
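  • A minimal sketch of such augmentation, assuming timestamped (t, x, y) samples in the control buffer and simple finite-difference derivatives:

```python
# Illustrative augmentation of user input data: finite-difference velocities are
# computed from timestamped control-buffer samples. The sample layout is assumed.

def augment_with_velocity(samples):
    """samples: list of (t, x, y). Returns a list of (t, x, y, vx, vy)."""
    augmented = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = max(t1 - t0, 1e-9)                      # guard against a zero time step
        augmented.append((t1, x1, y1, (x1 - x0) / dt, (y1 - y0) / dt))
    return augmented

if __name__ == "__main__":
    buffer = [(0.00, 0.10, 0.20), (0.01, 0.12, 0.20), (0.02, 0.16, 0.21)]
    for row in augment_with_velocity(buffer):
        print(row)                                   # each sample now carries a velocity
```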
  • the method may include dynamically changing the state of objects in the virtual interaction space, based on the augmented user input data.
  • the method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on the augmented user input data.
  • the method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on the position and/or state of one or more objects in the virtual interaction space.
  • the method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on data received from or via the part of the computer beyond the interface.
  • the method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on the augmented user input data.
  • the method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on the position and/or properties of one or more objects in the virtual interaction space.
  • the method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on data received from or via the computer.
  • the method may include interaction in the virtual interaction space between the focal point or focal points and more than one of the objects simultaneously.
  • the method may include the step of utilizing a polar coordinate system in such a way that the angular coordinate of the focal point affects navigation and the radial coordinate affects selection.
  • the method may include the step of utilizing any pair of orthogonal coordinates of the focal point to determine whether the user intends to navigate or to perform selection.
  • the vertical Cartesian coordinate may be used for navigation and the horizontal coordinate for selection.
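  • A minimal sketch of such a split, where the angular coordinate of the focal point selects the object to navigate towards and the radial coordinate drives the degree of selection; the selection radius is an assumed value:

```python
# Illustrative polar split of the focal point: the angular coordinate chooses the
# targeted object (navigation) and the radial coordinate drives selection.
# The selection radius is an assumption made for the example.
import math

def polar_intent(focal, n_objects, select_radius=0.8):
    """Return (index of targeted object, degree of selection in [0, 1])."""
    x, y = focal
    angle = math.atan2(y, x) % (2 * math.pi)
    radius = math.hypot(x, y)
    sector = int(angle / (2 * math.pi) * n_objects) % n_objects    # navigation
    degree = min(radius / select_radius, 1.0)                      # selection
    return sector, degree

if __name__ == "__main__":
    print(polar_intent((0.5, 0.5), n_objects=8))    # aiming towards one sector, partly selected
    print(polar_intent((0.0, 0.9), n_objects=8))    # another sector, fully selected
```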
  • the method may preferably use the HiC processor to apply the Control function or algorithm. This may include the non-isomorphic mapping of augmented user input from the control buffer to the virtual interaction space.
  • the method may preferably use the HiF processor to apply the Feedback function or algorithm. This may include the non-isomorphic mapping of relative object positions from virtual interaction space to display space.
  • the method may preferably use the CiR processor to apply the Response function or algorithm. This may include the establishment of relative object positions in virtual interaction space.
  • the method may preferably use the CiC processor to apply the Command function or algorithm. This may include a command to play a song, for example.
  • the method may preferably use the Ip processor to apply the Interaction function or algorithm. This may include using the state of an object in virtual interaction space to change the state of another object or objects in the virtual interaction space.
  • the method may preferably use the HiCa to adapt the functioning of the HiC processor. This may include the HiCa execution of a function or algorithm to adapt the free parameters of a Control function.
  • the method may preferably use the HiFa to adapt the functioning of the HiF processor. This may include the HiFa execution of a function or an algorithm to adapt the free parameters of a Feedback function.
  • the method may use the CiRa to adapt the functioning of the CiR processor. This may include the CiRa execution of a function or an algorithm that determines which objects to insert in virtual interaction space.
  • the method may use the Ipa to adapt the functioning of the Ip processor. This may include the Ipa execution of a function or algorithm to adapt the free parameters of an Interaction function.
  • the method may use one or more, in any combination, of the HiC processor, CiC processor, CiR processor, Ip processor, HiF processor, HiCa, CiCa, CiRa, Ipa and/or HiFa to facilitate continuous human-computer interaction.
  • the method may include a Feedback function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish a different spatial relation between objects in display space.
  • the method may include a further Feedback function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish different state values for each object in display space.
  • the method may include an Interaction function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish a different spatial relation between objects in virtual interaction space.
  • the method may include an Interaction function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish different state values for each object in virtual interaction space.
  • the method may include allowing or controlling the relative position of some or all of the objects in the virtual interaction space to have a similar relative position in the display space when the focal point or focal object has the same relative distance distribution between all the objects.
  • a further method may include allowing or controlling the relative positions of some or all of the objects to change in relation to the relative positions when comparing the interaction and the display space in such a way that the change in relative position of the focal point or focal object is a function of the said change.
  • the relative object positions may differ in the display space when compared with their relevant positions in the interaction space.
  • the method may include allowing or controlling the relative size of some or all of the objects in the vIS to have a similar size in the display space when the focal point or focal object has the same relative distance distribution between all the objects.
  • a further method may include allowing or controlling the relative size of some or all of the objects to change in relation to the relative sizes when comparing the interaction and the display space in such a way that the change in relative position of the focal point or focal object is a function of the said change.
  • the relative object size may differ in the display space when compared with its relevant positions in the interaction space.
  • the method may include allowing or controlling the relative position and size of some or all of the objects in the vIS to have a similar relative position and size in the display space when the focal point or focal object has the same relative distance distribution between all the objects.
  • a further method may include allowing or controlling the relative positions and sizes of some or all of the objects to change in relation to the relative positions when comparing the interaction and the display space in such a way that the change in relative position of the focal point or focal object is a function of the said change.
  • the relative object positions and sizes may differ in the display space when compared with their relevant positions in the interaction space.
  • the interaction of the focal point in the control space with objects in the interaction space occurs non-linearly, continuously and dynamically according to an algorithm of which the focal point position in its control space is a function.
  • Example 1 In a first, most simplified, example of the invention, as shown in Figures
  • the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointing object, a person's finger 40 in this case, on a touch sensitive pad 10, the control space.
  • Human-computer interaction is facilitated by means of an interaction engine 29, which establishes a virtual interaction space 12 and references eight objects 52 in the space.
  • a CiR processor 23 determines the objects to be composed in the virtual interaction space 12.
  • the interaction engine 29 further establishes and references a focal point 42 in the interaction space 12 in relation to the tracked movement of the person's finger 40 and reference point 62.
  • the engine 29 then uses the Ip processor 25 to determine the interaction of the focal point 42 in the interaction space 12 and objects 52 in the interaction space.
  • the object, 52.1 in this case, closest to the focal point at any point in time will move closer to the focal point and the rest of the objects will remain stationary.
  • the HiF processor 22 determines the content of the interaction space 12 to be observed by a user and the content is isomorphically mapped and displayed in the visual display feedback space 14.
  • the reference point is represented by the dot marked 64.
  • the person's finger 40 in the control space 10 is represented by a pointer 44.
  • the objects are represented by 54.1 to 54.8.
  • the tracking of the person's finger is repeated within short intervals and appears to be continuous.
  • the tracked input device or pointer object input data is stored over time in the virtual control space or control buffer.
  • the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointing object, a person's finger 40 in this case, on a touch sensitive pad 10, in the control space (C).
  • the tracked pointing object input data (coordinates or changes in coordinates) 41 is stored over time in the virtual control space (vC), or control buffer 11, after being mapped isomorphically by processor 20.
  • Reference point 62 is established by the CiR processor 23 inside the virtual interaction space 12.
  • the CiR processor 23 further assigns regularly spaced positions on a circle of radius one centred on the reference point 62, and uniform sizes w_i, to the circular virtual objects 52.i.
  • the HiC processor 21 establishes a focal point 42 and calculates, and continually updates, its position in relation to the reference point 62, using a function or algorithm based on the input data 41.
  • the Ip processor 25 calculates the distance r_p between the focal point 42 and the reference point 62, as well as the relative distances r_ip between all virtual objects 52.i and the focal point 42, based on the geometry and topology of the virtual interaction space 12, and updates these values whenever the position of the focal point 42 changes.
  • the HiF processor 22 establishes a reference point 63, a virtual pointer 43 and virtual objects 53.
  • Processor 27 establishes and continually updates a reference point 64, a pointer 44 and pixel objects 54.i in the feedback space, a display device 14 in this case, isomorphically mapping from 63, 43 and 53.i respectively.
  • Figure 14.1 shows the finger 40 in a neutral position in control space 10, which is the position mapped by the combined effect of processors 20 and 21 to the reference point 62 in the virtual interaction space 12, where the focal point 42 and reference point 62 therefore coincide for this case.
  • the combined effect of processors 22 and 27 therefore in this case preserves the equal sizes and the symmetry of object placement in the mapping from the virtual interaction space 12 to the feedback or display space 14, where all circles have the same diameter W.
  • Figure 14.2 shows the focal point 42 mapped halfway between the reference point 62 and the virtual object 52.1 in the virtual interaction space 12. Note that the positions of the virtual objects 52.i never change in this example. The relative distance r_ip with respect to the focal point 42 is different for every virtual object 52.i however, and the mapping by the HiF processor 22 yields different sizes and shifted positions for the objects 54.i in the feedback or display space 14.
  • the display size of each object 54.i is calculated as a function of the relative distance r_ip between virtual object 52.i and the focal point 42.
  • the function family used for calculating relative angular positions may be sigmoidal, as follows: θ_ip is the relative angular position of virtual object 52.i with respect to the line connecting the reference point 62 to the focal point 42 in the virtual interaction space 12. The relative angular position is normalised to a value u_ip between -1 and 1.
  • v_ip is then determined as a function of u_ip and r_p, using a piecewise function that includes a straight-line segment for intermediate values of u_ip.
  • the functions or algorithms implemented by the HiC processor 21 and the HiF processor 22 may be sufficient to completely and uniquely determine the configurations of the pixel objects 54.i in display space 14 for any position of the person's finger 40 in the control space 10.
  • the tracking of the person's finger 40 is repeated within short intervals of time and the sizes and positions of pixel objects 54.i appear to change continuously due to image retention on the human retina. If the necessary calculations are completed in real time, the person has the experience of continuously and simultaneously controlling all the displayed objects 54.i by moving his finger 40.
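  • The specific display-size and sigmoidal angle functions of this example are not reproduced in the text above, so the following sketch uses an assumed inverse-distance size function and a simple blended angular compression as stand-ins for them:

```python
# Illustrative stand-in for Example 2: eight circular objects on a radius-one
# circle in the vIS; display sizes and relative angular positions depend on the
# distance between each object and the focal point. The concrete functions below
# (inverse-distance size, tanh-based angular compression) are assumptions, not
# the patent's own formulas.
import math

N = 8
OBJECTS = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
           for i in range(N)]                        # virtual objects 52.i

def display_layout(focal, base_size=0.2):
    fx, fy = focal
    r_p = math.hypot(fx, fy)                         # focal point to reference point
    layout = []
    for ox, oy in OBJECTS:
        r_ip = math.hypot(ox - fx, oy - fy)          # object to focal point
        size = base_size * (1.0 + 1.5 / (0.5 + r_ip))                 # closer objects drawn larger
        theta = math.atan2(oy, ox) - math.atan2(fy, fx)
        u = math.atan2(math.sin(theta), math.cos(theta)) / math.pi   # normalise to [-1, 1]
        blend = math.tanh(2.0 * r_p)                 # compression grows with focal distance
        v = blend * math.tanh(1.5 * u) + (1.0 - blend) * u
        layout.append((v, size))                     # relative angular position and size
    return layout

if __name__ == "__main__":
    for v, size in display_layout((0.5, 0.0)):       # finger halfway towards object 52.1
        print(f"angle={v:+.2f}  size={size:.2f}")
```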
  • the controller (C) is in the form of a three-dimensional multi-touch (3D-MT) input device.
  • the 3D-MT device provides the position of multiple pointing objects (such as fingers) as a set of 3D coordinates (projected positions) in the touch (x-y) plane, along with the height of the objects (z) above the touch plane.
  • the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of multiple pointer objects 40.i, in the form of a person's fingers, where i can be 1 to N, on or over a three-dimensional multi-touch (3D-MT) input device (C) 10.
  • the tracked pointer input data (3D coordinates or changes in coordinates) 41.i are stored over time in the virtual control space (vC) 11.
  • the HiC processor 21 establishes a focal point 42.i for each pointer object in the virtual interaction space (VIS) 12 as a function of each pointer's previous position and its current position, so that pointer objects that move the same distance over the x-y plane of 11 and 12, but at different heights (different z coordinate values) above the touch plane, result in different distances moved for each focal point 42.i in VIS 12.
  • the HiF processor 22 establishes for each focal point 42.i a virtual pointer 43.i in the virtual feedback buffer (vF) 13 using isomorphic mapping. Each virtual pointer 43.i is again mapped isomorphically to a visual pointer 44.i in the feedback space (F) 14.
  • IIR infinite impulse response
  • Equation 103.1
  • Equation 103.3
  • Figure 15.1 shows two pointer objects, in this case fingers 40.1 and 40.2, in an initial position, so that the height z_40.1 of pointer object 40.1 above the touch plane of 10 is greater than the height z_40.2 of pointer object 40.2, i.e. z_40.1 > z_40.2.
  • the pointer objects are isomorphically mapped to establish pointers 41.1 and 41.2.
  • the pointers are mapped by the HiC processor 21, using in this case Equation 103.3 as the scaling function in Equation 103.1 and with z_41.1 > z_41.2, to establish focal points 42.1 and 42.2 in 12.
  • the focal points are mapped by HiF 22 to establish virtual pointers 43.1 and 43.2 in 13.
  • the virtual pointers are isomorphically mapped to display pointers 44.1 and 44.2 in 14.
  • Figure 15.2 shows the displacement of pointer objects 40.1 and 40.2 to new positions.
  • the pointer objects moved the same relative distance over the touch plane, while maintaining their initial height values.
  • the pointer objects are isomorphically mapped to 11 as before. Note that 41.1 and 41.2 moved the same relative distance and maintained their respective z coordinate values.
  • the pointers in 11 are mapped by the HiC processor 21, while still using Equation 103.3 as the scaling function in Equation 103.1, to establish new positions for focal points 42.1 and 42.2 in 12.
  • the relative distances that the focal points moved are no longer equal, with 42.2 travelling half the relative distance of 42.1 in this case.
  • the focal points are mapped by HiF 22 to establish virtual pointers 43.1 and 43.2 in 13 and virtual pointers, in turn are isomorphically mapped to display pointers 44.1 and 44.2 in 14.
  • the effect of the proposed transformation is to change a relative pointer object 40.i movement in the controller space 11 to scaled relative movement of a display pointer 44.i in the feedback space 14, so that the degree of scaling may cause the display pointer 44.i to move slower, at the same speed, or faster than the relative movement of pointer object 40.i.
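  • Equations 103.1 and 103.3 are not reproduced in the text above, so the following sketch merely illustrates the described effect with an assumed height-dependent gain applied to x-y displacements:

```python
# Illustrative stand-in for Example 3: an x-y pointer displacement is scaled by a
# gain that depends on the finger's height z above the touch plane, so equal
# physical movements at different heights move their focal points by different
# amounts. The gain function is an assumption, not Equation 103.1/103.3.

def height_gain(z, z_max=0.10):
    """Higher above the touch plane gives a smaller gain (slower focal-point motion)."""
    z = min(max(z, 0.0), z_max)
    return 1.0 - 0.5 * (z / z_max)                   # falls from 1.0 at contact to 0.5 at z_max

def move_focal_point(focal, dx, dy, z):
    g = height_gain(z)
    return focal[0] + g * dx, focal[1] + g * dy

if __name__ == "__main__":
    # Two fingers move the same x-y distance but at different heights:
    print(move_focal_point((0.0, 0.0), 0.10, 0.0, z=0.00))   # on the surface: full travel
    print(move_focal_point((0.0, 0.0), 0.10, 0.0, z=0.10))   # high above: half the travel
```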
  • a controller 10 that provides at least a two-dimensional input coordinate can be used.
  • the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointer object 40 on a touch sensitive input device (C) 10.
  • the tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space (vC) 11.
  • the HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12.
  • the CiR processor 23 establishes a grid-based layout object 52.1 that contains N cells. Each cell may be populated with a virtual object 52.i, where i ≥ 2, which contains a fixed interaction coordinate centred within the cell, by the CiR processor 23.
  • the Ip processor 25 calculates, for each cell, a normalised relative distance r_ip between the focal point 42 and the interaction coordinate of virtual object 52.i, based on the geometry and topology of VIS 12, and updates these values whenever the position of the focal point 42 changes.
  • the virtual pointer 43 and virtual objects 53.i are mapped isomorphically to a visual pointer 44 and visual objects 54.i in the visual display feedback space (F) 14.
  • Figure 16.1 shows a case where no pointer object is present in 10.
  • the isomorphic transformation does not establish a pointer coordinate in 11 and the HiC processor 21 does not establish a focal point in VIS 12.
  • the CiR processor 23 still establishes the grid-based layout object and virtual objects 52.i, and the Ip processor 25 sets the normalised relative distance r_ip to 1 for all values of i.
  • the HiF processor 22 may perform an algorithm, such as the following, to establish virtual objects 53.i in the virtual feedback buffer 13:
  • the grid-based layout container is mapped to a virtual container object that consumes the entire space available in 14.
  • the virtual container object is not visualised, but its width W_53.1 and height H_53.1 are used to calculate the location and size for each cell's virtual object 53.i.
  • Equation 104.1, where sf_min is the minimum allowable relative size factor, with a range of values sf_min ≤ 1, sf_max is the maximum allowable relative size factor, with a range of values sf_max ≥ 1, and a further free parameter determines how strongly the relative size factor magnification depends upon the normalised relative distance r_ip.
  • a is the index of the first cell in a row and b is the index of the last cell in a row.
  • a is the index of the first cell in a column and b is the index of the last cell in a column.
  • when the focal point 42 is absent and r_ip is 1 for all values of i, the HiF processor 22 assigns equal widths and equal heights to each virtual object.
  • the result is a grid with equally distributed virtual objects.
  • the virtual pointer 43 and virtual objects 53.i are mapped isomorphically to a visual pointer 44 and visual objects 54.i in the visual display feedback space (F) 14.
  • F visual display feedback space
  • a focal point 42 and virtual objects 52.i are established and normalised relative distances r_ip are calculated in VIS 12 through the process described above.
  • the location of visual pointer 44 and the size and locations of visual objects 54.i are updated as changes to pointer object 40 are tracked, so that the resulting visual effect is that visual objects compete for space based on proximity to visual pointer 44, with visual objects closer to the visual pointer 44 being larger than objects farther from it. Note that by independently calculating the width and height of a virtual object 53.i, objects may overlap in the final layout in 13 and 14.
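  • A sketch of this competing-for-space behaviour is given below; the size-factor function stands in for Equation 104.1 (not reproduced above) and its parameter values are assumptions:

```python
# Illustrative stand-in for Example 4: each grid cell receives a relative size
# factor that grows as the focal point approaches it, and each row is then
# renormalised so the cells still fill the container. The size-factor formula
# and its parameters are assumptions standing in for Equation 104.1.
import math

def size_factor(r_ip, sf_min=0.6, sf_max=2.0, falloff=3.0):
    """Relative size factor for a cell given its normalised distance r_ip in [0, 1]."""
    return sf_min + (sf_max - sf_min) * math.exp(-falloff * r_ip)

def row_widths(focal_x, n_cols, container_w=1.0):
    centres = [(i + 0.5) / n_cols for i in range(n_cols)]
    factors = [size_factor(abs(c - focal_x)) for c in centres]
    total = sum(factors)
    return [container_w * f / total for f in factors]          # widths still sum to container_w

if __name__ == "__main__":
    print([round(w, 3) for w in row_widths(focal_x=0.1, n_cols=5)])   # left cells widest
    print([round(w, 3) for w in row_widths(focal_x=0.9, n_cols=5)])   # right cells widest
```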
  • Any controller 10 that provides at least three-dimensional multi-touch (3D-MT) input can be used.
  • the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes a method, function or algorithm that combines the passage of time with the movement of a pointer object in the z-axis to dynamically navigate through a hierarchy of visual objects.
  • the movement of a pointer object 40 is tracked on a 3D multi-touch input device (C) 10.
  • the tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space (vC) 11.
  • the HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12.
  • the CiR processor 23 establishes a hierarchy of cells in VIS 12. Each cell may be populated with a virtual object, which contains a fixed interaction coordinate centered within the cell, by the CiR processor 23.
  • the hierarchy of virtual objects is established so that a virtual object 52.i contains virtual objects 52.i.j.
  • the virtual objects to be included in VIS 12 may be determined by using the CiRa 33 to modify the free parameters, functions or algorithms of the CiR processor 23.
  • One such algorithm may be the following set of rules:
  • If a pointer object is present in control space 10, with an associated focal point in VIS 12, establish positions and sizes in VIS 12 for all, or a subset, of the virtual objects and their children based on the z coordinate of the focal point and the following rules (an illustrative sketch of this rule follows this example): a. If z < z_te, where z_te is the hierarchical expansion threshold, select the virtual object under the focal point and let it, and its children, expand to occupy all the available space in VIS 12. i. If an expansion occurs, do not process another expansion unless:
  • the HiF processor 22 establishes a virtual pointer 43 and virtual objects 53. i and 53.i.j in the feedback buffer 13.
  • the virtual pointer 43 and virtual objects 53. i and 53.i.j are mapped isomorphically to a visual pointer 44 and visual objects 54. i and 54.i.j in the visual display feedback space 14.
  • Figure 17.1 shows an initial case where no pointer object is present in 10. This condition triggers Rule 1.
  • the hierarchy of virtual objects 52. i and 52.i.j in VIS 12 leads to the arrangements of visual objects 54. i and 54.i.j in the visual display feedback space 14.
  • a pointer object 40 is introduced in control space 10 at coordinate positions x_a, y_a and z_a, so that z_a > z_te. This condition triggers Rule 2.
  • the pointer object 40 in control space 10 is mapped to visual pointer 44 in the visual display feedback space 14.
  • the hierarchy of virtual objects 52. i and 52.i.j in VIS 12 are mapped to rearrange visual objects 54. i and 54.i.j in the visual display feedback space 14 as shown. In this case, all the initial virtual objects are visible.
  • Visual object 54.1 is much larger than its siblings 54.2 - 54.4, due to its proximity to the visual pointer 44.
  • Figure 17.3 shows a displaced pointer object 40 in control space 10, with the z coordinate of its focal point now below the hierarchical expansion threshold z_te.
  • the CiRa 33 modifies the free parameters, functions or algorithms of the CiR processor 23 so that it now establishes new positions and sizes only for the hierarchy of cells that contains virtual object 52.1 and its children 52.1.j. The effect is that virtual objects 52.2 - 52.4 are removed from VIS 12, while virtual object 52.1 and its children 52.1.j are expanded to occupy all the available space in VIS 12. Using the methods, functions and algorithms described in Example 4, the pointer object 40 in control space 10 is mapped to visual pointer 44 in the visual display feedback space 14.
  • the hierarchy of virtual objects 52.1 and 52.1.j in VIS 12 are mapped to rearrange visual objects 54.1 and 54.1.j in the visual display feedback space 14 as shown. In this case, only visual object 54.1 and its children 54.1.j are visible.
  • Visual object 54.1.1 is much larger than its siblings (54.1.2 - 54.1.4) due to its proximity to the visual pointer 44.
  • Figure 17.4 shows pointer object 40 in control space 10 at the same x and y coordinates, with its z coordinate decreased further so that another hierarchical expansion is triggered.
  • the hierarchy of virtual objects 52.1.1 in VIS 12 is mapped to rearrange visual objects 54.1 and 54.1.1 in the visual display feedback space 14 as shown. In this case, only visual objects 54.1 and 54.1.1 are visible and occupy all the available space in the visual display feedback space 14.
  • a pointer object 40 is introduced in control space 10 at coordinate positions x, y and z_a, so that z_a > z_te.
  • the pointer object 40 is next displaced in control space 10 to coordinate positions x, y and z_b, so that z_b < z_a and z_b < z_te.
  • the pointer object 40 displacement direction is now reversed, to coordinate positions x, y and z_c, so that z_b < z_c < z_a.
  • the pointer object 40 displacement direction is again reversed, to coordinate positions x, y and z_d, so that z_d < z_c.
  • This condition triggers Rule 2.a.i.2, which leads to the arrangement of visual pointer 44 and visual objects 54.1 and 54.1.1 in the visual display feedback space 14 as shown before in Figure 17.4.
  • the method for human computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointer object 40 on a touch sensitive input device 10.
  • any controller 10 that provides at least a two-dimensional input coordinate can be used.
  • the tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space 11.
  • the HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12.
  • the CiR processor 23 populates VIS 12 with N virtual objects 52. i and establishes for each object a location and size, so that the objects are distributed equally over VIS 12.
  • the CiR processor 23 also establishes a fixed interaction coordinate centred within each object.
  • the HiF processor 22 establishes a virtual pointer 43 and virtual objects 53. i in the feedback buffer 13, and calculates and updates the size and position of the feedback objects 53. i to maintain the equal distribution of objects in the feedback buffer 13.
  • the virtual pointer 43 and virtual objects 53. i are mapped isomorphically to a visual pointer 44 and visual objects 54. i in the visual display feedback space 14.
  • Figure 18.1 shows a case where no pointer object is present in 10.
  • the isomorphic transformation does not establish a pointer coordinate in vC 11 and the HiC processor 21 does not establish a focal point in VIS 12.
  • the CiR processor 23 establishes 16 virtual objects 52.i, where 1 ≤ i ≤ 16, each with a fixed interaction coordinate, location and size, so that the virtual objects are distributed equally over VIS 12.
  • HiF processor 22 assigns the size and position of the feedback objects 53. i to maintain the equal distribution of objects in the feedback buffer 13.
  • the feedback objects 53.i are mapped isomorphically to visual objects 54.i in the visual display feedback space 14.
  • a focal point 42 and virtual objects 52. i are established through the process described above.
  • the HiF processor 22 assigns the size and position of the virtual objects 53.i to maintain the equal distribution of objects in the feedback buffer 13; but if the focal point 42 falls within the bounds of a virtual object, thereby selecting it, the HiF processor will emphasise the selected virtual object's corresponding feedback object in the feedback buffer 13 and de-emphasise all other virtual objects' corresponding feedback objects.
  • Figure 18.2 demonstrates a case where the focal point 42 falls within the bounds of virtual object 52.16.
  • the corresponding feedback object 53.16 will be emphasised by increasing its size slightly, while all other feedback objects 53.1 to 53.15 will be de-emphasised by increasing their grade of transparency.
  • the feedback objects 53. i are mapped isomorphically to visual objects 54. i in the visual display feedback space 14.
  • the CiC processor 24 continuously checks if the focal point 42 falls within the bounds of one of the virtual objects 52.i. If the focal point stays within the bounds of the same virtual object for longer than a short time period t_d, a command to prepare additional objects and data is sent to the computer.
  • the CiR and CiRa processors process the additional data and object information to determine if some virtual objects should no longer be present in VIS 12 and/or if additional objects should be introduced in VIS 12.
  • Figure 18.3 shows a case where the focal point 42 stayed within the bounds of virtual object 52.16 for longer than t_d seconds.
  • virtual objects 52.1 to 52.15 will no longer be introduced in VIS 12, while new secondary objects 52.16.j, where 1 ≤ j ≤ 3, with virtual reference point 62.1, located on virtual object 52.16's virtual interaction coordinate, are introduced in VIS 12 at a constant radius r_d from virtual reference point 62.1, and at fixed angles θ_j.
  • Tertiary objects 52.16.j.1, representing the virtual objects for each secondary virtual object, along with a second virtual reference point 62.2, located in the top left corner, are also introduced in VIS 12.
  • the Ip 25 calculates, based on the geometry and topology of VIS 12:
  • the Ip continuously updates vectors r_1p, r_2p and r_pj whenever the position of the focal point 42 changes.
  • the HiF processor 22 maps the focal point 42 and the remaining primary virtual objects 52.i as before and isomorphically maps virtual reference point 62.1 to feedback. It then uses projection vectors r_pj to perform a function or an algorithm to establish the size and location for the secondary feedback objects 53.16.j in the virtual feedback buffer 13.
  • a function or algorithm may be:
  • Isomorphically map an object's size to its representation in VIS 12.
  • the HiF processor 22 also uses r_dj to determine if a tertiary virtual object should be mapped to feedback buffer 13 and what the object's size should be.
  • r_dj may be:
  • with the focal point located at the same position as virtual reference point 62.1, the secondary visual objects 54.16.j are placed at a constant radius r_d away from feedback reference point 63.1 and at fixed angles θ_j, while no tertiary visual objects 54.16.j.1 are visible.
  • Figure 18.4 shows a displaced pointer object 40 in control space 10.
  • the position of focal point 42 is updated, while virtual objects 52.i and 52.i.j are established, and vectors r_1p, r_2p and r_pj are calculated as before.
  • the application of the algorithm and functions implemented by the HiF processor 22 as described above leads to the arrangement of visual objects 54.16, 54.16.j and 54.16.3.1 in the visual display feedback space 14 as shown in Figure 18.4.
  • Visual object 54.16.1 barely moved, visual object 54.16.2 moved closer to visual object 54.16, and visual object 54.16.3 moved closer still.
  • Tertiary visual object 54.16.3.1 is visible and becomes larger, while all other tertiary visual objects 54.16.3.k are not visible.
  • Figure 18.5 shows a further displacement of pointer object 40 in control space 10, so that the focal point crossed secondary virtual object 52.16.3 and then continued on towards tertiary virtual object 52.16.3.1.
  • the position of focal point 42 and all calculated values are updated.
  • the CiRa 33 adapts the CiR processor 23 to now only load the previously selected primary virtual object, the currently selected secondary virtual object and its corresponding tertiary virtual object. In this case, only primary virtual object 52.16, secondary virtual object 52.16.3 and tertiary virtual object 52.16.3.1 are loaded.
  • the HiF processor 22 may now change so that:
  • the selected secondary virtual object's tertiary virtual object further adjusts its position so that, if the focal point 42 moves towards the virtual reference point 62.2, the tertiary virtual object moves upwards, while if the focal point 42 moves away from virtual reference point 62.2, the tertiary virtual object moves downwards.
  • Visual objects 54.16 and 54.16.j are no longer visible, and visual object 54.16.3.1 has expanded to take up the available visual feedback buffer space.
  • Figure 18.6 shows a further upward displacement of pointer object 40 in control space 10.
  • the position of focal point 42 and all calculated values are updated.
  • the application of the algorithm and functions implemented by the HiF processor 22 as described above leads to the arrangement of visual object 54.16.3.1 in the visual display feedback space 14 as shown in Figure 18.6.
  • Visual object 54.16.3.1 moved downwards, so that more of the object is shown, in response to the focal point moving closer to virtual reference point 62.2 in VIS 12. (An illustrative sketch of this example's dwell-and-radial behaviour follows.)

Abstract

The invention provides a method and engine for human-computer interaction (HCI) on a graphical user interface (GUI). The method includes the steps of tracking the position and/or movement of a user's body or part of it relative to and/or with an input device in a control space, facilitating human-computer interaction by means of an interaction engine, and providing feedback to the user in a sensory feedback space. Facilitation includes the steps of: establishing a virtual interaction space (vIS); establishing and referencing one or more virtual objects with respect to the interaction space; establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space; applying one or more mathematical functions or algorithms to determine the interaction between one or more focal points and the virtual objects in the interaction space, and/or to determine one or more commands to be executed; and applying a mathematical function or algorithm to determine what content of the interaction space is to be presented to the user as feedback, and in which way the content is to be displayed.

Description

Title: Method and Mechanism for Human-computer Interaction
Technical field of the invention
This invention relates to human-computer interaction.
Background to the invention
The fundamental concepts of human-computer interaction (HCI) have been addressed in many ways and from various perspectives [1-4]. Norman [5] separates human action into the seven stages appearing in Figure 1. He derives the stages from two "aspects" of human action, namely execution and evaluation. These aspects have human goals in common, and they are repeated in a cycle closed by the effects of the action on the state of what he labels as "the world."
Action involves objects, including the human body or limbs used in carrying out the action. Objects without action may be uninteresting, but action without objects seems impossible. Actions take time and objects occupy space, so both time and space enter into interaction.
Action and interaction in time
The motor part of human action (roughly Norman's execution aspect) is widely modelled by "Fitts' Law" [6]. It is an equation for the movement time (MT) needed to complete a simple motor task, such as reaching for and touching a designated target of given size over some distance [7]. For one dimensional movement, this equation has two variables: the distance or movement amplitude (A) and the target size or width (W); and also two free parameters: a and b, chosen to fit empirical data:
MT = a + b log2(2A / W)
The perceptual and choice part of human action (roughly Norman's evaluation aspect) is modelled by "Hick's Law" [8], an equation for the reaction time (RT) needed to indicate the correct choice among N available responses to a stimulus, randomly selected from a set of N equally likely alternatives. This equation only has one variable, the number N, and one parameter K for fitting the data:
RT = K log2(1 + N)
No human performance experiment can be carried out without a complete human action involving both execution and evaluation (Fitts refers to "the entire receptor-neural-effector system" [6]), but experimental controls have been devised to tease apart their effects. For example, Fitts had his subject "make rapid and uniform responses that have been highly overlearned" while he held "all relevant stimulus conditions constant," to create a situation where it was "reasonable to assume that performance was limited primarily by the capacity of the motor system" [6]. On the other hand, Hick had his subject's fingers resting on ten Morse keys while awaiting a stimulus, in order to minimise the required movement for indicating any particular response [8].
The studies of both Fitts [6] and Hick [8] were inspired by and based on then fresh concepts from information theory as disseminated by Shannon and Weaver [9]. While Fitts' Law continues to receive attention, Hick's Law remains in relative obscurity [10].
The interaction between human and computer takes the form of a repeating cycle of reciprocal actions on both sides, constituting the main human-computer interaction loop. Figure 2 shows this view, where Norman's "world" is narrowed to the computer, while visual perception is emphasized. The human action has been analysed into the three stages or low-level actions look-decide-move, with the computer action mirroring that to some extent with track-interpret-display. Although each stage feeds information to the next in the direction indicated by the arrows, all six low-level actions proceed simultaneously and usually without interruption. The stages linked together as shown comprise a closed feedback loop, which forms the main conduit for information flow between human and computer. The human may see the mouse while moving it or change the way of looking based on a decision, thereby creating other feedback channels inside this loop, but such channels will be regarded as secondary.
The given main HCI loop proceeds inside a wider context, not shown in Figure 2. On the human side for example, the stage labelled decide is also informed by a different loop involving his or her intentions, while that loop has further interaction with other influences, including people and the physical environment. On the computer side, the stage labelled interpret is also informed by a further loop involving computer content, while that loop in its turn may have interaction with storage, networks, sensors, other people, etc. Even when shown separately as in Figure 2, the main interaction loop should therefore never be thought of as an isolated or closed system. In this context, closed loop is not the same as closed system. The human action may be regarded as control of the computer, using some form of movement, while the computer provides visual feedback of its response, enabling the human to decide on further action. The cycle repeats at a display rate (about 30 to 120 Hz), which is high enough to create the human illusion of being able to directly and continuously control the movement of objects on the screen. The computer may be programmed to suspend its part of the interaction when the tracking of human movement yields a null result for long enough, but otherwise the loop continues indefinitely.
A more comprehensive description of HCI and its context is provided by the ACM model from the SIGCHI Curricula for human-computer interaction [11], shown in Figure 3. The computer side may be divided into three parts which map directly to the three computer actions of Figure 2:
• Input — human control movements are tracked and converted into input data
• Processing — the input data is interpreted in the light of the current computer state, and output data is calculated based on both the input data and the state
• Output — the output data is presented to the human as feedback (e.g. as a visual display)
The input and output devices are physical objects, while the processing is determined by data and software. Input devices may range from keyboard, mouse, joystick and stylus to microphone and touchpad or pick-up for eyegaze and electrical signals generated by neurons. Output devices usually target vision, hearing or touch, but may also be directed to other senses like smell and heat. Visual display devices have long dominated what most users consider as computer output.
A model of human-computer interaction that contains less context but a more detailed internal structure than that of the ACM, is the one of Coomans & Timmermans [12] shown in Figure 4. In their intended application domain of virtual reality user interfaces, they claim that a two-step transformation is always required for computer input (namely abstraction and interpretation) and computer output (namely representing and rendering).
Objects and spaces
The inventors' view of the spatial context of HCI is presented in Figure 5, where the three extended objects human, interface and computer are shown in relation to four major spaces: physical, cognitive, data and virtual. The inventors' segmentation of the problem exhibits some similarities to that of Figure 4, but the boxes containing the term representation are paralleled by spaces for purposes of the invention.
In contrast with the previously shown models, a complete conceptual separation is made here between the interface and the computer on which it may run. The interface includes most parts of the computer accessible to the casual user, in particular the input and output devices, but also other more abstract parts, as will be explained below. It excludes all computer subsystems not directly related to human interaction. This objectification of the interface actually implies the introduction of something that may more properly be called an extended interface object, in this case an interface computer or an interface engine. This specification will mostly continue to refer to the object in the middle as the interface, even though it creates a certain paradox, in that two new interfaces inevitably appear, one between the human and the interface (object) and another between the interface (object) and the computer. In this model, the human does not interact directly with the computer, but only with the interface (object). From the point of view of the end user, such a separation between the interface computer and the computer proper may be neither detectable nor interesting. For the system architect however, it may provide powerful new perspectives. Separately, the two computers may be differently optimised for their respective roles, either in software or hardware or both. The potential roles of networking, cloud storage and server side computing are also likely to be different. The possibility exists that, like GPUs vs CPUs, the complexity of the interface computer or interaction processing unit (IPU) may rival that of the computer itself. Everything in Figure 5 is assumed to exist in the same encompassing physical space, which is apparently continuous in the sense of having infinitely divisible units of distance. Furthermore each of the three extended objects of interest straddles at least two different spaces. The (digital) computer's second space is an abstract and discrete data space, while the cognitive space of the human is also tentatively taken to be discrete. One may recognize a certain thirdness about our interface object, not only in its explicit role as mediator between human and computer, but also in its use of a third category of virtual spaces in addition to its physical presence with respect to the human and its data presence on the computer side.
Due to their representational function, the virtual spaces of the interface tend to be both virtually physical and virtually continuous, despite their being implemented as part of the abstract and discrete data space. The computer processing power needed to sustain a convincing fiction of physicality and continuity has only become widely affordable in the last decade or two, giving rise to the field of virtual reality, which finds application in both serious simulations and in games. In Figure 5, the representation of virtual reality would be situated in the interface.
Information transfer or communication between two extended objects takes place in a space shared by both, while intra-object information or messages flow between different parts (sub-objects) of the extended object, where the parts may function in different spaces.
Figure 6 shows the same major spaces as Figure 5, but populated with objects that form part of the three extended objects. This is meant to fill in some details of the model, but also to convey a better sense of how the spaces are conceived. The human objects shown, for example, are the mind in cognitive space, and the brain, hand and eye in physical space.
Four virtual spaces of the interface are also shown, labelled as buffers in accordance with standard usage. Other terms are used in non-standard ways, for example, the discrete interpreter in the data space part of the interface is commonly called the command line interpreter (CLI), but is named in the former way here to distinguish it from a continuous interpreter placed in the virtual space part. Information flow is not represented in Figure 6, because it results in excessive clutter, but it may be added in a fairly straightforward way.
Human motor space and visual space meet computer control space and display space respectively
The position, orientation, size and abilities of a human body create its associated motor space. This space is the bounded part of physical space in which human movement can take place, eg in order to touch or move an object. Similarly, a visual space is associated with the human eyes and direction of gaze. The motor and perceptual spaces may be called private, as they belong to, move with and may be partially controlled by a particular individual. Physical space in contrast, is public. By its nature, motor space is usually much smaller than the perceptual spaces.
The position, orientation, size and abilities of a computer input device create its associated control space. It is the bounded part of physical space in which the computer can derive information from the human body by tracking some human movement or its effect. The limited physical area of the computer display device constitutes display space, where the human can in turn derive information from the computer by observing the display.
The possibility of interaction is predicated on a usable overlap between the motor and control spaces on one hand and between the visual and display spaces on the other. Such spatial overlap is possible because all the private spaces are subsets of the same public physical space. The overlap is limited by objects that occupy some part of physical space exclusively, or by objects that occlude the signals being observed.
Other terms may be used for these spaces, depending on the investigator's perspective and contextual emphasis, including input space and output space, action space and observation space, Fitts [6] space and Hick [8] space.
A special graphical pointer or cursor in display space is often used to represent a single point of human focus. The pointer forms one of the four pillars of the classic WIMP graphical user interface (GUI), the others being windows, icons and menus. A physical pointing device in control space may be used to track human movement, which the computer then maps to pointer movement in display space. Doing something in one space and expecting a result in another space at a different physical location is an example of indirection; for instance moving a mouse (horizontally) in control space on the table and observing pointer movement (vertically) in display space on the screen. Another example is the use of a switch or a remote control, which achieves indirect action at a distance. Perhaps more natural is the arrangement found in touch sensitive displays, where the computer's control and display spaces are physically joined together at the same surface. One drawback of this is the occlusion of the display by the fingers, incidentally highlighting an advantage of spatial indirection.
The C-D function
The HCI architect can try to teach and seduce, but does not control the human, and therefore only gets to design the computer side. Thus, of the four spaces, only the computer's control and display spaces are up for manipulation. With computer hardware given, even these are mostly fixed. So the software architect is constrained to designing the way in which the computer's display output will change in response to its control input. This response is identical to the stage labelled "interpret" in Figure 2, and is characterized by a relation variously called the input-output, control-display or C-D function.
The possible input-output mapping of movements in control space to visual changes in display space is limited only by the ingenuity of algorithm developers. However, the usual aim is to present humans with responses to their movements that make intuitive sense and give them a sense of control within the context of the particular application. These requirements place important constraints on the C-D function, inter alia in terms of continuity and proportionality.
When defining the C-D function, the computer is often treated as a black box, completely described from the outside by the relation between its outputs and its inputs. Realization of the C-D function is achieved inside the computer by processing of the input data derived from tracking in the context of the computer's internal state. Early research led to the introduction of non-linear C-D functions, for example ones that create pointer acceleration effects on the display which are not present in control space, but which depend on pointing device speed or total distance moved.
The classic GUI from the current perspective
Figure 7 contains a schematic model of the classic GUI, which shows a simplified concept of what happens inside the black box, when assuming the abovementioned separation between the interface and the computer beyond it. The input data derived from control space is stored inside the machine in an input or control buffer. Similarly, a display buffer is a special part of memory that stores a bitmap of what is displayed on the screen. Any non-linear effect of the input transducers is usually counteracted by an early stage of processing. The mapping between the physical control space and its control or input buffer is therefore shown as an isomorphism. The same goes for the mapping between the display buffer and the physical display space. The GUI processing of interaction effects is taken to include the C-D function and two other elements called here the Visualizer and the Interpreter. The Visualizer is responsible for creating visualizations of abstract data, e.g. in the form of icons, pictures or text, while the Interpreter generates commands to be processed by the computer beyond the interface.
Input processing in this scheme is neatly separated from interaction processing, but an overlap exists between interaction processing and display processing. The locus of this overlap is the display buffer, which contains an isomorphic representation of what appears on the screen. This approach was probably adopted to save memory during the early days of GUI development in the 1980's. The overlap currently creates some constraints on interaction processing, especially in terms of resolution. Some games engines have a separate internal representation of the game world to overcome this limitation and to create other possibilities.
The experienced GUI user's attention is almost entirely concentrated on display space, with motor manipulations automatically varied to achieve some desired visual result. In this sense, the display space is the ultimate reference for all objects and actions performed by either human or computer in any space that eventually maps to display space.
Computer games from the current perspective
Computer games often build on virtual reality and always need to provide methods for interaction. A model for a generic game engine from the current perspective is shown in Figure 8. A game engine provides a reusable software middleware framework, which may be platform independent, and which simplifies the construction of computer based games. A game engine framework is typically built around a component-based architecture, where developers may have the option to replace or extend individual components. Typical components may include high-level abstractions to input devices, graphics, audio, haptic feedback, physics simulation, artificial intelligence, network communication, scripting, parallel computing and user interaction. A game engine is responsible for creating the game world (game state) from a description of the game and game object models. The game engine dynamically updates the game state based on the game rules, player interaction and the response of real opponents and numerous simulators (e.g. physics simulator and artificial intelligence).
There is a huge spectrum of game types. Sometimes games use GUI elements for interaction in parts of the game (e.g. configuration and control panels), but the majority of games rely on well-defined game data and objects, custom interactions in reaction to player input, actions of opponents (real or artificial) and the current game state.
It is important to note that in many game types, the game world objects are seldom under the player's (user's) control and that selection plays a small role in the game dynamics. Even if the player does nothing (no controlled input) the game world state will continue to evolve. The passing of time is explicit and plays an important role in many game types. Finally, in most games the game objects are not co-operative with respect to the player's actions; more often objects act competitively, ignore the player's actions or are simply static.
Some other considerations from the known art of interaction
The Apple Dock [13] allows interaction based on a one-dimensional fish-eye distortion. The distortion visually magnifies some icons close to the pointer. This has some perceptual advantages, but no motor or Fitts advantage [14]. As a direct result of the magnification, the cursor movement is augmented by movement of the magnified icon in the opposite direction. Therefore this method provides no motor advantage to a user apart from that of a visual aid. The Apple Dock can thus be classified as a visualising tool.
PCT/FI2006/050054 describes a GUI selector tool, which divides up an area about a central point into sectors in a pie menu configuration. Some or all of the sectors are scaled in relation to its relative distance to a pointer. Distance is presumably measured by means of an angle and the tool allows circumferential scrolling. The scaling can be either enlarging or shrinking the sector. The whole enlarged area seems to be selectable and therefore provides a motor advantage to the user. This invention appears aimed at solving the problem of increasing the number of selectable objects on a small screen, such as that of a handheld device.
A similar selector tool is described in US patent 6,073,036. This patent discloses a method wherein one symbol of a plurality of symbols is magnified proximate a tactile input, both to increase visualisation and to enlarge the input area.
Fairly recent work on the C-D function has yielded a technique called semantic pointing [15], in which the C-D function itself is changed when the pointer enters or leaves certain predefined meaningful regions of display space. This may be regarded as a form of adaptation controlled by a feedback signal, and it does provide a motor advantage. What these methods lack is a cohesive and general interaction engine and methods of using it, which (i) separates input and output processing from interaction processing, (ii) provides a structured set of processors related to a rich spatial representation containing the elements taking part in the interaction, and (iii) allows the possibility of feedback and adaptation. The present invention is intended to fill this gap; thereby enabling the interaction designer to gain clarity and power in performing complex and difficult interaction processing that will enhance the realisation of user intention. Such enhancement may depend on provision to the human of visual advantage, motor advantage, or both. Thus it is an object of the invention to improve human-computer interaction.
General description of the invention
The invention is now described with reference to the accompanying drawings, in which:
Figure 1 shows Norman's seven stages of human action;
Figure 2 shows a new analysis of the main Human-Computer Interaction loop, for the purposes of the invention;
Figure 3 shows the standard ACM model for Human-Computer Interaction in context;
Figure 4 shows the Coomans & Timmermans model of Human-Computer interaction, as developed for virtual reality interfaces;
Figure 5 shows diagrammatically the spatial context of human-computer interaction (HCI), in accordance with the invention;
Figure 6 shows diagrammatically the Spaces of HCI populated with objects, according to the invention;
Figure 7 shows diagrammatically a model of the well-known GUI, as viewed from the current perspective;
Figure 8 shows diagrammatically a model of a generic games engine, as viewed from the current perspective;
Figure 9 shows diagrammatically the proposed new model of HCI, according to the invention;
Figure 10 shows diagrammatically details of the proposed new interaction engine, according to the invention;
Figure 11 shows diagrammatically the Virtual Interaction Space (vIS), according to the invention;
Figure 12 shows diagrammatically details of the new interaction engine, expanded with more processors and adaptors, according to the invention;
Figures 13.1 to 13.3 show diagrammatically a first example of the invention;
Figures 14.1 to 14.4 show diagrammatically a second example of the invention;
Figures 15.1 to 15.2 show diagrammatically a third example of the invention;
Figures 16.1 to 16.3 show diagrammatically a fourth example of the invention;
Figures 17.1 to 17.4 show diagrammatically a fifth example of the invention; and
Figures 18.1 to 18.6 show diagrammatically a sixth example of the invention.
Refer to Figure 9, which shows in context the proposed new interaction engine that is based on a new model of HCI called space-time interaction (STi). In Figure 9 a virtual Interaction Space (vIS) (see Figure 11 ) and various processors are introduced. Together they constitute the Space-time Interaction Engine (STIE), which is detailed in Figure 10. The importance of space has been emphasized in the foregoing, but time makes an essential contribution to every interaction. This is acknowledged by showing a real-time clock in Figures 9 and 10, and in the names chosen for the parts of the model.
According to the invention, a method is provided for human- computer interaction (HCI) on a graphical user interface (GUI), which includes:
• tracking the position and/or movement of a user's body or part of it relative to and/or with an input device in a control space;
• facilitating human-computer interaction by means of an interaction engine, which includes the steps of
- establishing a virtual interaction space;
- establishing and referencing one or more virtual objects with respect to the interaction space;
- establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space;
- applying one or more mathematical functions or algorithms to determine the interaction between one or more focal points and the virtual objects in the interaction space, and/or to determine one or more commands to be executed; and
- applying a mathematical function or algorithm to determine what content of the interaction space is to be presented to the user as feedback, and in which way the content is to be displayed; and
• providing feedback to the user in a sensory display or feedback space.
According to a further aspect of the invention, an engine is provided for processing human-computer interaction on a GUI, which engine includes:
a means for establishing a virtual interaction space;
a means for establishing and referencing one or more virtual objects with respect to the interaction space;
a means for establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space;
a means for calculating a mathematical function or algorithm to determine the interaction between one or more focal points and the virtual objects in the interaction space, and/or to determine one or more commands to be executed; and
a means for calculating a mathematical function or algorithm to determine what content of the interaction space is to be presented to the user as feedback, and in which way the content is to be presented.
CONTROL SPACE and CONTROL BUFFER
Figure 9 shows the context of both the physical control space (the block labelled "C") and the control buffer or virtual control space (the block labelled "C buffer") in the new space-time model for human-computer interaction.
The position and/or movement of a user's body or part of it relative to and/or with an input device is tracked in the physical control space and the tracking may be represented or stored as a real vector function of time in the control buffer as user input data. The sampling rate in time and space of the tracking may preferably be so high that the tracking appears to be continuous.
More than one part of the user's body or input device may be tracked in the physical control space and all the tracks may be stored as user input data in the control buffer.
The user input data may be stored over time in the control buffer.
The tracking may be in one or more dimensions.
An input device may also be configured to provide inputs other than movement. Typically, such an input may be a discrete input, such as a mouse click, for example. These inputs should preferably relate to the virtual objects with which there is interaction and more preferably to virtual objects which are prioritised. Further examples of such an input may be the touch area or pressure of a person's finger on a touch-sensitive pad or screen. Although the term movement is used to describe what is tracked by an input device, it will be understood to also include tracking of indirect movement derived from sound or changes in electrical currents in neurons, as in the case of a Brain Computer Interface.
VIRTUAL INTERACTION SPACE (vIS)
Figure 11 shows a more detailed schematic representation of the virtual interaction space (vIS) and its contents. As shown, the virtual interaction space may be equipped with geometry and a topology. The geometry may preferably be Euclidean and the topology may preferably be the standard topology of Euclidean space.
The virtual interaction space may have more than one dimension.
A coordinate or reference system may be established in the virtual interaction space, comprising a reference point as the origin, an axis for every dimension and a metric to determine distances between points, preferably based on real numbers. More than one such coordinate system can be created.
The objects in the virtual interaction space are virtual data objects and may typically be WIM type objects (window, icon, menu) or other interactive objects. Each object may be referenced at a point in time in terms of a coordinate system, determining its coordinates. Each object may be configured with an identity and a state, the state representing its coordinates, function, behaviour, and other characteristics.
A focal point may be established in the virtual interaction space in relation to the user input data in the control buffer. The focal point may be an object and may be referenced at a point in time in terms of a coordinate system, determining its coordinates. The focal point may be configured with a state, representing its coordinates, function, behaviour, and other characteristics. The focal point state may determine the interaction with the objects in the interaction space. The focal point state may be changed in response to user input data.
More than one focal point may be established and referenced in the virtual interaction space, in which case each focal point may be configured with an identity. The states of the objects in the virtual interaction space may be changed in response to a change in the state of a focal point and/or object state of other objects in the interaction space. A scalar or vector field may be defined in the virtual interaction space. The field may, for example, be a force field or a potential field that may contribute to the interaction between objects and focal points in the virtual interaction space.
FEEDBACK SPACE and FEEDBACK BUFFER
Figure 9 shows the context of both the physical sensory feedback space (the block labelled "F") and the feedback buffer, or virtual feedback space (the block labelled "F buffer") in the new space-time model for human-computer interaction.
An example of a feedback space may be a display device or screen. The content in the virtual interaction space to be observed may be mapped into the display buffer and from there be mapped to the physical display device. The display device may be configured to display feedback in three dimensions.
Another example of a feedback space may be a sound reproduction system.
PROCESSORS
The computer may be configured with one or more physical processors, whose processing power may be used to run many processes, either simultaneously in a parallel processing setup, or sequentially in a time-slice setup. An operating system schedules processing power in such a way that processes appear to run concurrently in both these cases, according to some scheme of priority. When reference is made to processor in the following, it may include a virtual processor, whose function is performed either by some dedicated physical processor, or by a physical processor shared in the way described above.
Figure 10 shows the Space-time interaction engine for example, containing a number of processors, which may be virtual processors, and which are discussed below.
HiC PROCESSOR - Human interaction Control Processor and Control functions
The step of establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space may be effected by a processor that executes one or more Control functions or algorithms, named a Human interaction Control or HiC processor. The HiC processor may take user input data from the virtual control space to give effect to the reference of the focal point in the interaction space. The HiC processor may further be configured to also use other inputs such as a discrete input, a mouse click for example, which can also be used as a variable by a function to interact with objects in the interaction space or to change the characteristics of the focal point.
Ip PROCESSOR - Interaction Processor and Interaction functions
The function or functions and/or algorithms which determine the interaction of the focal point and objects in the interaction space, and possibly the effect of a field in the interaction space on the objects, will be called Interaction functions and may be executed by an Interaction processor or Ip processor.
One or more Interaction functions or algorithms may include interaction between objects in the interaction space. In the case of more than one focal point, there may also be an interaction between the focal points. It will be appreciated that the interaction may preferably be bi-directional, i.e. the focal point may interact with an object and the object may interact with the focal point. The interaction between the focal point and the objects in the interaction space may preferably be nonlinear.
The mathematical function or algorithm that determines the interaction between the focal point and the objects in the interaction space, may be configured for navigation between objects to allow navigation through the space between objects. In this case, the interaction between the focal point and objects relates to spatial interaction. In an embodiment where the interaction function is specified so that objects in the interaction space change their state or status in relation to a relative position of a focal point, an object in the form of an icon may transform to a window and vice versa, for example, in relation to a focal point, whereas in the known GUI these objects are distinct until the object is aligned with the pointer and clicked. This embodiment will be useful for navigation to an object and to determine actions to be performed on the object during navigation to that object.
The mathematical function or algorithm which determines the interaction between the focal point and the objects in the interaction space may be specified so that the interaction of the focal point with the objects is in the form of interacting with all the objects or a predetermined collection of objects according to a degree of selection and/or a degree of interaction. The degree of selection or interaction may, for example, be in relation to the relative distance of the focal point to each of the objects in the interaction space. The degree of selection may preferably be in terms of a number between 0 and 1. The inventors wish to call this Fuzzy Selection.
HiF PROCESSOR - Human interaction Feedback processor and Feedback functions
The mathematical function or algorithm to determine the content of the interaction space to be observed is called the Feedback function and may be executed by the Human interaction Feedback or HiF processor. The Feedback function may be adapted to map or convert the contents to be displayed in a virtual display space or display buffer in which the coordinates are integers. There may be a one-to-one mapping of bits in the display buffer and the pixels on the physical display.
The Feedback function may also be adapted to include a scaling function to determine the number of objects or the collection of objects in the interaction space to be displayed. The scaling function may be user configurable. It will be appreciated that the Feedback function is, in effect, an output function or algorithm and the function or algorithm may be configured to also effect outputs other than visual outputs, such as sound, vibrations and the like.
CiR PROCESSOR - Computer interaction Response processor and Response functions
A mathematical function or algorithm which determines the selection and use of data stored in memory to establish and compose the virtual interaction space and/or objects in it can be called the Response function and may be executed by the Computer interaction Response or CiR processor.
CiC PROCESSOR - Computer interaction Command processor and Command functions
A mathematical function or algorithm that determines the data to be stored in memory and/or the commands to be executed, can be called the Command function and may be executed by the Computer interaction Command or CiC processor.
ADAPTORS
An adaptor will be understood to mean a processor configured to change or affect any one or more of the parameters, functional form, algorithms, application domain, etc. of another processor, thereby dynamically redefining the functioning of the other processor.
HiC ADAPTOR (HiCa)
One adaptor, which will be called the Human interaction Control adaptor (HiCa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the HiC processor. The HiCa represents a form of feedback inside the interaction engine.
The HiCa may change the Control function to determine or define the position, size or functionality of the control space in relation to the position of the focal point in the interaction space and/or in relation to the position or dimensions of objects in the interaction space. The determination or definition of the control space may be continuous or discrete.
CiR ADAPTOR (CiRa)
Another adaptor, which will be called the Computer interaction Response adaptor (CiRa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the CiR processor. The CiRa is a feedback type processor.
HiF ADAPTOR (HiFa)
Another adaptor, shown in the expanded engine of Figure 12, which will be called the Human interaction Feedback adaptor (HiFa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the HiF processor. The HiFa is a feed-forward type processor.
CiC ADAPTOR (CiCa)
Another adaptor, which will be called the Computer interaction Command adaptor (CiCa) uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the CiC processor. The CiCa is a feedforward type processor.
Ip ADAPTOR (Ipa)
Another adaptor, which will be called the Interaction Processor adaptor (Ipa) uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the Ip processor. The Ipa is a feed-forward type processor.
It will be appreciated that the separation of the interaction space and the feedback or display space creates the possibility for the addition of at least one interaction processor (HiF) and one adaptor (HiFa), which was not possible in the classic GUI as shown in Figure 7.
It will be appreciated that, although treated separately, there will often be some conceptual overlap between the interaction space and the display space. It will further be appreciated that referencing the WIM objects in their own space allows for the addition of any one of a number of customised functions or algorithms to be used to determine the interaction of the pointer in the visual space with WIM objects in the interaction space, whether in the visual space or not. The interaction can also be remote and there is no longer a need to align a pointer with a WIM object to interact with that object.
Since the buffer memory of a computer is shared and holds data for more than one application or process at any one time, and since the processor of a computer is normally shared for more than one application or process, it should be appreciated that the idea of creating spaces within a computer is conceptual and not necessarily physical. For example, space separation can be conceptually achieved by assigning two separate coordinates or positions to each object; an interaction position and a display position. Typically one would be a stationary reference coordinate or position and the other would be a dynamic coordinate that changes according to the interaction of the focal point or pointer with each object. Both coordinates may be of a typical Feedback buffer format and the mathematical function or algorithm that determines the interaction between the focal point or pointer and the objects may use the coordinates from there. Similarly, the focal point may be provided with two coordinates, which may be in a Control buffer format or a Feedback buffer format. In other words, there may be an overlap between the Virtual Interaction Space, Control buffer or space and Feedback buffer or space, which can conceptually be separated. It will also be understood that, if an interaction position is defined for an object in virtual and/or display space, it may or may not offset the appearance of the object on the computer screen.
The method may include providing for the virtual interaction and display spaces to overlap in the way described above, and the step of establishing two separate states for every object, namely an interaction state and a display state. These object states may include the object position, sizes, colours and other attributes. The method may include providing for the virtual interaction and virtual display spaces to overlap and thereby establishing a separate display position for each object based on interaction with a focal point or tracked pointer. The display position can also be established based on interaction between a dynamic focal point and a static reference focal point.
The method may include providing for the virtual interaction and virtual display spaces to overlap and to use the relative distances between objects and one or more focal points to establish object positions and/or states. This method may include the use of time derivatives.
One embodiment may include applying one or more mathematical functions or algorithms to determine distant interaction between a focal point and the virtual objects in the interaction space, which interaction at/from a distance may include absence of contact, for example between the focal point and any object with which it is interacting.
In one embodiment, the method may include a non-isomorphic function or algorithm that determines the mapping of object positions from virtual interaction space to display space. Mapping in this context is taken to be the calculation of the display position coordinates based on the known interaction position coordinates. In one embodiment, the method may include a non-isomorphic function or algorithm that uses focal point positions and object point positions to determine the mapping of object sizes from virtual interaction space to display space.
In one embodiment, the method may include a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from virtual interaction space to display space.
In one embodiment, the method may include a non-isomorphic function or algorithm that determines the mapping of object state from virtual interaction space to display space.
The method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object positions in the virtual interaction space.
The method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object sizes in the virtual interaction space. The method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object positions and sizes in the virtual interaction space. The method may include a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object states in the virtual interaction space. The method may include a non-isomorphic function or algorithm that uses a focal point position and object positions to determine the mapping of object positions from virtual interaction space to display space as well as to update object positions in the virtual interaction space. The method may include a non-isomorphic function or algorithm that determines the mapping of object sizes from virtual interaction space to display space.
The method may include a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from virtual interaction space to display space.
The method may include a non-isomorphic function or algorithm that determines the mapping of object state from virtual interaction space to display space.
The method may include using the position of a focal point in relation to the position of the boundary of one or more objects in the virtual interaction space to effect crossing-based interaction. An example of this may be where object selection is automatically effected by the system when the focal point crosses the boundary of the object, instead of requiring the user to perform, for example, a mouse click for selection.
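As an illustrative sketch of crossing-based selection, the following assumes circular object boundaries and a radius parameter; neither is prescribed by the method.

```python
import math

def crossing_based_selection(focal_point, objects, radius):
    """Illustrative sketch only: select an object as soon as the focal point crosses
    its boundary, with no click required.  Objects are modelled as circles of the
    given radius around their interaction coordinates."""
    for name, (ox, oy) in objects.items():
        if math.hypot(focal_point[0] - ox, focal_point[1] - oy) <= radius:
            return name          # boundary crossed: the object is selected automatically
    return None                  # focal point is in open space: nothing is selected

# Example: the focal point has just moved inside object "B"'s boundary.
print(crossing_based_selection((0.45, 0.0), {"A": (-1.0, 0.0), "B": (0.5, 0.0)}, radius=0.1))
```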
The method may include the calculation and use of time derivatives of the user input data in the control buffer to create augmented user input data.
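A minimal sketch of such augmentation is given below, assuming buffered (x, y) samples and a finite-difference velocity estimate; the sample layout and field names are illustrative assumptions.

```python
def augment_with_derivatives(samples, dt):
    """Sketch: for each successive pair of (x, y) samples in the control buffer,
    estimate a velocity by finite differences and attach it to the current sample."""
    augmented = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        augmented.append({"pos": (x1, y1), "vel": (vx, vy)})
    return augmented

print(augment_with_derivatives([(0.0, 0.0), (0.1, 0.0), (0.3, 0.1)], dt=0.01))
```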
The method may include dynamically changing the state of objects in the virtual interaction space, based on the augmented user input data. The method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on the augmented user input data. The method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on the position and/or state of one or more objects in the virtual interaction space.
The method may include dynamically changing the properties of the scalar and/or vector fields in the virtual interaction space, based on data received from or via the part of the computer beyond the interface.
The method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on the augmented user input data.
The method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on the position and/or properties of one or more objects in the virtual interaction space.
The method may include dynamically changing the geometry and/or topology of the virtual interaction space itself, based on data received from or via the computer. The method may include interaction in the virtual interaction space between the focal point or focal points and more than one of the objects simultaneously.
The method may include the step of utilizing a polar coordinate system in such a way that the angular coordinate of the focal point affects navigation and the radial coordinate affects selection.
The method may include the step of utilizing any pair of orthogonal coordinates of the focal point to determine whether the user intends to navigate or to perform selection. For example, the vertical Cartesian coordinate may be used for navigation and the horizontal coordinate for selection.
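By way of illustration only, the split of orthogonal coordinates into navigation and selection intent can be sketched as follows; the threshold and the returned intent labels are assumptions for illustration.

```python
def interpret_input(dx, dy, threshold=0.5):
    """Sketch: the vertical component of the focal point's motion drives navigation
    and the horizontal component drives selection, as in the example above."""
    if abs(dy) >= abs(dx) and abs(dy) > threshold:
        return ("navigate", dy)
    if abs(dx) > threshold:
        return ("select", dx)
    return ("idle", 0.0)

print(interpret_input(dx=0.1, dy=0.9))   # -> ('navigate', 0.9)
print(interpret_input(dx=0.8, dy=0.2))   # -> ('select', 0.8)
```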
The method may preferably use the HiC processor to apply the Control function or algorithm. This may include the non-isomorphic mapping of augmented user input from the control buffer to the virtual interaction space.
The method may preferably use the HiF processor to apply the Feedback function or algorithm. This may include the non-isomorphic mapping of relative object positions from virtual interaction space to display space.
The method may preferably use the CiR processor to apply the Response function or algorithm. This may include the establishment of relative object positions in virtual interaction space.
The method may preferably use the CiC processor to apply the Command function or algorithm. This may include a command to play a song, for example.
The method may preferably use the Ip processor to apply the Interaction function or algorithm. This may include using the state of an object in virtual interaction space to change the state of another object or objects in the virtual interaction space.
The method may preferably use the HiCa to adapt the functioning of the HiC processor. This may include the HiCa execution of a function or algorithm to adapt the free parameters of a Control function.
The method may preferably use the HiFa to adapt the functioning of the HiF processor. This may include the HiFa execution of a function or an algorithm to adapt the free parameters of a Feedback function.
The method may use the CiRa to adapt the functioning of the CiR processor. This may include the CiRa execution of a function or an algorithm that determines which objects to insert in virtual interaction space.

The method may use the CiCa to adapt the functioning of the CiC processor. This may include the CiCa execution of a function or algorithm to adapt the free parameters of a Command function.
The method may use the Ipa to adapt the functioning of the Ip processor. This may include the Ipa execution of a function or algorithm to adapt the free parameters of an Interaction function.
In a preferred embodiment, the method may use one or more, in any combination, of the HiC processor, CiC processor, CiR processor, Ip processor, HiF processor, HiCa, CiCa, CiRa, Ipa and/or HiFa to facilitate continuous human-computer interaction.

The method may include a Feedback function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish a different spatial relation between objects in display space. The method may include a further Feedback function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish different state values for each object in display space. The method may include an Interaction function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish a different spatial relation between objects in virtual interaction space. The method may include an Interaction function or algorithm that uses the spatial relations between one or more focal points and the objects in virtual interaction space to establish different state values for each object in virtual interaction space.

The method may include allowing or controlling the relative positions of some or all of the objects in the virtual interaction space to have similar relative positions in the display space when the focal point or focal object has the same relative distance distribution between all the objects. A further method may include allowing or controlling the relative positions of some or all of the objects to change, when comparing the interaction and the display space, in such a way that the change in relative position of the focal point or focal object is a function of the said change. The relative object positions may differ in the display space when compared with their relative positions in the interaction space.
The method may include allowing or controlling the relative sizes of some or all of the objects in the vIS to have similar sizes in the display space when the focal point or focal object has the same relative distance distribution between all the objects. A further method may include allowing or controlling the relative sizes of some or all of the objects to change, when comparing the interaction and the display space, in such a way that the change in relative position of the focal point or focal object is a function of the said change. The relative object sizes may differ in the display space when compared with their relative sizes in the interaction space.
The method may include allowing or controlling the relative position and size of some or all of the objects in the vIS to have a similar relative position and size in the display space when the focal point or focal object has the same relative distance distribution between all the objects. A further method may include allowing or controlling the relative positions and sizes of some or all of the objects to change, when comparing the interaction and the display space, in such a way that the change in relative position of the focal point or focal object is a function of the said change. The relative object positions and sizes may differ in the display space when compared with their relative positions in the interaction space. The interaction of the focal point with objects in the interaction space occurs non-linearly, continuously and dynamically, according to an algorithm of which the focal point position in the control space is a function.
Detailed description of the invention
It shall be understood that the examples are provided for illustrating the invention further and to assist a person skilled in the art with understanding the invention and are not meant to be construed as unduly limiting the reasonable scope of the invention.
Example 1

In a first, most simplified, example of the invention, as shown in Figures 13.1 to 13.3, the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking movement of a pointing object, a person's finger 40 in this case, on a touch sensitive pad 10, the control space. Human-computer interaction is facilitated by means of an interaction engine 29, which establishes a virtual interaction space 12 and references 8 objects 52 in the space. A CiR processor 23 determines the objects that compose the virtual interaction space 12. The interaction engine 29 further establishes and references a focal point 42 in the interaction space 12 in relation to the tracked movement of the person's finger 40 and a reference point 62. The engine 29 then uses the Ip processor 25 to determine the interaction between the focal point 42 and the objects 52 in the interaction space 12. In terms of the algorithm, the object closest to the focal point at any point in time, 52.1 in this case, moves closer to the focal point while the rest of the objects remain stationary. The HiF processor 22 determines the content of the interaction space 12 to be observed by a user, and the content is isomorphically mapped and displayed on the visual display feedback buffer 14. In the display 14, the reference point is represented by the dot marked 64. The person's finger 40 in the control space 10 is represented by a pointer 44. The objects are represented by 54.1 to 54.8. The tracking of the person's finger is repeated within short intervals and appears to be continuous. The tracked input device or pointer object input data is stored over time in the virtual control space or control buffer.
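By way of illustration only, the interaction rule of this example (the object closest to the focal point moves towards it while the other objects remain stationary) can be sketched as follows; the step size and the circular layout of eight objects are illustrative assumptions.

```python
import math

def step_example_1(focal, objects, step=0.1):
    """Sketch of the Example 1 interaction rule: at each update the object currently
    closest to the focal point moves a small step towards it; all others stay put."""
    closest = min(objects, key=lambda k: math.dist(objects[k], focal))
    ox, oy = objects[closest]
    d = math.dist((ox, oy), focal) or 1.0
    objects[closest] = (ox + step * (focal[0] - ox) / d,
                        oy + step * (focal[1] - oy) / d)
    return objects

# Eight objects placed on a circle, focal point near object 52.1.
objs = {f"52.{i}": (math.cos(i), math.sin(i)) for i in range(1, 9)}
print(step_example_1((0.9, 0.1), objs))
```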
Example 2
In another example of the invention, with reference to Figures 14.1 to 14.4, the method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointing object, a person's finger 40 in this case, on a touch sensitive pad 10, in the control space (C). The tracked pointing object input data (coordinates or changes in coordinates) 41 is stored over time in the virtual control (vC) space or control buffer 11, after being mapped isomorphically by processor 20. Reference point 62 is established by the CiR processor 23 inside the virtual interaction space 12. The CiR processor 23 further assigns regularly spaced positions on a circle of radius one centred on the reference point 62, and uniform sizes W_i, to the circular virtual objects 52.i in virtual interaction space 12, where i may throughout this example range from 1 to N. The HiC processor 21 establishes a focal point 42 and calculates, and continually updates, its position in relation to the reference point 62, using a function or algorithm based on the input data 41. The Ip processor 25 calculates the distance r_p between the focal point 42 and the reference point 62, as well as the relative distances r_ip between all virtual objects 52.i and the focal point 42, based on the geometry and topology of the virtual interaction space 12, and updates these values whenever the position of the focal point 42 changes. The HiF processor 22 establishes a reference point 63, a virtual pointer 43 and virtual objects 53.i in the feedback buffer 13, and calculates and continually updates the positions and sizes of 63, 43 and 53.i, using a function or algorithm based on the relative distances r_ip in virtual interaction space 12 as calculated by the Ip processor 25. Processor 27 establishes and continually updates a reference point 64, a pointer 44 and pixel objects 54.i in the feedback space, a display device 14 in this case, isomorphically mapping from 63, 43 and 53.i respectively.
Figure 14.1 shows the finger 40 in a neutral position in control space 10, which is the position mapped by the combined effect of processors 20 and 21 to the reference point 62 in the virtual interaction space 12, where the focal point 42 and reference point 62 therefore coincide for this case. The relative distances r_ip between the N = 12 virtual objects 52.i and the focal point 42 are all equal to one. The combined effect of processors 22 and 27 therefore in this case preserves the equal sizes and the symmetry of object placement in the mapping from the virtual interaction space 12 to the feedback or display space 14, where all circles have the same diameter W.
With displacement of the finger 40 in control space 10 to a new position, Figure 14.2 shows the focal point 42 mapped halfway between the reference point 62 and the virtual object 52.1 in the virtual interaction space 12. Note that the positions of the virtual objects 52.i never change in this example. The relative distance r_ip with respect to the focal point 42 is different for every virtual object 52.i however, and the mapping by the HiF processor 22 yields different sizes and shifted positions for the objects 54.i in the feedback or display space 14. The function used for calculating display size is

W_i = m*W / (1 + (m - 1)*r_ip^q),

where m is a free parameter determining the maximum magnification and q is a free parameter determining how strongly magnification depends upon the relative distance. The function family used for calculating relative angular positions may be sigmoidal, as follows: θ_ip is the relative angular position of virtual object 52.i with respect to the line connecting the reference point 62 to the focal point 42 in the virtual interaction space 12. The relative angular position is normalised to a value between -1 and 1 by calculating

u_ip = θ_ip / π.

Next, the value of v_ip is determined as a function of u_ip and r_p, using a piecewise sigmoidal function (exponential segments near the extremes of u_ip joined to a straight line over the middle range), with r_p as a parameter indexing the strength of the non-linearity. The relative angular position φ_ip of pixel object 54.i with respect to the line connecting the reference point 64 to the pointer 44 in display space 14, is then calculated as φ_ip = π*v_ip. The resultant object sizes and placements are shown in Figure 14.2. On displacement of the finger 40 in control space 10 to a new position that is mapped as described above to a focal point 42 in virtual interaction space 12 that coincides with the position in this case of virtual object 52.1, the functions implemented by the HiF processor 22 described above lead to the arrangement of objects 54.i in display space 14 shown in Figure 14.3.
On displacement of the finger 40 in control space 10 to a new position that is mapped as described above to a focal point 42 in virtual interaction space 12 that in this case lies a distance halfway from the reference point 62 and halfway between the positions of virtual objects 52.1 and 52.2, the functions implemented by HiF processor 22 described above lead to the arrangement of objects 54.i in display space 14 shown in Figure 14.4, where W_1 = W_2 and the relative angular positions φ_1p and φ_2p are equal in magnitude. The display of reference point 64 and pointer 44 may be suppressed, a change which can be effected by changing the mapping applied by the HiF processor 22 to make them invisible.
If chosen correctly, the functions or algorithms implemented by the HiC processor 21 and the HiF processor 22 may be sufficient to completely and uniquely determine the configurations of the pixel objects 54.i in display space 14 for any position of the person's finger 40 in the control space 10. The tracking of the person's finger 40 is repeated within short intervals of time and the sizes and positions of pixel objects 54.i appear to change continuously due to image retention on the human retina. If the necessary calculations are completed in real time, the person has the experience of continuously and simultaneously controlling all the displayed objects 54.i by moving his finger 40.
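Purely for illustration, the display-size mapping of this example can be sketched as follows, using the size function given above; the parameter values for m, q and W are illustrative assumptions.

```python
def display_size(r_ip, W=1.0, m=3.0, q=2.0):
    """Sketch of the Example 2 size mapping: an object at relative distance r_ip = 0
    from the focal point is magnified by the free parameter m, an object at r_ip = 1
    keeps its neutral size W, and q controls how sharply magnification falls off."""
    return m * W / (1.0 + (m - 1.0) * r_ip ** q)

for r in (0.0, 0.5, 1.0):
    print(r, round(display_size(r), 3))   # 3.0, 2.0 and 1.0 with the assumed parameters
```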
Example 3

For this example, reference is made to Figures 15.1 to 15.3. The controller (C) is in the form of a three-dimensional multi-touch (3D-MT) input device. The 3D-MT device provides the position of multiple pointing objects (such as fingers) as a set of 3D coordinates: projected positions in the touch (x-y) plane, along with the height of the objects (z) above the touch plane. The method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of multiple pointer objects 40.i, in the form of a person's fingers, where i can be 1 to N, on or over a three-dimensional multi-touch (3D-MT) input device (C) 10. After being isomorphically mapped as in the previous example, the tracked pointer input data (3D coordinates or changes in coordinates) 41.i are stored over time in the virtual control space (vC) 11. The HiC processor 21 establishes a focal point 42.i for each pointer object in the virtual interaction space (VIS) 12 as a function of each pointer's previous and current positions, so that pointer objects that move the same distance over the x-y plane of 11 and 12, but at different heights (different z coordinate values) above the touch plane, result in different distances moved for each focal point 42.i in VIS 12. The HiF processor 22 establishes for each focal point 42.i a virtual pointer 43.i in the virtual feedback buffer (vF) 13 using isomorphic mapping. Each virtual pointer 43.i is again mapped isomorphically to a visual pointer 44.i in the feedback space (F) 14.
The following dynamic, self-adaptive infinite impulse response (IIR) filter is used in the HiC processor 21:
Q(n) = Q(n - 1) + f(z)*[P(n) - P(n - 1)] (Equation 103.1)

where P(n) is a vector containing the x and y coordinate values of a pointer in the virtual control buffer 11 at time step n, Q(n) is a vector containing the x and y coordinate values of a focal point in the VIS 12 at time step n, f(z) is a continuous function of z that determines a scaling factor for the current sample, and z is the current z coordinate value of the pointer in vC 11. Equation 103.1 is initialised so that, at time step n = 1, Q(n - 1) = Q(n) and P(n - 1) = P(n). There are numerous possible embodiments of f(z), e.g.:

f(z) = 1 (Equation 103.2)

which embodies unity scaling;
a height-dependent scaling function (Equation 103.3, given in the original as a figure and not reproduced here), where z° and z* are constants and z° < z*; and

a further height-dependent scaling function (Equation 103.4, likewise given as a figure), where z° and z* are constants.
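By way of illustration only, the filter of Equation 103.1 can be sketched as below. Because Equations 103.3 and 103.4 are not reproduced, the scaling function f(z) used here is an assumed linear ramp between two constants, standing in for those embodiments.

```python
class HeightScaledFilter:
    """Sketch of Equation 103.1: Q(n) = Q(n-1) + f(z) * [P(n) - P(n-1)].
    f(z) below is an assumed ramp between z_lo and z_hi, clamped to [0, 1]."""

    def __init__(self, p0, z_lo=1.0, z_hi=10.0):
        self.q = list(p0)              # focal point Q(n-1), initialised so Q(n-1) = P(n-1)
        self.p = list(p0)              # previous pointer sample P(n-1)
        self.z_lo, self.z_hi = z_lo, z_hi

    def f(self, z):
        # Illustrative scaling: slower focal-point motion when the finger hovers low,
        # close to full-speed motion when it is high above the touch plane.
        return min(1.0, max(0.0, (z - self.z_lo) / (self.z_hi - self.z_lo)))

    def update(self, p, z):
        scale = self.f(z)
        self.q = [qi + scale * (pi - pi_prev)
                  for qi, pi, pi_prev in zip(self.q, p, self.p)]
        self.p = list(p)
        return tuple(self.q)

flt = HeightScaledFilter(p0=(0.0, 0.0))
print(flt.update((1.0, 0.0), z=2.0))   # small z: only a fraction of the displacement
print(flt.update((2.0, 0.0), z=9.0))   # larger z: nearly the full displacement
```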
Figure 15.1 shows two pointer objects, in this case fingers 40.1 and 40.2, in an initial position, so that the height z_40.1 of pointer object 40.1 above the touch plane of 10 is greater than the height z_40.2 of pointer object 40.2, i.e. z_40.1 > z_40.2. The pointer objects are isomorphically mapped to establish pointers 41.1 and 41.2. The pointers are mapped by the HiC processor 21, using in this case Equation 103.3 as the scaling function in Equation 103.1 and with z_40.1 > z* and z_40.2 < z°, to establish focal points 42.1 and 42.2 in 12. The focal points are mapped by HiF 22 to establish virtual pointers 43.1 and 43.2 in 13. The virtual pointers are isomorphically mapped to display pointers 44.1 and 44.2 in 14. Figure 15.2 shows the displacement of pointer objects 40.1 and 40.2 to new positions. The pointer objects moved the same relative distance over the touch plane, while maintaining their initial height values. The pointer objects are isomorphically mapped to 11 as before. Note that 41.1 and 41.2 moved the same relative distance and maintained their respective z coordinate values. The pointers in 11 are mapped by the HiC processor 21, while still using Equation 103.3 as the scaling function in Equation 103.1, to establish new positions for focal points 42.1 and 42.2 in 12. The relative distances that the focal points moved are no longer equal, with 42.2 travelling half the relative distance of 42.1 in this case. As before, the focal points are mapped by HiF 22 to establish virtual pointers 43.1 and 43.2 in 13 and the virtual pointers, in turn, are isomorphically mapped to display pointers 44.1 and 44.2 in 14. The effect of the proposed transformation is to change a relative pointer object 40.i movement in the control space 10 to scaled relative movement of a display pointer 44.i in the feedback space 14, so that the degree of scaling may cause the display pointer 44.i to move slower, at the same speed, or faster than the relative movement of pointer object 40.i.
Example 4
In this example reference is made to Figures 16.1 to 16.3. A controller 10 that provides at least a two-dimensional input coordinate can be used.
The method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointer object 40 on a touch sensitive input device (C) 10. The tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space (vC) 11. The HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12. The CiR processor 23 establishes a grid-based layout object 52.1 that contains N cells. Each cell may be populated by the CiR processor 23 with a virtual object 52.i, where 2 ≤ i ≤ 10, which contains a fixed interaction coordinate centred within the cell. The Ip processor 25 calculates, for each cell, a normalised relative distance r_ip between the focal point 42 and the interaction coordinate of virtual object 52.i, based on the geometry and topology of VIS 12, and updates these values whenever the position of the focal point 42 changes. The HiF processor 22 establishes a virtual pointer 43 and virtual objects 53.i in the feedback buffer 13, and calculates and continuously updates the positions and sizes of 43 and 53.i, using a function or algorithm based on the relative distances r_ip in VIS 12 as calculated by the Ip processor 25. The virtual pointer 43 and virtual objects 53.i are mapped isomorphically to a visual pointer 44 and visual objects 54.i in the visual display feedback space (F) 14.
Figure 16.1 shows a case where no pointer object is present in 10. The isomorphic transformation does not establish a pointer coordinate in 11 and the HiC processor 21 does not establish a focal point in VIS 12. The CiR processor
23 establishes a grid-based layout container 52.1 with 9 cells, and populates each cell with a virtual object 52.i, where 2 ≤ i ≤ 10, with a fixed interaction coordinate centred within the cell. With the focal point 42 absent in VIS 12, the Ip processor sets r_ip = 1 for all values of i. In this case, the HiF processor 22 may perform an algorithm, such as the following (an illustrative sketch is given after the list), to establish virtual objects 53.i in the virtual feedback buffer 13:
1. The grid-based layout container is mapped to a virtual container object that consumes the entire space available in 14. The virtual container object is not visualised, but its width W_53.1 and height h_53.1 are used to calculate the location and size for each cell's virtual object 53.i.

2. Assign a size factor sf_i = 1 for each cell that does not contain a virtual object in VIS 12.

3. Calculate a relative size factor sf_i (Equation 104.1, given in the original as a figure) for each cell that contains a virtual object in the VIS 12 as a function of the normalised relative distance r_ip between the focal point 42 and the interaction coordinate of the virtual object 52.i, as calculated by Ip 25 in VIS 12, where sf_min is the minimum allowable relative size factor with a range of values sf_min ≤ 1, sf_max is the maximum allowable relative size factor with a range of values sf_max ≥ 1, and q is a free parameter determining how strongly the relative size factor magnification depends upon the normalised relative distance r_ip.

4. Calculate the width w_53.i of virtual object 53.i as a function of all the relative size factors contained in the same row as the virtual object (Equation 104.2, given as a figure), where a is the index of the first cell in a row and b is the index of the last cell in a row.

5. Calculate the height h_53.i of virtual object 53.i as a function of all the relative size factors contained in the same column as the virtual object (Equation 104.3, given as a figure), where a is the index of the first cell in a column and b is the index of the last cell in a column.

6. Calculate positions for each virtual object by sequentially packing them in the cells of the grid-based container.

7. Virtual objects 53.i with larger relative size factors sf_i are placed on top of virtual objects with smaller relative size factors.
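A sketch of this layout procedure is given below, purely for illustration. Since Equations 104.1 to 104.3 are not reproduced, the size-factor function and the proportional row/column allocation are assumptions chosen to match the stated behaviour (cells near the focal point grow, and each row and column shares the container width and height).

```python
def relative_size_factor(r_ip, sf_min=0.5, sf_max=3.0, q=2.0):
    """Assumed stand-in for Equation 104.1: cells close to the focal point approach
    sf_max, distant cells approach sf_min."""
    return sf_min + (sf_max - sf_min) * (1.0 - r_ip) ** q

def grid_layout(sf, rows, cols, W, H):
    """Sketch of steps 4-6, assuming each cell's width (height) is its share of the
    container width (height) in proportion to the size factors in its row (column)."""
    cells = []
    for r in range(rows):
        row = sf[r * cols:(r + 1) * cols]
        for c in range(cols):
            col = sf[c::cols]
            w = W * sf[r * cols + c] / sum(row)   # Equation 104.2 analogue (assumed)
            h = H * sf[r * cols + c] / sum(col)   # Equation 104.3 analogue (assumed)
            cells.append(((r, c), round(w, 1), round(h, 1)))
    return cells

# Example: a 3x3 grid where the focal point is nearest to the centre cell.
distances = [1.0, 0.8, 1.0, 0.8, 0.1, 0.8, 1.0, 0.8, 1.0]
factors = [relative_size_factor(r) for r in distances]
print(grid_layout(factors, rows=3, cols=3, W=900, H=600))
```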
In the current case, where the focal point 42 is absent and r_ip = 1 for all values of i, the HiF processor 22 assigns equal widths and equal heights to each virtual object. The result is a grid with equally distributed virtual objects. The virtual pointer 43 and virtual objects 53.i are mapped isomorphically to a visual pointer 44 and visual objects 54.i in the visual display feedback space (F) 14. On the introduction of a pointer object 40 in control space 10, a focal point 42 and virtual objects 52.i are established and normalised relative distances r_ip are calculated in VIS 12 through the process described above. The application of the algorithm and functions implemented by the HiF processor 22, as described above, leads to the arrangement of visual objects 54.i in the visual display feedback space 14 as shown in Figure 16.2. In this case, visual object 54.6 is much larger than the other visual objects, due to its proximity to visual pointer 44.
On the displacement of pointer object 40 in control space 10, the position of the focal point 42 is updated, while virtual objects 52.i are established, and normalised relative distances r_ip are calculated as before. The application of the algorithm and functions implemented by the HiF processor 22 as described above, leads to the arrangement of visual objects 54.i in the visual display feedback space 14 as shown in Figure 16.3. In this case, visual object 54.4 is much larger than the other visual objects, due to its proximity to visual pointer 44, while 54.8 is much smaller and the other objects are sized between these two.
The location of visual pointer 44 and the size and locations of visual objects 54. i are updated as changes to pointer object 40 are tracked, so that the resulting visual effect is that visual objects compete for space based on proximity to visual pointer 44, so that visual objects closer to the visual pointer 44 are larger than objects farther from 44. Note that by independently calculating the width and height of a virtual object 53. i, objects may overlap in the final layout in 13 and 14.
Example 5
In this example reference is made to Figures 17.1 to 17.4. Any controller 10 that provides three-dimensional multi-touch (3D-MT) input can be used. The method for human-computer interaction (HCI) on a graphical user interface (GUI) includes a method, function or algorithm that combines the passage of time with the movement of a pointer object in the z-axis to dynamically navigate through a hierarchy of visual objects. The movement of a pointer object 40 is tracked on a 3D multi-touch input device (C) 10. The tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space (vC) 11. The HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12. The CiR processor 23 establishes a hierarchy of cells in VIS 12. Each cell may be populated by the CiR processor 23 with a virtual object, which contains a fixed interaction coordinate centred within the cell. The hierarchy of virtual objects is established so that a virtual object 52.i contains virtual objects 52.i.j. The virtual objects to be included in VIS 12 may be determined by using the CiRa 33 to modify the free parameters, functions or algorithms of the CiR processor 23. One such algorithm may be the following set of rules (an illustrative sketch is given after the rules):
1. If no pointer object is present in control space 10, establish positions and sizes in VIS 12 for all virtual objects and their children.
2. If a pointer object is present in control space 10, with an associated focal point in VIS 12, establish positions and sizes in VIS 12 for all, or a subset, of the virtual objects and their children based on the z coordinate of the focal point and the following rules:

a. If z < z_te, where z_te is the hierarchical expansion threshold, select the virtual object under the focal point and let it, and its children, expand to occupy all the available space in VIS 12.

i. If an expansion occurs, do not process another expansion unless:

1. a time delay of d seconds has passed, or

2. the movement direction has reversed so that z > z_te + z_hd, where z_hd is a small hysteresis distance and z_hd < (z_tc - z_te), with z_tc as defined below.

b. If z > z_tc, where z_tc is the hierarchical contraction threshold, contract the current top level virtual object and its children, then reintroduce its siblings in VIS 12.

i. If a contraction occurred, do not process another contraction unless:

1. a time delay of d seconds has passed, or

2. the movement direction has reversed so that z < z_tc - z_hd, where z_hd is as defined before.

c. Note that z_te < z_tc.
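Purely for illustration, the rule set above can be sketched as a small state machine; the threshold values, the meaning of the expansion "level" and the class interface are assumptions made for the sketch.

```python
import time

class HierarchyNavigator:
    """Sketch of the Example 5 rules: moving below z_te expands the level under the
    focal point, moving above z_tc contracts it, and a further expansion (contraction)
    is only processed after d seconds or after the motion has reversed past z_hd."""

    def __init__(self, z_te=2.0, z_tc=6.0, z_hd=0.5, d=1.0):
        self.z_te, self.z_tc, self.z_hd, self.d = z_te, z_tc, z_hd, d
        self.level = 0                 # current expansion depth of the hierarchy
        self.last_kind = None          # 'expand' or 'contract'
        self.last_time = 0.0
        self.reversed_since = True     # has the motion reversed since the last event?

    def update(self, z, now=None):
        now = time.monotonic() if now is None else now
        # Track reversal relative to the last processed event (Rules 2.a.i.2 / 2.b.i.2).
        if self.last_kind == "expand" and z > self.z_te + self.z_hd:
            self.reversed_since = True
        if self.last_kind == "contract" and z < self.z_tc - self.z_hd:
            self.reversed_since = True
        unlocked = self.reversed_since or (now - self.last_time) > self.d

        if z < self.z_te and (self.last_kind != "expand" or unlocked):
            self.level += 1
            self.last_kind, self.last_time, self.reversed_since = "expand", now, False
        elif z > self.z_tc and (self.last_kind != "contract" or unlocked):
            self.level = max(0, self.level - 1)
            self.last_kind, self.last_time, self.reversed_since = "contract", now, False
        return self.level

nav = HierarchyNavigator()
print(nav.update(1.5, now=0.0))   # below z_te: expand to level 1
print(nav.update(1.4, now=0.1))   # still below, no reversal, no delay: stays at level 1
print(nav.update(1.5, now=2.0))   # delay d exceeded: a second expansion is processed
print(nav.update(7.0, now=2.5))   # above z_tc: contract back to level 1
```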
Using the methods, functions and algorithms described in Example 4, the HiF processor 22 establishes a virtual pointer 43 and virtual objects 53. i and 53.i.j in the feedback buffer 13. The virtual pointer 43 and virtual objects 53. i and 53.i.j are mapped isomorphically to a visual pointer 44 and visual objects 54. i and 54.i.j in the visual display feedback space 14.
Figure 17.1 shows an initial case where no pointer object is present in 10. This condition triggers Rule 1. Using the methods, functions and algorithms described in Example 4, the hierarchy of virtual objects 52.i and 52.i.j in VIS 12 leads to the arrangement of visual objects 54.i and 54.i.j in the visual display feedback space 14.
In Figure 17.2, a pointer object 40 is introduced in control space 10 with coordinate positions x, y and z_a, so that z_a > z_te. This condition triggers Rule 2. Using the methods, functions and algorithms described in Example 4, the pointer object 40 in control space 10 is mapped to visual pointer 44 in the visual display feedback space 14. The hierarchy of virtual objects 52.i and 52.i.j in VIS 12 is mapped to rearrange visual objects 54.i and 54.i.j in the visual display feedback space 14 as shown. In this case, all the initial virtual objects are visible. Visual object 54.1 is much larger than its siblings 54.2 - 54.4, due to its proximity to the visual pointer 44.
Figure 17.3 shows a displaced pointer object 40 in control space 10 with new coordinate positions x, y and z_b, so that z_b < z_a and z_b < z_te. This condition triggers Rule 2.a. The CiRa 33 modifies the free parameters, functions or algorithms of the CiR processor 23 so that it now establishes new positions and sizes only for the hierarchy of cells that contains virtual object 52.1 and its children 52.1.j. The effect is that virtual objects 52.2 - 52.4 are removed from VIS 12, while virtual object 52.1 and its children 52.1.j are expanded to occupy all the available space in VIS 12. Using the methods, functions and algorithms described in Example 4, the pointer object 40 in control space 10 is mapped to visual pointer 44 in the visual display feedback space 14. The hierarchy of virtual objects 52.1 and 52.1.j in VIS 12 is mapped to rearrange visual objects 54.1 and 54.1.j in the visual display feedback space 14 as shown. In this case, only visual object 54.1 and its children 54.1.j are visible. Visual object 54.1.1 is much larger than its siblings (54.1.2 - 54.1.4) due to its proximity to the visual pointer 44.
Figure 17.4 shows pointer object 40 in control space 10 at the same position (x, y and z_b) for more than d seconds. This condition triggers Rule 2.a.i.1. The CiRa 33 again modifies the free parameters, functions or algorithms of the CiR processor 23 so that it now establishes new positions and sizes only for the hierarchy of cells that contains virtual object 52.1.1. The effect is that virtual objects 52.2 - 52.4, as well as virtual objects 52.1.2 - 52.1.4, are removed from VIS 12, while virtual object 52.1.1 is expanded to occupy all the available space in VIS 12. Using the methods, functions and algorithms described in Example 4, the pointer object 40 in control space 10 is mapped to visual pointer 44 in the visual display feedback space 14. The hierarchy of virtual objects 52.1.1 in VIS 12 is mapped to rearrange visual objects 54.1 and 54.1.1 in the visual display feedback space 14 as shown. In this case, only visual objects 54.1 and 54.1.1 are visible and occupy all the available space in the visual display feedback space 14.
In a further case, a pointer object 40 is introduced in control space 10 at coordinate positions x, y and z_a, so that z_a > z_te. This leads to the arrangement of visual pointer 44 and visual display objects 54.i and 54.i.j in the visual display feedback space 14 as shown before in Figure 17.2. The pointer object 40 is next displaced in control space 10 to coordinate positions x, y and z_b, so that z_b < z_a and z_b < z_te. This leads to the arrangement of visual pointer 44 and visual objects 54.1 and 54.1.j in the visual display feedback space 14 as shown before in Figure 17.3. The pointer object 40 displacement direction is now reversed to coordinate positions x, y and z_c, so that z_b < z_c < z_a and z_c > z_te + z_hd. The pointer object 40 displacement direction is again reversed to coordinate positions x, y and z_b, so that z_b < z_te. This condition triggers Rule 2.a.i.2, which leads to the arrangement of visual pointer 44 and visual objects 54.1 and 54.1.1 in the visual display feedback space 14 as shown before in Figure 17.4. The pointer object 40 displacement direction is again reversed to coordinate positions x, y and z_d, so that z_b < z_c < z_d < z_a and z_d > z_tc. This condition triggers Rule 2.b, which leads to the arrangement of visual pointer 44 and visual objects 54.1 and 54.1.j in the visual display feedback space 14 as shown before in Figure 17.3. If the pointer object 40 is maintained at the same position (x, y and z_d) for more than d seconds, Rule 2.b.i.1 is triggered; otherwise, if the pointer object 40 displacement direction is reversed to coordinate positions x, y and z_e, so that z_e < z_d and z_e < z_tc - z_hd, Rule 2.b.i.2 is triggered. Both these conditions lead to the arrangement of visual pointer 44 and visual display objects 54.i and 54.i.j in the visual display feedback space 14 as shown before in Figure 17.2.

Example 6
In a further example of the invention, reference is made to Figures 18.1 to 18.6. The method for human-computer interaction (HCI) on a graphical user interface (GUI) includes the step of tracking the movement of a pointer object 40 on a touch sensitive input device 10. In this example any controller 10 that provides at least a two-dimensional input coordinate can be used. The tracked pointer object is isomorphically mapped to establish a pointer input data coordinate 41 in the virtual control space 11. The HiC processor 21 establishes a focal point 42 for the pointer coordinate in the virtual interaction space (VIS) 12. The CiR processor 23 populates VIS 12 with N virtual objects 52.i and establishes for each object a location and size, so that the objects are distributed equally over VIS 12. The CiR processor 23 also establishes a fixed interaction coordinate centred within each object. The HiF processor 22 establishes a virtual pointer 43 and virtual objects 53.i in the feedback buffer 13, and calculates and updates the size and position of the feedback objects 53.i to maintain the equal distribution of objects in the feedback buffer 13. The virtual pointer 43 and virtual objects 53.i are mapped isomorphically to a visual pointer 44 and visual objects 54.i in the visual display feedback space 14.
Figure 18.1 shows a case where no pointer object is present in 10. The isomorphic transformation does not establish a pointer coordinate in 11 and the HiC processor 21 does not establish a focal point in VIS 12. The CiR processor 23 establishes 16 virtual objects 52.i, where 1 ≤ i ≤ 16, each with a fixed interaction coordinate, location and size, so that the virtual objects are distributed equally over VIS 12. The HiF processor 22 assigns the size and position of the feedback objects 53.i to maintain the equal distribution of objects in the feedback buffer 13. The feedback objects 53.i are mapped isomorphically to visual objects 54.i in the visual display feedback space 14.
On the introduction of a pointer object 40 in control space 10 as shown in Figure 18.2, a focal point 42 and virtual objects 52.i are established through the process described above. The HiF processor 22 assigns the size and position of the virtual objects 53.i to maintain the equal distribution of objects in the feedback buffer 13, but if the focal point 42 falls within the bounds of a virtual object, thereby selecting the virtual object, the HiF processor will emphasise the selected virtual object's corresponding feedback object in the feedback buffer 13 and de-emphasise all other virtual objects' corresponding feedback objects. Figure 18.2 demonstrates a case where the focal point 42 falls within the bounds of virtual object 52.16. The corresponding feedback object 53.16 will be emphasised by increasing its size slightly, while all other feedback objects 53.1 to 53.15 will be de-emphasised by increasing their degree of transparency. The feedback objects 53.i are mapped isomorphically to visual objects 54.i in the visual display feedback space 14.
The CiC processor 24 continuously checks whether the focal point 42 falls within the bounds of one of the virtual objects 52.i. If the focal point stays within the bounds of the same virtual object for longer than a short time period t_d, a command to prepare additional objects and data is sent to the computer. The CiR and CiRa processors process the additional data and object information to determine if some virtual objects should no longer be present in VIS 12 and/or if additional objects should be introduced in VIS 12. Figure 18.3 shows a case where the focal point 42 stayed within the bounds of virtual object 52.16 for longer than t_d seconds. In this case, virtual objects 52.1 to 52.15 will no longer be introduced in VIS 12, while new secondary objects 52.16.j, where 1 ≤ j ≤ 3, with virtual reference point 62.1, located on virtual object 52.16's virtual interaction coordinate, are introduced in VIS 12 at a constant radius r_d from virtual reference point 62.1, and at fixed angles θ_j. Tertiary objects 52.16.j.1, representing the virtual objects for each secondary virtual object, along with a second virtual reference point 62.2, located in the top left corner, are also introduced in VIS 12. The Ip processor 25 calculates, based on the geometry and topology of VIS 12:
• a vector r_1P between reference point 62.1 and the focal point 42,

• a vector r_2P between reference point 62.2 and the focal point 42,

• a set of vectors r_j between reference point 62.1 and the interaction coordinates of the secondary virtual objects 52.16.j, and

• a set of vectors r_Pj that are the orthogonal projections of vector r_1P onto the vectors r_j.
The Ip processor continuously updates vectors r_1P, r_2P and r_Pj whenever the position of the focal point 42 changes. The HiF processor 22 maps the focal point 42 and the remaining primary virtual objects 52.i as before and isomorphically maps virtual reference point 62.1 to the feedback buffer. It then uses the projection vectors r_Pj to perform a function or an algorithm to establish the size and location of the secondary feedback objects 53.16.j in the virtual feedback buffer 13. Such a function or algorithm may be:
• Isomorphically map an object's size to its representation in VIS 12.

• Objects maintain their angular θ_j coordinates.

• Objects obtain a new distance r_dj from feedback reference point 63.1 for each feedback object 53.16.j using, for example, a contraction function (Equation 106.1, given in the original as a figure), where c is a free parameter that controls contraction linearly, and q is a free parameter that controls contraction exponentially.
The HiF processor 22 also uses r_dj to determine if a tertiary virtual object should be mapped to feedback buffer 13 and what the object's size should be. Such a function or algorithm may be the following (an illustrative sketch is given after the list):

• Find the largest r_dj and make the corresponding tertiary object 54.16.j.1 visible, then hide all other tertiary objects.

• Increase the size of the visible tertiary object 54.16.j.1 in proportion to the value of r_dj.

• Keep tertiary objects anchored to reference point 62.2.
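As an illustrative sketch of the tertiary-object rules above, the following assumes a base size and a proportionality constant; both are arbitrary values chosen for the sketch.

```python
def tertiary_feedback(r_dj, base_size=100.0, scale=1.0):
    """Sketch: only the tertiary object belonging to the secondary object with the
    largest r_dj is made visible, and its size grows in proportion to r_dj."""
    largest = max(r_dj, key=r_dj.get)
    return {j: {"visible": j == largest,
                "size": base_size + scale * r_dj[j] if j == largest else 0.0}
            for j in r_dj}

# Example: secondary object 52.16.3 currently has the largest contracted distance r_dj.
print(tertiary_feedback({"52.16.1": 30.0, "52.16.2": 55.0, "52.16.3": 80.0}))
```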
In the current case, the application of the algorithm and functions implemented by the HiF processor 22, as described above, leads to the arrangement of the visual pointer 44 and visual objects 54.16, 54.16.j and 54.16.j.1 in the visual display feedback space 14 as shown in Figure 18.3. With the focal point located at the same position as virtual reference point 62.1, the secondary visual objects 54.16.j are placed a constant radius r_d away from feedback reference point 63.1 and at fixed angles θ_j, while no tertiary visual objects 54.16.j.1 are visible.
Figure 18.4 shows a displaced pointer object 40 in control space 10. The position of focal point 42 is updated, while virtual objects 52.i and 52.i.j are established, and vectors r_1P, r_2P and r_Pj are calculated as before. The application of the algorithm and functions implemented by the HiF processor 22 as described above, leads to the arrangement of visual objects 54.16, 54.16.j and 54.16.3.1 in the visual display feedback space 14 as shown in Figure 18.4. Visual object 54.16.1 hardly moved, visual object 54.16.2 moved closer to visual object 54.16, and visual object 54.16.3 moved even closer than visual object 54.16.2 to visual object 54.16. Tertiary visual object 54.16.3.1 is visible and becomes larger, while all other tertiary visual objects are not visible.
Figure 18.5 shows a further displacement of pointer object 40 in control space 10, so that the focal point crossed secondary virtual object 52.16.3 and then continued on towards tertiary virtual object 52.16.3.1. The position of focal point 42 and all calculated values are updated. If a secondary virtual object 52.16.j is selected, in this case using crossing-based selection, the CiRa 33 adapts the CiR processor 23 to now only load the previously selected primary virtual object, the currently selected secondary virtual object and its corresponding tertiary virtual object. In this case, only primary virtual object 52.16, secondary virtual object 52.16.3 and tertiary virtual object 52.16.3.1 are loaded. The HiF processor 22 may now change so that:
• no primary virtual objects 52.i are mapped to feedback buffer 13,

• no secondary virtual objects 52.i.j are mapped to feedback buffer 13,

• the selected secondary virtual object's tertiary virtual object takes over the available space in feedback buffer 13, and

• the selected secondary virtual object's tertiary virtual object further adjusts its position so that if the focal point 42 moves towards the virtual reference point 62.2, the tertiary virtual object moves upwards, while if the focal point 42 moves away from virtual reference point 62.2, the tertiary virtual object moves downwards.

The application of the algorithm and functions implemented by the HiF processor 22 as described above, leads to the arrangement of visual objects 54.16, 54.16.3 and 54.16.3.1 in the visual display feedback space 14 as shown in Figure 18.5. Visual objects 54.16 and 54.16.j are no longer visible and visual object 54.16.3.1 expanded to take up the available visual feedback buffer space.
Figure 18.6 shows a further upward displacement of pointer object 40 in control space 10. The position of focal point 42 and all calculated values are updated. The application of the algorithm and functions implemented by the HiF processor 22 as described above, leads to the arrangement of visual object 54.16.3.1 in the visual display feedback space 14 as shown in Figure 18.6. Visual object 54.16.3.1 moved downwards, so that more of the object is shown, in response to the focal point moving closer to virtual reference point 62.2 in VIS 12.

Claims

1. A method for human-computer interaction (HCI) on a graphical user interface (GUI), which includes:
· tracking the position and/or movement of a user's body or part of it relative to and/or with an input device in a control space;
• facilitating human-computer interaction by means of an interaction engine, which includes the steps of
- establishing a virtual interaction space (vIS);
- establishing and referencing one or more virtual objects with respect to the interaction space;
- establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space;
- applying one or more interaction functions or algorithms to determine the interaction between one or more focal points and the virtual objects in the interaction space, and/or to determine one or more commands to be executed; and
- applying a feedback function or algorithm to determine what content of the interaction space is to be presented to the user as feedback, and in which way the content is to be displayed; and
• providing feedback to the user in a sensory feedback space.
2. A method as claimed in Claim 1, wherein the step of establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space is effected by a processor that executes one or more Control functions or algorithms, named a Human interaction Control (HiC) processor.
3. A method as claimed in Claim 2, wherein the HiC processor takes user input data from the control space to give effect to the reference of the focal point in the interaction space.
4. A method as claimed in Claim 3, wherein the HiC processor takes other user input data to be used as a variable by an interaction function or to change the characteristics of the focal point.
5. A method as claimed in any one of Claims 1 to 4, wherein an interaction function, which determines the interaction of the focal point and/or objects in the interaction space, and possibly the effect of a field in the interaction space on the objects, is executed by an Interaction (Ip) processor.
6. A method as claimed in Claim 5, wherein interaction between the focal point and the objects in the interaction space is nonlinear.
7. A method as claimed in Claim 5 or Claim 6, wherein the interaction function is configured for navigation between objects to allow navigation through the space between objects.
8. A method as claimed in any one of Claims 5 to 7, wherein the interaction function is specified so that objects in the interaction space change their state or status in relation to a relative position of a focal point.
9. A method as claimed in any one of Claims 5 to 8, wherein the interaction function which determines the interaction between the focal point and the objects in the interaction space is specified so that the interaction of the focal point with the objects is in the form of interacting with all the objects or a predetermined collection of objects according to a degree of selection and/or a degree of interaction.
10. A method as claimed in any one of Claims 1 to 9, wherein the feedback function is executed by a Human interaction Feedback (HiF) processor.
11. A method as claimed in Claim 10, wherein the feedback function is adapted to include a scaling function to determine the number of objects or the collection of objects in the interaction space to be displayed.
12. A method as claimed in any one of Claims 1 to 11 , wherein a Response function determines the selection and use of data stored in memory to establish and compose the virtual interaction space and/or objects in it and is executed by a Computer interaction Response (CiR) processor.
13. A method as claimed in any one of Claims 1 to 12, wherein a Command function, which determines the data to be stored in memory and/or the commands to be executed, is executed by the Computer interaction Command (CiC) processor.
14. A method as claimed in any one of Claims 2 to 13, wherein a Human interaction Control adaptor (HiCa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the HiC processor.
15. A method as claimed in Claim 14, wherein the HiCa changes the Control function to determine or define the position, size or functionality of the control space in relation to the position of the focal point in the interaction space and/or in relation to the position or dimensions of objects in the interaction space.
16. A method as claimed in any one of Claims 12 to 15, wherein a Computer interaction Response adaptor (CiRa) uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the CiR processor.
17. A method as claimed in any one of Claims 10 to 16, wherein a Human interaction Feedback adaptor (HiFa), uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the HiF processor.
18. A method as claimed in any one of Claims 10 to 17, wherein a Computer interaction Command adaptor (CiCa) uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the CiC processor.
19. A method as claimed in any one of Claims 5 to 18, wherein an Interaction Processor adaptor (Ipa) uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the Ip processor.
20. A method as claimed in any one of Claims 1 to 19, wherein there is at least some overlap between any one or more of the interaction space, control space and feedback space, which can conceptually be separated.
21. A method as claimed in Claim 20, wherein the method includes providing for the interaction and feedback spaces to overlap, and the step of establishing two separate states for every object, namely an interaction state and a display state.
22. A method as claimed in Claim 20, wherein the method includes providing for the interaction and feedback spaces to overlap and establishing a separate display position for each object based on interaction with a focal point or tracked pointer.
23. A method as claimed in Claim 20, wherein the method includes providing for the interaction and feedback spaces to overlap and to use the relative distances between objects and one or more focal points to establish object positions and/or states, and using time derivatives.
24. A method as claimed in any one of Claims 1 to 23, wherein the interaction space is provided with more than one dimension.
25. A method as claimed in any one of Claims 1 to 24, which includes the step of establishing a coordinate or reference system in the interaction space.
26. A method as claimed in Claim 25, wherein the objects in the interaction space are virtual data objects and each object is referenced at a point in time in terms of a coordinate system, and each object is configured with a state, representing any one or more of its coordinates, function and behaviour.
27. A method as claimed in Claim 25 or Claim 26, wherein the focal point is provided with a state, representing any one or more of its coordinates, function and behaviour.
28. A method as claimed in Claim 26 or Claim 27, wherein the object state of objects in the interaction space is changed in response to a change in the state of a focal point and/or the object state of other objects in the interaction space.
29. A method as claimed in any one of Claims 1 to 28, wherein a scalar or vector field is defined in the interaction space.
30. A method as claimed in any one of Claims 1 to 29, which method includes the step of applying one or more mathematical functions or algorithms to determine distant interaction of a focal point and the virtual objects in the interaction space, which interaction at/from a distance includes absence of contact.
31. A method as claimed in any one of Claims 1 to 30, which method includes the step of applying a non-isomorphic function or algorithm that determines the mapping of object positions from interaction space to a display space.
32. A method as claimed in any one of Claims 1 to 31 , which method includes the step of applying a non-isomorphic function or algorithm that uses focal point positions and object point positions to determine the mapping of object sizes from interaction space to a display space.
33. A method as claimed in any one of Claims 1 to 32, which method includes the step of applying a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from virtual interaction space to a display space.
34. A method as claimed in any one of Claims 1 to 33, which method includes the step of applying a non-isomorphic function or algorithm that determines the mapping of object state from interaction space to a display space.
35. A method as claimed in any one of Claims 1 to 34, which method includes the step of applying a non-isomorphic function or algorithm that uses a focal point position and object positions in interaction space to update object positions in the interaction space.
36. A method as claimed in any one of Claims 1 to 35, which method includes the step of applying a non-isomorphic function or algorithm that uses a focal point position and object positions in virtual interaction space to update object sizes in the virtual interaction space.
37. A method as claimed in any one of Claims 1 to 35, which method includes the step of applying a non-isomorphic function or algorithm that uses a focal point position and object positions in interaction space to update object positions and sizes in the interaction space.
38. A method as claimed in any one of Claims 1 to 37, which method includes the step of applying a non-isomorphic function or algorithm that uses a focal point position and object positions in interaction space to update object states in the interaction space.
39. A method as claimed in any one of Claims 1 to 38, which method includes the step of applying a non-isomorphic function or algorithm that uses a focal point position and object positions to determine the mapping of object positions from interaction space to a display space as well as to update object positions in the interaction space.
40. A method as claimed in any one of Claims 1 to 39, which method includes the step of applying a non-isomorphic function or algorithm that determines mapping of object sizes from interaction space to a feedback space.
41. A method as claimed in any one of Claims 1 to 40, which method includes the step of applying a non-isomorphic function or algorithm that determines the mapping of object positions and sizes from interaction space to a feedback space.
42. A method as claimed in any one of Claims 1 to 41, which method includes the step of applying a non-isomorphic function or algorithm that determines the mapping of an object state from interaction space to a feedback space.
43. A method as claimed in any one of Claims 1 to 42, which method includes using the position of a focal point in relation to the position of the boundary of one or more objects in the interaction space to effect crossing-based interaction.
44. A method as claimed in any one of Claims 1 to 43, which method includes the calculation and use of time derivatives of the user input data.
45. A method as claimed in any one of Claims 29 to 44, which method includes dynamically changing the properties of the scalar and/or vector fields in the interaction space, based on the position and/or state of one or more objects in the interaction space.
46. A method as claimed in any one of Claims 1 to 45, which method includes dynamically changing a geometry and/or topology of the interaction space itself, based on the position and/or properties of one or more objects in the interaction space.
47. A method as claimed in any one of Claims 1 to 46, wherein non-linear, continuous and dynamic interaction is established between the focal point and objects in the interaction space, which occurs according to an algorithm of which the focal point position in a control space is a function.
48. An engine for human-computer interaction on a GUI, which engine includes:
a means for establishing a virtual interaction space;
a means for establishing and referencing one or more virtual objects with respect to the interaction space;
a means for establishing and referencing one or more focal points in an interaction space in relation to the tracked position and/or movement in a control space;
a means for calculating an interaction function or algorithm to determine the interaction between one or more focal points and the virtual objects in the interaction space, and/or to determine one or more commands to be executed; and
a means for calculating a feedback function or algorithm to determine what content of the interaction space is to be presented to the user as feedback in a feedback space, and in which way the content is to be presented.
49. An engine as claimed in Claim 48, wherein the means for establishing and referencing one or more focal points in the interaction space in relation to the tracked position and/or movement in the control space is in the form of a processor that executes one or more Control functions or algorithms, named a Human interaction Control (HiC) processor.
50. An engine as claimed in Claim 49, wherein the HiC processor takes user input data from the control space to give effect to the reference of the focal point in the interaction space.
51. An engine as claimed in Claim 50, wherein the HiC processor takes other user input data to be used as a variable by a function to interact with objects in the interaction space or to change the characteristics of the focal point.
52. An engine as claimed in any one of Claims 48 to 51, which includes an Interaction (Ip) processor wherein the interaction function is executed, which function determines the interaction of the focal point and/or objects in the interaction space and, possibly, the effect of a field in the interaction space on the objects.
53. An engine as claimed in Claim 52, wherein the interaction function, which determines the interaction between the focal point and the objects in the interaction space, is configured for navigation between objects so as to allow navigation through the space between the objects.
54. An engine as claimed in any one of Claims 48 to 53, which includes a Human interaction Feedback (HiF) processor wherein a Feedback function is executed.
55. An engine as claimed in any one of Claims 48 to 54, which includes a Computer interaction Response (CiR) processor wherein a Response function is executed which determines the selection and use of data stored in memory to establish and compose the virtual interaction space and/or objects in it.
56. An engine as claimed in any one of Claims 48 to 55, which includes a Computer interaction Command (CiC) processor wherein a Command function, which determines the data to be stored in memory and/or the commands to be executed, is executed.
57. An engine as claimed in any one of Claims 49 to 56, which includes a Human interaction Control adaptor (HiCa), which uses information from the virtual interaction space (vIS) to dynamically redefine the functioning of the HiC processor.
58. An engine as claimed in Claim 57, wherein the HiCa changes the Control function to determine or define the position, size or functionality of the control space in relation to the position of the focal point in the interaction space and/or in relation to the position or dimensions of objects in the interaction space.
59. An engine as claimed in any one of Claims 55 to 58, which includes a Computer interaction Response adaptor (CiRa), which uses information from the interaction space (vIS) to dynamically redefine the functioning of the CiR processor.
60. An engine as claimed in any one of Claims 54 to 59, which includes a Human interaction Feedback adaptor (HiFa), which uses information from the interaction space (vIS) to dynamically redefine the functioning of the HiF processor.
61. An engine as claimed in any one of Claims 56 to 60, which includes a Computer interaction Command adaptor (CiCa), which uses information from the interaction space (vIS) to dynamically redefine the functioning of the CiC processor.
62. An engine as claimed in any one of Claims 52 to 61, which includes an Interaction Processor adaptor (Ipa), which uses information from the interaction space (vIS) to dynamically redefine the functioning of the Ip processor.
63. A method for human-computer interaction (HCI) on a graphical user interface (GUI) substantially as described herein with reference to accompanying drawings.
64. An engine for human-computer interaction on a GUI substantially as described herein with reference to accompanying drawings.
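The claims above describe the method and engine purely in functional terms. The following Python sketches are non-limiting illustrations only: none of the code is taken from the specification, and every identifier, data structure and numeric constant is an assumption made for the purpose of illustration.

The first sketch illustrates the kind of data model contemplated by Claims 24 to 28: a multi-dimensional interaction space with a coordinate system, virtual objects each carrying a state (coordinates, size, behaviour), a focal point that also carries a state, and object states that react to a change in the focal point.

```python
# Illustrative sketch only (assumed names throughout): a minimal data model for
# a virtual interaction space in the spirit of Claims 24 to 28.
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vector = Tuple[float, float]  # a two-dimensional example; Claim 24 allows more dimensions


@dataclass
class ObjectState:
    position: Vector          # coordinates in the interaction space (Claim 26)
    size: float = 1.0
    behaviour: str = "idle"   # free-form label standing in for function/behaviour


@dataclass
class FocalPoint:
    position: Vector = (0.0, 0.0)
    state: str = "tracking"   # the focal point also has a state (Claim 27)


@dataclass
class InteractionSpace:
    objects: Dict[str, ObjectState] = field(default_factory=dict)
    focal_points: List[FocalPoint] = field(default_factory=list)

    def add_object(self, name: str, position: Vector, size: float = 1.0) -> None:
        self.objects[name] = ObjectState(position=position, size=size)

    def on_focal_change(self, focal: FocalPoint, near_radius: float = 5.0) -> None:
        """Claim 28 style: object states change in response to a focal point's state."""
        for obj in self.objects.values():
            d = math.hypot(obj.position[0] - focal.position[0],
                           obj.position[1] - focal.position[1])
            obj.behaviour = "active" if d < near_radius else "idle"
```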
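The next sketch illustrates Claims 29, 30 and 45: a scalar field defined over the interaction space and an interaction-at-a-distance rule in which objects respond to a focal point without being touched. The inverse-distance kernel and the attraction rule are assumptions, not the claimed functions.

```python
# Illustrative sketch only: a scalar field over the interaction space (Claim 29)
# and distant, contact-free interaction with a focal point (Claim 30).
import math
from typing import Tuple

Vector = Tuple[float, float]


def distance(a: Vector, b: Vector) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])


def scalar_field(point: Vector, focal: Vector, strength: float = 1.0) -> float:
    """Field value at `point` induced by a focal point; decays with distance."""
    return strength / (1.0 + distance(point, focal))


def attract(obj_pos: Vector, focal: Vector, gain: float = 0.1) -> Vector:
    """Move an object slightly toward the focal point, weighted by the field value."""
    w = gain * scalar_field(obj_pos, focal)
    return (obj_pos[0] + w * (focal[0] - obj_pos[0]),
            obj_pos[1] + w * (focal[1] - obj_pos[1]))
```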
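The following sketch illustrates the separation of interaction state and display state (Claim 21) and a non-isomorphic, focal-point-dependent mapping from interaction space to display space (Claims 31 to 42). The fisheye-style magnification curve is an assumption chosen only to make the idea concrete.

```python
# Illustrative sketch only: each object keeps its interaction-space position,
# while its displayed position and size depend on its distance to the focal
# point (a non-isomorphic mapping in the spirit of Claims 31 to 42).
import math
from typing import Tuple

Vector = Tuple[float, float]


def display_state(obj_pos: Vector, obj_size: float, focal: Vector,
                  magnification: float = 2.0, radius: float = 100.0) -> Tuple[Vector, float]:
    """Return (display_position, display_size) for one object."""
    dx, dy = obj_pos[0] - focal[0], obj_pos[1] - focal[1]
    d = math.hypot(dx, dy)
    # Objects near the focal point are magnified and displaced outward;
    # distant objects are left essentially unchanged.
    m = 1.0 + (magnification - 1.0) * math.exp(-d / radius)
    return ((focal[0] + m * dx, focal[1] + m * dy), obj_size * m)
```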
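The next sketch illustrates crossing-based interaction (Claim 43), detected when the focal point's path crosses an object boundary between two input samples, together with a finite-difference velocity estimate as a time derivative of the user input (Claim 44). The circular boundary is an assumption.

```python
# Illustrative sketch only: boundary-crossing detection and a first time
# derivative of the tracked input, per Claims 43 and 44.
import math
from typing import Tuple

Vector = Tuple[float, float]


def inside(point: Vector, centre: Vector, radius: float) -> bool:
    return math.hypot(point[0] - centre[0], point[1] - centre[1]) <= radius


def crossed(prev: Vector, curr: Vector, centre: Vector, radius: float) -> bool:
    """True when the focal point crossed the object boundary between two samples."""
    return inside(prev, centre, radius) != inside(curr, centre, radius)


def velocity(prev: Vector, curr: Vector, dt: float) -> Vector:
    """Finite-difference estimate of the input's first time derivative."""
    return ((curr[0] - prev[0]) / dt, (curr[1] - prev[1]) / dt)
```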
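Finally, a sketch of one pass through the engine loop of Claims 48 to 62: a HiC processor maps a control-space sample to a focal point, an Ip processor updates the interaction space, and a HiF processor prepares feedback. Every function body here is an assumption; only the division of responsibilities follows the claims, and the adaptors (HiCa, Ipa, HiFa, CiRa, CiCa) are omitted for brevity.

```python
# Illustrative sketch only: the processor roles named in Claims 48 to 62,
# with entirely assumed function bodies.
from typing import Dict, List, Tuple

Vector = Tuple[float, float]
Space = Dict[str, Vector]  # object name -> interaction-space position


def hic(control_sample: Vector) -> Vector:
    """Control function: map a tracked control-space position to a focal point."""
    return control_sample  # identity mapping, purely illustrative


def ip(space: Space, focal: Vector) -> Space:
    """Interaction function: here, nudge every object slightly away from the focal point."""
    return {name: (p[0] + 0.05 * (p[0] - focal[0]),
                   p[1] + 0.05 * (p[1] - focal[1])) for name, p in space.items()}


def hif(space: Space, focal: Vector) -> List[str]:
    """Feedback function: decide what to present, here the nearest object's name."""
    nearest = min(space, key=lambda n: (space[n][0] - focal[0]) ** 2 +
                                       (space[n][1] - focal[1]) ** 2)
    return [f"highlight:{nearest}"]


def tick(space: Space, control_sample: Vector) -> Tuple[Space, List[str]]:
    focal = hic(control_sample)
    space = ip(space, focal)
    return space, hif(space, focal)


if __name__ == "__main__":
    objects = {"a": (10.0, 0.0), "b": (0.0, 20.0)}
    objects, feedback = tick(objects, (1.0, 2.0))
    print(objects, feedback)
```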
PCT/ZA2013/000042 2012-06-15 2013-06-13 Method and mechanism for human computer interaction WO2013188893A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/407,917 US20150169156A1 (en) 2012-06-15 2013-06-13 Method and Mechanism for Human Computer Interaction
AU2013273974A AU2013273974A1 (en) 2012-06-15 2013-06-13 Method and mechanism for human computer interaction
EP13753509.2A EP2862043A2 (en) 2012-06-15 2013-06-13 Method and mechanism for human computer interaction
ZA2015/00171A ZA201500171B (en) 2012-06-15 2015-01-12 Method and mechanism for human computer interaction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ZA2012/04407 2012-06-15
ZA201204407 2012-06-15

Publications (2)

Publication Number Publication Date
WO2013188893A2 true WO2013188893A2 (en) 2013-12-19
WO2013188893A3 WO2013188893A3 (en) 2014-04-10

Family

ID=49054946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ZA2013/000042 WO2013188893A2 (en) 2012-06-15 2013-06-13 Method and mechanism for human computer interaction

Country Status (5)

Country Link
US (1) US20150169156A1 (en)
EP (1) EP2862043A2 (en)
AU (1) AU2013273974A1 (en)
WO (1) WO2013188893A2 (en)
ZA (1) ZA201500171B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140375572A1 (en) * 2013-06-20 2014-12-25 Microsoft Corporation Parametric motion curves and manipulable content
US9986225B2 (en) * 2014-02-14 2018-05-29 Autodesk, Inc. Techniques for cut-away stereo content in a stereoscopic display
US10534866B2 (en) * 2015-12-21 2020-01-14 International Business Machines Corporation Intelligent persona agents for design
CN106681516B (en) * 2017-02-27 2024-02-06 盛世光影(北京)科技有限公司 Natural man-machine interaction system based on virtual reality
CN107728901B (en) * 2017-10-24 2020-07-24 Oppo广东移动通信有限公司 Interface display method and device and terminal
CN113703767A (en) * 2021-09-02 2021-11-26 北方工业大学 Method and device for designing human-computer interaction interface of engineering machinery product
DE102021125204A1 (en) 2021-09-29 2023-03-30 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Procedure and system for cooperative machine calibration with KIAgent using a human-machine interface
CN117215415B (en) * 2023-11-07 2024-01-26 山东经鼎智能科技有限公司 Multi-user collaborative virtual interaction method based on MR recording and broadcasting technology


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285374B1 (en) * 1998-04-06 2001-09-04 Microsoft Corporation Blunt input device cursor
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US8230367B2 (en) * 2007-09-14 2012-07-24 Intellectual Ventures Holding 67 Llc Gesture-based user interactions with status indicators for acceptable inputs in volumetric zones
JP5160457B2 (en) * 2009-01-19 2013-03-13 ルネサスエレクトロニクス株式会社 Controller driver, display device and control method
JP2010170388A (en) * 2009-01-23 2010-08-05 Sony Corp Input device and method, information processing apparatus and method, information processing system, and program
US8009022B2 (en) * 2009-05-29 2011-08-30 Microsoft Corporation Systems and methods for immersive interaction with virtual objects
US20110107216A1 (en) * 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US20130057553A1 (en) * 2011-09-02 2013-03-07 DigitalOptics Corporation Europe Limited Smart Display with Dynamic Font Management

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6073036A (en) 1997-04-28 2000-06-06 Nokia Mobile Phones Limited Mobile station with touch input having automatic symbol magnification function
US7434177B1 (en) 1999-12-20 2008-10-07 Apple Inc. User interface for providing consolidation and access

Non-Patent Citations (15)

* Cited by examiner, † Cited by third party
Title
BEDERSON, BB; B SHNEIDERMAN: "The Craft of Information Visualization - Readings and Reflections", 2003, MORGAN KAUFMANN
BENNETT KB; JM FLACH: "Display and Interface Design: Subtle Science, Exact Art", 2011, CRC PRESS
BLANCH R; GUIARD Y; BEAUDOUIN-LAFON M: "Semantic pointing: Improving target acquisition with control-display ratio adaptation", PROC. CHI'04, 2004, pages 519 - 526
CARD, SK; TP MORAN; A NEWELL: "The Psychology of Human-Computer Interaction", 1983, LAWRENCE ERLBAUM ASSOCIATES
COOMANS, MKD; HJP TIMMERMANS: "Towards a Taxonomy of Virtual Reality User Interfaces", PROC. INTL. CONF. ON INFORMATION VISUALISATION (IV97), 27 August 1997 (1997-08-27)
DIX, A; J FINLAY; GD ABOWD; R BEALE: "Human-Computer Interaction", 2004, PEARSON EDUCATION
FITTS, PAUL M: "The information capacity of the human motor system in controlling the amplitude of movement", JOURNAL OF EXPERIMENTAL PSYCHOLOGY, vol. 47, no. 6, June 1954 (1954-06-01), pages 381 - 391
HEWETT, TT; BAECKER; CARD; CAREY; GASEN; MANTEI; PERLMAN; STRONG; VERPLANK: "ACM SIGCHI Curricula for Human-Computer Interaction", ACM SIGCHI, 1992, Retrieved from the Internet <URL:http://old.sigchi.org/cdg>
HICK, WE: "On the rate of gain of information", QUART. J. EXP. PSYCHOL, vol. 4, 1952, pages 11 - 26
JOURNAL OF EXPERIMENTAL PSYCHOLOGY: GENERAL, vol. 121, no. 3, 1992, pages 262 - 269
MACKENZIE, IS: "Fitts' law as a research and design tool in human-computer interaction", HUMAN-COMPUTER INTERACTION, vol. 7, 1992, pages 91 - 139
NORMAN, DA: "The design of everyday things", 1988, BASIC BOOKS
SEOW, SC: "Information Theoretic Models of HCI: A Comparison of the Hick-Hyman Law and Fitts' Law", HUMAN-COMPUTER INTERACTION, vol. 20, 2005, pages 315 - 352
SHANNON C; WEAVER W: "The mathematical theory of communication", 1949, UNIV. OF ILLINOIS PRESS
ZHAI S; CONVERSY S; BEAUDOUIN-LAFON M; GUIARD Y: "Human on-line response to target expansion", PROC CHI 2003, 2003, pages 177 - 184

Also Published As

Publication number Publication date
US20150169156A1 (en) 2015-06-18
ZA201500171B (en) 2015-12-23
EP2862043A2 (en) 2015-04-22
WO2013188893A3 (en) 2014-04-10
AU2013273974A1 (en) 2015-02-05

Similar Documents

Publication Publication Date Title
US20150169156A1 (en) Method and Mechanism for Human Computer Interaction
Cabral et al. On the usability of gesture interfaces in virtual reality environments
Herndon et al. The challenges of 3D interaction: a CHI'94 workshop
CA2847602C (en) Graphical user interface, computing device, and method for operating the same
WO2017054004A1 (en) Systems and methods for data visualization using tree-dimensional displays
Ramani A gesture-free geometric approach for mid-air expression of design intent in 3D virtual pottery
Schirski et al. Vista flowlib-framework for interactive visualization and exploration of unsteady flows in virtual environments
CN112114663B (en) Implementation method of virtual reality software framework suitable for visual and tactile fusion feedback
Kulik Building on realism and magic for designing 3D interaction techniques
Rieger et al. Conquering the Mobile Device Jungle: Towards a Taxonomy for App-enabled Devices.
Mihelj et al. Interaction with a virtual environment
Faeth et al. Combining 3-D geovisualization with force feedback driven user interaction
Kerdvibulvech A review of augmented reality-based human-computer interaction applications of gesture-based interaction
Nishino et al. A virtual environment for modeling 3D objects through spatial interaction
Gîrbacia et al. Design review of CAD models using a NUI leap motion sensor
Capece et al. A preliminary investigation on a multimodal controller and freehand based interaction in virtual reality
Preez et al. Human-computer interaction on touch screen tablets for highly interactive computational simulations
Raya et al. Haptic navigation along filiform neural structures
Pramudwiatmoko et al. A high-performance haptic rendering system for virtual reality molecular modeling
Herndon et al. Workshop on the challenges of 3D interaction
Li et al. Object-in-hand feature displacement with physically-based deformation
Cao et al. Research and Implementation of virtual pottery
Donchyts et al. Benefits of the use of natural user interfaces in water simulations
Palleis et al. Novel indirect touch input techniques applied to finger-forming 3d models
Faeth Expressive cutting, deforming, and painting of three-dimensional digital shapes through asymmetric bimanual haptic manipulation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13753509

Country of ref document: EP

Kind code of ref document: A2

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 14407917

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2013753509

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2013273974

Country of ref document: AU

Date of ref document: 20130613

Kind code of ref document: A
