WO2013029083A1 - Graphics communication apparatus - Google Patents

Graphics communication apparatus

Info

Publication number
WO2013029083A1
WO2013029083A1 PCT/AU2012/000422
Authority
WO
WIPO (PCT)
Prior art keywords
finger
audio
computer
user
touch
Prior art date
Application number
PCT/AU2012/000422
Other languages
French (fr)
Inventor
Cagatay GONCU
Kimbal MARRIOTT
A. John HURST
Original Assignee
Monash University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2011903581A external-priority patent/AU2011903581A0/en
Application filed by Monash University filed Critical Monash University
Publication of WO2013029083A1 publication Critical patent/WO2013029083A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention relates to a graphics communication apparatus.
  • the apparatus is able to communicate graphics to a visually impaired person using vibration, audio and an interactive touch computer device, and the audio may include speech.
  • the apparatus may be used for applications, such as gaming applications, where the apparatus discriminates between or determines which fingers of a user's hand are touching the apparatus.
  • Graphics and other inherently two dimensional content are used regularly in written and electronic communication. They include images, diagrams, tables, maps, mathematics, plots and charts etc. They are used in media, workplace communication and in educational material at all levels of schooling. However, if you are blind or suffer severe vision impairment your access to such graphics is severely limited. This constrains enjoyment of popular media, particularly on the web, restricts effective participation in the workplace and limits educational opportunities.
  • a computer apparatus that can be used effectively by people who are blind or visually impaired to read an accessible version of a wide range of graphics and two dimensional content.
  • the apparatus should be practical from an operation and cost perspective, and also support interactive and active use of graphics similar to the kind of interactive use sighted users currently enjoy.
  • an alternative apparatus that may be used for games involving interacting with a touch sensitive display.
  • Embodiments of the present invention provide a graphics communication apparatus, including:
  • a computer with a touch sensitive visual display; and
  • a haptic feedback device connected to the computer and including vibration actuators positioned adjacent a user's fingers to indicate the relative position of a user's finger to a graphic element presented on the display.
  • the computer also generates and presents audio to assist with navigation of the display, and is able to generate speech to provide responses to guide navigation.
  • the computer may be a tablet device, and the haptic device may be a glove including vibrating motors connected to the computer device.
  • Embodiments of the present invention also provide a graphics communication computer, including:
  • a touch sensitive visual display; and
  • an audio interface module to generate and present audio for a visually impaired person to indicate the position of at least one graphic element presented on the display.
  • Embodiments of the present invention also provide a computer, including: a touch sensitive display; and
  • a touch controller configured to assign at least one touch point to a finger of a hand of a user.
  • Figure 1 is a diagram of an embodiment of a graphics communication apparatus in use
  • Figure 2 is a block diagram of a computer of the apparatus
  • Figures 3 and 4 are diagrams of graphics generated on a visual display of the apparatus
  • Figure 5 is a diagram of a table generated on the visual display
  • Figure 6 is a diagram of a floor plan generated on the display of the apparatus
  • Figure 7 is a diagram of a line graph generated on the visual display
  • Figure 8 is a flow chart of a finger assignment process
  • Figure 9 is a diagram illustrating 3D sound generation
  • Figure 10 is a flow chart of a scanline mode process
  • Figures 11 and 12 illustrate 2D sound generation for two scanline modes
  • Figure 13 illustrates panning modes
  • Figure 14 illustrates zooming modes.
  • a graphics communication apparatus 100 referred to herein as "GraVVITAS" (for Graphics Viewer using Vibration, Interactive Touch, Audio and Speech) is a multi-modal presentation device.
  • GraVVITAS 100, shown in Figure 1, includes a touch sensitive tablet personal computer (PC) 202 shown in Figure 2. This tracks the position of the user's, i.e. reader's, fingers, allowing natural navigation like that with a tactile graphic.
  • Haptic feedback is provided by a device 218 comprising a pair of gloves with small vibrating motors of the kind used in mobile phones, such as the iPhone produced by Apple Inc. The motors are attached to the fingers of the reader and controlled by the tablet PC 202. This assists the user in determining the position and geometric properties of graphic elements.
  • the apparatus also provides audio feedback to help the user with navigation and to allow the user to query a graphic element in order to obtain non-geometric information about the element.
  • the touch sensitive tablet PC 202 is a 32 or 64 bit Intel architecture computer, such as those produced by Lenovo Corporation, IBM Corporation, or Apple Inc.
  • the data processes executed by the computer 202 are defined and controlled by computer program instruction code and data of software components or modules 250 stored on non-volatile (e.g. flash memory) storage 204 of the computer 202.
  • the processes performed by the modules 250 can, alternatively, be performed by firmware stored in read only memory (ROM) or at least in part by dedicated hardware circuits of the computer 202, such as application specific integrated circuits (ASICs) and/or field programmable gate arrays (FPGAs).
  • ROM read only memory
  • ASICs application specific integrated circuits
  • FPGAs field programmable gate arrays
  • the computer 202 includes random access memory (RAM) 206, at least one microprocessor 208, and external interfaces 210, 212, 214 that are all connected by a system bus 216.
  • the external interfaces include universal serial bus (USB) interfaces 210, a network interface connector (NIC) 212, and a display adapter 214.
  • USB interfaces 210 are connected to input/output devices, such as the haptic device 218.
  • the display adapter 214 is connected to the touch sensitive LCD display screen 222.
  • the NIC 212 enables the computer 202 to connect to the communications network 110.
  • the network 110 may include one or a combination of existing networks 130, such as a LAN, WAN, the PSTN, the Internet, mobile cellular telephone networks, etc.
  • the computer 202 includes an operating system (OS) 224, such as Microsoft Windows, Mac OSX or Linux.
  • the modules 250 all run on the OS 224, and include program code written using languages such as C, Ruby or C#.
  • the computer 202 tracks the position of the reader's fingers, and the computer 202 shown in Figure 1 is a Dell Latitude XT (http://www.dell.com) which is equipped with an N-trig DuoSense dual-mode digitizer (http://www.n-trig.com) that supports both pen and touch input using capacitive sensors.
  • Drivers 252 on the tablet PC 202 allow the device to detect and track at least four fingers on the touch screen 222. The user is allowed to use the index and middle finger of both the left and right hand.
  • the haptic feedback device 218 includes at least one low cost cotton data glove with vibrating motor actuators that apply mechanical activation to the fingers touching the screen 222 and supports multi-touch haptic feedback.
  • the small vibrating motors are attached to the fingers and controlled by the tablet PC 202 using an Arduino Diecimila board (http://arduino.cc) of the haptic device 218, which is connected to the USB port 210.
  • the motors may be coin vibrator motors connected to the Arduino board through its analog ports by copper wires, where the level of voltage applied to the motors varies their rotation speed and, accordingly, can be used to vary the haptic feedback.
  • the touch screen 222 tracks at least four fingers, and the glove has separately controlled motors for each tracked finger. The amount of vibration produced by the motors depends on the colour of the graphic element on the touch screen 222 under the finger, and if the finger is over empty space there is no vibration. To shield unwanted fingers, such as a fifth finger, the cotton glove is used to cover that finger.
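  • As an illustration of the control path described above, the following C# sketch shows how the tablet PC side might send per-finger vibration levels to the Arduino board over its USB serial connection. The two-byte (finger index, level) message format, port name and baud rate are assumptions for illustration; the patent does not specify the communication protocol.
      using System;
      using System.IO.Ports;

      class HapticDriver : IDisposable
      {
          private readonly SerialPort port;

          // The port name and baud rate are placeholders; the Arduino Diecimila
          // appears as a USB serial device whose name depends on the host system.
          public HapticDriver(string portName = "COM3", int baudRate = 9600)
          {
              port = new SerialPort(portName, baudRate);
              port.Open();
          }

          // Sends one assumed two-byte message per update: (finger index, level).
          // Level 0 stops the motor (finger over empty space); higher levels map
          // to higher analog output voltages and therefore faster motor rotation.
          public void SetVibration(byte fingerIndex, byte level)
          {
              port.Write(new byte[] { fingerIndex, level }, 0, 2);
          }

          public void Dispose() => port.Close();
      }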
  • the computer 202 stores in a database 256 the maximum and average vector difference between the finger stroke sequences on the touch screen 222.
  • the database 256 is maintained by a database module 254, such as MySQL.
  • a Bayesian data processing module 258 processes the stored vector difference data and determines the most probable feasible finger configuration, where a finger configuration is a mapping from each stroke sequence to a particular finger.
  • a configuration is infeasible if the mapping is physically impossible, such as assigning the index and middle finger of the same hand to strokes that were sometimes more than 10 cm apart.
  • Data on the area of the touch points, and the angle between them is also stored and processed by the module 258 to determine the correct finger configuration in real-time.
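  • A minimal sketch of the infeasibility test described above, assuming the maximum physical separation of each finger pair is held in a lookup table; the 10 cm limit for an index/middle pair of one hand comes from the text, while the data representations are illustrative only.
      using System;
      using System.Collections.Generic;

      enum Finger { LeftMiddle, LeftIndex, RightIndex, RightMiddle }

      static class FingerFeasibility
      {
          // A configuration maps each stroke sequence id to a finger. It is treated
          // as infeasible if two sequences assigned to fingers of the same hand were
          // ever observed further apart than that pair can physically be, e.g. more
          // than 10 cm for the index and middle finger of one hand. The limit table
          // is assumed to hold both orderings of each same-hand pair.
          public static bool IsInfeasible(
              IReadOnlyDictionary<int, Finger> configuration,
              Func<int, int, double> maxObservedSeparationCm,
              IReadOnlyDictionary<(Finger, Finger), double> physicalLimitCm)
          {
              var strokes = new List<int>(configuration.Keys);
              for (int i = 0; i < strokes.Count; i++)
                  for (int j = i + 1; j < strokes.Count; j++)
                  {
                      var pair = (configuration[strokes[i]], configuration[strokes[j]]);
                      if (physicalLimitCm.TryGetValue(pair, out double limit) &&
                          maxObservedSeparationCm(strokes[i], strokes[j]) > limit)
                          return true;   // physically impossible assignment
                  }
              return false;
          }
      }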
  • Haptic presentation of a graphic is difficult because the sequential movement of hands and fingers involved in perception and acquisition of information is slower and less parallel than vision. Also, because there is no haptic equivalent of peripheral vision, the position of previously encountered objects may need to be stored in memory. To alleviate this, the apparatus 100 provides audio feedback using a speaker output 224 in order to help the user with navigation and to obtain an overview of the graphic and its layout. The use of audio allows the user to obtain an overview without having to physically touch the elements.
  • the apparatus provides audio feedback when the viewer queries graphic elements on the display, by touching the elements with a predetermined finger gesture or speaking an audio command, into a microphone input 226, that is recognised by the modules 250.
  • While the speaker output 224 and the microphone input 226 may be provided by a sound card incorporated in the computer 202 and used to drive a headset incorporating headphones, an external sound card connected to the USB interface 210 can also be used.
  • a Sound Blaster X-Fi Surround 5 sound card by Creative Technology Limited can be used to generate 2D and 3D audio signals for a headset, such as the Tritton AX Pro headset by Mad Catz Inc. that has four individual speakers to provide surround audio.
  • the computer 202 displays graphic content specified in SVG (the W3C standard for Scalable Vector Graphics) on a displayed canvas which is generated by a display module 206 implemented using the Windows Presentation Foundation (WPF) of Microsoft Corporation.
  • WPF Windows Presentation Foundation
  • the metadata in each SVG file and associated with each shape is: the shape ID, the vibration level for the edges and audio volume level for the interior of the shape and for its boundary, the text string to be read out when the shape is queried, and the name of a (non-speech) audio file for generating the sound associated with the shape during navigation.
  • the SVG graphics are constructed using an SVG editor, such as Inkscape (http://www.inkscape.org), and the metadata for each shape can be added using an XML editor, such as Inkscape's internal XML editor.
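  • An illustrative SVG fragment showing how such per-shape metadata might be attached; the attribute names and namespace below are hypothetical, since the patent lists the metadata fields but not their encoding.
      <!-- Illustrative only: the grav: attribute names and namespace are hypothetical. -->
      <svg xmlns="http://www.w3.org/2000/svg"
           xmlns:grav="http://example.org/gravvitas">
        <rect id="summary" x="10" y="10" width="60" height="40" fill="red"
              grav:edge-vibration="3"
              grav:interior-volume="0.4"
              grav:boundary-volume="0.8"
              grav:query-text="Table of average distances run by three runners."
              grav:nav-audio="summary_tone.wav"/>
      </svg>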
  • the apparatus 100 was configured with motors attached respectively to two fingers, as shown in Figure 1 , being the left and right index fingers.
  • an audio feedback mode was evaluated as an alternative to haptic feedback. This mode was restricted to the use of one finger or two fingers on different hands.
  • In audio mode, if the user touches an object on the screen then they will hear a sound from headphones connected to the speaker output 224. If they use one finger they will hear a sound coming from both headphones, while if they use two fingers then they will hear a sound on the left/right headphone if their left/right finger is on an element.
  • the sounds associated with objects are short tones from different instruments played in a loop, and are generated using JFugue library files stored in the database 256. The evaluation, or usability study, related to determining the geometric properties (specifically position and shape) of graphic elements.
  • the first strategy was to find the corners of the shapes, and then to carefully trace the boundary of the object using one or two fingers.
  • the second strategy was to use a single finger to repeatedly perform a quick horizontal and/or vertical scan across the shape, moving the starting point of the finger between scans slightly in the converse direction to that of the scan. Scanning gives a different audio or haptic pattern for different shapes. For instance, when scanning a rectangle, the duration of a loud sound on an edge, a soft sound inside the shape, and another loud sound on the other edge are all equal as you move down the shape. In contrast for a triangle the duration of the soft sound will either increase or decrease as you scan down the shape. This strategy was quite effective and those participants who used it were faster than those using the boundary tracing strategy.
  • the apparatus 100 has been configured to provide haptic feedback rather than audio feedback to indicate when a user is touching a graphic element.
  • Haptic feedback for this purpose can be scaled to more than two fingers, and allows audio feedback to be used for other purposes.
  • the audio feedback is generated by an audio interface module 262, which provides non-geometric information about a graphic element and help in navigation.
  • the interface 262 provides non-geometric information about a graphic element so that if a finger is touching a graphic element the user can query the element by using a finger gesture that involves "twiddling" their finger in a quick tiny circular motion around the current location without lifting it up. This triggers the interface 262 so the audio (speech or non-speech) associated with the element in the SVG file is delivered to the speaker output 224. Audio feedback is halted by lifting the finger from the tablet. Audio feedback is triggered by whichever finger the user twiddled and comes from more than one finger.
  • the audio interface 262 enables audio to assist with determining the position of elements in the graphic using two different techniques.
  • the first technique involves generating 3D positional audio, as described below, based on the location of one of the fingers on the touchscreen. This use of 3D audio is based on that in computer video games.
  • a user When a user is not touching an element, they hear through the headphones the sound associated with the graphic elements within a fixed radius of the finger's current position.
  • the sound's position (in 3D) is relative to the finger's position. So if there is an object on the top right of the finger, the associated audio sounds as if it comes from the top right of the user.
  • the 3D positional audio navigation mode is initiated by a triple tapping gesture on the screen 222, i.e. by triple tapping one of the fingers, and stopped when either the user lifts the finger or triple taps their other finger, initiating 3D positional audio relative to that finger.
  • the user can turn the 3D positional audio off temporarily by triple tapping the active finger when receiving haptic feedback, and it resumes when the haptic feedback stops.
  • stereo audio is generated, as described below, for all objects that intersect a scanline between the two fingers touching the screen 222. Thus if there is an object between the two touch points then the user hears its associated sound. This audio is positioned relative to the mid point of the scanline.
  • the scanline navigation mode is initiated by another gesture, on the screen 222, i.e. tapping both fingers, and stopped by lifting one of the fingers from the screen. Triple tapping is also used to temporarily turn it off.
  • the two modes are complementary: with the scanline mode being most suited to obtaining an initial overview of the graphic and the 3D positional audio mode being suited to finding particular graphic elements.
  • the audio interface 262 controls both 3D positional audio and scanline navigation modes using triple tap gestures, and which mode is entered depends on how many fingers are touching the display when the mode is turned on, i.e. two for the scanline mode and one for the other modes.
  • Graphic elements are queried by either a twiddle or double tap gesture to deliver an audio response, which may be spoken text and/or sound response.
  • the audio interface 262 can be configured to trigger the audio response when the user first touches an element.
  • Use of GraVVITAS 100 was evaluated using three common kinds of 2D content that were quite different to each other: a table, a floor plan, and a line graph, as shown in Figures 5, 6 and 7.
  • In tactile graphics the height of the tactile object is often used to distinguish between different kinds of elements, similarly to the use of colour or style in visual graphics.
  • In GraVVITAS 100, a difference in vibration level provided by the motors is used to give a natural analogue. The vibration level is used to distinguish different kinds of elements, with the same level used for similar kinds of objects. For the graphics of Figures 5, 6 and 7, three levels were found to be sufficient. Blind users often find it difficult when encountering an unfamiliar kind of graphic to gain an understanding of its structure and purpose.
  • GraVVITAS 100 provides at the top left corner of each graphic a "summary" rectangular shape which, when queried, provides a short spoken description of the graphic's purpose and content.
  • the summary shape has the same audio sound associated with it in all graphics, making it easier for the user to identify and find it.
  • the red square at the top left corner of each graphic is the summary rectangle.
  • the reading strategy associated with the apparatus 100 is to first use the scanline navigation mode to traverse the graphic from the top of the screen to the bottom to obtain an overview of the elements, then to use the 3D positional audio navigation mode to find the summary rectangle and use the query gesture to hear the summary, then to repeatedly use 3D positional audio to navigate through the graphic to find the other elements and, for each element, to use the query gesture to find what the element is and to use haptic feedback to precisely locate the element and understand its geometric shape.
  • the human perceptual subsystem groups audio streams by using different characteristics of audio such as frequency, amplitude, temporal position, and multidimensional attributes like timbre, and tone quality.
  • the cells are represented as squares and aligned in rows and columns. Querying a cell gives its value as well as the name of the row and column it is in. Different vibration levels differentiate between row headers, column headers and cells. Thin lines connect the headers and the cells so that it is easier to find the neighbouring cells.
  • the table gives the average distances run by three different runners in three different months, and the following questions were asked:
  • the components 250 of the computer 202 include a number of controller classes to control the data processing and signal generation of the apparatus 100.
  • a main analyser controller class 260 integrates all the other components 250 and controls operation of the apparatus 100.
  • the Bayesian data processing module 258 provides a touch controller for controlling all touch related events.
  • the audio interface 262 includes an audio controller for controlling the generation of the 2D and 3D audio, and a speech controller for speech synthesis, speech recognition and speech recording.
  • a gesture controller 264 provides a controller for recognising various gestures, as described below, and a haptic controller 266 is used for controlling the haptic device 218.
  • the analyser 260 is essentially an integration component that receives system events provided by all of the other components 250 and distributes or transmits them to the relevant components 250.
  • the analyser operates as a central message parser and processor.
  • the touch controller 258 captures touch points from the NTrig digitiser of the touch screen 222 which are received as touch events from the drivers 252.
  • the drivers 252 may be part of the OS 224.
  • the touch controller 258 establishes a touch event data structure to store the touch events with data concerning their ID numbers, location on the screen, touch area, etc.
  • the events that are recorded include:
  • TOUCH_DOWN fires when a finger is put on the screen 222;
  • TOUCH_MOVE fires multiple times when a finger is moved on the screen 222; and TOUCH_UP fires when a finger is lifted from the screen 222.
  • the touch controller 258 establishes a TouchInputGroup data structure to group the touch events, as shown below in Code Listing A. This grouping is used to avoid incorrect finger assignment due to quick touch up and down events. It is used to store the touch events based on their unique id values, locations and covered areas on the screen. That is, when a new touch event (te0) occurs, a new TouchInputGroup (tig1) is created for it. All the following touch events (te1 ... ten) are added to this TouchInputGroup (tig1) until the user lifts his/her finger up.
  • TouchInputGroup tig1
  • a comparison is made between the new touch event (ten+1) and the previous touch events in the TouchInputGroup (tig1). This comparison checks the differences between the coordinate values of the touch points and the areas they cover on the screen. If these differences are significantly large, the touch event (ten+1) is added to a new TouchInputGroup (tig2), otherwise it is added to the old TouchInputGroup (tig1).
  • TouchInputGroup is also used to calculate data values for the following parameters: speed, acceleration, active state, previous touch event data and direction. These parameters are used in the finger assignment and finger movement predictor procedures of the touch controller 258.
  • a TouchInputGroup data structure is used for a group of touch points that are mapped to a specific finger.
  • TouchInputGroup addTouchInput(TOUCHINPUT ti);
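  • The following C# sketch indicates what a TouchInputGroup might look like; only the type name and the addTouchInput method appear in the listing fragments above, and the remaining members are assumptions based on the parameters the text says are calculated (speed, acceleration, active state, previous touch event data and direction).
      using System.Collections.Generic;

      // Stand-in for the raw touch event structure delivered by the drivers 252.
      struct TOUCHINPUT
      {
          public int Id;
          public double X, Y;      // location on the screen
          public double Area;      // covered touch area
          public long TimestampMs;
      }

      class TouchInputGroup
      {
          private readonly List<TOUCHINPUT> events = new List<TOUCHINPUT>();

          public double Speed { get; private set; }
          public double Acceleration { get; private set; }
          public bool Active { get; private set; } = true;

          public TOUCHINPUT? PreviousEvent =>
              events.Count > 1 ? (TOUCHINPUT?)events[events.Count - 2] : null;

          public void AddTouchInput(TOUCHINPUT ti)
          {
              events.Add(ti);
              // Speed, acceleration and direction would be recomputed here from the
              // most recent events; the grouping test (large jumps in coordinates or
              // area start a new group) is performed by the touch controller 258.
          }
      }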
  • the finger assignment procedure 800 determines which touch point belongs to which finger according to the placement of the fingers on the screen 222.
  • the fingers are first separated into two groups, each of which corresponds to a hand. A higher priority or probability is given to the right hand fingers if the touch point is located on the right hand side of the screen, and to the left hand fingers if the touch point is located on the left hand side of the screen.
  • a higher probability is given to the index fingers if there is only one touch point on the screen 222.
  • All the other fingers are given equal prior probabilities as being the first finger on the screen.
  • probabilities are calculated based on the distance and the angle between the finger pairs.
  • the procedure 800 when first invoked needs to clear and initialise values for the finger probabilities (step 802) and then initial distances and angles between the finger pairs are calculated in a calibration step 804 where the user is asked to put both of his/her hands on the screen.
  • the data is stored in a hand spot relation data structure, as described in Listing B below.
  • The mean distances are calculated from the relaxed hand posture, while the max distances are calculated from the stretched hand posture.
  • the relaxed hand distance is used for the μ value, and the σ value is calculated from the distance between the minimum and the relaxed finger distance.
  • the distribution parameters and the Gaussian distribution are used to calculate the probability of a finger configuration and allocate touch points to fingers of the user. This is done when a new touch point is detected on the screen or when it is detected that one of the fingers is assigned incorrectly because of physical impossibility. For instance, the right hand index and left hand index fingers can be put on the screen in a similar configuration to the left hand middle and left hand index fingers. In this case, when the user moves one of the fingers away more than the maximum distance between index and middle fingers, the procedure 800 will detect the incorrect assignment.
  • the assignment procedure 800 receives the touch points passed to the touch controller 258 (808) and processes them to calculate the distance and angles between the touch points (810).
  • Bayesian processing is then executed using the distribution parameters and the Gaussian distribution to calculate Bayesian probabilities for each finger of the user's hand so as to assign a finger probability, representing a finger configuration, to each touch point (step 812).
  • the touch points are then each assigned to each of the fingers of the user respectively, so as to determine which touch points relate to which finger (step 814).
  • the touch points are each assigned to the finger with the highest probability.
  • the touch points allocated to a specific finger are allocated to a TouchlnputGroup, and the finger configuration for the user's hands is determined.
  • the procedure 800 continues to track the touch points received and if the data associated with the touch point indicates that there has been a configuration change, the distances and angles are determined (step 816). If the distances and angles represent a physical impossibility, then processing returns to step 812 to recalculate the Bayesian probabilities. Also, if there is a new touch point on the screen, then processing returns to step 808 to recalculate the distances and angles.
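  • A sketch of the Gaussian scoring used in step 812, assuming μ is the calibrated relaxed-hand distance of a finger pair and σ is derived from the gap between the minimum and relaxed distances, as described above; combining the pairwise densities and per-finger priors by simple multiplication is an assumption about how the probabilities are aggregated.
      using System;

      static class FingerAssignment
      {
          // Gaussian density with mu set to the calibrated relaxed-hand distance of
          // a finger pair and sigma derived from the minimum-to-relaxed gap.
          public static double GaussianDensity(double x, double mu, double sigma)
          {
              double z = (x - mu) / sigma;
              return Math.Exp(-0.5 * z * z) / (sigma * Math.Sqrt(2.0 * Math.PI));
          }

          // Score of one candidate configuration; the highest scoring feasible
          // configuration is chosen in step 814. Multiplying the priors and the
          // pairwise densities is an assumed combination rule.
          public static double ConfigurationScore(
              double[] priorPerFinger,
              (double Distance, double Mu, double Sigma)[] pairs)
          {
              double score = 1.0;
              foreach (double p in priorPerFinger) score *= p;
              foreach (var pair in pairs)
                  score *= GaussianDensity(pair.Distance, pair.Mu, pair.Sigma);
              return score;
          }
      }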
  • When the finger assignment procedure 800 assigns a TouchInputGroup to a finger, it fires the following events for the other components 250 in the system:
  • Finger_Down fires when a finger is put on the screen and a touch input group is assigned; Finger_Move fires when a finger moves on the screen;
  • Finger_Up fires when a finger is lifted from the screen;
  • Finger_Stop fires when a finger stops on the screen; and
  • Finger_Wiggle fires when a finger wiggles on the screen.
  • A delay occurs when the users move their fingers quickly. This causes a mismatch between the received location and the actual location of the fingers. For instance, if a finger moves at a fast pace the coordinate values received from the screen are slightly behind the actual finger position on the screen. In addition to this there is a slight delay in activating the vibrator motors on the data glove because of the communication and processing latencies of the device.
  • the module 258 incorporates a finger movement predictor procedure to compensate for these delays to provide the required haptic and audio feedback accurately.
  • the movement predictor predicts the actual location of the finger on the graphic based on its position, speed, acceleration and direction.
  • the movement predictor procedure executes a prediction in which:
  • Xt represents the predicted coordinate;
  • Xt-1 represents the previous coordinate;
  • Vt-1 represents the speed of the finger in the previous calculation;
  • at-1 is the acceleration of the finger in the previous calculation; and
  • t is the time frame, which can be 10 ms.
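  • The excerpt lists the quantities used by the predictor but not the formula itself; the sketch below assumes the standard constant-acceleration kinematic form Xt = Xt-1 + Vt-1·t + 0.5·at-1·t², applied per axis with t = 10 ms.
      static class FingerMovementPredictor
      {
          // Assumed constant-acceleration prediction applied independently to each
          // axis: x_t = x_{t-1} + v_{t-1} * t + 0.5 * a_{t-1} * t^2, with t = 10 ms.
          public static (double X, double Y) Predict(
              (double X, double Y) previous,
              (double X, double Y) velocity,      // pixels per second
              (double X, double Y) acceleration,  // pixels per second squared
              double tSeconds = 0.010)
          {
              double x = previous.X + velocity.X * tSeconds + 0.5 * acceleration.X * tSeconds * tSeconds;
              double y = previous.Y + velocity.Y * tSeconds + 0.5 * acceleration.Y * tSeconds * tSeconds;
              return (x, y);
          }
      }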
  • the haptic controller 266 implements a communication protocol to send actuation commands to the haptic device 218 as fast as possible once the touch controller 258 has determined the fingers touching the display and the position of the fingers.
  • a 3D sound generator of the audio interface module 262 generates 3D sound for a single observable point on the screen. It simulates a game situation where this point corresponds to a game character in a 3D environment, and the graphic elements correspond to the objects in the scene.
  • the FMOD Ex library by Firelight Technologies Pty Ltd is used by the sound generator to load the sound files representing the different graphic elements into hardware accelerated 3D channels, and position them in a 3D coordinate system where the location of the sound sources correspond to the location of the graphic elements. These channels are then played in a loop.
  • a user updates the single observable point by moving their fingers on the screen, and depending on the proximity of the graphic elements they hear the associated sounds.
  • This component also controls the positions of the sound sources and updates their coordinate values after an operation which changes the active view.
  • each shape is associated with an audio file from the FMOD Ex library, and as a user moves their finger 900 on the screen 222 the audio output 224 will generate 3D audio so that the user hears sound positioned in a 3D space around the finger, i.e. the user will hear a sound associated with the large circle 902 that is coming from behind their left side, whereas sound for the line 904 will come from the left side and appear closer than the circle 902.
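  • A sketch of how the listener and source geometry might be updated on each finger move: the listener sits at the finger, each element's looping sound is positioned relative to it, and elements outside the fixed radius are muted. The IChannel3D interface is a stand-in for the FMOD Ex channel API, which is not reproduced here.
      using System;
      using System.Collections.Generic;

      // Stand-in for a hardware accelerated 3D channel; in the apparatus this role
      // is played by FMOD Ex channels, whose real API is not reproduced here.
      interface IChannel3D
      {
          void SetPosition(double x, double y, double z);
          void SetMuted(bool muted);
      }

      class PositionalAudioNavigator
      {
          private readonly Dictionary<string, (double X, double Y, IChannel3D Channel)> elements =
              new Dictionary<string, (double X, double Y, IChannel3D Channel)>();
          private readonly double radius;

          public PositionalAudioNavigator(double audibleRadius) => radius = audibleRadius;

          public void AddElement(string id, double x, double y, IChannel3D channel) =>
              elements[id] = (x, y, channel);

          // Called on every Finger_Move event.
          public void OnFingerMoved(double fingerX, double fingerY)
          {
              foreach (var e in elements.Values)
              {
                  double dx = e.X - fingerX, dy = e.Y - fingerY;
                  bool audible = Math.Sqrt(dx * dx + dy * dy) <= radius;
                  e.Channel.SetMuted(!audible);
                  if (audible)
                      e.Channel.SetPosition(dx, 0.0, dy);  // screen plane mapped onto the horizontal sound plane
              }
          }
      }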
  • the audio interface 262 executes a scanline process 1000, as shown in Figure 10, and generates 2D sound using audio files from a Microsoft DirectX library.
  • the process 1000 generates stereo audio for all objects that intersected a scanline.
  • the process 1000 commences by first obtaining the finger positions on the screen 222 from the touch controller 258 (step 1002) and then proceeds to construct the scanline 1100 or 1206, as shown in Figures 11 and 12.
  • the process 1004 has two modes for constructing the scanline: one finger mode, as shown in Figure 11, and two finger mode, as shown in Figure 12.
  • In one finger mode the user uses a single finger 1106 and moves it either horizontally or vertically. Based on the direction, a horizontal or vertical line 1100 is used as the scanline.
  • the second mode for scanline construction uses two fingers 1202 and 1204 touching the screen to position the scanline 1206 between the two fingers.
  • the level of the sounds that the user hears from the intersecting objects 1208, 1210 and 1214 is inversely proportional to the distance between the object and the corresponding finger.
  • the intersecting objects along the scanline are determined (1006).
  • In Figure 11 the intersecting objects are the line 1100 and the rectangle 1104.
  • In Figure 12 the intersecting objects are the large circle 1208, the line 1210 and the triangle 1214.
  • the audio file associated with each object is then accessed (1008) and played so as to appear in a position relative to the mid point of the scanline 1100, 1206 so as to indicate the position of the intersecting object relative to the scanline (1010).
  • the procedure returns to step 1002 to obtain the finger positions.
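  • A sketch of the audio computation for the two-finger scanline mode: each intersecting object is given a stereo position relative to the midpoint of the scanline and a volume that falls off with its distance from the nearer finger. The exact pan and attenuation formulas are assumptions; the text only states that the volume is inversely proportional to the distance between the object and the corresponding finger.
      using System;
      using System.Collections.Generic;

      static class ScanlineAudio
      {
          // Returns a stereo pan (-1 left .. +1 right) and a volume for each object
          // intersecting the scanline.
          public static IEnumerable<(string Id, double Pan, double Volume)> Compute(
              (double X, double Y) leftFinger,
              (double X, double Y) rightFinger,
              IEnumerable<(string Id, double X, double Y)> intersectingObjects)
          {
              double midX = (leftFinger.X + rightFinger.X) / 2.0;
              double half = Math.Max(Distance(leftFinger, rightFinger) / 2.0, 1e-6);

              foreach (var obj in intersectingObjects)
              {
                  double toNearerFinger = Math.Min(Distance((obj.X, obj.Y), leftFinger),
                                                   Distance((obj.X, obj.Y), rightFinger));
                  double pan = Math.Clamp((obj.X - midX) / half, -1.0, 1.0);
                  double volume = 1.0 / (1.0 + toNearerFinger);  // inversely proportional to distance
                  yield return (obj.Id, pan, volume);
              }
          }

          private static double Distance((double X, double Y) a, (double X, double Y) b)
          {
              double dx = a.X - b.X, dy = a.Y - b.Y;
              return Math.Sqrt(dx * dx + dy * dy);
          }
      }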
  • the gesture controller module 264 includes a GestureManager which is responsible for recognizing finger gestures such as double tap, flick and pinch.
  • the GestureManager receives touch events from the analyzer 260 arid passes them to relevant gesture components of the module 264 which are TapGcsture, ZoomGesture, QueryGesture and PanGesture, These gesture components act as finite state automata which recognize a gesture from a sequence of finger down, move, stop and up events. Gestures can be invoked by one, two, three or four fingers. They not only involve simultaneous movements ⁇ of fingers but also a sequence of fingers in a time period, such as double or triple tapping. By providing panning and zooming the apparatus supports dynamic exploration of tactile graphics, The apparatus 100 supports fixed and continuous modes for panning. In fixed mode the user pans in discrete intervals such as panning a full screen to the left or right.
  • the apparatus 100 also supports two modes for zooming: fixed and continuous.
  • In fixed mode users can zoom in and out by a discrete scale, such as making the graphic elements double their original size.
  • In continuous mode users can zoom to any scale, where the zoom ratio is proportional to the movement of the fingers. In both cases the graphic element under the anchor points remains the same.
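  • A sketch of continuous zooming about an anchor point, keeping the graphic element under the anchor at the same screen position; the simple offset-and-scale view transform is an assumed representation, and the scale factor could be a fixed 2.0 in fixed mode or derived from the finger movement in continuous mode.
      struct ViewTransform
      {
          public double Scale;              // screen units per graphic unit
          public double OffsetX, OffsetY;   // screen position of the graphic origin

          // Keep the graphic point currently under the anchor fixed on screen:
          // screen = graphic * Scale + Offset must give the same result at the
          // anchor before and after the scale change.
          public ViewTransform ZoomAbout(double anchorScreenX, double anchorScreenY, double scaleFactor)
          {
              return new ViewTransform
              {
                  Scale = Scale * scaleFactor,
                  OffsetX = anchorScreenX - (anchorScreenX - OffsetX) * scaleFactor,
                  OffsetY = anchorScreenY - (anchorScreenY - OffsetY) * scaleFactor,
              };
          }
      }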
  • Figure 13 illustrates the different multi-pan gestures that can be used by the PanGesture component to instruct the display to pan across the objects presented in the display.
  • Figure 13(a) illustrates panning with three fingers 1302, 1304 and 1306, where two fingers 1304 and 1306 remain stationary whilst the other finger 1302 is moved to illustrate the pan direction.
  • In Figure 13(b) one finger 1302 is placed in a designated position for a pan command (although the actual command menu presented does not need to be displayed), and the second finger 1304 is moved to indicate the pan direction.
  • In Figure 13(c) two fingers 1302 and 1304 are moved together to indicate the pan direction.
  • a zoom gesture can be made where one finger 1402 is kept stationary whilst two other fingers are moved either up or down to indicate zooming in or out, as shown in Figure 14(a).
  • One finger 1402 is positioned to indicate a zoom in or zoom out command (although the command menu does not need to be displayed) and another finger 1404 is placed in the position where zooming in or out occurs.
  • GraVVITAS 100 can be used effectively by people who are blind or visually impaired to read an accessible version of a wide range of graphics and 2D content.
  • the apparatus 100 is practical and inexpensive to produce and to operate. Graphics can be produced using an SVG editor or SVG file generation tool, without the need for access to special purpose printers and paper when producing tactile graphics.
  • the touch screen 222 of the computer 202 of the apparatus supports interactive, active use of graphics.
  • GraVVITAS 100 can also be used for a wide variety of gaming applications, particularly video games where it is desirable to be able to distinguish between the particular fingers of a user's hand that are touching the display.
  • non-visually impaired users could use the GraVVITAS 100 without the haptic feedback device 218, and real time finger determination data generated by the Bayesian data processing module 258 can be used to trigger events in a video game 268 executed on the apparatus.
  • the two audio modes can also be toggled on and off as desired for the video game 268.

Abstract

A graphics communication apparatus, including: a computer with a touch sensitive visual display; and a haptic feedback device connected to the computer and including vibration actuators positioned adjacent a user's fingers to indicate the relative position of a user's finger to a graphic element presented on the display. The computer includes a touch controller configured to assign at least one touch point to a finger of a hand of a user.

Description

GRAPHICS COMMUNICATION APPARATUS
FIELD The present invention relates to a graphics communication apparatus. In particular, the apparatus is able to communicate graphics to a visually impaired person using vibration, audio and an interactive touch computer device, and the audio may include speech. Also, the apparatus may be used for applications, such as gaming applications, where the apparatus discriminates between or determines which fingers of a user's hand are touching the apparatus.
BACKGROUND
Graphics and other inherently two dimensional content are used regularly in written and electronic communication. They include images, diagrams, tables, maps, mathematics, plots and charts etc. They are used in media, workplace communication and in educational material at all levels of schooling. However, if you are blind or suffer severe vision impairment your access to such graphics is severely limited. This constrains enjoyment of popular media, particularly on the web, restricts effective participation in the workplace and limits educational opportunities.
There are a number of different techniques for allowing people who are blind to access graphics. One is the use of tactile graphics presented on swell or embossed paper. However, none are widely used and currently there is no reasonably priced technology or tool which can be effectively used by someone who is blind to access graphics, tables and other two dimensional content. This is in contrast to textual content, for which there exist a number of computer applications used by the blind community. For instance, DAISY (the Digital Accessible Information System, http://www.daisy.org) provides access to textbooks and other textual material using speech or refreshable Braille displays and Apple Inc's VoiceOver screen reader provides accessible access to the text in webpages. It is desired to provide at least a useful alternative, and preferably an apparatus that can be used to allow a visually impaired person to experience graphics. In particular, it is desired to provide a computer apparatus that can be used effectively by people who are blind or visually impaired to read an accessible version of a wide range of graphics and two dimensional content. The apparatus should be practical from an operation and cost perspective, and also support interactive and active use of graphics similar to the kind of interactive use sighted users currently enjoy. It is also desired to provide an alternative apparatus that may be used for games involving interacting with a touch sensitive display.
SUMMARY
Embodiments of the present invention provide a graphics communication apparatus, including:
a computer with a touch sensitive visual display; and
a haptic feedback device connected to the computer and including vibration actuators positioned adjacent a user's fingers to indicate the relative position of a user's finger to a graphic element presented on the display.
The computer also generates and presents audio to assist with navigation of the display, and is able to generate speech to provide responses to guide navigation.
The computer may be a tablet device, and the haptic device may be a glove including vibrating motors connected to the computer device. Embodiments of the present invention also provide a graphics communication computer, including:
a touch sensitive visual display; and
an audio interface module to generate and present audio for a visually impaired person to indicate the position of at least one graphic element presented on the display.
Embodiments of the present invention also provide a computer, including: a touch sensitive display; and
a touch controller configured to assign at least one touch point to a finger of a hand of a user.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein: Figure 1 is a diagram of an embodiment of a graphics communication apparatus in use;
Figure 2 is a block diagram of a computer of the apparatus;
Figures 3 and 4 are diagrams of graphics generated on a visual display of the apparatus;
Figure 5 is a diagram of a table generated on the visual display;
Figure 6 is a diagram of a floor plan generated on the display of the apparatus; Figure 7 is a diagram of a line graph generated on the visual display;
Figure 8 is a flow chart of a finger assignment process;
Figure 9 is a diagram illustrating 3D sound generation;
Figure 10 is a flow chart of a scanline mode process;
Figures 11 and 12 illustrate 2D sound generation for two scanline modes;
Figure 13 illustrates panning modes; and
Figure 14 illustrates zooming modes.
DETAILED DESCRIPTION
A graphics communication apparatus 100, referred to herein as "GraVVITAS" (for Graphics Viewer using Vibration, Interactive Touch, Audio and Speech), is a multi-modal presentation device. GraVVITAS 100, shown in Figure 1, includes a touch sensitive tablet personal computer (PC) 202 shown in Figure 2. This tracks the position of the user's, i.e. reader's, fingers, allowing natural navigation like that with a tactile graphic. Haptic feedback is provided by a device 218 comprising a pair of gloves with small vibrating motors of the kind used in mobile phones, such as the iPhone produced by Apple Inc. The motors are attached to the fingers of the reader and controlled by the tablet PC 202. This assists the user in determining the position and geometric properties of graphic elements. The apparatus also provides audio feedback to help the user with navigation and to allow the user to query a graphic element in order to obtain non-geometric information about the element.
The touch sensitive tablet PC 202 is a 32 or 64 bit Intel architecture computer, such as those produced by Lenovo Corporation, IBM Corporation, or Apple Inc. The data processes executed by the computer 202 are defined and controlled by computer program instruction code and data of software components or modules 250 stored on non-volatile (e.g. flash memory) storage 204 of the computer 202. The processes performed by the modules 250 can, alternatively, be performed by firmware stored in read only memory (ROM) or at least in part by dedicated hardware circuits of the computer 202, such as application specific integrated circuits (ASICs) and/or field programmable gate arrays (FPGAs).
The computer 202 includes random access memory (RAM) 206, at least one microprocessor 208, and external interfaces 210, 212, 214 that are all connected by a system bus 216. The external interfaces include universal serial bus (USB) interfaces 210, a network interface connector (NIC) 212, and a display adapter 214. The USB interfaces 210 are connected to input/output devices, such as the haptic device 218. The display adapter 214 is connected to the touch sensitive LCD display screen 222. The NIC 212 enables the computer 202 to connect to the communications network 110. The network 110 may include one or a combination of existing networks 130, such as a LAN, WAN, the PSTN, the Internet, mobile cellular telephone networks, etc. The computer 202 includes an operating system (OS) 224, such as Microsoft Windows, Mac OSX or Linux. The modules 250 all run on the OS 224, and include program code written using languages such as C, Ruby or C#. The computer 202 tracks the position of the reader's fingers, and the computer 202 shown in Figure 1 is a Dell Latitude XT (http://www.dell.com) which is equipped with an N-trig DuoSense dual-mode digitizer (http://www.n-trig.com) that supports both pen and touch input using capacitive sensors. Drivers 252 on the tablet PC 202 allow the device to detect and track at least four fingers on the touch screen 222. The user is allowed to use the index and middle finger of both the left and right hand.
The haptic feedback device 218 includes at least one low cost cotton data glove with vibrating motor actuators that apply mechanical activation to the fingers touching the screen 222 and supports multi-touch haptic feedback. The small vibrating motors are attached to the fingers and controlled by the tablet PC 202 using an Arduino Diecimila board (http://arduino.cc) of the haptic device 218, which is connected to the USB port 210. The motors may be coin vibrator motors connected to the Arduino board through its analog ports by copper wires, where the level of voltage applied to the motors varies their rotation speed and, accordingly, can be used to vary the haptic feedback. The touch screen 222 tracks at least four fingers, and the glove has separately controlled motors for each tracked finger. The amount of vibration produced by the motors depends on the colour of the graphic element on the touch screen 222 under the finger, and if the finger is over empty space there is no vibration. To shield unwanted fingers, such as a fifth finger, the cotton glove is used to cover that finger.
A significant technical challenge related to determining in real-time which fingers are touching the tablet and which finger corresponds to which touchpoint on the computer device 202. This is important in order to provide the appropriate haptic feedback to each finger. It is also important for gaming applications that need to distinguish or discriminate between which fingers are touching the tablet, such as whether it is an index finger, middle finger, ring finger, etc. of the left or right hand. The computer 202 stores in a database 256 the maximum and average vector difference between the finger stroke sequences on the touch screen 222. The database 256 is maintained by a database module 254, such as MySQL. Based on these differences, a Bayesian data processing module 258, as described below, processes the stored vector difference data and determines the most probable feasible finger configuration, where a finger configuration is a mapping from each stroke sequence to a particular finger. A configuration is infeasible if the mapping is physically impossible, such as assigning the index and middle finger of the same hand to strokes that were sometimes more than 10 cm apart. There is a prior probability for each finger to be touching the device and a probability of a particular finger configuration based on an expected vector difference between each possible pair of fingers. Data on the area of the touch points, and the angle between them, is also stored and processed by the module 258 to determine the correct finger configuration in real-time. Haptic presentation of a graphic is difficult because the sequential movement of hands and fingers involved in perception and acquisition of information is slower and less parallel than vision. Also, because there is no haptic equivalent of peripheral vision, the position of previously encountered objects may need to be stored in memory. To alleviate this, the apparatus 100 provides audio feedback using a speaker output 224 in order to help the user with navigation and to obtain an overview of the graphic and its layout. The use of audio allows the user to obtain an overview without having to physically touch the elements.
Another disadvantage of a purely haptic presentation is that it is difficult to represent non-geometric properties of elements and text. While Braille can be used it takes up a lot of space and cannot be read by many users. To overcome this the apparatus provides audio feedback when the viewer queries graphic elements on the display, by touching the elements with a predetermined finger gesture or speaking an audio command, into a microphone input 226, that is recognised by the modules 250. Whilst the speaker output 224 and the microphone input 226 may be provided by a sound card incorporated in the computer 202, then used to drive a headset incorporating headphones, an external sound card connected to the USB interface 210 can also be used. For example, a Sound Blaster X-Fi Surround 5 sound card by Creative Technology Limited can be used to generate 2D and 3D audio signals for a headset, such as the Tritton AX Pro headset by Mad Catz Inc. that has four individual speakers to provide surround audio. The computer 202 displays graphic content specified in SVG (the W3C standard for Scalable Vector Graphics) on a displayed canvas which is generated by a display module 206 implemented using the Windows Presentation Foundation (WPF) of Microsoft Corporation. The canvas loads an SVG file from the data store 256 and uses metadata associated with the shapes to control the tool behaviour. The metadata in each SVG file and associated with each shape is: the shape ID, the vibration level for the edges and audio volume level for the interior of the shape and for its boundary, the text string to be read out when the shape is queried, and the name of a (non-speech) audio file for generating the sound associated with the shape during navigation. The SVG graphics are constructed using an SVG editor, such as Inkscape (http://www.inkscape.org), and the metadata for each shape can be added using an XML editor, such as Inkscape's internal XML editor.
For a blind user it is beneficial to use fingers on both hands but it is difficult for the user to distinguish between vibration of the index and middle finger on the same hand. With enough practice a user can distinguish between vibrations on all four fingers but this takes many hours of use. For evaluation purposes the apparatus 100 was configured with motors attached respectively to two fingers, as shown in Figure 1, being the left and right index fingers.
To determine whether stereo audio feedback may be better than haptic feedback for this configuration an audio feedback mode was evaluated as an alternative to haptic feedback. This mode was restricted to the use of one finger or two fingers on different hands. In audio mode, if the user touches an object on the screen then they will hear a sound from headphones connected to the speaker output 224. If they use one finger they will hear a sound coming from both headphones, while if they use two fingers then they will hear a sound on the left/right headphone if their left/right finger is on an element. The sounds associated with objects are short tones from different instruments played in a loop, and are generated using JFugue library files stored in the database 256. The evaluation, or usability study, related to determining the geometric properties (specifically position and shape) of graphic elements. The study used simple graphics containing one to three geometric shapes (line, triangle, rectangle and circle) such as those shown in Figures 3 and 4. Each shape had a low intensity interior colour and a thick black boundary around it. This meant that the intensity of the haptic or audio feedback was greater when the finger was on the boundary. The graphics were presented to each participant in the two different modes: audio and haptic. It was found that participants preferred haptic feedback. Error rates with audio and haptic feedback were very similar but the time to answer questions associated with the graphics was generally faster with haptic feedback.
Participants used two quite different strategies to identify shapes. The first strategy was to find the corners of the shapes, and then to carefully trace the boundary of the object using one or two fingers. The second strategy was to use a single finger to repeatedly perform a quick horizontal and/or vertical scan across the shape, moving the starting point of the finger between scans slightly in the converse direction to that of the scan. Scanning gives a different audio or haptic pattern for different shapes. For instance, when scanning a rectangle, the duration of a loud sound on an edge, a soft sound inside the shape, and another loud sound on the other edge are all equal as you move down the shape. In contrast for a triangle the duration of the soft sound will either increase or decrease as you scan down the shape. This strategy was quite effective and those participants who used it were faster than those using the boundary tracing strategy.
As a result of this usability study the apparatus 100 has been configured to provide haptic feedback rather than audio feedback to indicate when a user is touching a graphic element. Haptic feedback for this purpose can be scaled to more than two fingers, and allows audio feedback to be used for other purposes.
The audio feedback is generated by an audio interface module 262, which provides non-geometric information about a graphic element and help in navigation. The interface 262 provides non-geometric information about a graphic element so that if a finger is touching a graphic element the user can query the element by using a finger gesture that involves "twiddling" their finger in a quick tiny circular motion around the current location without lifting it up. This triggers the interface 262 so the audio (speech or non-speech) associated with the element in the SVG file is delivered to the speaker output 224. Audio feedback is halted by lifting the finger from the tablet. Audio feedback is triggered by whichever finger the user twiddled and comes from more than one finger.
The audio interface 262 enables audio to assist with determining the position of elements in the graphic using two different techniques.
The first technique involves generating 3D positional audio, as described below, based on the location of one of the fingers on the touchscreen. This use of 3D audio is based on that in computer video games. When a user is not touching an element, they hear through the headphones the sound associated with the graphic elements within a fixed radius of the finger's current position. The sound's position (in 3D) is relative to the finger's position. So if there is an object on the top right of the finger, the associated audio sounds as if it comes from the top right of the user. The 3D positional audio navigation mode is initiated by a triple tapping gesture on the screen 222, i.e. by triple tapping one of the fingers, and stopped when either the user lifts the finger or triple taps their other finger, initiating 3D positional audio relative to that finger. The user can turn the 3D positional audio off temporarily by triple tapping the active finger when receiving haptic feedback, and it resumes when the haptic feedback stops. In the second technique, stereo audio is generated, as described below, for all objects that intersect a scanline between the two fingers touching the screen 222. Thus if there is an object between the two touch points then the user hears its associated sound. This audio is positioned relative to the mid point of the scanline. The scanline navigation mode is initiated by another gesture on the screen 222, i.e. tapping both fingers, and stopped by lifting one of the fingers from the screen. Triple tapping is also used to temporarily turn it off. The two modes are complementary, with the scanline mode being most suited to obtaining an initial overview of the graphic and the 3D positional audio mode being suited to finding particular graphic elements.
In the GraVVITAS apparatus 100, the audio interface 262 controls both 3D positional audio and scanline navigation modes using triple tap gestures, and which mode is entered depends on how many fingers are touching the display when the mode is turned on, i.e. two for the scanline mode and one for the other modes. Graphic elements are queried by either a twiddle or double tap gesture to deliver an audio response, which may be spoken text and/or sound response. Alternatively the audio interface 262 can be configured to trigger the audio response when the user first touches an element.
Use of GraVVITAS 100 was evaluated using three common kinds of 2D content that were quite different to each other: a table, a floor plan, and a line graph, as shown in Figures 5, 6 and 7.
In tactile graphics the height of the tactile object is often used to distinguish between different kinds of elements, similarly to the use of colour or style in visual graphics. In GraVVITAS 100, a difference in the vibration level provided by the motors gives a natural analogue. The vibration level is used to distinguish different kinds of elements, with the same level used for similar kinds of objects. For the graphics of Figures 5, 6 and 7, three levels were found to be sufficient. Blind users often find it difficult, when encountering an unfamiliar kind of graphic, to gain an understanding of its structure and purpose. To assist reading of the graphics, GraVVITAS 100 provides at the top left corner of each graphic a "summary" rectangular shape which, when queried, provides a short spoken description of the graphic's purpose and content. The summary shape has the same audio sound associated with it in all graphics, making it easier for the user to identify and find. For the graphics of Figures 5, 6 and 7, the red square at the top left corner of each graphic is the summary rectangle. The reading strategy associated with the apparatus 100 is to first use the scanline navigation mode to traverse the graphic from the top of the screen to the bottom to obtain an overview of the elements; then to use the 3D positional audio navigation mode to find the summary rectangle and use the query gesture to hear the summary; then to repeatedly use 3D positional audio to navigate through the graphic to find the other elements; and, for each element, to use the query gesture to find what the element is and to use haptic feedback to precisely locate the element and understand its geometric shape.
For the audio feedback provided in the navigation mode, the human perceptual subsystem groups audio streams by using different characteristics of audio such as frequency, amplitude, temporal position, and multidimensional attributes like timbre and tone quality.
Humans can differentiate about five or six different simultaneous sounds. Accordingly associating audio with all elements in a complex graphic could be confusing, so for the graphics of Figures 5, 6, and 7 audio feedback was associated with those graphic elements that were considered particularly important and objects that were natural navigational landmarks. If an object has no associated audio, haptic feedback is still associated with the object. The same audio was used for the same kind of objects.
For the table of Figure 5, the cells are represented as squares and aligned in rows and columns. Querying a cell gives its value as well as the name of the row and column it is in. Different vibration levels differentiate between row headers, column headers and cells. Thin lines connect the headers and the cells so that it is easier to find the neighbouring cells. The table gives the average distances run by three different runners in three different months, and the following questions were asked:
(i) (T1) Who ran the maximum distance in February?
(ii) (T2) What is the distance run by John in March?
(iii) (T3) How was the performance of Richard?
For the floor plan of Figure 6, audio feedback is used for the doors but not for the rooms to assist with "walking" through the floorplan. The rooms are represented with filled rectangles which have two different vibration levels corresponding to their border and interior, and the doors have one strong vibration level. The doors and the rooms also have associated text information that can be queried. The floor plan is of a building with one entrance and seven rooms connected by six doors. The following questions were asked:
(i) (F1) Where is room 4?
(ii) (F2) How do you go to room 7 from the entrance?
(iii) (F3) How many doors does room 6 have?
For the line graph of Figure 7, the axes and labels are represented as rectangles which have their value as the non-geometric information. The lines in the graph belong to two datasets and have different vibration levels. Small squares are used to represent the exact value of a line at a grid point. Their non-geometric information is the name of the dataset and their value on the horizontal and vertical axes. These squares also have audio associated with them so that the user can hear them while using the 3D positional mode. The line graph shows the average points scored by two different basketball teams during a seven month season. The following questions were asked:
(i) (L1) What is the average score of Boston Celtics in September?
(ii) (L2) Have the Houston Rockets improved their score?
(iii) (L3) Which team generally scored more during the season?
Participants in the study were able to read the example graphics and answer the questions correctly.
The components 250 of the computer 202 include a number of controller classes to control the data processing and signal generation of the apparatus 100. A main analyser controller class 260 integrates all the other components 250 and controls operation of the apparatus 100. The Bayesian data processing module 258 provides a touch controller for controlling all touch related events. The audio interface 262 includes an audio controller for controlling the generation of the 2D and 3D audio, and a speech controller for speech synthesis, speech recognition and speech recording. A gesture controller 264 provides a controller for recognising various gestures, as described below, and a haptic controller 266 is used for controlling the haptic device 218.
The analyser 260 is essentially an integration component that receives system events provided by all of the other components 250 and distributes or transmits them to the relevant components 250. The analyser operates as a central message parser and processor.
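As an indicative sketch of this message-routing role, the analyser could fan system events out to subscribed components along the following lines; the SystemEvent and Analyser definitions below are assumptions made for illustration rather than the actual GraVVITAS classes.

#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical event record passed between the components 250.
struct SystemEvent {
    std::string type;    // e.g. "Finger_Down", "Finger_Move"
    int fingerId = -1;
    double x = 0.0;
    double y = 0.0;
};

// Central analyser: components subscribe to event types and the analyser
// forwards each incoming event to the relevant subscribers.
class Analyser {
public:
    using Handler = std::function<void(const SystemEvent&)>;

    void subscribe(const std::string& type, Handler handler) {
        handlers_[type].push_back(std::move(handler));
    }

    void dispatch(const SystemEvent& event) const {
        auto it = handlers_.find(event.type);
        if (it == handlers_.end()) return;
        for (const auto& handler : it->second) handler(event);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};

In such an arrangement the gesture, audio and haptic controllers would each subscribe only to the finger events they need, such as those listed below for the finger assignment procedure.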
The touch controller 258 captures touch points from the NTrig digitiser of the touch screen 222, which are received as touch events from the drivers 252. The drivers 252 may be part of the OS 224. The touch controller 258 establishes a touch event data structure to store the touch events with data concerning their ID numbers, location on the screen, touch area, etc. The events that are recorded include:
TOUCH_DOWN fires when a finger is put on the screen 222;
TOUCH_MOVE fires multiple times when a finger is moved on the screen 222; and
TOUCH_UP fires when a finger is lifted from the screen 222.
The touch controller 258 establishes a TouchInputGroup data structure to group the touch events, as shown below in Code Listing A. This grouping is used to avoid incorrect finger assignment due to quick touch up and down events. It is used to store the touch events based on their unique id values, locations and covered areas on the screen. That is, when a new touch event (te0) occurs, a new TouchInputGroup (tig1) is created for it. All the following touch events (te1 ... ten) are added to this TouchInputGroup (tig1) until the user lifts his/her finger up. When this happens a timer is started, and if no other touch event is received within that given time, the TouchInputGroup (tig1) is removed; otherwise a comparison is made between the new touch event (ten+1) and the previous touch events in the TouchInputGroup (tig1). This comparison checks the differences between the coordinate values of the touch points and the areas they cover on the screen. If these differences are significantly large, the touch event (ten+1) is added to a new TouchInputGroup (tig2), otherwise it is added to the old TouchInputGroup (tig1). TouchInputGroup is also used to calculate data values for the following parameters: speed, acceleration, active state, previous touch event data and direction. These parameters are used in the finger assignment and finger movement predictor procedures of the touch controller 258.
Code Listing A TouchInputGroup data structure used for a group of touch points that are mapped to a specific finger.
class TouchInputGroup {
public:
    int m_index;
    // state of finger
    bool m_active;
    bool m_moving;
    bool m_wigglingGesture;
    bool m_stopping;
    bool m_changed;
    TOUCHINPUT m_lastTouchInput;
    double m_AverageX;
    double m_AverageY;
    COLORREF m_lastElement;
    list<TOUCHINPUT> m_touchInputs;
    list<TOUCHINPUT> m_touchMoves;
    int m_assignedSpot;
    int m_previousAssignedSpot;  // finger change event
    int m_assignedHand;
    TouchMoveData m_tmd;
public:
    void addTouchInput(TOUCHINPUT ti);
    void setActive();
    void resetActive();
    void setMoving();
    void setStopping();
    void calculateTouchMoveData(TOUCHINPUT ti);
    double calculatePredictedX(double t);
    double calculatePredictedY(double t);
};
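The grouping decision applied when a finger is quickly lifted and replaced can be sketched as follows; the threshold values and the helper name belongsToGroup are illustrative assumptions rather than values taken from the listing above.

#include <cmath>

// Illustrative thresholds only (not values from the specification).
const double kMaxPositionJump = 40.0;  // pixels
const double kMaxAreaChange = 0.5;     // relative change in contact area

// Decide whether a touch event arriving shortly after a finger-up belongs to
// the existing TouchInputGroup or should start a new one, by comparing the
// touch coordinates and the covered area with the group's last touch event.
bool belongsToGroup(double lastX, double lastY, double lastArea,
                    double newX, double newY, double newArea) {
    double dx = newX - lastX;
    double dy = newY - lastY;
    double distance = std::sqrt(dx * dx + dy * dy);
    double areaChange = lastArea > 0.0
                            ? std::fabs(newArea - lastArea) / lastArea
                            : 0.0;
    // Small differences: treat it as the same finger touching down again.
    return distance < kMaxPositionJump && areaChange < kMaxAreaChange;
}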
The finger assignment procedure 800, as shown in Figure 8, executed by the Bayesian data processing module 258, determines which touch point belongs to which finger according to the placement of the fingers on the screen 222. The fingers are first separated into two groups, each of which corresponds to a hand. A higher priority or probability is given to the right hand fingers if the touch point is located on the right hand side of the screen, and to the left hand fingers if the touch point is located on the left hand side of the screen. In addition, since it can be difficult to accurately determine the finger with a single touch point, a higher probability is given to the index fingers if there is only one touch point on the screen 222.
All the other fingers are given equal prior probabilities as being the first finger on the screen. When the user subsequently puts other fingers on the screen, probabilities are calculated based on the distance and the angle between the finger pairs. The procedure 800 when first invoked needs to clear and initialise values for the finger probabilities (step 802), and then initial distances and angles between the finger pairs are calculated in a calibration step 804 where the user is asked to put both of his/her hands on the screen. The data is stored in a hand spot relation data structure, as described in Code Listing B below.
Code Listing B HandSpotRelationData data structure
typedef struct HandSpotRelationData {
    double minDistance;
    double maxDistance;
    double meanDistance;
    double stdDevDistance;
    double minHorizontalDistance;
    double maxHorizontalDistance;
    double meanHorizontalDistance;
    double stdDevHorizontalDistance;
    double minVerticalDistance;
    double maxVerticalDistance;
    double meanVerticalDistance;
    double stdDevVerticalDistance;
    double minAngle;
    double maxAngle;
    double meanAngle;
    double stdDevAngle;
} HandSpotRelationData;
The minimum distance between any pair is almost the same because a person can put two fingers next to each other and the width of the fingers determines the minimum distance. The mean distances are calculated from the relaxed hand posture, while the maximum distances are calculated from the stretched hand posture.
The minimum, relaxed, and maximum distance values are then used at step 806 to calculate the parameters (μ and σ) of a Gaussian distribution

P(X) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X-\mu)^2}{2\sigma^2}}
The relaxed hand distance is used for the μ value, and the σ value is calculated from the distance between the minimum and the relaxed finger distance. The distribution parameters and the Gaussian distribution are used to calculate the probability of a finger configuration and allocate touch points to fingers of the user. This is done when a new touch point is detected on the screen or when it is detected that one of the fingers is assigned incorrectly because of physical impossibility. For instance, the right hand index and left hand index fingers can be put on the screen with a similar configuration to the left hand middle and left hand index fingers. In this case, when the user moves one of the fingers away by more than the maximum distance between the index and middle fingers, the procedure 800 will detect the incorrect assignment.
Accordingly the assignment procedure 800, once invoked, receives the touch points passed to the touch controller 258 (step 808) and processes them to calculate the distances and angles between the touch points (step 810). Bayesian processing is then executed using the distribution parameters and the Gaussian distribution to calculate Bayesian probabilities for each finger of the user's hand so as to assign a finger probability, representing a finger configuration, to each touch point (step 812). Based on the probabilities, the touch points are then each assigned to each of the fingers of the user respectively, so as to determine which touch points relate to which finger (step 814). The touch points are each assigned to the finger with the highest probability. The touch points allocated to a specific finger are allocated to a TouchInputGroup, and the finger configuration for the user's hands is determined. The procedure 800 continues to track the touch points received, and if the data associated with a touch point indicates that there has been a configuration change, the distances and angles are determined (step 816). If the distances and angles represent a physical impossibility, then processing returns to step 812 to recalculate the Bayesian probabilities. Also, if there is a new touch point on the screen, then processing returns to step 808 to recalculate the distances and angles.
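A simplified sketch of the scoring step is given below: each candidate finger is scored by its prior probability multiplied by the Gaussian likelihood of the observed inter-finger distance, and the touch point is assigned to the highest scoring finger. The angle term and the hand grouping are omitted for brevity, and the FingerModel structure is an assumption made for this sketch rather than part of the specification.

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Gaussian likelihood with mu from the relaxed-hand distance and sigma
// derived from the calibration data, as described above.
double gaussianLikelihood(double x, double mu, double sigma) {
    double z = (x - mu) / sigma;
    return std::exp(-0.5 * z * z) / (sigma * std::sqrt(2.0 * kPi));
}

// Illustrative per-finger model (not a structure from the specification).
struct FingerModel {
    int fingerId;        // e.g. 0..9 across both hands
    double prior;        // higher for index fingers and the matching screen side
    double relaxedDist;  // mean calibrated distance to an already assigned finger
    double sigma;        // spread derived from the minimum vs. relaxed distance
};

// Assign a new touch point to the finger with the highest posterior score,
// i.e. prior * likelihood of the observed distance to an assigned neighbour.
int assignFinger(double observedDist, const std::vector<FingerModel>& candidates) {
    int best = -1;
    double bestScore = -1.0;
    for (const auto& candidate : candidates) {
        double score = candidate.prior *
                       gaussianLikelihood(observedDist, candidate.relaxedDist,
                                          candidate.sigma);
        if (score > bestScore) {
            bestScore = score;
            best = candidate.fingerId;
        }
    }
    return best;
}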
When the finger assignment procedure 800 assigns a TouchInputGroup to a finger, it fires the following events for the other components 250 in the system:
Finger_Down fires when a finger is put on the screen and a touch input group is assigned;
Finger_Move fires when a finger moves on the screen;
Finger_Up fires when a finger is lifted from the screen;
Finger_Stop fires when a finger stops on the screen;
Finger_Changed fires when a finger is put on the screen, and its finger assignment changes to another finger; and
Finger_Wiggle fires when a finger wiggles on the screen.
There can be a noticeable delay in audio and haptic feedback because of the hardware devices, in particular the touch screen 222 and the data glove shown in Figure 1. This delay happens when the users move their fingers quickly, and it causes a mismatch between the received location and the actual location of the fingers. For instance, if a finger moves at a fast pace, the coordinate values received from the screen are slightly behind the actual finger position on the screen. In addition, there is a slight delay in activating the vibrator motors on the data glove because of the communication and processing latencies of the device.
The module 258 incorporates a finger movement predictor procedure to compensate for these delays and provide the required haptic and audio feedback accurately. The movement predictor predicts the actual location of the finger on the graphic based on its position, speed, acceleration and direction.
The movement predictor procedure executes:

X_t = X_{t-1} + V_{t-1} \cdot t + \frac{1}{2} a_{t-1} t^2

where
X_t represents the predicted coordinate;
X_{t-1} represents the previous coordinate;
V_{t-1} represents the speed of the finger in the previous calculation;
a_{t-1} is the acceleration of the finger in the previous calculation; and
t is the time frame, which can be 10 ms.
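The predictor is a direct application of the constant-acceleration equation above; a minimal sketch follows, in which the FingerState structure stands in for the speed, acceleration and previous-position values that the specification stores in TouchMoveData.

// Previous kinematic state of a finger (names assumed for this sketch).
struct FingerState {
    double x, y;    // previous coordinates X_{t-1}, Y_{t-1}
    double vx, vy;  // previous speed V_{t-1}
    double ax, ay;  // previous acceleration a_{t-1}
};

// Predict the finger position t seconds ahead of the last reported touch
// point (e.g. t = 0.010 for the 10 ms time frame mentioned above).
void predictPosition(const FingerState& state, double t,
                     double& predictedX, double& predictedY) {
    predictedX = state.x + state.vx * t + 0.5 * state.ax * t * t;
    predictedY = state.y + state.vy * t + 0.5 * state.ay * t * t;
}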
The haptic controller 266 implements a communication protocol to send actuation commands to the haptic device 218 as fast as possible once the touch controller 258 has determined the fingers touching the display and the position of the fingers.
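The specification does not give the wire format of these actuation commands, so the following is purely a hypothetical sketch of how a per-finger vibration command might be packed before being sent to the glove.

#include <array>
#include <cstdint>
#include <vector>

// Hypothetical actuation command: one vibration level (0 = off) per finger,
// serialised into a small byte packet for the haptic device 218.
std::vector<std::uint8_t> packActuationCommand(
        const std::array<std::uint8_t, 10>& fingerLevels) {
    std::vector<std::uint8_t> packet;
    packet.push_back(0xA5);  // assumed start-of-frame marker
    packet.insert(packet.end(), fingerLevels.begin(), fingerLevels.end());
    std::uint8_t checksum = 0;
    for (std::uint8_t level : fingerLevels) checksum ^= level;  // simple XOR checksum
    packet.push_back(checksum);
    return packet;
}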
A 3D sound generator of the audio interface module 262 generates 3D sound for a single observable point on the screen. It simulates a game situation where this point corresponds to a game character in a 3D environment, and the graphic elements correspond to the objects in the scene. The FMOD Ex library by Firelight Technologies Pty Ltd is used by the sound generator to load the sound files representing the different graphic elements into hardware accelerated 3D channels, and to position them in a 3D coordinate system where the locations of the sound sources correspond to the locations of the graphic elements. These channels are then played in a loop.
A user updates the single observable point by moving their fingers on the screen, and depending on the proximity of the graphic elements they hear the associated sounds. This component also controls the positions of the sound sources and updates their coordinate values after an operation which changes the active view.
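A sketch of this update step is given below. It computes, for each graphic element within a fixed radius of the finger, the position of the element's sound source relative to the listener (the finger), which would then be handed to the 3D audio engine; the structures and the screen-to-listener mapping are assumptions made for illustration.

#include <cmath>
#include <vector>

struct GraphicElement {
    double x, y;   // position of the element on the screen
    int soundId;   // handle of the looping sound associated with the element
};

struct SourceUpdate {
    int soundId;
    double relX, relY, relZ;  // sound source position relative to the listener
    bool audible;             // false when the element is outside the radius
};

// Place each element's sound source around the listener, who sits at the
// origin of the 3D coordinate system at the finger's current position.
std::vector<SourceUpdate> updateSoundSources(
        double fingerX, double fingerY, double radius,
        const std::vector<GraphicElement>& elements) {
    std::vector<SourceUpdate> updates;
    for (const auto& element : elements) {
        double dx = element.x - fingerX;
        double dy = element.y - fingerY;
        bool inRange = std::sqrt(dx * dx + dy * dy) <= radius;
        // Screen x maps to left/right, screen y to front/back (up the screen
        // is in front of the listener); height is kept at zero.
        updates.push_back({element.soundId, dx, 0.0, -dy, inRange});
    }
    return updates;
}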
As shown in Figure 9, each shape is associated with an audio file loaded through the FMOD Ex library, and as a user moves their finger 900 on the screen 222 the audio output 244 generates 3D audio so that the user hears sound positioned in a 3D space around the finger, i.e. the user will hear a sound associated with the large circle 902 that is coming from behind their left side, whereas the sound for the line 904 will come from the left side and appear closer than the circle 902. For the scanline mode, the audio interface 262 executes a scanline process 1000, as shown in Figure 10, and generates 2D sound using audio files from a Microsoft DirectX library. The process 1000 generates stereo audio for all objects that intersect a scanline. Thus, if there is an object intersecting the scanline then the user hears its associated sound. The audio is positioned relative to the mid point of the scanline. The process 1000 commences by first obtaining the finger positions on the screen 222 from the touch controller 258 (step 1002) and then proceeds to construct the scanline 1100 or 1206, as shown in Figures 11 and 12.
The scanline construction step 1004 has two modes: a one finger mode, as shown in Figure 11, and a two finger mode, as shown in Figure 12. In the one finger mode the user uses a single finger 1106 and moves it either horizontally or vertically. Based on the direction, a horizontal or vertical line 1100 is used as the scanline. The second mode for scanline construction uses two fingers 1202 and 1204 touching the screen and positions the scanline 1206 between the two fingers. The level of the sound that the user hears from the intersecting objects 1208, 1210 and 1214 is inversely proportional to the distance between the object and the corresponding finger.
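The two finger scanline described above can be sketched as follows: every object whose bounding box intersects the scanline produces a stereo cue positioned relative to the scanline's midpoint, with its level falling off with distance from the nearer finger. A horizontal scanline and bounding-box intersection are simplifying assumptions of this sketch, as are the structure names.

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct TouchPoint { double x, y; };

struct ScanObject {
    double minX, minY, maxX, maxY;  // bounding box of the graphic element
    int soundId;
};

struct StereoCue {
    int soundId;
    double pan;   // -1 = far left of the scanline midpoint, +1 = far right
    double gain;  // louder when the object is closer to a finger
};

// Generate stereo cues for every object intersecting the scanline between
// the two touch points (assumed roughly horizontal for simplicity).
std::vector<StereoCue> scanlineCues(TouchPoint f1, TouchPoint f2,
                                    const std::vector<ScanObject>& objects) {
    if (f1.x > f2.x) std::swap(f1, f2);
    double y = 0.5 * (f1.y + f2.y);            // height of the scanline
    double midX = 0.5 * (f1.x + f2.x);         // cues are placed around this point
    double halfLength = std::max(1.0, 0.5 * (f2.x - f1.x));

    std::vector<StereoCue> cues;
    for (const auto& object : objects) {
        bool intersects = object.minY <= y && y <= object.maxY &&
                          object.maxX >= f1.x && object.minX <= f2.x;
        if (!intersects) continue;
        double objectX = 0.5 * (object.minX + object.maxX);
        double pan = std::max(-1.0, std::min(1.0, (objectX - midX) / halfLength));
        double distToFinger =
            std::min(std::fabs(objectX - f1.x), std::fabs(objectX - f2.x));
        double gain = 1.0 / (1.0 + distToFinger / halfLength);  // nearer finger = louder
        cues.push_back({object.soundId, pan, gain});
    }
    return cues;
}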
Once the scanline 1100 or 1206 has been constructed (step 1004), the intersecting objects along the scanline are determined (step 1006). In Figure 11, the intersecting objects are the line 1100 and the rectangle 1104. In Figure 12 the intersecting objects are the large circle 1208, the line 1210 and the triangle 1214. The audio file associated with each object is then accessed (step 1008) and played so as to appear in a position relative to the mid point of the scanline 1100, 1206, so as to indicate the position of the intersecting object relative to the scanline (step 1010). Once the finger is moved, the procedure returns to step 1002 to obtain the finger positions.

The gesture controller module 264 includes a GestureManager which is responsible for recognising finger gestures such as double tap, flick and pinch. The GestureManager receives touch events from the analyser 260 and passes them to the relevant gesture components of the module 264, which are TapGesture, ZoomGesture, QueryGesture and PanGesture. These gesture components act as finite state automata which recognise a gesture from a sequence of finger down, move, stop and up events. Gestures can be invoked by one, two, three or four fingers. They not only involve simultaneous movements of fingers but also a sequence of finger movements in a time period, such as double or triple tapping.

By providing panning and zooming, the apparatus supports dynamic exploration of tactile graphics. The apparatus 100 supports fixed and continuous modes for panning. In fixed mode the user pans in discrete intervals, such as panning a full screen to the left or right. In continuous mode the user pans by following an anchor point, so that the view scrolls with every movement of the finger, which will be on the same graphic element in the final view. The apparatus 100 also supports two modes for zooming: fixed and continuous. In the fixed mode users can zoom in and out by a discrete scale, such as making the graphic elements double their original size. In continuous mode users can zoom to any scale, where the zoom ratio is proportional to the movement of the fingers. In both cases the graphic element under the anchor points remains the same.
Figure 13 illustrates the different multi-finger pan gestures that can be used by the PanGesture component to instruct the display to pan across the objects presented in the display. Figure 13(a) illustrates panning with three fingers 1302, 1304 and 1306, where two fingers 1304 and 1306 remain stationary whilst the other finger 1302 is moved to indicate the pan direction. In Figure 13(b), one finger 1302 is placed in a designated position for a pan command (although the actual command menu presented does not need to be displayed), and the second finger 1304 is moved to indicate the pan direction. In Figure 13(c), two fingers 1302 and 1304 are moved together to indicate the pan direction. Similarly for the ZoomGesture component, as shown in Figure 14, a zoom gesture can be made where one finger 1402 is kept stationary whilst two other fingers are moved either up or down to indicate zooming in or out, as shown in Figure 14(a). In Figure 14(b), one finger 1402 is positioned to indicate a zoom in or zoom out command (although the command menu does not need to be displayed) and another finger 1404 is placed in the position where zooming in or out occurs.
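The continuous zoom behaviour, in which the zoom ratio is proportional to the finger movement and the element under the anchor finger stays fixed, can be sketched as follows; the View structure and the sensitivity constant are assumptions made for this sketch.

// View transform: which world point appears at the screen origin, and at
// what scale (screen pixels per world unit). Names assumed for this sketch.
struct View {
    double originX, originY;
    double scale;
};

const double kZoomPerPixel = 0.005;  // assumed sensitivity of continuous zoom

// Continuous zoom: the ratio is proportional to how far the moving fingers
// have travelled, and the graphic element under the anchor finger stays put.
View continuousZoom(const View& view, double anchorScreenX, double anchorScreenY,
                    double fingerTravel /* signed pixels, up = zoom in */) {
    double ratio = 1.0 + kZoomPerPixel * fingerTravel;
    if (ratio < 0.1) ratio = 0.1;  // avoid collapsing or inverting the view

    // World point currently under the anchor finger.
    double worldX = view.originX + anchorScreenX / view.scale;
    double worldY = view.originY + anchorScreenY / view.scale;

    View zoomed;
    zoomed.scale = view.scale * ratio;
    // Re-derive the origin so the same world point remains under the anchor.
    zoomed.originX = worldX - anchorScreenX / zoomed.scale;
    zoomed.originY = worldY - anchorScreenY / zoomed.scale;
    return zoomed;
}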
GraVVITAS 100 can be used effectively by people who are blind or visually impaired to read an accessible version of a wide range of graphics and 2D content. The apparatus 100 is practical and inexpensive to produce and to operate. Graphics can be produced using an SVG editor or SVG file generation tool, without the need for access to special purpose printers and paper when producing tactile graphics. The touch screen 222 of the computer 202 of the apparatus supports interactive, active use of graphics.
GraVVITAS 100 can also be used for a wide variety of gaming applications, particularly video games where it is desirable to be able to distinguish between the particular fingers of a user's hand that are touching the display. For example, non-visually impaired users could use the GraVVITAS 100 without the haptic feedback device 218, and real time finger determination data generated by the Bayesian data processing module 258 can be used to trigger events in a video game 268 executed on the apparatus. The two audio modes can also be toggled on and off as desired for the video game 268.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention.

Claims

CLAIMS:
1. A graphics communication apparatus, including:
a computer with a touch sensitive visual display; and
a haptic feedback device connected to the computer and including vibration actuators positioned adjacent a user's fingers to indicate the relative position of a user's finger to a graphic element presented on the display.
2. A graphics communication apparatus as claimed in claim 1 , wherein the computer includes an audio interface module to generate and present audio to assist a user with determining the position of at least one graphic element on the display.
3. A graphics communication apparatus as claimed in claim 2, wherein the audio is triggered by a touch gesture.
4. A graphics communication apparatus as claimed in claim 2 or 3, wherein the audio is presented when at least one element crosses a scanline defined by at least one finger on the display.
5. A graphics communication apparatus as claimed in claim 3 or 4, wherein 3D positional audio is presented and adjusted depending on the relative position of a user's finger to the element to indicate the position as the finger is moved.
6. A graphics communication apparatus as claimed in claim 3, 4, or 5, wherein the audio interface generates speech to assist reading when an element is touched.
7. A graphics communication apparatus as claimed in any one of the preceding claims, wherein the computer is a tablet device, and the haptic device is a glove including vibrating motors connected to the tablet device.
8. A graphics communication apparatus as claimed in any one of the preceding claims, wherein the haptic device generates different vibration levels for different graphic elements.
10. A graphics communication computer, including:
a touch sensitive visual display; and
an audio interface module to generate and present audio for a visually impaired person to indicate the position of at least one graphic element presented on the display.
11. A graphics communication computer as claimed in claim 10, wherein the audio is presented when at least one element crosses a scanline defined by at least one finger on the display.
12. A graphics communication computer as claimed in claim 10 or 11, wherein 3D positional audio is presented and adjusted depending on the relative position of a user's finger to the element to indicate the position as the finger is moved.
13. A graphics communication computer as claimed in claim 10, 11 or 12, wherein a different respective sound is generated for each graphic element.
14. A computer, including:
a touch sensitive display; and
a touch controller configured to assign at least one touch point to a finger of a user.
15. A computer as claimed in claim 14, wherein the touch controller determines distribution parameters associated with fingers of a user's hand, determines position parameters associated with said at least one touch point, determines finger probabilities based on said position parameters and uses said probabilities to assign said at least one touch point to said finger.