US20130063494A1 - Assistive reading interface

Assistive reading interface

Info

Publication number
US20130063494A1
Authority
US
United States
Prior art keywords: motion, content, text, computer, computing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/361,585
Inventor
Douglas R. Kirschner
Nicholas H. Wrem
Jason Anthony Grieves
Amish Patel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US13/361,585
Assigned to MICROSOFT CORPORATION. Assignment of assignors interest (see document for details). Assignors: PATEL, AMISH; GRIEVES, JASON ANTHONY; KIRSCHNER, DOUGLAS R.; WREM, NICHOLAS H.
Publication of US20130063494A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignment of assignors interest (see document for details). Assignor: MICROSOFT CORPORATION
Priority to US15/656,825 (US10339833B2)
Status: Abandoned

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00: Teaching, or communicating with, the blind, deaf or mute
    • G09B21/001: Teaching or communicating with blind persons
    • G09B21/006: Teaching or communicating with blind persons using audible presentation of the information
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures


Abstract

A screen reader for a computing device with a touch sensitive user interface. The screen reader operates in an investigatory mode in which touch inputs detected through the user interface are interpreted as inputs designating content on the display to verbally render. Each selection may be interpreted as designating a unit of content at the designated location. The size of the unit may be determined based on a direction of motion of a pointing device arriving at the location on the display where the unit of content is designated. In this way, a user may, through simple and intuitively learned commands, designate content in units of different sizes, such as characters, words, lines or paragraphs when the content is text, and/or individual user interface elements when the displayed content being selected includes graphical elements.

Description

    RELATED APPLICATION
  • This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 61/533,583, filed Sep. 12, 2011, entitled Assistive Reading Interface, which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • It is known to equip a computing device with a screen reader for rendering information to a user verbally. Such a capability can be of significant benefit to computer users who are visually impaired. A visually impaired user, for example, can command the computer to read text or to indicate using speech what controls or other objects are being displayed.
  • Such assistive reading facilities may operate in different modes. In some modes, for example, the computing device may render text or other information verbally as content is generated for presentation to the user. In such a mode, information is rendered verbally analogously to how it might be displayed visually in a graphical user interface.
  • In other modes, a user may select different locations on a computer screen, such as by hitting various combinations of keys on a keyboard to indicate a particular location, and the computer will render verbally what is at that location. When a keyboard interface is used, the screen reader may recognize multiple keystroke combinations as commands that can specify which content, as displayed on the screen, is to be rendered verbally. The keystroke combination may also implicitly indicate how much content is to be rendered. For example, different keystroke commands may allow a user to step through rendered content a word at a time or a paragraph at a time.
  • It is known to provide a screen reader for a computer with a touch screen. Rather than indicating a location through keyboard commands, the user may indicate a location by touching the screen. Such a touch may indicate which content to render verbally. The screen reader may be used in connection with keyboard commands, which may control how much content, such as a word or paragraph, is rendered verbally from the designated location.
  • SUMMARY
  • An improved user experience may be provided through an improved control technique for a computing device providing an assistive reading interface. The interface may operate in an investigatory mode in which a user may designate locations on a display. The computing device may then verbally render information based on the content displayed at the designated location of the display. The amount of content rendered for each selection may be controlled by the user based on the direction of motion of a pointing device used to designate locations on the display.
  • In some embodiments, motion in a first direction may result in selection of a first amount of content to be rendered verbally. Motion in a second direction, transverse to the first direction, may result in selection of a second amount of content being rendered. As a specific example, when content comprises text arranged in lines, the first direction may be generally along lines of text and the second direction may generally be in a direction crossing lines of text. The first amount of content may be a designated word. The second amount of content may be a designated line.
  • The foregoing is a non-limiting summary of the invention, which is defined by the attached claims.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
  • FIG. 1 is a sketch of an environment in which an assistive reading interface may be used on a touchscreen;
  • FIG. 2 is a sketch of the graphical user interface of the computing device illustrated in FIG. 1 in which a user has designated content displayed in the graphical user interface;
  • FIG. 3 is a sketch of the graphical user interface of the computing device illustrated in FIG. 2 at a subsequent time when a user has further indicated content on the display;
  • FIG. 4 is a sketch of the graphical user interface of the computing device of FIG. 3 at a subsequent time at which the user has further indicated content;
  • FIG. 5 is a sketch of a graphical user interface of a touchscreen computing device in which the user has indicated a second amount of content;
  • FIG. 6 is a sketch of a graphical user interface of a touchscreen computing device in which the user has indicated a second amount of content; and
  • FIG. 7 is a functional block diagram of an exemplary computing device in which embodiments of the invention may operate.
  • DETAILED DESCRIPTION
  • The inventors have recognized and appreciated that an enhanced user experience could be provided for an assistive reading interface of a computing device by providing an improved mechanism for a user to designate content that is to be rendered verbally. Such an interface may provide a simple mechanism for a user to designate different amounts of content, using relatively simple commands. The amount of content to be rendered may be designated based on direction of motion of a pointing device across a user interface. In addition to avoiding the need for the user to memorize complex keystroke combinations or gestures on a touch screen, the interface may avoid the need for the user to make gestures that could also be used to control applications. In this way, the possibility is reduced of ambiguous commands to the computing device that could create the appearance of inconsistent operation. As a result, the user experience with the assistive reading interface may be faster to use, easier to learn, intuitive and natural.
  • Such a mechanism is well suited for use on a touch screen in which the pointing device may be a finger of the user and options for command formats may be more limited than when input is provided through a keyboard. Using such an interface, a user is provided with a simple way to investigate content visually rendered on the display at a pace that is consistent with the user's interest at the time.
  • For text-based content, a quantity of content rendered verbally in response to a user indication of content may depend on a direction of motion of a pointing device across the user interface. For example, if the user moves the pointing device along a line of text, the user interface may verbally render content in units of words as the pointing device moves to designate successive words being displayed. Conversely, when the user moves the pointing device transverse to the lines of text, the user interface may verbally render content in units of lines as the pointing device moves to designate successive lines being displayed.
  • Motion in any number of directions may be recognized by the user interface. For example, the interface may recognize motion in four directions: forward and backward motion and also up and down motion. In such an embodiment, by detecting direction of motion, the user interface may recognize commands for rendering units of content in four different sizes. These different commands may map to commands to render content a word at a time, a character at a time, a line at a time, a paragraph at a time or in larger units.
  • Though, it should be recognized that such an interface is not limited to rendering text-based content. In some embodiments, an assistive reading interface may generate speech output stating a name, label or other descriptive content associated with graphical content forming a user interface element, such as a control or other icon. The amount of such graphical content rendered verbally may depend on motion of the pointing device across the screen such that, for motion in a first direction, information is provided verbally for each user interface element indicated by the pointing device. For motion of the pointing device in a second direction, information is provided for a group of user interface elements based on the layout of those user interface elements.
  • As a result, the user interface provides a user with an ability to investigate contents of a display with varying levels of precision. Although users often want to hear a large block of text in its entirety, there are times when a higher level of precision is needed (for example, to find a specific word to start a selection). Being able to simply designate the size of the unit of content selected facilitates such exploration.
  • Moreover, such an interface may be useful for relatively dense text on a touch screen interface. In such an interface, positioning a finger to designate precisely a specific location on the screen may be difficult. However, through an interface that operates in an investigatory mode in which a user can simply designate quantities of text, a user, particularly a visually impaired user, may quickly identify a desired location on the screen. For this reason, an interface technique as described herein may be well suited for use with portable electronic devices, including those with relatively small touch screens, including smart phones.
  • In accordance with some embodiments, to investigate and read text on the screen, the user drags a single finger or other pointing device over the text area. While the finger movement is primarily down (within 45 degrees of downward, for example), the screen reader reads by line. When the finger movement becomes primarily horizontal, the screen reader switches to reading by word.
  • In some embodiments, the screen reader can differentiate between rightward and leftward movements to read by word or character. Also, optionally, the screen reader can switch between line and paragraph reading modes by distinguishing between upward and downward movements.
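  • As a rough illustration of the behavior described in the two preceding paragraphs, the sketch below classifies a finger movement by its angle and picks a reading unit. This is a minimal sketch under stated assumptions, not the disclosed implementation: the 45-degree threshold value, the ReadingUnit names and the classify_motion helper are introduced here for illustration only.
```python
import math
from enum import Enum


class ReadingUnit(Enum):
    WORD = "word"
    LINE = "line"


def classify_motion(dx: float, dy: float, threshold_deg: float = 45.0) -> ReadingUnit:
    """Pick a reading unit from the direction of finger motion.

    dx, dy are the displacement of the pointing device in screen coordinates
    (y grows downward, as is typical for touch input). Motion within
    `threshold_deg` of horizontal is treated as motion along a line of text
    and selects word-at-a-time reading; motion closer to vertical selects
    line-at-a-time reading.
    """
    if dx == 0 and dy == 0:
        return ReadingUnit.WORD  # no direction yet; fall back on a default
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))  # 0 = horizontal, 90 = vertical
    return ReadingUnit.WORD if angle < threshold_deg else ReadingUnit.LINE


# Example: a mostly-horizontal drag reads by word, a mostly-vertical one by line.
print(classify_motion(dx=30, dy=5))   # ReadingUnit.WORD
print(classify_motion(dx=4, dy=-25))  # ReadingUnit.LINE
```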
  • Such an approach to controlling a screen reader allows for direct investigation of text and does not require any gestures to specify the amount of content to be rendered. Because there are a limited number of simple touch gestures that a conventional touch screen device can reliably recognize (fewer than there are keyboard shortcuts), avoiding scenario-specific gestures leaves those gestures available for other commands. Such an interface is easy to learn because it uses an existing paradigm of direct investigation with a single finger. The different behaviors are easily discoverable and easy to understand.
  • Moreover, the interface gives the user the ability to hear larger blocks of text (e.g. a whole line or paragraph) while providing a way to hear individual words or, in some embodiments, characters. Accordingly, it integrates seamlessly with the rest of the direct investigation experience. In this way the user can naturally investigate text with varying levels of precision depending on their direction of finger movement.
  • An example of the use of such a screen reader in connection with a graphical user interface is provided in connection with FIG. 1. FIG. 1 illustrates a computing device with a touchscreen interface. Such a computing device may be constructed using techniques as are known in the art. The touchscreen, for example, may detect a user's finger or other pointing device pressing on or brought near the surface of the touchscreen. Motion across the surface of the touchscreen may be detected by a touchscreen controller that outputs indications of motion of the user's finger or other pointing device. The computing device in the example of FIG. 1 may be implemented with conventional components for performing these functions or in any other suitable way.
  • The output of the touch controller may be processed within the operating system of the computing device or by any other suitable component. As a specific example, the operating system may include components that interface with the touch controller. The operating system may route user input received through the touch controller to an appropriate component. Such routing may be performed using techniques as are known in the art or in any other suitable way. In the operating state illustrated in the example of FIG. 1, a user has enabled a screen reader utility in the operating system. Accordingly, user inputs received through the touchscreen interface may be routed to the screen reader. Such routing may be performed using techniques as are known in the art.
  • In the example of FIG. 1, the screen reader may operate in multiple modes. In some modes, the screen reader may render content verbally as it is displayed. In this way, a visually impaired user may hear content as applications or other components executing on the computing device generate and present it for display.
  • The screen reader may also support an investigatory mode of operation. In the investigatory mode, the user may indicate locations on the touchscreen. Content may be designated by the user touching the touchscreen with the pointing device, which in this example, is simply the user's finger. In response to a designation of a location on the touchscreen, the screen reader may render verbally that content. A screen reader that verbally renders designated content may be implemented using techniques as are known in the art or may be implemented in any other suitable way.
  • In contrast to conventional screen readers, a screen reader as described herein may, in response to a user indication of a location on the display, verbally render an amount of content selected based on a direction of motion of the pointing device, which is the user's finger in this example.
  • In the example of FIG. 2, the computing device is operating in an investigatory mode. The computing device may have been placed in the investigatory mode in any suitable way, such as, for example, by user input through the keyboard. FIG. 2 illustrates the computing device of FIG. 1 in an operating state in which the user has moved a finger to indicate a location on the touchscreen where a word is being displayed. In response to this indication of a location on the touchscreen, the screen reader may read the indicated word. The word may be read using a text-to-speech converter that drives a speaker as is known in the art, though any suitable approach for verbally rendering content may be used.
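  • One way to picture how a designated location might be turned into speech is sketched below: the touch point is hit-tested against word bounding boxes from the text layout, and the matching word is handed to a text-to-speech call. The Word record, the example boxes and the speak stub are hypothetical assumptions; an actual screen reader would obtain this information from the platform's text layout, accessibility and speech facilities.
```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Word:
    text: str
    x: float       # left edge of the word's bounding box
    y: float       # top edge
    width: float
    height: float


def word_at(point_x: float, point_y: float, words: Sequence[Word]) -> Optional[Word]:
    """Return the laid-out word whose bounding box contains the touch point."""
    for word in words:
        if (word.x <= point_x <= word.x + word.width
                and word.y <= point_y <= word.y + word.height):
            return word
    return None


def speak(text: str) -> None:
    """Stand-in for a text-to-speech engine driving a speaker."""
    print(f"[speech] {text}")


# Example: the second word on a line is touched and read aloud.
line = [Word("Assistive", 10, 40, 80, 20), Word("reading", 95, 40, 60, 20)]
hit = word_at(point_x=100, point_y=50, words=line)
if hit is not None:
    speak(hit.text)
```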
  • FIG. 3 illustrates the computing device of FIG. 2 at a later instant in time as the user is moving the finger across the display. In this example, content on the display is formatted as lines of text. The direction of motion of the user's finger is generally along a line of text. In the embodiment illustrated in FIG. 3, motion along a line of text indicates to the screen reader a selection of content in units of words of text. Accordingly, as the user has moved the finger from one word to an adjacent word, the screen reader interprets this input as a command to read each word individually.
  • FIG. 4 illustrates the computing device of FIG. 3 as the user has contacted the screen at a location that is generally along the same line of text containing the words designated in FIGS. 2 and 3. Accordingly, the screen reader interprets such user input as a command to read a word in the location designated on the touchscreen.
  • In the sequence of interactions from FIGS. 2 to 3, the user may have slid a finger across the touchscreen, continuously making contact with the touchscreen. The screen reader may interpret such motion as a command to read words as the finger passes over them. In the sequence from FIGS. 3 to 4, the user may have lifted the finger after the sliding motion illustrated in FIG. 3 ended but subsequently placed the finger on the user interface as indicated in FIG. 4. In the embodiment illustrated, because the motion designated by the touch illustrated in FIG. 4 is generally in the same direction as the sliding motion illustrated between FIGS. 2 and 3, the touch input illustrated in FIG. 4 is similarly interpreted as a command to read a word displayed on the user interface at a location indicated by that touch.
  • Accordingly, it should be appreciated that it is not critical to the invention whether motion across the touch screen occurs as a continuous sliding motion or as a series of contacts at different locations. Any suitable gesture, or combination of gestures from which a direction of motion can be inferred, may be interpreted as a command indicating a direction of motion. Though, in embodiments in which a continuous contact is not required, a mechanism may be incorporated to differentiate between successive contacts that are to be interpreted together versus as separate gestures. Such a determination may be based on the time between contacts or the relative location of a contact and a successive contact. Regardless of the criteria used to distinguish, if successive contacts are to be interpreted together, the direction of motion may be inferred based on the change in position of the contact locations and the unit of content read at each contact may be determined based on that direction.
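  • A minimal sketch of such a criterion follows, treating a new contact as part of the previous motion when it arrives soon after, and close to, the prior contact. The 0.4-second and 80-pixel limits are arbitrary illustrative values, not figures from the disclosure.
```python
import math
from dataclasses import dataclass


@dataclass
class Contact:
    x: float
    y: float
    timestamp: float  # seconds


def continues_motion(previous: Contact, current: Contact,
                     max_gap_s: float = 0.4, max_distance_px: float = 80.0) -> bool:
    """Heuristic: successive contacts are interpreted together when the new
    contact occurs shortly after, and near, the previous one."""
    gap = current.timestamp - previous.timestamp
    distance = math.hypot(current.x - previous.x, current.y - previous.y)
    return gap <= max_gap_s and distance <= max_distance_px


# Example: a tap on the next word 0.2 s later continues the same motion.
print(continues_motion(Contact(120, 40, 10.0), Contact(160, 42, 10.2)))   # True
print(continues_motion(Contact(120, 40, 10.0), Contact(400, 300, 12.5)))  # False
```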
  • If a successive contact is instead to be interpreted as the start of a separate motion, the unit of content read at that contact may be selected to be of a default size. For example, in some embodiments, an initial contact of a motion will not have a direction associated with it, such that a default unit of content, such as a word or single user interface element, may be verbally rendered at the first contact. Though, in some embodiments, at the first contact of a motion, the unit of content most recently read may be used as the default unit of content. For example, if, as part of the most recent motion, the unit of content read was a line or paragraph, at the first contact of a new motion, the screen reader may initially begin to read a unit of content of a similar size. Though, if the user then indicates an adjacent word as a result of motion in a direction along a line to signify reading of smaller units of content, the screen reader may interrupt the reading of the longer unit of content and enter a mode of reading individual words as they are indicated.
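  • The default-unit behavior described above could be kept as a small piece of state, as in the sketch below; the UnitSelector name and structure are illustrative assumptions rather than elements of the disclosure.
```python
class UnitSelector:
    """Remembers the unit of content read in the most recent motion and
    reuses it as the default when a new motion begins without a direction."""

    def __init__(self, initial_default: str = "word"):
        self.last_unit = initial_default

    def unit_for_first_contact(self) -> str:
        # The first contact of a motion has no direction yet; fall back on
        # the unit most recently read (or the initial default).
        return self.last_unit

    def unit_for_motion(self, unit_from_direction: str) -> str:
        # Once a direction is available it overrides the default, and the
        # choice is remembered for the next motion.
        self.last_unit = unit_from_direction
        return unit_from_direction


selector = UnitSelector()
print(selector.unit_for_first_contact())  # "word" until a motion establishes otherwise
print(selector.unit_for_motion("line"))   # a vertical drag switches to lines
print(selector.unit_for_first_contact())  # a later first contact starts with "line"
```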
  • In the examples of FIGS. 2, 3 and 4, the direction of motion is indicated to be generally parallel with lines of text appearing on the display. In this example, the motion is generally horizontal. In some embodiments, a computing device, such as a handheld computing device, may render text with an orientation that depends on the orientation of the computing device. In such an embodiment, a computing device may include one or more sensors to detect its orientation. Accordingly, a screen reader may interpret motion of a pointing device across the screen in a way that depends on the orientation of the computing device and/or the orientation of text as rendered on the screen of the computing device.
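  • One simple way to account for such orientation changes is to rotate the raw motion vector into the coordinate frame of the rendered text before classifying it, as sketched below; the quarter-turn rotation and the function name are assumptions made only for illustration.
```python
import math
from typing import Tuple


def to_text_coordinates(dx: float, dy: float, text_rotation_deg: int) -> Tuple[float, float]:
    """Rotate a screen-space motion vector into the text's frame so that
    'along a line' and 'across lines' keep their meaning when the device
    (and therefore the rendered text) is rotated."""
    theta = math.radians(text_rotation_deg)
    along = dx * math.cos(theta) + dy * math.sin(theta)
    across = -dx * math.sin(theta) + dy * math.cos(theta)
    return along, across


# Example: a drag that is vertical on the screen is motion *along* the lines
# of text when the device is held sideways and text is rotated 90 degrees.
print(to_text_coordinates(dx=0, dy=30, text_rotation_deg=0))   # roughly (0, 30): across lines
print(to_text_coordinates(dx=0, dy=30, text_rotation_deg=90))  # roughly (30, 0): along a line
```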
  • Regardless of the manner in which the direction of motion is determined or interpreted, FIG. 5 illustrates the manner in which the screen reader interprets motion in a different direction. In the scenario illustrated in FIG. 5, a user has initially moved the finger across the touchscreen in a direction parallel to the lines of text displayed on the screen. In this example, the motion initiates in a second line of text. In this scenario, the screen reader reads, word-by-word, the words in the indicated line as the finger passes over them.
  • Though, FIG. 5 illustrates that after passing partially over the second line by motion in a direction along that line of text, the user has moved the finger in a transverse direction, to pass over the third line. Though such motion positions the finger above a word in the third line, the screen reader selects a quantity of text based on the motion having a substantial transverse direction as a command to select units of text larger than words. In this example, motion transverse to a line indicates a command to select a line of text at a time. Accordingly, FIG. 5 shows that user input designating a location over the third line is interpreted as a command to read the entire third line.
  • In the example of FIG. 5, the motion of the finger that is interpreted as a command to read full lines of text at a time is not strictly perpendicular to the lines. Rather, as can be seen in FIG. 5, motion that deviates from the horizontal by an angle α was nonetheless interpreted as a command to select text for rendering verbally by line. The specific angular directions associated with commands to select different sized units is not critical to the invention. However, in this example, an angle α of 45° or more relative to the horizontal may be interpreted as a command to select content in units of lines. Conversely, an angle α of less than 45° may be interpreted as a command to select text in units of words.
  • In the embodiment illustrated, text is selected in units of one of two possible sizes, words and lines. In such an embodiment, motion along a line, whether to the left or right, corresponding to the backwards or forwards direction of a line of text, may be similarly treated. In the same vein, motion perpendicular to the lines of text, whether up or down, may be treated as a command to read text in units of lines. Though, in some embodiments, motions in different directions along a line of text may be treated as commands to read different sized units of text. For example, motion to the right along the line of text may be interpreted as a command to read text at the designated locations in units of words. Conversely, motion to the left along the line of text may be interpreted as a command to read text in units of characters. In a similar fashion, motion with a component perpendicular to lines of text, when in the downward direction, may be interpreted as a command to read the text in units of lines. Motion perpendicular to the lines of text in an upward direction may be interpreted as a command to read text at the designated locations in units of paragraphs. Though, it should be appreciated that the specific size of the units associated with motion in each direction is not critical to the invention and any suitable size units may be associated with motion in any direction. Moreover, it should be appreciated that, although an example is described in which four commands are associated with motion in four directions, motion in any suitable number of directions may be recognized and associated with any suitable number of commands to select text in units of any suitable size.
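  • The four-direction variant described in this paragraph might look like the following sketch. The screen-coordinate convention (y grows downward) and the particular mapping are taken from the example above; as the text notes, any other mapping of directions to unit sizes could be substituted.
```python
def unit_for_direction(dx: float, dy: float) -> str:
    """Map the dominant direction of motion to a unit of text.

    Rightward motion reads by word, leftward by character, downward
    (y grows downward in screen coordinates) by line, and upward by
    paragraph, following the example mapping described above.
    """
    if abs(dx) >= abs(dy):                 # primarily along the line of text
        return "word" if dx >= 0 else "character"
    return "line" if dy >= 0 else "paragraph"


print(unit_for_direction(25, 3))    # word
print(unit_for_direction(-25, 3))   # character
print(unit_for_direction(2, 30))    # line
print(unit_for_direction(2, -30))   # paragraph
```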
  • Further, it should be recognized that a screen reader is not limited to operating on content rendered as text. FIG. 6 illustrates a scenario in which content is rendered on a display with a combination of user interface elements, some of which are graphical components. A user accessing the displayed content with the aid of a screen reader may desire to have designated portions of that content, including the graphical elements, “read.” In this context, a graphical element may be “read” by having the screen reader verbally render a name, label or other description of the graphical element.
  • For example, the user interface illustrated in FIG. 6 includes icons designating files. Each designated folder has descriptive text associated with it, such as a name or date information. Though, each folder is displayed with an icon of a file folder, revealing to a sighted user the nature of the content displayed. In this example, display of a file folder icon indicates that further information may be accessed by selecting the file folder icon through the graphical user interface and issuing an open command. Accordingly, as a user explores the content of the graphical user interface as depicted in FIG. 6, it may be useful, upon designating a location on the display screen that displays a file folder icon, for the screen reader to indicate that the user interface element in that location is a file folder icon. Moreover, as illustrated in FIG. 6, a graphical user interface may conventionally display other user interface elements that are also graphical. Those other interface elements may include associated text or, in some scenarios, may appear without any associated text. For example, well-known graphical elements may be recognized by a user without descriptive text. As a specific example, the backwards arrow button, which controls navigation through a web browser, may appear without any descriptive text. Accordingly, when the user is operating a screen reader in an exploratory mode, the screen reader may verbally render information about such a graphical element when the user designates a location of the touchscreen containing that graphical element. Information about graphical elements may be rendered verbally in any suitable way, including using techniques as are known in the art.
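  • As an illustration of how such a description might be assembled from an element's role and any associated text, consider the sketch below; the UIElement record and the exact phrasing are hypothetical, and a real screen reader would draw these properties from the platform's accessibility tree.
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UIElement:
    role: str                    # e.g. "file folder icon", "back button"
    label: Optional[str] = None  # associated text such as a name, if any


def describe(element: UIElement) -> str:
    """Compose the phrase a screen reader might speak for a graphical element."""
    if element.label:
        return f"{element.role}, {element.label}"
    return element.role          # well-known elements may carry no descriptive text


print(describe(UIElement("file folder icon", "Quarterly reports")))
print(describe(UIElement("back button")))  # spoken by role alone
```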
  • Additionally, techniques as described herein may be used by a screen reader operating in an investigatory mode to control the amount of content verbally rendered in response to a user selection. As in the example of FIG. 5, FIG. 6 shows a pattern of motion across a graphical user interface that is initially horizontal but then includes a substantial transverse component. In this example, the initial designation of a location on the user interface may correspond to a folder icon. In response to that designation, the screen reader may verbally render a description revealing that the designated location contains an icon of a file folder.
  • As the user's finger moves horizontally across the display screen, the screen reader may verbally render descriptions of other icons or individual words encountered as part of that motion.
  • In the scenario illustrated in FIG. 6, after some period of horizontal motion, the user changes the direction of motion of the finger. In this example, the motion is at an angle α relative to the horizontal. Here, α exceeds a threshold angle used by the screen reader to differentiate between motions signifying different sized content units. Accordingly, the screen reader interprets motion in the direction α as an indication that the user would like to have content rendered in larger units. In this example, the larger units include the designated icon and information positioned to reveal a relationship to that icon. As a specific example, in the user interface of FIG. 6, information about a file is displayed on a line. Accordingly, motion in the direction α indicates to the screen reader to render the information shown on a selected line. That information, in this example, may include a verbal indication that the line includes a graphical element representing a file, text representing a name of the file and text representing other attributes of the file. Though, it should be appreciated that the specific information verbally rendered is not critical to the invention.
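  • A line-sized unit of this kind might be spoken by concatenating a description of the graphical element with the line's text fields, roughly as sketched below; the field names and example values are invented for illustration.
```python
from typing import List


def describe_line(element_kind: str, name: str, attributes: List[str]) -> str:
    """Compose the spoken rendering of one line of a file listing: the kind
    of graphical element, the file's name, then any other attributes."""
    return ", ".join([element_kind, name] + attributes)


# Example with hypothetical file attributes, spoken as one unit when a line is selected.
print(describe_line("file folder", "Quarterly reports", ["modified 9/12/2011", "12 items"]))
```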
  • Moreover, it should be appreciated that the manner in which logical groupings of information are formed is not critical to the invention. For example, in some embodiments, motion of a pointing device in a first direction may be interpreted as a command to verbally render each user interface element individually as it is designated by a pointing device. Motion in a second direction may be interpreted as a command to render a logical grouping of interface elements. As one example of an approach by which a logical grouping may be defined, interface elements appearing in the same display window may be considered to form a logical group.
  • In other embodiments, user interface elements aligned in the same level of a display hierarchy may be considered to represent a logical grouping. For example, the user interface of FIG. 6 includes a navigation pane. The navigation pane is segmented into sections, and each section is shown to include a subsection containing choices. Though not shown in FIG. 6, each subsection may contain further subsections, defining a hierarchy with multiple levels. In some embodiments, options contained within a subsection that are at the same level of the hierarchy may be taken as a logical group for purposes of determining the amount of information to be rendered verbally. Though, it should be recognized that any suitable level of the hierarchy may be used to define related information, and in some embodiments, motion in different directions may define different levels of the hierarchy to be used in selecting related information.
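  • As a purely illustrative sketch of the hierarchy-based grouping just described, the following Python fragment collects the entries that share a designated node's parent, treating one level of a navigation-pane hierarchy as a logical group. The dictionary layout and the siblings_at_same_level name are assumptions made for the example.

    def siblings_at_same_level(node):
        """Return the logical group of entries at the same hierarchy level
        as the designated node (i.e. all children of the node's parent)."""
        if node["parent"] is None:
            return [node]
        return node["parent"]["children"]

    # Hypothetical navigation pane: one section containing a subsection of choices.
    section = {"label": "Libraries", "parent": None, "children": []}
    for name in ("Documents", "Music", "Pictures"):
        section["children"].append({"label": name, "parent": section, "children": []})

    designated = section["children"][1]                  # user designated "Music"
    group = siblings_at_same_level(designated)
    print(", ".join(item["label"] for item in group))    # Documents, Music, Pictures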
  • As a further example, the user interface illustrated in FIG. 6 includes other user interface elements. For example, the user interface includes menu bars and groups of related controls, such as controls to close, maximize or minimize a display window. Other controls, such as a magnifying glass icon that triggers execution of a search command, may be displayed in conjunction with related user interface elements, such as a text box through which a user may specify a search query. A screen reader implemented using techniques as described herein may group these user interface elements in any suitable way and may interpret motion of a pointing device across the touchscreen in different directions as commands to verbally render this content in different sized units, defined in any suitable way to logically group the user interface elements. For example, when motion is occurring in a first direction, each of these user interface elements may be verbally rendered separately. Motion in a second direction may result in a logical group being rendered, which may be defined as all of the user interface elements at the same level of hierarchy in a menu or that relate to the same control function, such as the magnifying glass and its associated text box.
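  • The following Python sketch, offered only as one hypothetical way to realize the grouping just described, tags each control with a group identifier (for example, the search icon and its text box share a group) and speaks either the single designated control or its whole group depending on which direction of motion was detected. All names and labels here are invented for the example.

    def text_to_speak(controls, designated_index, render_group):
        """Choose what to speak when a control is designated.

        render_group is True when motion was in the direction associated with
        larger units, and False for element-by-element rendering.
        """
        designated = controls[designated_index]
        if not render_group:
            return designated["label"]
        members = [c["label"] for c in controls if c["group"] == designated["group"]]
        return ", ".join(members)

    toolbar = [
        {"label": "search button", "group": "search"},
        {"label": "search text box", "group": "search"},
        {"label": "minimize window", "group": "window"},
        {"label": "maximize window", "group": "window"},
        {"label": "close window", "group": "window"},
    ]
    print(text_to_speak(toolbar, 0, render_group=False))  # search button
    print(text_to_speak(toolbar, 0, render_group=True))   # search button, search text box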
  • Accordingly, it should be appreciated that a screen reader implemented according to the techniques described herein may provide substantial flexibility but should not be limited to the specific embodiments disclosed as examples.
  • FIG. 7 illustrates an example of a suitable computing system environment 700 on which the invention may be implemented. The computing system environment 700 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 700.
  • The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, tablet computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • The computing environment may execute computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
  • With reference to FIG. 7, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 710. Components of computer 710 may include, but are not limited to, a processing unit 720, a system memory 730, and a system bus 721 that couples various system components including the system memory to the processing unit 720. The system bus 721 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 710 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 710 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 710. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
  • The system memory 730 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 731 and random access memory (RAM) 732. A basic input/output system 733 (BIOS), containing the basic routines that help to transfer information between elements within computer 710, such as during start-up, is typically stored in ROM 731. RAM 732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 720. By way of example, and not limitation, FIG. 7 illustrates operating system 734, application programs 735, other program modules 736, and program data 737.
  • The computer 710 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 7 illustrates a hard disk drive 741 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 751 that reads from or writes to a removable, nonvolatile magnetic disk 752, and an optical disk drive 755 that reads from or writes to a removable, nonvolatile optical disk 756 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 741 is typically connected to the system bus 721 through a non-removable memory interface such as interface 740, and magnetic disk drive 751 and optical disk drive 755 are typically connected to the system bus 721 by a removable memory interface, such as interface 750.
  • The drives and their associated computer storage media discussed above and illustrated in FIG. 7 provide storage of computer readable instructions, data structures, program modules and other data for the computer 710. In FIG. 7, for example, hard disk drive 741 is illustrated as storing operating system 744, application programs 745, other program modules 746, and program data 747. Note that these components can either be the same as or different from operating system 734, application programs 735, other program modules 736, and program data 737. Operating system 744, application programs 745, other program modules 746, and program data 747 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • In some embodiments, a screen reader as described herein may be implemented as a component of an operating system. Though, it should be appreciated that a screen reader may be implemented as an application or in any other suitable way.
  • A user may enter commands and information into the computer 710 through input devices such as a keyboard 762 and pointing device 761, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 720 through a user input interface 760 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 791 or other type of display device is also connected to the system bus 721 via an interface, such as a video interface 790.
  • In the embodiment illustrated, monitor 791 may be implemented as a touch screen display. Accordingly, the display may serve as both an output and an input device. In addition to the monitor, computers may also include other peripheral output devices such as speakers 797 and printer 796, which may be connected through an output peripheral interface 795.
  • The computer 710 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 780. The remote computer 780 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 710, although only a memory storage device 781 has been illustrated in FIG. 7. The logical connections depicted in FIG. 7 include a local area network (LAN) 771 and a wide area network (WAN) 773, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 710 is connected to the LAN 771 through a network interface or adapter 770. When used in a WAN networking environment, the computer 710 typically includes a modem 772 or other means for establishing communications over the WAN 773, such as the Internet. The modem 772, which may be internal or external, may be connected to the system bus 721 via the user input interface 760, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 710, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 7 illustrates remote application programs 785 as residing on memory device 781. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art.
  • For example, though embodiments of the invention were illustrated in connection with a touch screen computing device, techniques as described herein could be applied in connection with other types of human interfaces. For example, the direction of motion of a mouse pointer across a screen may similarly be used to indicate a quantity of content to render verbally.
  • As another example, it was described that direction of motion controlled the size of units of text rendered verbally. Other characteristics of motion of a finger or other input device could be used to control the size of units of text. For example, the speed of motion could be used to control the size of the units.
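  • As a hypothetical sketch of such a speed-based variation (the thresholds and unit names below are invented for illustration and are not part of the described embodiments), pointer speed could be mapped to the size of the unit rendered:

    def unit_from_speed(distance_px, elapsed_s, slow=200.0, fast=600.0):
        """Map pointer speed (pixels per second) to an illustrative unit of text:
        slow motion reads words, moderate motion reads lines, and fast motion
        reads paragraphs. The thresholds are arbitrary example values."""
        speed = distance_px / elapsed_s if elapsed_s > 0 else 0.0
        if speed < slow:
            return "word"
        if speed < fast:
            return "line"
        return "paragraph"

    print(unit_from_speed(50, 0.5))    # word      (100 px/s)
    print(unit_from_speed(200, 0.5))   # line      (400 px/s)
    print(unit_from_speed(500, 0.5))   # paragraph (1000 px/s)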
  • Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Further, though advantages of the present invention are indicated, it should be appreciated that not every embodiment of the invention will include every described advantage. Some embodiments may not implement any features described as advantageous herein. Accordingly, the foregoing description and drawings are by way of example only.
  • The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format.
  • Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
  • Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
  • Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
  • Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
  • In this respect, the invention may be embodied as a computer readable storage medium (or multiple computer readable media) (e.g., a computer memory, one or more floppy discs, compact discs (CD), optical discs, digital video disks (DVD), magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. As is apparent from the foregoing examples, a computer readable storage medium may retain information for a sufficient time to provide computer-executable instructions in a non-transitory form. Such a computer readable storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above. As used herein, the term “computer-readable storage medium” encompasses only a computer-readable medium that can be considered to be a manufacture (i.e., article of manufacture) or a machine. Alternatively or additionally, the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as a propagating signal.
  • The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
  • Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
  • Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that conveys relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
  • Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing, and the invention is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
  • Also, the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
  • Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.

Claims (20)

1. A method of operating a computing device to verbally render information on a display, the method comprising:
receiving user input indicating motion of a pointing device across the display, the motion crossing over content;
when the motion is in a first direction, verbally rendering information on the display in units of a first size; and
when the motion is in a second direction, transverse to the first direction, verbally rendering information on the display in units of a second size.
2. The method of claim 1, wherein:
the units of the first size comprise words.
3. The method of claim 1, wherein:
the units of the second size comprise lines of text.
4. The method of claim 3, wherein:
the information is presented on the display as lines of text; and
the first direction is parallel to a line of text.
5. The method of claim 1, wherein:
units of the first size comprise individual graphical elements.
6. The method of claim 5, wherein:
the graphical elements comprise graphical user interface control objects; and
verbally rendering information on the display in units of the first size comprises rendering verbally a label associated with the control object.
7. The method of claim 1, further comprising:
monitoring motion of the pointing device across the screen while operating in a first mode in which information is rendered in units of the first size as the pointing device indicates each unit of information of the first size;
based on the monitored motion, detecting motion of the pointing device in the second direction, and entering a mode in which information is rendered in units of the second size as the pointing device indicates each unit of information of the second size.
8. The method of claim 1, wherein:
the display is a touch screen.
9. At least one computer-readable storage medium comprising computer-executable instructions that, when executed by at least one processor of a computing device having a display and a sound output, perform a method of operating a user interface, the method comprising:
receiving values indicating an indication of a location on the display with a pointing device;
identifying a characteristic of motion of the location of the indication;
identifying a unit of content based on the location of the indication;
selecting content, including the identified unit of content, the selected content comprising a quantity of content that is selected based on the identified characteristic of the motion; and
rendering verbally the selected content.
10. The at least one computer-readable storage medium of claim 9, wherein:
the method further comprises receiving user input to designate a mode of interaction for investigation of displayed content.
11. The at least one computer-readable storage medium of claim 10, wherein:
the mode of interaction comprises a first mode; and
the at least one computer-readable storage medium further comprises computer-executable instructions for operating the computing device in a second mode of interaction in which content is rendered verbally without receiving express user input through the display.
12. The at least one computer-readable storage medium of claim 9, wherein:
the identified characteristic of the motion is direction; and
selecting content comprises selecting a word when the direction of motion is a first direction and selecting a line of text when the direction of motion is a second direction.
13. The at least one computer-readable storage medium of claim 9, wherein:
the identified characteristic of the motion is direction;
the method further comprises determining a horizontal reference direction on the display; and
selecting content comprises selecting a word when the direction of motion is within 45 degrees of the horizontal reference direction and selecting a line of text when the direction of motion is greater than 45 degrees from the horizontal reference direction.
14. The at least one computer-readable storage medium of claim 9, wherein:
the identified characteristic of the motion is direction; and
selecting content comprises selecting a textual description of a control element when the direction of motion is a first direction and selecting the textual description of the control element in conjunction with a textual description of adjacent displayed information when the direction of motion is a second direction.
15. A computing device, comprising:
a touch screen;
a speaker;
a processor to:
receive from the touch screen input indicating activation of the touch screen with a pointing device;
identify motion of the pointing device across the touch screen based on the received input;
identify a unit of content displayed on the touch screen based on a location of activation of the touch screen;
select content, including the identified unit of content, the selected content comprising a quantity of content that is selected based on a direction of motion of the pointing device across the touch screen; and
generate control signals to the speaker to verbally render the selected content.
16. The computing device of claim 15, wherein:
the computing device further comprises a text-to-speech converter, and generating control signals to the speaker comprises providing the selected content to the text-to-speech converter.
17. The computing device of claim 15, wherein:
the processor is configured to select content in units of a first size based on identified motion in a first direction and to select content in units of a second size based on identified motion in a second direction.
18. The computing device of claim 15, wherein:
the computing device further comprises an orientation sensor;
the processor is further configured to determine a horizontal reference direction on the touch screen; and
selecting content comprises selecting a word when the direction of motion is within 45 degrees of the horizontal reference direction and selecting a line of text when the direction of motion is greater than 45 degrees from the horizontal reference direction.
19. The computing device of claim 15, wherein:
when the identified unit of content is a control element, selecting content comprises selecting a textual description of the control element when the direction of motion is a first direction and selecting the textual description of the control element in conjunction with a textual description of adjacent displayed information when the direction of motion is a second direction.
20. The computing device of claim 15, wherein:
the computing device has a tablet form factor.
US13/361,585 2011-09-12 2012-01-30 Assistive reading interface Abandoned US20130063494A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/361,585 US20130063494A1 (en) 2011-09-12 2012-01-30 Assistive reading interface
US15/656,825 US10339833B2 (en) 2011-09-12 2017-07-21 Assistive reading interface

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161533583P 2011-09-12 2011-09-12
US13/361,585 US20130063494A1 (en) 2011-09-12 2012-01-30 Assistive reading interface

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/656,825 Continuation US10339833B2 (en) 2011-09-12 2017-07-21 Assistive reading interface

Publications (1)

Publication Number Publication Date
US20130063494A1 true US20130063494A1 (en) 2013-03-14

Family

ID=47829472

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/361,585 Abandoned US20130063494A1 (en) 2011-09-12 2012-01-30 Assistive reading interface
US15/656,825 Active US10339833B2 (en) 2011-09-12 2017-07-21 Assistive reading interface

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/656,825 Active US10339833B2 (en) 2011-09-12 2017-07-21 Assistive reading interface

Country Status (1)

Country Link
US (2) US20130063494A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6154757A (en) 1997-01-29 2000-11-28 Krause; Philip R. Electronic text reading environment enhancement method and apparatus
US7900145B2 (en) * 2006-11-08 2011-03-01 Yahoo! Inc. System and method for synchronizing data
JP5184832B2 (en) * 2007-07-17 2013-04-17 キヤノン株式会社 Information processing apparatus, control method therefor, and computer program
CN101378540B (en) 2007-08-27 2011-09-14 华为技术有限公司 Method and system for implementing number portability business, and number portability data-base
US9875013B2 (en) 2009-03-16 2018-01-23 Apple Inc. Methods and graphical user interfaces for editing on a multifunction device with a touch screen display
US8681106B2 (en) 2009-06-07 2014-03-25 Apple Inc. Devices, methods, and graphical user interfaces for accessibility using a touch-sensitive surface

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5875428A (en) * 1997-06-27 1999-02-23 Kurzweil Educational Systems, Inc. Reading system displaying scanned images with dual highlights
US8548813B2 (en) * 1999-06-25 2013-10-01 Adrea, LLC Electronic book with voice emulation features
US8489400B2 (en) * 2008-09-13 2013-07-16 At&T Intellectual Property I, L.P. System and method for audibly presenting selected text
US8239201B2 (en) * 2008-09-13 2012-08-07 At&T Intellectual Property I, L.P. System and method for audibly presenting selected text
US8498866B2 (en) * 2009-01-15 2013-07-30 K-Nfb Reading Technology, Inc. Systems and methods for multiple language document narration
US8370151B2 (en) * 2009-01-15 2013-02-05 K-Nfb Reading Technology, Inc. Systems and methods for multiple voice document narration
US8498867B2 (en) * 2009-01-15 2013-07-30 K-Nfb Reading Technology, Inc. Systems and methods for selection and use of multiple characters for document narration
US20110050593A1 (en) * 2009-09-02 2011-03-03 Kim John T Touch-Screen User Interface
US20110050592A1 (en) * 2009-09-02 2011-03-03 Kim John T Touch-Screen User Interface
US20110167350A1 (en) * 2010-01-06 2011-07-07 Apple Inc. Assist Features For Content Display Device
US20110320206A1 (en) * 2010-06-29 2011-12-29 Hon Hai Precision Industry Co., Ltd. Electronic book reader and text to speech converting method
US8452600B2 (en) * 2010-08-18 2013-05-28 Apple Inc. Assisted reader
US20120306772A1 (en) * 2011-06-03 2012-12-06 Google Inc. Gestures for Selecting Text

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160148409A1 (en) * 2013-01-25 2016-05-26 Apple Inc. Accessibility techniques for presentation of symbolic expressions
US10540792B2 (en) * 2013-01-25 2020-01-21 Apple Inc. Accessibility techniques for presentation of symbolic expressions
US9489854B2 (en) 2013-10-31 2016-11-08 Pau-San Haruta Computing technologies for diagnosis and therapy of language-related disorders
US20170300294A1 (en) * 2016-04-18 2017-10-19 Orange Audio assistance method for a control interface of a terminal, program and terminal
US10698485B2 (en) 2016-06-27 2020-06-30 Microsoft Technology Licensing, Llc Augmenting text narration with haptic feedback
DE102017114068A1 (en) 2017-03-23 2018-09-27 David Hill Method for outputting media data
US11398164B2 (en) 2019-05-23 2022-07-26 Microsoft Technology Licensing, Llc Providing contextually relevant information for ambiguous link(s)
US10938952B2 (en) * 2019-06-13 2021-03-02 Microsoft Technology Licensing, Llc Screen reader summary with popular link(s)
US11430414B2 (en) 2019-10-17 2022-08-30 Microsoft Technology Licensing, Llc Eye gaze control of magnification user interface
CN112684936A (en) * 2020-12-29 2021-04-20 深圳酷派技术有限公司 Information identification method, storage medium and computer equipment

Also Published As

Publication number Publication date
US20170323585A1 (en) 2017-11-09
US10339833B2 (en) 2019-07-02

Similar Documents

Publication Publication Date Title
US10339833B2 (en) Assistive reading interface
US11386266B2 (en) Text correction
US10817175B2 (en) Input device enhanced interface
US11126346B2 (en) Digital flash card techniques
US20180239512A1 (en) Context based gesture delineation for user interaction in eyes-free mode
US20210049321A1 (en) Device, method, and graphical user interface for annotating text
KR101668398B1 (en) Translating user interaction with a touch screen into input commands
KR102028119B1 (en) Method for displaying for application an electronic device thereof
US20160070433A1 (en) Devices, methods, and graphical user interfaces for accessibility using a touch-sensitive surface
US20140026101A1 (en) Accessible Menu Navigation Techniques For Electronic Devices
EP3155501B1 (en) Accessibility detection of content properties through tactile interactions
US20150026554A1 (en) Device, method, and graphical user interface for manipulating tables using multicontact gestures
US20110087974A1 (en) User interface controls including capturing user mood in response to a user cue
US20140282002A1 (en) Method and Apparatus for Facilitating Use of Touchscreen Devices
CN106104450B (en) Method for selecting a part of a graphical user interface
CN108885505B (en) Intuitive document navigation with interactive content elements
US20140123036A1 (en) Touch screen display process
US20140210729A1 (en) Gesture based user interface for use in an eyes-free mode
US20140215339A1 (en) Content navigation and selection in an eyes-free mode
US9285978B2 (en) Using a scroll bar in a multiple panel user interface
Nilsson Design guidelines for mobile applications
US10228845B2 (en) Previewing portions of electronic documents
US20200356248A1 (en) Systems and Methods for Providing Continuous-Path and Delete Key Gestures at a Touch-Sensitive Keyboard
KR20190136301A (en) Method for providing of information related to content based on user behavior
CN110945470A (en) Programmable multi-touch on-screen keyboard

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIRSCHNER, DOUGLAS R.;WREM, NICHOLAS H.;GRIEVES, JASON ANTHONY;AND OTHERS;SIGNING DATES FROM 20120127 TO 20120129;REEL/FRAME:027777/0001

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0541

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION