WO2006029259A2 - Creating an annotated web page - Google Patents

Creating an annotated web page

Info

Publication number
WO2006029259A2
Authority
WO
WIPO (PCT)
Prior art keywords
user
annotation
document
annotations
content
Prior art date
Application number
PCT/US2005/031966
Other languages
French (fr)
Other versions
WO2006029259A3 (en)
Inventor
Josef Hollander
Mor Schlesinger
Original Assignee
Sharedbook Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/936,788 external-priority patent/US20070118794A1/en
Application filed by Sharedbook Ltd filed Critical Sharedbook Ltd
Priority to EP05794986A priority Critical patent/EP1800222A4/en
Publication of WO2006029259A2 publication Critical patent/WO2006029259A2/en
Publication of WO2006029259A3 publication Critical patent/WO2006029259A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/166 Editing, e.g. inserting or deleting
    • G06F40/169 Annotation, e.g. comment data or footnotes

Definitions

  • the inventions disclosed herein relate generally to collaborative systems and more particularly to shared annotation systems.
  • One issue associated with network collaboration is synchronicity. For example, users often collaborate by exchanging versions of documents via e-mail or other similar means.
  • a first user edits or otherwise comments on a document and then sends the revised version to a second user for further input.
  • the second user makes or otherwise provides their input and then e-mails the new document back to the first user.
  • while the first user is editing the document, however, the second user cannot provide input since they do not possess the current version of the document (currently being edited by the first user) and therefore do not know what changes the first user might be making.
  • the first user cannot provide further input while the document is being edited by the second user. It is thus desirable for users to be able to provide synchronous comments and edits without having to wait for other users.
  • Discussions are stored separately from their related documents. When a particular document is requested by a user, any related discussions associated with the identifier for the document are also retrieved.
  • the system discussed in the '564 application has a number of shortcomings. For example, in the '564 patent, only HTML text associated with a discussion is stored. If the discussion is linked to another item, for example a media item, such as a graphic, a video clip, an audio clip, etc., the media file is not stored in the system database containing the HTML text and other data associated with the discussion. Instead, only a link to the media is stored.
  • the present invention addresses, among other things, the problems discussed above with shared annotation systems.
  • computerized methods are provided for enabling a plurality of users to collaborate or otherwise provide annotations and other input and feedback related to shared documents and content in a computer network. Users are able to synchronously navigate content via multi- portion displays in which indicators related to the annotations are embedded in document content in a first portion of the display and the related annotations are synchronously presented in at least a second portion of the display.
  • the system also generates custom documents based on annotated content, provides commerce opportunities related to annotated content, persistently presents selected multimedia content while navigating a plurality of document pages, and accepts and indexes annotations related to visual content elements such as graphics and photographs.
  • the system enables a method for automatically navigating a document in a display having at least a first portion and a second portion, the method comprising: receiving an annotation related to the document, the annotation generated by a user at a first client; associating the annotation with a first indication in the document; receiving, from a user at a second client, an input to navigate a first portion of a display at the second client, the input causing the first indication to be displayed in the first portion of the display; and in response to the input, automatically displaying the annotation in a second portion of the display at the second client.
  • the display comprises a browser window, such as an Internet browser.
  • the document comprises an electronic book, a digital photo album containing one or more digital photos, a web page, a text document, or a multimedia document.
  • the annotation comprises a text annotation, such as a comment related to the document.
  • the annotation comprises a graphical annotation, such as a photograph.
  • the annotation comprises an audio annotation, a video annotation, a multimedia annotation, or a discussion group related to the document.
  • the input comprises an input to scroll the first portion of the display or an input to navigate to a portion of the document containing the first indication.
  • the first indication comprises a graphical indication, such as an icon.
  • receiving an annotation comprises receiving form data submitted by the user at the first client, such as receiving HTML form data.
  • associating the annotation with a first indication in the document comprises: identifying a portion of the document to which the annotation relates; and associating the first indication with the portion of the document to which the annotation relates.
  • the annotation comprises a discussion group related to the portion of the document.
  • the annotation is added to a data structure stored in memory, the data structure comprising a list of annotations relating to portions of one or more documents.
  • the list of annotations comprises a list of bookmarks.
  • the system receives input selecting an annotation from the list of bookmarks and displays, in the first portion of the display, at least a portion of a document to which the annotation is related and displays at least the selected annotation in the second portion of the display.
  • associating the first indication comprises embedding the first indication in the portion of the document to which the annotation relates.
  • embedding the first indication comprises: receiving location data related to the portion of the document; processing the location data to determine a first location within the document relative to a location of the portion within the document; and generating a new version of the document, the new version of the document containing the first indication embedded at the first location.
  • the location data comprises one or more from the group comprising: a document identifier, a section identifier, a chapter identifier, a bookmark identifier, a portion length, and a portion offset.
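  • As an illustration only, a minimal sketch of how such location data might be represented on the client is shown below, written in JavaScript, which the specification uses elsewhere; the field names are assumptions rather than terms from the specification:

      // Hypothetical shape of the location data accompanying an annotation.
      // The specification only requires some combination of a document identifier,
      // a section identifier, a chapter identifier, a bookmark identifier,
      // a portion length, and a portion offset.
      var locationData = {
        documentId: "book-1024",   // document identifier
        chapterId: "chapter-03",   // chapter identifier
        bookmarkId: "para-17",     // bookmark identifier for the paragraph
        portionOffset: 554,        // characters from the start of the paragraph
        portionLength: 127         // length of the selected portion in characters
      };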
  • the invention also includes systems and methods for replacing a first version of the document stored in memory with the new version of the document, for example by overwriting a first version of the document with a new version of the document.
  • receiving an annotation comprises receiving an annotation related to an image contained in the document, for example receiving information identifying one or more subjects of the image.
  • the system also includes methods for associating the one or more subjects with the image, such as by updating a data structure stored in memory, the data structure storing associations between one or more images and one or more subjects of the one or more images.
  • the annotation comprises a commercial offer, such as an offer to purchase a product related to the document.
  • the system also includes methods for processing a request by a user at a client to purchase the product, such as methods for transmitting the product and the document to the user.
  • the system also includes methods for communicating, to a user at a client, an offer to purchase the document and a set of annotations related to the document, such as a set of annotations selected by the user.
  • the system processes the user request to purchase the document and the set of annotations, for example by printing the document and the set of annotations.
  • the system prints the annotation and the related portion of the document on the same page.
  • processing the user request comprises transmitting the document and the set of annotations to the user.
  • the system also includes methods for authenticating the user at a first client and authorizing the user at the first client to provide the annotation; and authenticating the user at the second client and authorizing the user at the second client to navigate the document.
  • the system includes methods to annotate content of a web page. An indication is inserted in and associated with content according to markup language describing offsets including a starting point and an endpoint for the indication, the starting point and endpoint offsets corresponding to a number of characters from a location within the content.
  • the system includes program code that captures user inputs identifying selections according to a paragraph identifier, a starting point value, and an ending point value.
  • the system enables a method for selecting an arbitrary string of characters on a web page and posting the selection, including related metadata, to an application server.
  • the related metadata includes positional metadata and content identifiers.
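  • A minimal sketch of posting a selection and its positional metadata to an application server appears below; the endpoint path and payload field names are assumptions introduced for illustration, not part of the specification:

      // Sketch: submit the selected string and its positional metadata.
      // "/annotations" and the payload layout are hypothetical.
      function postSelection(contentId, paragraphId, startOffset, endOffset, selectedText) {
        var payload = {
          contentId: contentId,     // content identifier for the annotated page
          paragraphId: paragraphId, // paragraph containing the selection
          start: startOffset,       // starting point offset (characters from the paragraph start)
          end: endOffset,           // ending point offset
          text: selectedText        // the arbitrary string selected by the user
        };
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/annotations", true);
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(payload));
      }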
  • the system enables a method for creating a custom memory book including original content supplied by a first party, annotations provided by one or more users, and multimedia elements provided by other users. For example, in some embodiments, users create a memory book by customizing existing content provided by content creators.
  • the original article also generally contains indications and corresponding annotations input by various users responding to the original article.
  • a user can then create any number of custom memory books from the original article by uploading additional multimedia elements and selecting specific annotations to include in their personal memory book.
  • a user uploads their own personal pictures to replace or supplement the pictures in the original article posted by the content provider.
  • a user also uses pictures posted as annotations by other users to replace or supplement pictures of the original article or they use additional pictures provided by the content provider or other content providers.
  • users also select custom annotations to include with the memory book by filtering or otherwise selecting annotations from the set of annotations posted by other users regarding the original article.
  • a user automatically selects annotations from a list of friends who post annotations.
  • users select annotations individually or based on criteria such as ratings from other users or annotation type.
  • the system enables a method for printing and binding the custom memory book, such as by using standard book publishing equipment and techniques.
  • FIG. 1 is a block diagram of a shared annotation system according to an embodiment of the present invention
  • FIG. 2 is a block diagram of functional modules in a shared annotation system according to an embodiment of the present invention.
  • FIG. 3 is a flow chart of a method to synchronously navigate shared annotations according to an embodiment of the present invention
  • Fig. 4A is a block diagram of an exemplary screen display of a shared annotation system according to an embodiment of the present invention.
  • Fig. 4B is a block diagram of two exemplary screen displays of a shared annotation system according to an embodiment of the present invention.
  • Fig. 5 is a flow chart of a method for processing an annotation according to an embodiment of the present invention
  • Fig. 5A presents an exemplary sample of code for an XHTML formatted page of content according to one embodiment of the invention
  • Fig. 5B presents an exemplary sample of code for an XHTML formatted page of content according to one embodiment of the invention
  • Fig. 6 is a flow chart of a method of annotating a visual element according to an embodiment of the present invention.
  • Fig. 6A is a flow chart of a method of recreating a page of content according to an embodiment of the invention.
  • Fig. 6B is a flow chart of a method of processing an element during page creation according to an embodiment of the invention;
  • Fig. 7 is a flow chart of a method of providing a customized document related to a shared annotation system according to an embodiment of the present invention
  • Fig. 8 is a block diagram of a sample page from a customized document related to a shared annotation system according to an embodiment of the present invention
  • Fig. 8A is a screenshot of an exemplary article page of a memory book according to an embodiment of the present invention
  • Fig. 8B is a screenshot of an exemplary comments page of a memory book according to an embodiment of the present invention
  • Fig. 8C is a screenshot of an exemplary dynamic print page according to an embodiment of the invention.
  • Fig. 9 is a flow chart of a method of presenting a selected multimedia element while navigating a document in a shared annotation system according to an embodiment of the present invention.
  • the system also generates custom documents based on annotated content, provides commerce opportunities related to annotated content, persistently presents selected multimedia content while navigating a plurality of document pages, and accepts and indexes annotations related to visual content elements such as graphics and photographs. Additional aspects and features of the system will also be appreciated by one skilled in the art as further described below.
  • Fig. 1 presents a block diagram of a shared annotation system according to an embodiment of the present invention.
  • the system includes one or more clients including first client 105, a second client 110, and an nth client 115, connected to a network 120, a content server 125 including a content processor 130 communicatively coupled to a data store 135, and one or more additional computers including a moderator computer 140, an administrator computer 145, and a support computer 150.
  • Clients 105, 110, and 115, and other computers in the system, include personal computers and other computing devices known in the art, such as personal digital assistants ("PDAs"), tablet computers, cellular telephones, and other devices.
  • the clients are communicatively coupled to the content server 125 via a computer network 120, such as the Internet or a local area network (“LAN").
  • Users of the client devices collaborate or otherwise provide annotations and other input and feedback related to shared documents and content in the network.
  • the users collaborate or otherwise provide annotations regarding the content via one or more software modules including a display module.
  • users interact with content and provide annotations via a web browser, such as Microsoft Internet Explorer or Netscape Navigator.
  • the content server 125 contains a content processor 130 and other modules directed to receiving and processing user requests regarding content. Requests include annotations regarding content, requests for new content, navigation inputs regarding content, and other user requests.
  • the content server 125 is communicatively coupled to a data store 135.
  • the data store 135 stores a variety of data including document content for delivery to users, user account and registration information, annotations and other information generated by users regarding content, and other related data.
  • annotations generally include content-related input provided by users including text input, graphical input, audio input, video input, and other types of input, associated in some way with a particular selected character sequence in a primary set of content. For example, a user may input a textual comment or a user may upload a picture related to content.
  • a user may also provide a voice recording or other recording related to content or even a video clip as an annotation.
  • Annotations may also include a discussion group or other similar forum or means to facilitate threaded discourse or other interaction between users regarding a particular portion of a document. For example, a user may find a particular paragraph of a document very important and create a location-specific discussion group regarding the paragraph as an annotation.
  • Additional computers are also connected to the network 120 and interface with content server 125 and client computers to provide additional functionality.
  • moderator computer 140 may be used by a moderator to review and approve user comments and annotations.
  • An administrator computer 145 may manage other aspects of user interaction with the system such as user registration or security related issues.
  • Support personnel may use support computer 150 to interface with users and provide additional assistance or help regarding user concerns. Additional computers of remote clients may also be employed or used by role-based personnel such as a picture moderator, a comments moderator, a topic approver, a new edition creator, a discussion group moderator, etc.
  • Fig. 2 presents a block diagram of functional modules in a shared annotation system according to an embodiment of the invention.
  • the system is implemented using Model View Controller ("MVC") architecture as known in the art.
  • Four tiers are presented including a client tier 153, a presentation tier 163, and an application tier 167, as well as a data store 135 or integration tier containing the data model.
  • modules are distributed among one or more content servers 125 and clients 105, 110, 115.
  • the system may also implement multiple tiers and distribute modules to distribute functionality in order to improve system efficiency or otherwise load balance processing operations.
  • the client tier 153 includes a highlight module 155, a synchronization module 157, an annotation module 159, and a view modes module 161.
  • the client tier includes code, such as JavaScript code, that executes on various pages, such as DHTML pages.
  • the highlight module 155 is generally directed to managing selection and highlighting of annotations and text in the original content. For example, if a user clicks on an image annotation, the highlight module manages highlighting the corresponding text in the first portion of the display as well as the image annotation in the second portion of the display. Conversely, if a user selects or otherwise interacts with an annotation in the second portion of the display, the corresponding text or other visual elements are highlighted in the first portion of the display by the highlight module.
  • the synchronization module 157 manages relationships between original content in the first portion of the display and corresponding annotations in the second portion of the display.
  • annotations are presented corresponding to content in the first portion of the display as the user scrolls the first portion of the display.
  • the first portion of the display also synchronously scrolls ensuring that original content in the first portion corresponding to the annotations in the second portion is consistently displayed.
  • the synchronization module 157 also prevents unnecessary scrolling which might cause flicker. For example, no scrolling is performed if an icon or other indication present in the first portion of the display corresponds to an annotation already visible in the second portion of the display.
  • the second portion of the display is scrolled to find the next annotation only when a navigation input changes the display such that an indication in the first portion of the display disappears, and vice versa.
  • the annotation module 159 generally manages and processes annotations of images and other multimedia content. For example, when a user selects a photo for annotation, the annotation module 159 presents a rectangular selection box over the photo that may be resized to precisely indicate the portion of the photo to which an annotation refers. Multiple selection rectangles or other selection shapes may be drawn over a photo each corresponding to individual annotations. Upon receipt of an appropriate input, for example when a save or post annotation(s) button is selected, the annotation module also handles communicating the selection input(s) and related annotation information to other modules of the system as further described herein.
  • the view modes module 161 generally manages and controls presentation modes for content. For example, the view modes module switches between modes such as “embedded mode” in which indications or icons are presented inline with original content, “non-embedded mode” in which indications are presented to the left of the original content with one indication type per paragraph, and “memory book mode” in which indications are aggregated by type and presented inline at the end of individual paragraphs as opposed to directly in the text or to the left of the text.
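  • By way of a hedged sketch (the mode names, element id, and CSS class names below are assumptions, not taken from the specification), the view modes module could switch presentations by swapping a class on the content container:

      // Toggle between embedded, non-embedded, and memory book presentations.
      function setViewMode(mode) {
        var container = document.getElementById("contentPane");
        container.className = "shrdbk_main";              // base class wrapping the book text
        if (mode === "embedded") {
          container.className += " mode-embedded";        // icons inline with the original content
        } else if (mode === "non-embedded") {
          container.className += " mode-non-embedded";    // icons to the left, one type per paragraph
        } else if (mode === "memory-book") {
          container.className += " mode-memory-book";     // icons aggregated at the end of each paragraph
        }
      }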
  • the presentation tier 163 generally includes a number of modules 165 running code within the web container.
  • the code modules 165 generally include a controller responsive to data and inputs received from the client tier 153 as well as the business tier 167.
  • Exemplary code modules 165 correspond to modules of the business tier 167 as further described herein and include a back office module, a book module, a bookmark module, a comment module, a conversion module, an ecommerce module, a print module, an image module, a media module, a profile module, a search module, and a user management module.
  • Code modules 165 provide a bridge between application logic provided by the business tier and client inputs or presentation outputs.
  • the business tier 167 generally includes a number of modules including a back office module 169, a book or content module 171, a bookmarks module 173, a user management module 175, a content serialization module ("CSE") 177, a media module 179, a comments module 181, a statistics module 183, a conversion module 185, an ecommerce module 187, a print module 189, a personalization module 191, a profile module 193, a search module 195, a payment gateway module 197, and a print services module 199.
  • the user management module 175 is generally responsible for handling user-related operations such as registration, authentication, and membership rights and approvals (such as for administrators, regular members, etc.).
  • the book or content module 171 generally manages and directs content-related operations such as navigation to other pages and tracking user preferences. For example, the content module 171 tracks preferred viewing modes and last pages visited for users. Generally, the content module is not directly responsible for serving content, however, since this is handled and resolved in the presentation tier by the corresponding book code module of the presentation tier and other code modules for the sake of improved performance.
  • the bookmarks module 173 generally manages the user's private bookmarks list for content and annotations.
  • the bookmarks module 173 maintains a data structure containing pointers to locations for content or annotations that a user may wish to revisit or otherwise mark as a favorite.
  • the system automatically navigates to and presents the related content or annotation corresponding to the selected bookmark.
  • the comments module 181 is generally directed to processing operations associated with posting annotations.
  • the comments module 181 manages inputs posting or replying to annotations, applying automatic moderation to posted annotations, and notifying moderators when annotations trigger various notification filters.
  • the comments module 181 also notifies annotation authors when a reply or other corresponding annotation is posted regarding their authored annotation.
  • media module 179 processes graphical annotations and other graphical information provided by users.
  • the media module 179 processes photo, video, and audio annotations, processing posts, notifying moderators of certain posts, and managing user replies.
  • the media module also processes video annotations by capturing and presenting a particular frame (such as the first frame) as a thumbnail image representing the video in the annotations portion of the display.
  • the content serialization engine 177 interfaces with the database 135 to lock content, update content, and otherwise process user annotations.
  • the CSE 177 facilitates content delivery among multiple users. For example, when a first user provides an annotation regarding a particular page of content, in some embodiments, synchronization module 165 locks that page and prevents access to the page by other users until the annotation process is complete. In some embodiments, the CSE 177 maintains a queue of new annotations and processes annotations by creating new content pages and media pages containing the new annotations as further described herein.
  • the statistics module 183 generally tracks data related to posted annotations. For example, in some embodiments, the statistics module 183 tracks the number of annotations posted for each page in a given document and presents an indication of which page has the most number of new posts or a certain number of posts within a given period of time, such as in memory book mode as further described herein.
  • the print module 189 is generally directed to printing or otherwise outputting content according to user inputs and preferences. For example, the print module 189 creates PDF files or other document files for versions of content output such as dynamic print and memory book creation as further described herein.
  • the conversion module 185 is generally responsible for processing and formatting raw original content for use by the system and for users to annotate.
  • the conversion module parses original content into paragraphs, formats the content for presentation, and creates bookmark IDs or other identifiers for each paragraph used by the CSE 177 to create new pages when annotations are added as further described herein.
  • the ecommerce module 187 processes payments and generally handles monetary transactions associated with use of the system. For example, the ecommerce module manages shopping carts and other purchase vehicles, processes credit card payments and other payments, and also interfaces with other modules such as integration modules including the payment gateway 197 and external print services 199.
  • the personalization module 191 and the profile module 193 are generally responsible for processing inputs regarding user accounts. For example, the profile module 193 processes user administrative requests regarding password and address changes.
  • the personalization module 191 handles other inputs such as associating a personal photo or icon to present next to user postings or in a user's business card, as well as other general information about the user such as hobbies, favorite websites, etc.
  • the search module 195 is generally responsible for indexing and processing search operations on both original text and on annotations.
  • search module 195 allows users to search not only document content, but also annotations provided by other users and other information. Users can search for annotations provided by a particular user, for a particular text string contained in annotations, and input other search expressions to locate information.
  • the system also includes various modules, such as a payment gateway module 197 and a print services module 199, for integration with external or third-party systems.
  • the payment gateway module 197 provides an interface to process all or part of the payments using a third-party payment provider.
  • the print services module 199 provides an interface for printing special jobs, such as hardcover book binding or other types of book creation of content, using a third-party or other external print services provider.
  • the business tier also includes a commons module 201.
  • the commons module generally includes a utility library of various APIs and other system calls used for interfacing with the operating system, hardware components, the data store 135, modules in the various other tiers, etc.
  • the system receives an annotation generated by a first user at a first client, step 230.
  • the annotation is associated with a first indication, step 235.
  • the annotation may be associated with an icon or other indication embedded in the document.
  • the navigation input causes the first indication to be displayed in the first portion of the display at the second client.
  • the system automatically displays the annotation in the second portion of the display at the second client, step 245.
  • the system also processes navigation inputs navigating the second portion of the display.
  • the system also can receive an input from a second user at a second computer to navigate a second portion of a display at the second client.
  • the navigation input causes the annotation to be displayed in the second portion of the display at the second client and, in response to the input, the system automatically displays the first indication in the first portion of the display at the second client.
  • the system divides content into a plurality of pages. Thus, a book might be divided into chapters and each chapter formatted as a particular HTML or other similarly encoded page.
  • the system loads an entire page of original content into the first portion of the display and also the entire page of related annotations for the page in the second portion of the display.
  • the system first loads only those annotations corresponding to indications immediately displayed upon loading the page into the first portion and then loads annotations corresponding to off-screen indications, which achieves, among other benefits, a performance boost in terms of load times.
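  • A sketch of this two-stage loading is shown below; fetchAnnotation() is a hypothetical helper standing in for whatever request mechanism an embodiment uses:

      // Load annotations for indications that are visible immediately,
      // then fetch the remaining off-screen annotations in the background.
      function loadAnnotations(visibleIndicationIds, allIndicationIds) {
        visibleIndicationIds.forEach(fetchAnnotation);
        window.setTimeout(function () {
          allIndicationIds
            .filter(function (id) { return visibleIndicationIds.indexOf(id) === -1; })
            .forEach(fetchAnnotation);
        }, 0);
      }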
  • code such as a JavaScript synchronization module monitors user navigation inputs and mouse inputs and states to determine whether and when to synchronously scroll or otherwise display indications and their related annotations in the first and second portions of the display.
  • a JavaScript event or other similar program code returns identifiers corresponding to indications that are visibly displayed in the first portion.
  • the system employs a naming convention correlating indications with annotations.
  • an indication labeled I1 would have its corresponding annotation labeled A1, and an indication with an identifier of I2 would have its corresponding annotation identified as A2, etc.
  • the system determines, based on the JavaScript event data and indication identifiers, any indications visible in the first portion of the display and then automatically executes JavaScript code or other program code to display, in the second portion of the display, their corresponding annotations according to the naming convention. For example, if the system identifies indication I1 as visible in the first portion of the display, then it automatically executes code to display A1 in the second portion of the display.
  • when a number of indications are visible in the first portion of the display and there is insufficient screen space in the second portion of the display to display all of the corresponding annotations, the system, starting with the first indication displayed, displays as many corresponding annotations as possible in the second portion of the display.
  • in some embodiments, when a navigation input is received related to the second portion of the display, the system determines, based on the JavaScript event data and annotation identifiers, any annotations visible in the second portion of the display. The system also determines any indications visible in the first portion of the display as previously described herein. If the first portion of the display already shows the indication corresponding to the first annotation appearing in the second portion of the display, then the system does not redraw the screen.
  • otherwise, the system executes JavaScript code or other program code to display, in the first portion of the display, the indication corresponding to the first annotation appearing in the second portion of the display according to the naming convention. For example, if the system identifies annotation A1 as visible in the second portion of the display, then it checks if indication I1 is visible in the first portion of the display. If the indication is not visible, the system automatically executes code to display I1 in the first portion of the display.
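  • A minimal sketch of this naming-convention synchronization follows; the pane id and the visibility test are assumptions, and only the content-to-annotation direction is shown:

      // For each indication visible in the first portion (ids I1, I2, ...),
      // make sure its annotation (ids A1, A2, ...) is visible in the second portion.
      function isVisibleIn(pane, element) {
        // Assumes the pane is the element's offsetParent.
        var top = element.offsetTop - pane.scrollTop;
        return top >= 0 && top + element.offsetHeight <= pane.clientHeight;
      }

      function syncAnnotationsToContent(visibleIndicationIds) {
        var annotationPane = document.getElementById("annotationPane"); // second portion of the display
        for (var i = 0; i < visibleIndicationIds.length; i++) {
          var annotationId = "A" + visibleIndicationIds[i].substring(1); // I1 -> A1, I2 -> A2, ...
          var annotation = document.getElementById(annotationId);
          if (annotation && !isVisibleIn(annotationPane, annotation)) {
            annotationPane.scrollTop = annotation.offsetTop; // scroll only when needed, avoiding flicker
          }
        }
      }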
  • users at a plurality of clients are able to view content, as well as collaboratively annotate and view annotations provided by other users. For example, several users may negotiate a contract by sharing feedback and other annotations to produce a final version of the contract.
  • Fig. 4A presents a block diagram of an exemplary screen display of a shared annotation system according to an embodiment of the invention.
  • a display 250, such as a browser display or other software application display, is divided into a first portion 252 and a second portion 254.
  • the first portion 252 contains information content provided by a server and the second portion 254 contains annotations related to the content in the first portion.
  • a user requests content from the content server and the content is delivered via the network to the user at a client and presented in the first portion of the display 252.
  • Content may include, for example, the text of a book, graphical content such as a picture album or photo album, a proposed legal document or business agreement, multimedia content, or other types of content.
  • the text of a book appears in the first portion 252 of the display 250.
  • Indications associated with user annotations are embedded within the content of the first portion of the display.
  • indication 256 corresponding to user annotation 262 and indication 258 corresponding to user annotation 264 are embedded in the content of the first portion 252.
  • the actual annotations 262 and 264 are presented in the second portion of the display 254.
  • the display 250 also includes a third portion 260 including additional references to indications contained in the first portion 252.
  • additional indications 266 and 268 corresponding to indications 256 and 258 are presented in a third portion of the display 260. Users can scan the third portion of the display 260 to quickly determine whether indications exist in the content presented in the first portion of the display 252.
  • the system also presents navigation interfaces such as scroll bars 272 and 274, as well as a menu bar 276 at the bottom of the display 250 which provides users with an interface to navigate a document divided into chapters/sections or jump to additional pages, etc.
  • the system also presents standard interface elements such as file, edit, view, favorites, tools, and help menus 278 as known in the art and common in Internet browsers.
  • the system presents a plurality of icons 280 designed to provide an interface for common operations that users might want to perform when viewing content such as a document, a photo album, or a book.
  • Fig. 4B presents a block diagram of two exemplary screen displays of a shared annotation system according to one embodiment of the invention.
  • the two screen displays 282 and 300 show versions of the same display at two different points in time.
  • the display is divided into a first portion 284 and a second portion 286.
  • the first portion contains content as well as indications 288 and 290 associated with user annotations 292 and 294 respectively.
  • Navigation means, such as scroll bars 296 and 298, are also provided.
  • a user navigating the display 282, for example by using slider 296, would cause the display 282 to change as shown in a second screen display 300 of the same display at a later point in time after the system processes the navigation input.
  • the user scrolls the content in the first portion 284 such that indication 288 disappears from the first portion 284 and indication 302 appears.
  • annotation 292 associated with indication 288 automatically disappears in the second portion 286 of the display 300 and annotation 304 corresponding to indication 302 automatically appears in the second portion 286.
  • the system also conversely scrolls content in the first portion 284 of the display 282 when a user navigates content in the second portion 286 of the display 282.
  • the system automatically scrolls content in the first portion 284 of the display 282 according to a user input, such as a scroll bar slider 298 or other similar means, to navigate annotations in the second portion 286 of the display 282.
  • Fig. 5 presents a flow chart of a method for processing an annotation according to an embodiment of the present invention.
  • a user selects content via a selection tool or other means, step 330.
  • a user might employ a text tool to highlight and select several words in the text of a document which the user wishes to annotate with a textual comment, an uploaded picture, a video, a sound recording, etc.
  • JavaScript event code or other program code related to mouse inputs and other user inputs captures various metadata regarding the user selection.
  • the event code captures and returns a unique paragraph identifier tag, a starting point value or offset (in characters from the start of the identified paragraph, pixels, or other metrics known in the art), and an ending point value or offset.
  • a user can crop one or more areas of a picture the user desires to annotate.
  • a user could crop a single area of a picture for an annotation or a user could crop several different (or overlapping) areas of the same picture for several different annotations.
  • the user selects the area using a rectangular cropping tool.
  • the system captures the x,y coordinates of the corners of the rectangle to create a mapping or overlay representing the selection of the original image.
  • the user may also assign additional attributes to the selection (such as a person name, a product identifier, a price, a location, a theme, a date, etc.).
  • users may also indicate a frame or other location in a video by applying similar selection means to individual frames of the video.
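  • A sketch of capturing such a rectangular selection with mouse events is shown below; the element ids, the hidden form field, and the coordinate handling are assumptions for illustration:

      // Capture the corners of a rectangular crop over a picture and save them
      // into a hidden form field so they accompany the submitted annotation.
      var cropStart = null;
      var picture = document.getElementById("annotatedPicture");

      picture.onmousedown = function (e) {
        cropStart = { x: e.clientX, y: e.clientY };        // first corner of the selection
      };

      picture.onmouseup = function (e) {
        if (!cropStart) return;
        var selection = {
          x1: Math.min(cropStart.x, e.clientX),
          y1: Math.min(cropStart.y, e.clientY),
          x2: Math.max(cropStart.x, e.clientX),
          y2: Math.max(cropStart.y, e.clientY)
        };
        // In practice the coordinates would be converted to be relative to the picture.
        document.getElementById("selectionField").value = JSON.stringify(selection);
        cropStart = null;
      };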
  • the system expands the selection to an appropriate level of granularity, step 335. A user might select several letters of a word and the system might expand the selection by highlighting the entire word.
  • the system imposes a pre-set limit on the ability of a user to annotate text to a certain level of granularity.
  • a user may only be able to annotate whole words or only words at the end of a sentence.
  • a single word such as "Kennedy” might have as many as seven distinct indications (corresponding to the total number of letters in the word) presented with the word. This would likely render display of content in the first portion of the display extremely cumbersome and severely limit the ability of the system to efficiently present information to users.
  • the system may also limit the number of indications presented related to particular sections of text or other content. Indications may be consolidated or combined in the interest of making content more readable, visually comprehensible, or otherwise accessible. For example, annotations provided by four different users might be associated with a single indication embedded in the content and displayed in the first portion of the display rather than with four separate indications in the first portion. In the second portion, however, each individual annotation provided would automatically be displayed when its corresponding indication is presented in the first portion of the display.
  • after the user selects the desired content, the user indicates their desire to post an annotation related to the selected content, step 340. For example, a user may select a section of text and then click a "post" button or icon.
  • the system presents a form or other similar input mechanism, step 345, which allows the user to input and submit/upload the desired annotation to the content server, step 350.
  • a form window may open allowing the user to input a text annotation or a tree-view directory structure may be presented allowing the user to select a file (such as a picture, a video, an audio clip, etc.) to upload as an annotation.
  • the annotation input by the user and any related metadata are then uploaded via the network to the content server and stored in the data store for further processing, step 355.
  • the system generally communicates metadata indicating, among other things, the desired position of the annotation within the content of the first portion, the user's identity, the type of annotation, etc.
  • JavaScript code captures the events of a mouse click indicating the beginning of a selection, mouse drag changing the x,y coordinates for the selection, and a mouse up or un-click ending the selection. This data is saved into an HTML form attribute and transmitted to the server when the form is submitted.
  • the system also indicates the position of a desired annotation by providing metadata indicating an offset from a particular starting point within the document content and a selection length corresponding to the user selection of steps 330 and 335. For example, if a user selects text several sentences into a paragraph or other arbitrary section of a document, the system may communicate metadata indicating, from the start of the paragraph or other section, an offset corresponding to the number of characters at which the annotation begins and a length corresponding to the number of characters selected for the annotation.
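  • The sketch below derives such an offset and length from the current browser selection; it assumes each annotatable paragraph is a <p> element carrying an identifier, which is one possible arrangement rather than the one mandated by the specification:

      // Derive paragraph identifier, character offset, and selection length
      // for the current text selection.
      function getSelectionMetadata() {
        var sel = window.getSelection();
        if (!sel || sel.rangeCount === 0) return null;
        var range = sel.getRangeAt(0);

        // Walk up to the paragraph element containing the selection start.
        var node = range.startContainer;
        while (node && node.nodeName !== "P") node = node.parentNode;
        if (!node) return null;

        // Measure the characters from the paragraph start to the selection start.
        var prefix = range.cloneRange();
        prefix.selectNodeContents(node);
        prefix.setEnd(range.startContainer, range.startOffset);

        return {
          paragraphId: node.id,             // e.g. a bookmark id assigned by the conversion module
          offset: prefix.toString().length, // characters from the start of the paragraph
          length: range.toString().length   // number of characters selected
        };
      }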
  • the system uses a content serialization engine (“CSE”) or other similar means to lock the page of the document to which the annotation relates, step 360.
  • CSE content serialization engine
  • this prevents multiple CSEs from accessing and updating the page at the same time.
  • each CSE locks an individual page prior to updating the page to prevent other CSEs from accessing and simultaneously updating the page which would create problems such as content synchronization, etc.
  • the CSE lock also prevents other users from requesting the page from the content server while the system is processing the user's submitted annotation and embedding a related indication in the page of the document.
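  • The per-page lock can be pictured as follows; this is a sketch written in JavaScript for consistency with the other examples, while the actual content serialization engine runs on the server, and the helper names are assumptions:

      // Hold a lock on a page while its annotation is parsed, the page is
      // recreated, and the database is updated (steps 360-372).
      var pageLocks = {};   // pageId -> true while the page is being updated

      function withPageLock(pageId, updatePage) {
        if (pageLocks[pageId]) {
          // Another CSE (or request) holds the lock; in some embodiments the
          // annotation would instead be queued until the lock is released.
          return false;
        }
        pageLocks[pageId] = true;        // step 360: lock the page
        try {
          updatePage();                  // steps 365-371: parse metadata, recreate page, update database
        } finally {
          delete pageLocks[pageId];      // step 372: remove the lock
        }
        return true;
      }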
  • the system parses the metadata associated with the annotation, step 365. Using the length, offset, and other data provided with the metadata, the system determines a location in the document content at which to embed an indication corresponding to the annotation. The system then recreates the original page (including any additional pages created by the annotation) to embed an indication corresponding to the annotation, step 370, and updates the database with the new page, step 371. In some embodiments, the system replaces the old page stored in the database with the new page. In other embodiments, the old page is retained in order to track document versions and related annotations. The CSE lock is removed, step 372, and users at other clients are then able to request, retrieve, and view the new page containing the new indication corresponding to the new annotation.
  • Fig. 5A presents an exemplary sample of code for an XHTML formatted page of content containing an indication corresponding to an annotation which would be presented in the first portion of a display according to one embodiment of the invention.
  • the code sample uses various XHTML elements such as Div elements, Span elements, Highlight elements, and Content elements to present the content and corresponding indication.
  • Div element class shrdbk_main 373 is a div element that wraps the whole book text. In some embodiments, this element is used in a non-embedded mode to separate the indications or book item icons from the page text/content. Thus, a user would be able to toggle presentation of content both with and without indications being displayed.
  • The system also uses a number of different types of Span elements. Span elements are tags generally used to group inline elements in a document. Span element shrdbk_start_element 374 is a span element that is used as an indicator for the start location of the related text of the book item.
  • the id attribute contains the type of the book item or indication ('C' for comment, 'I' for image, 'A' for audio, and 'V' for video), an identifier for the indication, and a starting location of this element in a numerical representation corresponding to a number of characters or other metric (e.g. _554).
  • the indication identifier is used in varying embodiments to distinguish between indications and also to assist in content navigation, for example if a user wishes to jump to the next indication, etc.
  • Span element shrdbk_end_element 375 is a span element that is used as an indicator for the end location of the related text of the book item or indicator.
  • the id attribute contains the type of the book item, the book item id, and a location or offset of this element in a numerical representation (e.g. _681).
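  • The id attributes can therefore be parsed back into their components; the underscore-separated layout in the sketch below (e.g. "C_12_554") is an assumption, since the specification only states that the id encodes the item type, an identifier, and a numeric location:

      // Parse a start/end span id into its book item type, identifier, and offset.
      function parseBookItemId(idAttribute) {
        var parts = idAttribute.split("_");          // assumed layout: "<type>_<itemId>_<offset>"
        return {
          type: parts[0],                            // 'C' comment, 'I' image, 'A' audio, 'V' video
          itemId: parts[1],                          // book item / indication identifier
          offset: parseInt(parts[2], 10)             // numeric location, e.g. 554 or 681
        };
      }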
  • Span element shrdbk_icons 376 is a span element that contains the image of the icon or indication to be embedded. For each location in the content, such as the book text, a different type of indication icon is used to represent each different type of annotation (e.g. - text annotation, multimedia annotation, etc.).
  • the image element that is included for the indication represents the type of the items and the index number of the first item at this location, according to its appearance order within the book text.
  • Highlight Div elements idYellow, idFirstLine, and idLastLine 377 are a set of div elements that are used for highlighting the related text corresponding to the annotation. For example, when a book item is selected, by clicking on its title, the text range that represents the related text is located according to the start and end span elements. Text rectangles are created from the given text range and the positions of these div elements are set according to the text rectangles.
  • for each shrdbk_icons span element there is also a corresponding div element, Content Div 378, which includes a representation of each of the item(s) that the span element contains for the specific location.
  • This div element is generally displayed when the mouse cursor is over the image icon.
  • the div element contains links for the related text of each of the book items and when clicking on those links the related text is highlighted.
  • another role of those links is to synchronize the media area/first portion with the current viewed item.
  • the media area automatically scrolls to the appropriate item in the second portion of the screen.
  • Fig. 5B presents an exemplary sample of code for an XHTML formatted page of content containing an indication corresponding to an annotation which would be presented in the first portion of a display according to one embodiment of the invention.
  • the code sample also uses various XHTML elements such as Div elements, Span elements, Highlight elements, and Content elements to present the content and corresponding indication. More specifically, the code sample provides exemplary span elements for presenting content in embedded and non-embedded modes.
  • span elements 379 are used for displaying icons and other content in non-embedded mode.
  • the element at the beginning of the paragraph is used as an anchor for the book item icon.
  • the content element is placed in the bottom of the HTML document and includes the book item title as well as any relevant functionality.
  • Span elements 380 are used for displaying icons and other content in embedded mode.
  • the element within the paragraph is used as an anchor for the book item icon.
  • the content element is similarly placed in the bottom of the HTML document and includes the book item title as well as any relevant functionality.
  • Span elements 381 present exemplary uses of span elements as start and end anchors for highlighting selected or annotated content.
  • Fig. 6 presents a flow chart of a method of annotating a visual element according to an embodiment of the present invention.
  • users may wish to provide annotations corresponding to visual elements such as pictures or video clips.
  • the user views a visual element, such as a picture, step 385, and selects a picture element to annotate, step 387.
  • the user might use a selection tool to crop or otherwise select picture elements, for example, by drawing a box around or otherwise selecting a person in a photo.
  • the system presents an annotation form or other input means, step 389, and the user inputs and submits the annotation, step 391.
  • the system allows the user to submit multiple annotations for a single picture.
  • control may return to step 387 for the user to select additional picture elements.
  • the user may select a first element and input an annotation for the first element and then select a second element and input a second annotation for the second element, etc.
  • the annotation(s) are then uploaded via the network to a content server and stored in a data store where they are associated with the visual element(s), step 393.
  • the system also maintains and updates an index of annotations corresponding to visual elements, step 395. For example, users may provide annotations identifying subjects in visual elements and the system maintains an index of identified subjects cross-referenced with their corresponding visual elements.
  • Fig. 6A presents a flow chart of a method of recreating a page of content according to an embodiment of the invention.
  • content generally comprises various XML tag elements corresponding to user selections and other content related to annotations.
  • the CSE organizes elements into a list corresponding to their location on the page. If no further elements remain to be processed, control proceeds to step 408 and the routine ends.
  • otherwise, the system determines whether the next element in the list is associated with a content identifier, step 403. For example, in one embodiment, the system determines whether the element has a sharedbk XML tag identifier. If the element does not have an identifier, then it is generally not associated with an annotation and recreation of the element is generally not required, and thus control passes to step 407 and the system proceeds to process the next element in the list.
  • Fig. 6B presents a flow chart of a method of processing an element during page creation according to an embodiment of the invention.
  • after the system determines that an element should be recreated (or in some embodiments originally created), the system orders all annotations associated with the element into a list according to their location, step 409. Thus, for a particular sentence, paragraph, page, etc., the system creates an ordered list of all annotations using the offsets and location metadata stored with the annotations. If no further annotations remain to be processed, control passes to step 417 and the routine exits.
  • the system processes the location metadata associated with the annotation to determine the location in the first portion of the display at which to place an indication or icon corresponding to the annotation presented in the second portion of the display, step 411.
  • the system processes the annotations in the list to determine whether there are multiple annotations associated with the same location, step 412. If there are multiple annotations, then the CSE creates the XHTML code or other code, inserting a multiple annotation indication or icon, step 413. If there are not multiple annotations, then the CSE creates the XHTML code or other code, inserting a single annotation indication or icon, step 414. For example, in some embodiments, certain indications indicate that they correspond only to a single annotation.
  • for example, an image indication corresponds to an image annotation and an audio indication corresponds to an audio annotation.
  • the system embeds or otherwise places a multiple annotation indication which indicates that more than one annotation has been made at a particular place in the original content.
  • the CSE also creates XHTML code or other code, generating a rollover action associated with the indication, step 415.
  • the CSE retrieves metadata associated with the annotation(s) for a particular location and indication which lists a title for the annotation, the annotation's author(s), etc. The system then proceeds to process the next element, step 416, and control returns to step 410.
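  • A sketch of this grouping step appears below; the annotation objects, icon file names, and markup are assumptions used only to illustrate the single-versus-multiple decision:

      // Order annotations by location, group those sharing a location, and emit
      // either a single-annotation icon or a multiple-annotation icon (steps 409-414).
      function buildIndicationMarkup(annotations) {
        annotations.sort(function (a, b) { return a.offset - b.offset; });

        var byOffset = {};
        annotations.forEach(function (ann) {
          (byOffset[ann.offset] = byOffset[ann.offset] || []).push(ann);
        });

        var markup = "";
        for (var offset in byOffset) {
          var group = byOffset[offset];
          var icon = (group.length > 1) ? "multiple.gif" : group[0].type + ".gif";
          markup += '<span class="shrdbk_icons"><img src="' + icon + '" alt=""/></span>';
        }
        return markup;
      }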
  • Fig. 7 presents a flow chart of a method of providing a customized document related to a shared annotation system according to an embodiment of the present invention.
  • a user may view a book on home repair in which the main document content of the book provides chapters on framing, wiring, plumbing, etc. Within each chapter, other users may have provided annotations related to various tasks described, etc.
  • One user might indicate a particular brand of pipe that they found useful in completing a certain project or a particular type of light fixture well-suited to certain applications.
  • Another user might provide additional photographs of their project with additional text comments, etc. to supplement the information of the original book.
  • users may wish to view, purchase, or otherwise obtain customized documents, including these related annotations and other items such as tools required to complete certain projects, etc.
  • a block diagram of a sample page 455 from a customized document according to an embodiment of the present invention is presented.
  • the sample page 455 includes the document content 460 corresponding to the content of the document presented in the first portion of the display.
  • the page 455 also includes annotations and other comments related to the document content 460 such as textual annotations 465, picture annotations 466, audio or video annotations 467, annotations related to discussion group content 468, advertisements 469, links to related merchandise 470, and other information.
  • this information could be presented in a variety of manners or layouts. For example, as shown in Fig. 8, the document content 460 is centrally displayed and surrounded by related annotations, including callouts to indications contained in the content 460 and other visual cues.
  • the user selects a particular book edition, step 420.
  • a user may select among a number of different books or documents containing content related to a desired subject or a user may only select certain chapters within a book. For example, a user may consult a home repair manual, but only be interested in the chapter on plumbing or on wiring and not wish to be provided with the entire book.
  • the user also determines and selects annotations they wish provided with their customized document/book, step 425.
  • a user may wish to be provided with all annotations related to the desired content, only annotations authored by a particular user, or only a specific annotation containing certain information the user finds useful, such as supplemental photos, video, or other types of annotations.
  • the system also offers the user a promotion or other offer associated with the content and the user determines whether or not to accept the promotion, step 430.
  • a user purchasing a home repair manual chapter related to dry walling might also be presented with the option to purchase items and merchandise related to the project such as hammers, nails, screws, plaster, tape, drywall, or even other books or information related to the project.
  • the system may also offer a video of how to complete a sample project for an additional premium.
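  • As a rough illustration of the selection flow of steps 420-430, the sketch below assembles a custom document order from a selected edition, chapter choices, an annotation filter, and any accepted promotions; all field names and data shapes are assumptions, not the system's actual interfaces.

```javascript
// Sketch only: build a custom document order from a selected edition
// (step 420), an annotation filter (step 425), and accepted promotions (step 430).
function buildCustomDocumentOrder(edition, options) {
  const chapters = options.chapters && options.chapters.length
    ? edition.chapters.filter(c => options.chapters.includes(c.id))
    : edition.chapters;

  const annotations = edition.annotations.filter(a =>
    options.includeAllAnnotations === true ||
    (options.authors || []).includes(a.author) ||
    (options.annotationIds || []).includes(a.id));

  const promotions = (options.acceptedPromotions || [])
    .map(id => edition.promotions.find(p => p.id === id))
    .filter(Boolean);

  return { edition: edition.id, chapters, annotations, promotions };
}

// Example: only the plumbing chapter, one author's annotations, one accepted offer.
console.log(buildCustomDocumentOrder(
  {
    id: 'home-repair-2e',
    chapters: [{ id: 'framing' }, { id: 'plumbing' }, { id: 'wiring' }],
    annotations: [
      { id: 7, author: 'bob', text: 'Brand X pipe worked well here' },
      { id: 9, author: 'ann', text: 'Photos of my finished project' },
    ],
    promotions: [{ id: 'video-howto', price: 4.99 }],
  },
  { chapters: ['plumbing'], authors: ['bob'], acceptedPromotions: ['video-howto'] }
));
```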
  • Fig. 8A presents a screenshot of an exemplary article page of a memory book according to an embodiment of the present invention.
  • a memory book generally comprises a customized printout of content and related annotations. In some embodiments, memory books are compiled and bound according to user preferences.
  • users create a memory book by customizing existing content provided by content creators.
  • a content provider might use the system to post an original article to the Web containing text, photos, and other multimedia elements recounting or otherwise related to an event such as a Harley Davidson rally or a Britney Spears concert.
  • the original article also generally contains indications and corresponding annotations input by various users responding to the original article.
  • a user can then create any number of custom memory books from the original article by uploading additional multimedia elements and selecting specific annotations to include in their personal memory book.
  • a user attending the Harley Davidson rally can create a memory book containing photos, annotations, and other elements related to that user's own personal experience at the Harley Davidson rally.
  • a user attending the Britney Spears concert creates a memory book related to their own personal concert experience with their own photos from before the show, after the show, photos from during the show, related annotations, the user's own textual inputs, etc.
  • a user who went to the Harley Davidson rally uploads their own pictures taken at the rally to replace or supplement the pictures in the original article posted by the content provider.
  • a user also uses pictures posted as annotations by other users to replace or supplement pictures of the original article or they use additional pictures provided by the content provider or other content providers.
  • Users also select custom annotations to include with the memory book by filtering or otherwise selecting annotations from the set of annotations posted by other users regarding the original article.
  • a user automatically selects annotations from a list of friends who post annotations.
  • users select annotations individually or based on criteria such as ratings from other users, annotation type, etc.
  • a user creates their own personal memory book from the original article.
  • the personal memory book generally contains the text and other content of the original article including additional pictures, text, videos, and related annotations selected or otherwise input by the user.
  • the user then has the option to print out the memory book and have it bound or otherwise preserved, for example as a souvenir.
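  • The annotation selection described above (friends' posts, ratings, annotation type, or individually picked annotations) could be expressed along the lines of the following JavaScript sketch; the criteria names and data shapes are assumptions for illustration.

```javascript
// Sketch only: pick annotations for a personal memory book from the pool of
// annotations posted against the original article.
function selectMemoryBookAnnotations(annotations, criteria) {
  const friends = new Set(criteria.friends || []);   // friends' posts included automatically
  const picked = new Set(criteria.pickedIds || []);  // individually chosen annotations
  return annotations.filter(a =>
    friends.has(a.author) ||
    picked.has(a.id) ||
    (criteria.minRating !== undefined && a.rating >= criteria.minRating) ||
    (criteria.types || []).includes(a.type));
}

// Example: friends' posts plus anything rated at least 4.5.
console.log(selectMemoryBookAnnotations(
  [
    { id: 1, author: 'dana', type: 'photo', rating: 4.7 },
    { id: 2, author: 'eric', type: 'text', rating: 2.1 },
    { id: 3, author: 'friend1', type: 'text', rating: 3.0 },
  ],
  { friends: ['friend1'], minRating: 4.5 }
));
```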
  • An article page of a memory book generally includes article text of the original content along with embedded photos with captions, embedded indications, and other items as further described herein.
  • the presentation of the article page is formatted as closely as possible to the view a user would be presented with online. In some embodiments, however, the pagination is different since the content is now being produced on a printed page as opposed to on a display.
  • the article page includes one or more of the following: a header 471, embedded images 472, image captions 473, embedded icons or indications 474, and a footer 475.
  • the header 471 generally remains consistent across pages throughout a memory book, thus unifying content presentation, etc.
  • the header includes a graphic, such as a logo, and heading text which may be used by the system to create a table of contents, an index, etc.
  • Embedded images 472 include images originally presented in the original content as well as images selected by a user for inclusion in the memory book.
  • images 472 also contain an image caption 473 which may include the poster's username, the date the photo was posted, a title for the image 472, etc.
  • Embedded icons or indications 474 generally appear in the same location of the content as they do when presented in a display. In some embodiments, however, icons 474 are renumbered for each individual page (e.g., starting from 1 for the first indication 474 on each page) and thus the numbering scheme for indications 474 may differ from the online version of the book.
  • the article page also contains a footer 475 containing the book's title, page number, publisher information, etc.
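  • The per-page renumbering of printed indications mentioned above might look like the following sketch; the page and indication structures are assumptions for illustration only.

```javascript
// Sketch only: when paginating for print, restart indication numbering at 1 on
// each page, so printed numbers can differ from the online numbering.
function renumberIndicationsForPrint(pages) {
  return pages.map(page => ({
    ...page,
    indications: page.indications.map((ind, i) => ({ ...ind, printedNumber: i + 1 })),
  }));
}

// Example: online indications keep their ids but gain per-page printed numbers.
console.log(JSON.stringify(renumberIndicationsForPrint([
  { pageNumber: 12, indications: [{ onlineId: 'I7' }, { onlineId: 'I8' }] },
  { pageNumber: 13, indications: [{ onlineId: 'I9' }] },
]), null, 2));
```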
  • Fig. 8B presents a screenshot of an exemplary comments page of a memory book according to an embodiment of the present invention.
  • the comments page of a memory book generally includes comments and other annotations input by users online and generally is included on one or more separate pages falling after the article page as opposed to on the same page as the article text itself.
  • the article comments page includes one or more of the following: a header 476, a sub-header 477, a comment or reply icon 478, a comment title 479, a username and date of post 480, a comment text or other annotation content 481, one or more replies 482, and comments by various types of members 483.
  • the header 476 of the article comments page is generally a graphic and corresponds to the header of the article page of the memory book.
  • Sub-headers 477 indicate the printed page in the memory book which contains the article to which the annotations are related.
  • Comments or reply icons 478 are generally graphics indicating a type of comment. For example, a text comment might have a balloon with text in it as an icon 478 and an audio comment might have a musical note as an icon 478.
  • Comment titles 479 indicate any heading a user inputs to associate with their comment.
  • comment titles are printed in different colors according to the type of user. For example, comments by regular members might be printed in black, comments by moderators 482 in red, etc.
  • comment text 481 is also displayed in varying colors according to user types.
  • a username and date of post 480 are also displayed for each annotation. Replies 482 associated with comments may also be presented.
  • Fig. 8C presents a screenshot of an exemplary dynamic print page according to an embodiment of the invention.
  • Dynamic print pages are generally formatted to include comments and other annotations just below the text to which they refer.
  • the page includes the original text 484 including inline indications corresponding to the first portion of the display.
  • the page also includes annotations such as text comments, images, etc. as would be presented online in the second portion of the display.
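  • A dynamic print layout of this kind could be produced roughly as in the following sketch, which places each annotation directly below the paragraph it refers to; the data shapes and plain-text output are assumptions standing in for the system's PDF output.

```javascript
// Sketch only: a dynamic print layout interleaves each paragraph of original
// text with the annotations that refer to it.
function buildDynamicPrintPage(paragraphs, annotations) {
  const lines = [];
  for (const p of paragraphs) {
    lines.push(p.text);
    for (const a of annotations.filter(x => x.paragraphId === p.id)) {
      lines.push(`    [${a.type}] ${a.author}: ${a.text || a.caption || ''}`);
    }
  }
  return lines.join('\n');
}

// Example: a comment and an image caption print directly below their paragraphs.
console.log(buildDynamicPrintPage(
  [{ id: 'p1', text: 'Measure twice, cut once.' },
   { id: 'p2', text: 'Use a stud finder before drilling.' }],
  [{ paragraphId: 'p1', type: 'text', author: 'bob', text: 'This saved me hours.' },
   { paragraphId: 'p2', type: 'image', author: 'ann', caption: 'My wall after patching' }]
));
```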
  • Fig. 9 shows a method of presenting a selected multimedia element while navigating a document in a shared annotation system according to an embodiment of the present invention.
  • In some cases, a user may wish to keep a multimedia element, such as a chart, a table, a picture, etc., in view while navigating to other pages of a document.
  • a user viewing several pages of a document related to a particular company's financial outlook might find it useful to retain a chart of the stock price or a table of pro forma income projections from one page while viewing information on a second page.
  • the system achieves this goal by allowing users to select a multimedia element and then floating the selected element on top of, or integrating the selected element with, subsequent pages that are viewed.
  • the user selects a multimedia element in a first page, such as a picture, using various input means previously described herein, step 495.
  • the selected element is identified in the content database, step 500, and floated or otherwise displayed in the browser window, step 505.
  • the user client communicates the selected element identifier to the content server which retrieves another instance of the element and floats the element in the browser window containing the original content or displays the selected element in a new window or frame.
  • the system recreates the first page, removing the selected element and floats or otherwise displays the selected element over the location in the content where the selected element previously resided.
  • the system does not immediately float or otherwise display the selected element, but instead only identifies the selected element and only floats the selected element when the system receives input to navigate to a second page, step 510.
  • the system retrieves the original version of the second page stored in the database, step 515, and creates a new second page to display by modifying the second page and embedding the selected element from the first page, step 520. The modified second page is then presented with the original second page content now including the selected element, step 525.
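  • As a rough illustration of steps 495-525, the following sketch remembers the selected element and embeds it into the next page the user navigates to; the page and element structures are assumptions, not the system's actual code.

```javascript
// Sketch only: remember the element selected on the first page (steps 495-505)
// and embed it into each subsequently viewed page (steps 510-525).
const floatState = { selectedElementId: null };

function selectElement(elementId) {
  floatState.selectedElementId = elementId;           // element chosen on the first page
}

function navigateToPage(page, elementsById) {
  const id = floatState.selectedElementId;
  if (id === null || !elementsById[id]) return page;  // nothing selected: page unchanged
  // Return a modified copy of the requested page with the element placed first,
  // standing in for floating it over the page in the browser window.
  return { ...page, elements: [elementsById[id], ...page.elements] };
}

// Example: a stock-price chart selected on page one reappears on page two.
const elementsById = { chart1: { id: 'chart1', kind: 'chart', title: 'Stock price' } };
selectElement('chart1');
console.log(navigateToPage(
  { number: 2, elements: [{ id: 'tbl1', kind: 'table', title: 'Pro forma income' }] },
  elementsById));
```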
  • Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein.
  • Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein.
  • Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein.
  • User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.

Abstract

The invention relates generally to shared annotation systems. More particularly, the invention provides a method for automatically navigating a document (252) in a display (250) having at least a first portion (252) and a second portion (254), the method comprising: receiving an annotation (264) related to the document (252), the annotation (264) generated by a user at a first client; associating the annotation (264) with a first indication (256) in the document (252); receiving, from a user at a second client, an input to navigate a first portion of a display at the second client, the input causing the first indication to be displayed in the first portion of the display; and in response to the input, automatically displaying the annotation in a second portion of the display at the second client.

Description

SHARED ANNOTATION SYSTEM AND METHOD
RELATED APPLICATIONS [0001] This application claims priority of U.S. Patent Application No. 10/936,788, filed September 8, 2004, U.S. Patent Application No. 11/099,768, filed April 6, 2005, and U.S. Patent Application No. 11/099,817, filed April 6, 2005, the contents of each of which are incorporated herein by reference in their entirety.
COPYRIGHT NOTICE [0002] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosures, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. BACKGROUND OF THE INVENTION
[0003] The inventions disclosed herein relate generally to collaborative systems and more particularly to shared annotation systems.
[0004] Users often wish to collaborate on shared documents in a network. For example, in a business environment, users at different companies may collaborate on a business agreement such as creating a contract or a license agreement.
[0005] One issue associated with network collaboration is synchronicity. For example, users often collaborate by exchanging versions of documents via e-mail or other similar means. A first user edits or otherwise comments a document and then sends the revised version to a second user for further input. The second user makes or otherwise provides their input and then e-mails the new document back to the first user. While the first user is editing the document, however, the second user cannot provide input since they do not possess the current version of the document (currently being edited by the first user) and therefore do not know what changes the first user might be making. Similarly, the first user cannot provide further input while the document is being edited by the second user. It is thus desirable for users to be able to provide synchronous comments and edits without having to wait for other users.
[0006] Another issue associated with network collaboration is application heterogeneity. In existing systems, users must have the same specialized collaboration software in order to collaborate and share information. For example, one current collaborative system by iMarkup Solutions of Vista, California requires both users to download and install a specialized plug-in in order to extend collaborative functionality to the user systems. Many users find this technically challenging to configure or simply inconvenient. It is thus desirable for users to be able to collaborate using tools that are application agnostic and do not require additional specialized software. [0007] U.S. Patent No. 6,438,564 discusses a system which allows users to associate discussions within documents. Discussions include comments, annotations, and notes and are associated with documents by associating the discussion with a document identifier. Discussions are stored separately from their related documents. When a particular document is requested by a user, any related discussions associated with the identifier for the document are also retrieved. The system discussed in the '564 application has a number of shortcomings. For example, in the '564 patent, only HTML text associated with a discussion is stored. If the discussion is linked to another item, for example a media item, such as a graphic, a video clip, an audio clip, etc., the media file is not stored in the system database containing the HTML text and other data associated with the discussion. Also, only a link to the media is stored. Thus, if a user desires to use a media item in a discussion, they must first upload the item to a separate web server or else the link in the '564 patent system database to the item will be invalid. This presents users with a significant inconvenience. Further, the system only parses HTML tag data such as paragraphs, lists, images, and tables, to determine a location for a discussion within a document. Discussions are thus limited to hanging off of paragraphs, lists, images, tables, etc. and a user is not, for example, able to link a discussion to an arbitrary word or phrase within the document. This lack of flexibility limits the user's ability to freely comment within a document and also presents a significant limitation with respect to the level of granularity at which a given document may be discussed. Using the '564 patent system, for example, a user could not comment on individual words in a poem which might be highly desirable given the importance of individual word choice in poetry. [0008] There is thus a need for systems and methods which are application agnostic and allow users to synchronously share annotations regarding a particular document. There is also a need for systems and methods which permit users to place annotations at any arbitrary location within a document. SUMMARY OF THE INVENTION
[0009] The present invention addresses, among other things, the problems discussed above with shared annotation systems. In accordance with some aspects of the present invention, computerized methods are provided for enabling a plurality of users to collaborate or otherwise provide annotations and other input and feedback related to shared documents and content in a computer network. Users are able to synchronously navigate content via multi- portion displays in which indicators related to the annotations are embedded in document content in a first portion of the display and the related annotations are synchronously presented in at least a second portion of the display. In some embodiments, the system also generates custom documents based on annotated content, provides commerce opportunities related to annotated content, persistently presents selected multimedia content while navigating a plurality of document pages, and accepts and indexes annotations related to visual content elements such as graphics and photographs.
[0010] In one embodiment, the system enables a method for automatically navigating a document in a display having at least a first portion and a second portion, the method comprising: receiving an annotation related to the document, the annotation generated by a user at a first client; associating the annotation with a first indication in the document; receiving, from a user at a second client, an input to navigate a first portion of a display at the second client, the input causing the first indication to be displayed in the first portion of the display; and in response to the input, automatically displaying the annotation in a second portion of the display at the second client.
[0011] In some embodiments, the display comprises a browser window, such as an Internet browser. In some embodiments, the document comprises an electronic book, a digital photo album containing one or more digital photos, a web page, a text document, or a multimedia document. In some embodiments, the annotation comprises a text annotation, such as a comment related to the document. In other embodiments, the annotation comprises a graphical annotation, such as a photograph. In other embodiments, the annotation comprises an audio annotation, a video annotation, a multimedia annotation, or a discussion group related to the document. In some embodiments, the input comprises an input to scroll the first portion of the display or an input to navigate to a portion of the document containing the first indication. In some embodiments, the first indication comprises a graphical indication, such as an icon. In some embodiments, receiving an annotation comprises receiving form data submitted by the user at the first client, such as receiving HTML form data. [0012] In some embodiments, associating the annotation with a first indication in the document comprises: identifying a portion of the document to which the annotation relates; and associating the first indication with the portion of the document to which the annotation relates. For example, in some embodiments, the annotation comprises a discussion group related to the portion of the document. In some embodiments, the annotation is added to a data structure stored in memory, the data structure comprising a list of annotations relating to portions of one or more documents. In some embodiments, the list of annotations comprises a list of bookmarks. In some embodiments, the system receives input selecting an annotation from the list of bookmarks and displays, in the first portion of the display, at least a portion of a document to which the annotation is related and displays at least the selected annotation in the second portion of the display. [0013] In some embodiments, associating the first indication comprises embedding the first indication in the portion of the document to which the annotation relates. In some embodiments, embedding the first indication comprises: receiving location data related to the portion of the document; processing the location data to determine a first location within the document relative to a location of the portion within the document; and generating a new * version of the document, the new version of the document containing the first indication embedded at the first location. For example, in some embodiments, the location data comprises one or more from the group comprising: a document identifier, a section identifier, a chapter identifier, a bookmark identifier, a portion length, and a portion offset. [0014] In some embodiments, the invention also includes systems and methods for replacing a first version of the document stored in memory with the new version of the document, for example by overwriting a first version of the document with a new version of the document. [0015] In some embodiments, receiving an annotation comprises receiving an annotation related to an image contained in the document, for example receiving information identifying one or more subjects of the image. 
In some embodiments, the system also includes methods for associating the one or more subjects with the image, such as by updating a data structure stored in memory, the data structure storing associations between one or more images and one or more subjects of the one or more images.
[0016] In some embodiments, the annotation comprises a commercial offer, such as an offer to purchase a product related to the document. In some embodiments, the system also includes methods for processing a request by a user at a client to purchase the product, such as methods for transmitting the product and the document to the user. In some embodiments, the system also includes methods for communicating, to a user at a client, an offer to purchase the document and a set of annotations related to the document, such as a set of annotations selected by the user. The system processes the user request to purchase the document and the set of annotations, for example by printing the document and the set of annotations. In some embodiments, for each annotation related to a portion of the document, the system prints the annotation and the related portion of the document on the same page. In some embodiments, processing the user request comprises transmitting the document and the set of annotations to the user. [0017] In some embodiments, the system also includes methods for authenticating the user at a first client and authorizing the user at the first client to provide the annotation; and authenticating the user at the second client and authorizing the user at the second client to navigate the document. [0018] In accordance with another aspect of the present inventions, the system includes methods to annotate content of a web page. An indication is inserted in and associated with content according to markup language describing offsets including a starting point and an endpoint for the indication, the starting point and endpoint offsets corresponding to a number of characters from a location within the content. In some embodiments, the system includes program code that captures user inputs identifying selections according to a paragraph identifier, a starting point value, and an ending point value. In some embodiments, the system enables a method for selecting an arbitrary string of characters on a web page and posting the selection, including related metadata, to an application server. In some embodiments, the related metadata includes positional metadata and content identifiers. [0019] In one embodiment, the system enables a method for creating a custom memory book including original content supplied by a first party, annotations provided by one or more users, and multimedia elements provided by other users. For example, in some embodiments, users create a memory book by customizing existing content provided by content creators. In some embodiments, the original article also generally contains indications and corresponding annotations input by various users responding to the original article. A user can then create any number of custom memory books from the original article by uploading additional multimedia elements and selecting specific annotations to include in their personal memory book. In some embodiments, a user uploads their own personal pictures to replace or supplement the pictures in the original article posted by the content provider. In some embodiments, a user also uses pictures posted as annotations by other users to replace or supplement pictures of the original article or they use additional pictures provided by the content provider or other content providers. In some embodiments, users also select custom annotations to include with the memory book by filtering or otherwise selecting annotations from the set of annotations posted by other users regarding the original article. 
In one embodiment, a user automatically selects annotation from a list of friends who post annotations. In other embodiments, users select annotations individually or based on criteria such as ratings from other users or annotation type. In some embodiments, the system enables a method for printing and binding the custom memory book, such as by using standard book publishing equipment and techniques. BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts, and in which: [0021] Fig. 1 is a block diagram of a shared annotation system according to an embodiment of the present invention;
[0022] Fig. 2 is a block diagram of functional modules in a shared annotation system according to an embodiment of the present invention;
[0023] Fig. 3 is a flow chart of a method to synchronously navigate shared annotations according to an embodiment of the present invention;
[0024] Fig. 4a is a block diagram of an exemplary screen display of a shared annotation system according to an embodiment of the present invention;
[0025] Fig. 4b is a block diagram of two exemplary screen displays of a shared annotation system according to an embodiment of the present invention; [0026] Fig. 5 is a flow chart of a method for processing an annotation according to an embodiment of the present invention;
[0027] Fig. 5A presents an exemplary sample of code for an XHTML formatted page of content according to one embodiment of the invention;
[0028] Fig. 5B presents an exemplary sample of code for an XHTML formatted page of content according to one embodiment of the invention;
[0029] Fig. 6 is a flow chart of a method of annotating a visual element according to an embodiment of the present invention;
[0030] Fig. 6A is a flow chart of a method of recreating a page of content according to an embodiment of the invention; [0031] Fig. 6B is a flow chart of a method of processing an element during page creation according to an embodiment of the invention;
[0032] Fig. 7 is a flow chart of a method of providing a customized document related to a shared annotation system according to an embodiment of the present invention; [0033] Fig. 8 is a block diagram of a sample page from a customized document related to a shared annotation system according to an embodiment of the present invention; [0034] Fig. 8A is a screenshot of an exemplary article page of a memory book according to an embodiment of the present invention; [0035] Fig. 8B is a screenshot of an exemplary comments page of a memory book according to an embodiment of the present invention;
[0036] Fig. 8C is a screenshot of an exemplary dynamic print page according to an embodiment of the invention; and
[0037] Fig. 9 is a flow chart of a method of presenting a selected multimedia element while navigating a document in a shared annotation system according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0038] Preferred embodiments of the invention are now described with reference to the drawings. As described further below, systems and methods are presented regarding a shared annotation system. A plurality of users collaborate or otherwise provide annotations and other input and feedback related to shared documents and content in a computer network. Users are able to synchronously navigate content via multi-portion displays in which indicators related to the annotations are embedded in document content in a first portion of the display and the related annotations are synchronously presented in at least a second portion of the display. In some embodiments, the system also generates custom documents based on annotated content, provides commerce opportunities related to annotated content, persistently presents selected multimedia content while navigating a plurality of document pages, and accepts and indexes annotations related to visual content elements such as graphics and photographs. Additional aspects and features of the system will also be appreciated by one skilled in the art as further described below. [0039] Fig. 1 presents a block diagram of a shared annotation system according to an embodiment of the present invention. As shown, the system includes one or more clients including a first client 105, a second client 110, and an nth client 115, connected to a network 120, a content server 125 including a content processor 130 communicatively coupled to a data store 135, and one or more additional computers including a moderator computer 140, an administrator computer 145, and a support computer 150. Clients 105, 110, and 115, and other computers in the system, include personal computers and other computing devices known in the art, including personal digital assistants ("PDAs"), tablet computers, cellular telephones, and other devices. The clients are communicatively coupled to the content server 125 via a computer network 120, such as the Internet or a local area network ("LAN"). Users of the client devices collaborate or otherwise provide annotations and other input and feedback related to shared documents and content in the network. The users collaborate or otherwise provide annotations regarding the content via one or more software modules including a display module. For example, in some embodiments users interact with content and provide annotations via a web browser, such as Microsoft Internet Explorer or Netscape Navigator.
[0040] The content server 125 contains a content processor 130 and other modules directed to receiving and processing user requests regarding content. Requests include annotations regarding content, requests for new content, navigation inputs regarding content, and other user requests. The content server 125 is communicatively coupled to a data store 135. The data store 135 stores a variety of data including document content for delivery to users, user account and registration information, annotations and other information generated by users regarding content, and other related data. As used herein, annotations generally include content-related input provided by users including text input, graphical input, audio input, video input, and other types of input, associated in some way with a particular selected character sequence in a primary set of content. For example, a user may input a textual comment or a user may upload a picture related to content. A user may also provide a voice recording or other recording related to content or even a video clip as an annotation. Annotations may also include a discussion group or other similar forum or means to facilitate threaded discourse or other interaction between users regarding a particular portion of a document. For example, a user may find a particular paragraph of a document very important and create a location-specific discussion group regarding the paragraph as an annotation. [0041] Additional computers are also connected to the network 120 and interface with content server 125 and client computers to provide additional functionality. For example, moderator computer 140 may be used by a moderator to review and approve user comments and annotations. An administrator computer 145 may manage other aspects of user interaction with the system such as user registration or security related issues. Support personnel may use support computer 150 to interface with users and provide additional assistance or help regarding user concerns. Additional computers of remote clients may also be employed or used by role-based personnel such as a picture moderator, a comments moderator, a topic approver, a new edition creator, a discussion group moderator, etc.
[0042] Fig. 2 presents a block diagram of functional modules in a shared annotation system according to an embodiment of the invention. The system is implemented using Model View Controller ("MVC") architecture as known in the art. Four tiers are presented including a client tier 153, a presentation tier 163, and an application tier 167, as well as a data store 135 or integration tier containing the data model. In some embodiments, modules are distributed among one or more content servers 125 and clients 105, 110, 115. The system may also implement multiple tiers and distribute modules to distribute functionality in order to improve system efficiency or otherwise load balance processing operations.
[0043] The client tier 153 includes a highlight module 155, a synchronization module 157, an annotation module 159, and a view modes module 161. The client tier includes code, such as JavaScript code, that executes on various pages, such as DHTML pages. The highlight module 155 is generally directed to managing selection and highlighting of annotations and text in the original content. For example, if a user clicks on an image annotation, the highlight module manages highlighting the corresponding text in the first portion of the display as well as the image annotation in the second portion of the display. Conversely, if a user selects or otherwise interacts with an annotation in the second portion of the display, the corresponding text or other visual elements are highlighted in the first portion of the display by the highlight module. [0044] The synchronization module 157 manages relationships between original content in the first portion of the display and corresponding annotations in the second portion of the display. In the second portion of the display, annotations are presented corresponding to content in the first portion of the display as the user scrolls the first portion of the display. Similarly, when the user scrolls the second portion of the display containing annotations, the first portion of the display also synchronously scrolls ensuring that original content in the first portion corresponding to the annotations in the second portion is consistently displayed. The synchronization module 157 also prevents unnecessary scrolling which might cause flicker. For example, no scrolling is performed if an icon or other indication present in the first portion of the display corresponds to an annotation already visible in the second portion of the display. Thus, the second part of the display is scrolled to find the next annotation only when a navigation input changes the display such that an indication in the first portion of the display disappears and vice-versa.
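The cross-highlighting behavior attributed to the highlight module can be pictured with a brief browser-side JavaScript sketch. The shared data-pair-id attribute and the "highlighted" CSS class used here are assumptions for illustration; the patent does not specify this particular mechanism.

```javascript
// Sketch only: clicking either an indication in the first portion or an
// annotation entry in the second portion highlights both halves of the pair.
function linkHighlights(doc) {
  doc.addEventListener('click', (event) => {
    const hit = event.target.closest('[data-pair-id]');
    if (!hit) return;
    const pairId = hit.getAttribute('data-pair-id');
    // Clear any previous highlight, then highlight the annotated text and the
    // annotation entry that share this pair id.
    doc.querySelectorAll('.highlighted')
      .forEach(el => el.classList.remove('highlighted'));
    doc.querySelectorAll(`[data-pair-id="${pairId}"]`)
      .forEach(el => el.classList.add('highlighted'));
  });
}

// Usage (in a browser): annotated spans in the first portion and annotation
// entries in the second portion carry the same data-pair-id attribute.
// linkHighlights(document);
```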
[0045] The annotation module 159 generally manages and processes annotations of images and other multimedia content. For example, when a user selects a photo for annotation, the annotation module 159 presents a rectangular selection box over the photo that may be resized to precisely indicate the portion of the photo to which an annotation refers. Multiple selection rectangles or other selection shapes may be drawn over a photo each corresponding to individual annotations. Upon receipt of an appropriate input, for example when a save or post annotation(s) button is selected, the annotation module also handles communicating the selection input(s) and related annotation information to other modules of the system as further described herein.
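The multiple-rectangle photo selection described for the annotation module might be modeled roughly as follows; the capture of coordinates from mouse events is omitted, and the field names are assumptions rather than the module's actual interface.

```javascript
// Sketch only: collect one or more rectangular selections over a photo, each
// carrying its own annotation and optional attributes (name, product, etc.).
function createPhotoSelectionSet(photoId) {
  const selections = [];
  return {
    // rect is { x, y, width, height } in image pixels.
    addSelection(rect, annotation) {
      selections.push({ photoId, rect, annotation });
      return selections.length - 1;                 // handle for later resizing/removal
    },
    // Payload that would be sent when the save/post button is pressed.
    toPostPayload() {
      return { photoId, selections };
    },
  };
}

// Example: two overlapping regions of the same photo annotated separately.
const photoSelections = createPhotoSelectionSet('photo-42');
photoSelections.addSelection({ x: 10, y: 20, width: 80, height: 60 },
  { type: 'text', subject: 'Aunt May', text: 'Front row, left' });
photoSelections.addSelection({ x: 60, y: 20, width: 90, height: 70 },
  { type: 'text', subject: 'Uncle Ben', text: 'Front row, center' });
console.log(JSON.stringify(photoSelections.toPostPayload(), null, 2));
```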
[0046] The view modes module 161 generally manages and controls presentation modes for content. For example, the view modes module switches between modes such as "embedded mode" in which indications or icons are presented inline with original content, "non- embedded mode" in which indications are presented to the left of the original content with one indication type per paragraph, and "memory book mode" in which indications are aggregated by type and presented inline at the end of individual paragraphs as opposed to directly in the text or to the left of the text. [0047] The presentation tier 163 generally includes a number of modules 165 running code within the web container. The code modules 165 generally include a controller responsive to data and inputs received from the client tier 163 as well as the business tier 167. Exemplary code modules 165 correspond to modules of the business tier 167 as further described herein and include a back office module, a book module, a bookmark module, a comment module, a conversion module, an ecommerce module, a print module, an image module, a media module, a profile module, a search module, and a user management module. Code modules 165 provide a bridge between application logic provided by the business tier and client inputs or presentation outputs.
[0048] The business tier 167 generally includes a number of modules including a back office module 169, a book or content module 171, a bookmarks module 173, a user management module 175, a content serialization module ("CSE") 177, a media module 179, a comments module 181, a statistics module 183, a conversion module 185, an ecommerce module 187, a print module 189, a personalization module 191, a profile module 193, a search module 195, a payment gateway module 197, and a print services module 199. These various modules support a variety of internal administrative operations and actions, as well as process and respond to user actions in the presentation tier.
[0049] The user management module 175 is generally responsible for handling user-related operations such as registration, authentication, and membership rights and approvals (such as for administrators, regular members, etc.). [0050] The book or content module 171 generally manages and directs content-related operations such as navigation to other pages and tracking user preferences. For example, the content module 171 tracks preferred viewing modes and last pages visited for users. Generally, the content module is not directly responsible for serving content, however, since this is handled and resolved in the presentation tier by the corresponding book code module of the presentation tier and other code modules for the sake of improved performance. [0051] The bookmarks module 173 generally manages the user's private bookmarks list for content and annotations. For example, the bookmarks module 173 maintains a data structure containing pointers to locations for content or annotations that a user may wish to revisit or otherwise mark as a favorite. When an input is received selecting a bookmark, the system automatically navigates to and presents the related content or annotation corresponding to the selected bookmark. [0052] The comments module 181 is generally directed to processing operations associated with posing annotations. For example, the comments module 181 manages inputs posting or replying to annotations, applying automatic moderation to posted annotations, and notifying moderators when annotations trigger various notification filters. In some embodiments, the comments module 181 also notifies annotation authors when a reply or other corresponding annotation is posted regarding their authored annotation. Similarly, media module 179 processes graphical annotations and other graphical information provided by users. For example, the media module 179 processes photo, video, and audio annotations processing posts and notifying moderators of certain posts, as well as managing user replies. In some embodiments, the media module also processes video annotations by capturing and presenting a particular frame (such as the first frame) as a thumbnail image representing the video in the annotations portion of the display.
[0053] The content serialization engine 177 interfaces with the database 135 to lock content, update content, and otherwise process user annotations. The CSE 177 facilitates content delivery among multiple users. For example, when a first user provides an annotation regarding a particular page of content, in some embodiments, synchronization module 165 locks that page and prevents access to the page by other users until the annotation process is complete. In some embodiments, the CSE 177 maintains a queue of new annotations and processes annotations by creating new content pages and media pages containing the new annotations as further described herein.
[0054] The statistics module 183 generally tracks data related to posted annotations. For example, in some embodiments, the statistics module 183 tracks the number of annotations posted for each page in a given document and presents an indication of which page has the most number of new posts or a certain number of posts within a given period of time, such as in memory book mode as further described herein. [0055] The print module 189 is generally directed to printing or otherwise outputting content according to user inputs and preferences. For example, the print module 189 creates PDF files or other document files for versions of content output such as dynamic print and memory book creation as further described herein. [0056] The conversion module 185 is generally responsible for processing and formatting raw original content for use by the system and for users to annotate. For example, the conversion module parses original content into paragraphs, formats the content for presentation, and creates bookmark IDs or other identifiers for each paragraph used by the CSE 177 to create new pages when annotations are added as further described herein. [0057] The ecommerce module 187 processes payments and generally handles monetary transactions associated with use of the system. For example, the ecommerce module manages shopping carts and other purchase vehicles, processes credit card payments and other payments, and also interfaces with other modules such as integration modules including the payment gateway 197 and external print services 199. [0058] The personalization module 191 and the profile module 193 are generally responsible for processing inputs regarding user accounts. For example, the profile module 193 processes user administrative requests regarding password and address changes. The personalization module 191, sometimes in conjunction with the profile module 193, handles other inputs such as associating a personal photo or icon to present next to user postings or in a user's business card, as well as other general information about the user such as hobbies, favorite websites, etc.
[0059] The search module 195 is generally responsible for indexing and processing search operations on both original text and on annotations. For example, search module 195 allows users to search not only document content, but also annotations provided by other users and other information. Users can search for annotations provided by a particular user, for a particular text string contained in annotations, and input other search expressions to locate information.
[0060] The system also includes various modules, such as a payment gateway module 197 and a print services module 199, for integration with external or third-party systems. For example, in some embodiments the payment gateway module 197 provides an interface to process all or part of the payments using a third-party payment provider. In other embodiments, the print services module 199 provides an interface for printing special jobs, such as hardcover book binding or other types of book creation of content, using a third-party or other external print services provider. [0061] The business tier also includes a commons module 201. The commons module generally includes a utility library of various APIs and other system calls used for interfacing with the operating system, hardware components, the data store 135, modules in the various other tiers, etc. [0062] Fig. 3 is a flow chart of a method to synchronously navigate shared annotations according to an embodiment of the invention. The system receives an annotation generated by a first user at a first client, step 230. For example, the system receives a text comment related to a document or a picture related to the document. The annotation is associated with a first indication, step 235. For example, the annotation may be associated with an icon or other indication embedded in the document. The system receives input from a second user at a second client to navigate a first portion of a display at the second client, step 240. The navigation input causes the first indication to be displayed in the first portion of the display at the second client. In response to the input, the system automatically displays the annotation in the second portion of the display at the second client, step 245. [0063] Conversely, the system also processes and navigation inputs navigating the second portion of the display. Thus, the system also can receive an input from a second user at a second computer to navigate a second portion of a display at the second client. The navigation input causes the annotation to be displayed in the second portion of the display at the second client and, in response to the input, the system automatically displays the first indication in the first portion of the display at the second client. [0064] In some embodiments, the system divides content into a plurality of pages. Thus, a book might be divided into chapters and each chapter formatted as a particular HTML or other similarly encoded page. The system loads an entire page of original content into the first portion of the display and also the entire page of related annotations for the page in the second portion of the display. In some embodiments, the system first loads only those annotations corresponding to indications immediately displayed upon loading the page into the first portion and then loads annotations corresponding to off-screen indications which achieves, among other benefits, a performance boost in terms of load times. As further described herein, code such as a JavaScript synchronization module monitors user navigation inputs and mouse inputs and states to determine whether and when to synchronously scroll or otherwise display indications and their related annotations in the first and second portions of the display.
[0065] For example, a JavaScript event or other similar program code returns identifiers corresponding to indications that are visibly displayed in the first portion. In some embodiments, the system employs a naming convention correlating indications with annotations. Thus an indication labeled I1 would have its corresponding annotation labeled A1 and an indication with an identifier of I2 would have its corresponding annotation identified as A2, etc.
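Paragraphs [0066] and [0067] below describe this synchronization in prose; as a rough browser-side illustration of the same idea, the following sketch assumes the two portions are scrollable containers, that ids follow the I1/A1 convention, and that the isVisibleIn() helper and scrollIntoView() call stand in for the system's actual checks.

```javascript
// Sketch only: find the first indication (I1, I2, ...) visible in the content
// portion and make sure its annotation (A1, A2, ...) is visible in the
// annotation portion, scrolling only when necessary to avoid flicker.
function isVisibleIn(container, el) {
  const c = container.getBoundingClientRect();
  const r = el.getBoundingClientRect();
  return r.bottom > c.top && r.top < c.bottom;
}

function syncAnnotationsToContent(firstPortion, secondPortion) {
  const visible = [...firstPortion.querySelectorAll('[id^="I"]')]
    .filter(el => isVisibleIn(firstPortion, el));
  if (visible.length === 0) return;
  const firstId = visible[0].id;                                  // e.g. "I7"
  const annotation = secondPortion.querySelector('#A' + firstId.slice(1));
  if (annotation && !isVisibleIn(secondPortion, annotation)) {
    annotation.scrollIntoView();                                  // scroll only when needed
  }
}

// Usage (in a browser), re-checking whenever the content portion is scrolled:
// const first = document.getElementById('first-portion');
// const second = document.getElementById('second-portion');
// first.addEventListener('scroll', () => syncAnnotationsToContent(first, second));
```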
[0066] When a navigation input is received relating to the first portion of the display, the system determines, based on the JavaScript event data and indication identifiers, any indications visible in the first portion of the display and then automatically executes JavaScript code or other program code to display, in the second portion of the display, their corresponding annotations according to the naming convention. For example, if the system identifies indication I1 as visible in the first portion of the display, then it automatically executes code to display A1 in the second portion of the display. In some embodiments, when a number of indications are visible in the first portion of the display and there is insufficient screen space in the second portion of the display to display all of the corresponding annotations, the system, starting with the first indication displayed, displays as many corresponding annotations as possible in the second portion of the display. [0067] In some embodiments, when a navigation input is received related to the second portion of the display, the system determines, based on the JavaScript event data and annotation identifiers, any annotations visible in the second portion of the display. The system also determines any indications visible in the first portion of the display as previously described herein. If the first portion of the display already shows the indication corresponding to the first annotation appearing in the second portion of the display, then the system does not redraw the screen. If the corresponding indication in the first portion is not displayed, then the system executes JavaScript code or other program code to display, in the first portion of the display, the indication corresponding to the first annotation appearing in the second portion of the display according to the naming convention. For example, if the system identifies annotation A1 as visible in the second portion of the display, then it checks if indication I1 is visible in the first portion of the display. If the indication is not visible, the system automatically executes code to display I1 in the first portion of the display. [0068] Thus users at a plurality of clients are able to view content, as well as collaboratively annotate and view annotations provided by other users. For example, several users may negotiate a contract by sharing feedback and other annotations to produce a final version of the contract. The annotations would later serve as a record of positions regarding various clauses of the contract, how the document was created, who was in favor of various positions, etc. In some embodiments, the system also provides user authentication and secure access to content, allowing only a limited number of authorized users to access and/or annotate content. Thus, adverse parties are presented with a secure space in which they can collaboratively and synchronously annotate content. As another example, a school might post a number of photographs containing unidentified subjects. The system would provide a means for registered alumni or other parties to identify the subjects for the school archives, etc. [0069] Fig. 4A presents a block diagram of an exemplary screen display of a shared annotation system according to an embodiment of the invention. A display 250, such as a browser display or other software application display, is divided into a first portion 252 and a second portion 254.
The first portion 252 contains information content provided by a server and the second portion 254 contains annotations related to the content in the first portion. [0070] A user requests content from the content server and the content is delivered via the network to the user at a client and presented in the first portion of the display 252. Content may include, for example, the text of a book, graphical content such as a picture album or photo album, a proposed legal document or business agreement, multimedia content, or other types of content. For example, the text of a book appears in the first portion 252 of the display 250. Indications associated with user annotations are embedded within the content of the first portion of the display. Thus, indication 256 corresponding to user annotation 262 and indication 258 corresponding to user annotation 264 are embedded in the content of the first portion 252. The actual annotations 262 and 264 are presented in the second portion of the display 254. [0071] In some embodiments the display 250 also includes a third portion 260 including additional references to indications contained in the first portion 252. For example, as shown, additional indications 266, and 268 corresponding to indications 256 and 258 are presented in a third portion of the display 260. Users can scan the third portion of the display 260 to quickly determine whether indications exist in the content presented in the first portion of the display 252.
[0072] The system also presents navigation interfaces such as scroll bars 272 and 274, as well as a menu bar 276 at the bottom of the display 250 which provides users with an interface to navigate a document divided into chapters/sections or jump to additional pages, etc. The system also presents standard interface elements such as final, edit, view, favorites, tools and help menus 278 as known in the art and common in Internet browsers. [0073] In addition, the system presents a plurality of icons 280 designed to provide an interface for common operations that users might want to perform when viewing content such as a document, a photo album, or a book. Icons presented allow users to zoom in, zoom out, add a comment or annotation at a specific location within the content, highlight a specified region within the content, annotate a picture for a specified location, annotate video for a specified location, annotate audio for a specified location, create or interact with a discussion group related to the content at a specified location, perform a search, or resize the portions of the display. [0074] Fig. 4B presents a block diagram of two exemplary screen displays of a shared annotation system according to one embodiment of the invention. The two screen displays 282 and 300 show versions of the same display at two different points in time. The display is divided into a first portion 284 and a second portion 286. The first portion contains content as well as indications 288 and 290 associated with user annotations 292 and 294 respectively. Navigation means, such as scroll bars 296 and 298, are also provided. [0075] As previously described, a user navigating the display 282, for example, by using slider 296, would cause the display 282 to change as shown in a second screen display 300 of the same display at a later point in time after the system processes the navigation input. The user scrolls the content in the first portion 284 such that indication 288 disappears from the first portion 284 and indication 302 appears. Similarly, annotation 292 associated with indication 288 automatically disappears in the second portion 286 of the display 300 and annotation 304 corresponding to indication 302 automatically appears in the second portion 286. As previously discussed, the system also conversely scrolls content in the first portion 284 of the display 282 when a user navigates content in the second portion 286 of the display 282. For example, the system automatically scrolls content in the first portion 284 of the display 282 according to a user input, such as a scroll bar slider 298 or other similar means, to navigate annotations in the second portion 286 of the display 282. Thus, an indication 288 corresponding to an annotation 292 in the second portion 286 of the display 282 would automatically appear or disappear in the first portion 284 of the display 282 when the corresponding annotation 292 appears or disappears in the second portion 286 of the display 282 according to a user navigation input.
[0076] Fig. 5 presents a flow chart of a method for processing an annotation according to an embodiment of the present invention. A user selects content via a selection tool or other means, step 330. For example, a user might employ a text tool to highlight and select several words in the text of a document which the user wishes to annotate with a textual comment, an uploaded picture, a video, a sound recording, etc. JavaScript event code or other program code related to mouse inputs and other user inputs captures various metadata regarding the user selection. For example, the event code captures and returns a unique paragraph identifier tag, a starting point value or offset (in characters from the start of the identified paragraph, pixels, or other metrics known in the art), and ending point value or offset. [0077] While the example discussed herein with respect to Fig. 5 relates to processing a text selection, those skilled in the art will recognize that the process could similarly apply to selecting other forms of multimedia content including pictures, video, etc. For example, in one embodiment a user can crop one or more areas of a picture the user desires to annotate. For example, a user could crop a single area of a picture for an annotation or a user could crop several different (or overlapping) areas of the same picture for several different annotations. The user selects the area using a rectangular cropping tool. The system captures the x,y coordinates of the corners of the rectangle to create a mapping or overlay representing the selection of the original image. Once the image area is selected the user may also assign additional attributes to the selection (such as a person name, a product identifier, a price, a location, a theme, a date, etc.). In some embodiments, users may also indicate a frame or other location in a video using similar selection means for individual frames of a video. [0078] The system expands the selection to an appropriate level of granularity, step 335. A user might select several letters of a word and the system might expand the selection by highlighting the entire word. In some embodiments, for example to preserve system resources or to limit annotations from cluttering a screen or for other design-related considerations or specified goals, the system imposes a pre-set limit on the ability of a user to annotate text to a certain level of granularity. Thus a user may only be able to annotate whole words or only words at the end of a sentence. For example, if a user were able to annotate every individual letter of words in a text, a single word such a "Kennedy" might have as many as seven distinct indications (corresponding to the total number of letters in the word) presented with the word. This would likely render display of content in the first portion of the display extremely cumbersome and severely limit the ability of the system to efficiently present information to users. [0079] Similarly, the system may also limit the number of indications presented related to particular sections of text or other content. Indications may be consolidated or combined in the interest of making content more readable, visually comprehensive, or otherwise accessible. For example, annotations provided by four different users might be associated with a single indication embedded in the content and displayed in the first portion of the display rather than with four separate indications in the first portion. 
In the second portion, however, each individual annotation provided would automatically be displayed when its corresponding indication is presented in the first portion of the display.
[0080] After the user selects the desired content, the user indicates a desire to post an annotation related to the selected content, step 340. For example, a user may select a section of text and then click a "post" button or icon. The system presents a form or other similar input mechanism, step 345, which allows the user to input and submit/upload the desired annotation to the content server, step 350. For example, a form window may open allowing the user to input a text annotation, or a tree-view directory structure may be presented allowing the user to select a file (such as a picture, a video, an audio clip, etc.) to upload as an annotation.
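A minimal JavaScript sketch of the selection capture and granularity expansion described in steps 330 and 335 is shown below; it is illustrative only, assumes each annotatable paragraph is a <p> element with a unique id, and handles only selections contained in a single text node:

    // Illustrative sketch of steps 330/335: capture the paragraph id and character
    // offsets of the user's text selection and expand it to whole-word boundaries.
    function captureSelectionMetadata() {
      const selection = window.getSelection();
      if (!selection || selection.rangeCount === 0 || selection.isCollapsed) return null;
      const range = selection.getRangeAt(0);
      const textNode = range.startContainer;
      // Simplification: only handle a selection contained in a single text node.
      if (textNode.nodeType !== Node.TEXT_NODE || range.endContainer !== textNode) return null;
      const paragraph = textNode.parentNode.closest('p[id]');  // assumed markup convention
      if (!paragraph) return null;
      const text = textNode.textContent;
      let start = range.startOffset;
      let end = range.endOffset;
      // Expand outward to whole words, mirroring the granularity step described above.
      while (start > 0 && /\S/.test(text.charAt(start - 1))) start--;
      while (end < text.length && /\S/.test(text.charAt(end))) end++;
      // A full implementation would convert these to offsets from the paragraph start.
      return { paragraphId: paragraph.id, offset: start, length: end - start };
    }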
[0081] The annotation input by the user and any related metadata are then uploaded via the network to the content server and stored in the data store for further processing, step 355. The system generally communicates metadata indicating, among other things, the desired position of the annotation within the content of the first portion, the user's identity, the type of annotation, etc. For example, in one embodiment, JavaScript code captures the events of a mouse click indicating the beginning of a selection, mouse drag changing the x,y coordinates for the selection, and a mouse up or un-click ending the selection. This data is saved into an HTML form attribute and transmitted to the server when the form is submitted. In some embodiments, as further described herein, the system also indicates the position of a desired annotation by providing metadata indicating an offset from a particular starting point within the document content and a selection length corresponding to the user selection of steps 330 and 335. For example, if a user selects text several sentences into a paragraph or other arbitrary section of a document, the system may communicate metadata indicating, from the start of the paragraph or other section, an offset corresponding to the number of characters at which the annotation begins and a length corresponding to the number of characters selected for the annotation.
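Continuing the sketch above, the captured metadata could be carried to the content server as hidden fields of the annotation form; the form and field names here are illustrative assumptions, not taken from the disclosure:

    // Illustrative: carry the captured location metadata to the content server as
    // hidden fields of the annotation form (form and field names are assumptions).
    function postAnnotation(meta) {
      const form = document.getElementById('annotationForm');
      form.elements['paragraphId'].value = meta.paragraphId;
      form.elements['selectionOffset'].value = meta.offset;
      form.elements['selectionLength'].value = meta.length;
      form.submit();  // the annotation text or uploaded file travels with the metadata
    }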
[0082] In some embodiments, the system uses a content serialization engine ("CSE") or other similar means to lock the page of the document to which the annotation relates, step 360. In some embodiments, this prevents multiple CSEs from accessing and updating the page at the same time. For example, in a parallel processing environment or other environment supporting multiple CSEs in the same system, each CSE locks an individual page prior to updating the page to prevent other CSEs from accessing and simultaneously updating the page, which would create content synchronization problems, etc. In some embodiments, the CSE lock also prevents other users from requesting the page from the content server while the system is processing the user's submitted annotation and embedding a related indication in the page of the document.
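One way such a per-page lock could be realized on the server side, shown purely as an illustrative JavaScript sketch and not as the disclosed implementation, is to serialize all updates for a given page behind a promise chain keyed by page identifier:

    // Illustrative per-page lock: queue each update behind the previous update for
    // the same page so only one serialization task touches a given page at a time.
    const pageLocks = new Map();  // pageId -> promise tail of the last queued update

    function withPageLock(pageId, updateTask) {
      const previous = pageLocks.get(pageId) || Promise.resolve();
      const current = previous.then(updateTask, updateTask);
      // Keep the chain alive even if a task fails so later updates still run.
      pageLocks.set(pageId, current.catch(function () {}));
      return current;
    }

    // Usage (names illustrative):
    // withPageLock('chapter3_page12', function () {
    //   return recreatePage('chapter3_page12', newAnnotation);
    // });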
[0083] As discussed, the system parses the metadata associated with the annotation, step 365. Using the length, offset, and other data provided with the metadata, the system determines a location in the document content at which to embed an indication corresponding to the annotation. The system then recreates the original page (including any additional pages created by the annotation) to embed an indication corresponding to the annotation, step 370, and updates the database with the new page, step 371. In some embodiments, the system replaces the old page stored in the database with the new page. In other embodiments, the old page is retained in order to track document versions and related annotations. The CSE lock is removed, step 372, and users at other clients are then able to request, retrieve, and view the new page containing the new indication corresponding to the new annotation.
[0084] Fig. 5A presents an exemplary sample of code for an XHTML formatted page of content containing an indication corresponding to an annotation which would be presented in the first portion of a display according to one embodiment of the invention.
[0085] The code sample uses various XHTML elements such as Div elements, Span elements, Highlight elements, and Content elements to present the content and corresponding indication.
[0086] Div element class shrdbk_main 373 is a div element that wraps the whole book text. In some embodiments, this element is used in a non-embedded mode to separate the indications or book item icons from the page text/content. Thus, a user would be able to toggle presentation of content both with and without indications being displayed.
[0087] The system also uses a number of different types of Span elements. Span elements are tags generally used to group inline elements in a document. Span element shrdbk_start_element 374 is a span element that is used as an indicator for the start location of the related text of the book item. The id attribute contains the type of the book item or indication ('C' for comment, 'I' for image, 'A' for audio, and 'V' for video), an identifier for the indication, and a starting location of this element in a numerical representation corresponding to a number of characters or other metric (e.g. _554). The indication identifier is used in varying embodiments to distinguish between indications and also to assist in content navigation, for example if a user wishes to jump to the next indication, etc.
[0088] Span element shrdbk_end_element 375 is a span element that is used as an indicator for the end location of the related text of the book item or indication. The id attribute contains the type of the book item, the book item id, and a location or offset of this element in a numerical representation (e.g. _681).
[0089] Span element shrdbk_icons 376 is a span element that contains the image of the icon or indication to be embedded. For each location in the content, such as the book text, a different type of indication icon is used to represent each different type of annotation (e.g., text annotation, multimedia annotation, etc.). The image element that is included for the indication represents the type of the items and the index number of the first item at this location, according to its appearance order within the book text.
[0090] Highlight div elements idYellow, idFirstLine, and idLastLine 377 are a set of div elements that are used for highlighting the related text corresponding to the annotation. For example, when a book item is selected by clicking on its title, the text range that represents the related text is located according to the start and end span elements. Text rectangles are created from the given text range and these div elements' positions are set according to the text rectangles.
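An illustrative browser-side sketch of this highlighting behavior follows; the overlay class name and the use of absolutely positioned divs sized from the range's client rectangles are assumptions made for the example:

    // Illustrative: highlight the related text of a selected book item by building a
    // range between its start and end anchor spans and overlaying a div per text rect.
    function highlightBookItem(startAnchorId, endAnchorId) {
      const startAnchor = document.getElementById(startAnchorId);
      const endAnchor = document.getElementById(endAnchorId);
      const range = document.createRange();
      range.setStartAfter(startAnchor);
      range.setEndBefore(endAnchor);
      const rects = range.getClientRects();
      for (let i = 0; i < rects.length; i++) {
        const overlay = document.createElement('div');
        overlay.className = 'shrdbk_highlight';                 // assumed class name
        overlay.style.position = 'absolute';
        overlay.style.background = 'rgba(255, 255, 0, 0.4)';    // translucent yellow
        overlay.style.pointerEvents = 'none';
        overlay.style.left = (rects[i].left + window.scrollX) + 'px';
        overlay.style.top = (rects[i].top + window.scrollY) + 'px';
        overlay.style.width = rects[i].width + 'px';
        overlay.style.height = rects[i].height + 'px';
        document.body.appendChild(overlay);
      }
    }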
[0091] For each shrdbk_icons span element there is also a corresponding div element, Content Div 378, which includes a representation of each of the item(s) that the span element contains for the specific location. This div element is generally displayed when the mouse cursor is over the image icon. The div element contains links for the related text of each of the book items, and when the user clicks on those links, the related text is highlighted. In some embodiments, another role of those links is to synchronize the media area/second portion with the currently viewed item. Thus, when the user clicks on one of the links, in addition to the highlighting of the related text, the media area automatically scrolls to the appropriate item in the second portion of the screen. In some embodiments, if the currently displayed items have a different type from the item that was clicked, the type of the viewed media is changed to the equivalent type according to the clicked link.
[0092] Fig. 5B presents an exemplary sample of code for an XHTML formatted page of content containing an indication corresponding to an annotation which would be presented in the first portion of a display according to one embodiment of the invention.
[0093] The code sample also uses various XHTML elements such as Div elements, Span elements, Highlight elements, and Content elements to present the content and corresponding indication. More specifically, the code sample provides exemplary span elements for presenting content in embedded and non-embedded modes.
[0094] For example, span elements 379 are used for displaying icons and other content in non-embedded mode. The element at the beginning of the paragraph is used as an anchor for the book item icon. The content element is placed at the bottom of the HTML document and includes the book item title as well as any relevant functionality.
[0095] Span elements 380 are used for displaying icons and other content in embedded mode. Here, the element within the paragraph is used as an anchor for the book item icon. The content element is similarly placed at the bottom of the HTML document and includes the book item title as well as any relevant functionality.
[0096] In addition, Span elements 381 present exemplary uses of span elements as start and end anchors for highlighting selected or annotated content.
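To illustrate how a page might be recreated with these markers, the following sketch wraps an annotated text range with start and end span anchors and an icon span in the style described above; the exact id format and icon file path are assumptions, and a full implementation would also have to account for existing markup and character escaping:

    // Illustrative: wrap the annotated range of a paragraph's text with start/end
    // anchors and an icon span in the style of the elements described above.
    // Operates on plain text; real pages would require markup-aware insertion.
    function embedIndication(paragraphText, offset, length, itemType, itemId) {
      // itemType is one of the single-letter codes described above ('C', 'I', 'A', 'V').
      const before = paragraphText.slice(0, offset);
      const related = paragraphText.slice(offset, offset + length);
      const after = paragraphText.slice(offset + length);
      const startSpan = '<span class="shrdbk_start_element" id="' +
          itemType + itemId + '_' + offset + '"></span>';
      const endSpan = '<span class="shrdbk_end_element" id="' +
          itemType + itemId + '_' + (offset + length) + '"></span>';
      const iconSpan = '<span class="shrdbk_icons"><img src="/icons/' +
          itemType + '.gif" alt="annotation indication"/></span>';   // assumed icon path
      return before + startSpan + related + endSpan + iconSpan + after;
    }

    // Example: embedIndication('The committee met on Tuesday.', 4, 9, 'C', 17)
    // wraps "committee" with start/end anchors and places a comment icon after it.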
[0097] Fig. 6 presents a flow chart of a method of annotating a visual element according to an embodiment of the present invention. In some embodiments, users may wish to provide annotations corresponding to visual elements such as pictures or video clips. Thus, the user views a visual element, such as a picture, step 385, and selects a picture element to annotate, step 387. As discussed, in some embodiments, the user might use a selection tool to crop or otherwise select picture elements, for example, by drawing a box around or otherwise selecting a person in a photo.
[0098] The system then presents an annotation form or other input means, step 389, and the user inputs and submits the annotation, step 391. In some embodiments, the system allows the user to submit multiple annotations for a single picture. Thus, in these embodiments, control may return to step 387 for the user to select additional picture elements. For example, the user may select a first element and input an annotation for the first element and then select a second element and input a second annotation for the second element, etc. The annotation(s) are then uploaded via the network to a content server and stored in a data store where they are associated with the visual element(s), step 393. In some embodiments, the system also maintains and updates an index of annotations corresponding to visual elements, step 395. For example, users may provide annotations identifying subjects in visual elements and the system maintains an index of identified subjects cross-referenced with their corresponding visual elements. Using search means known in the art, users could access such an index to locate all visual elements in a content document, such as a photo album, a book, etc., in which a particular subject appears. The content serialization engine then locks the page, embeds any required indications in the original content as previously described herein, and updates the original page in the data store as previously described herein, step 397.
[0099] Fig. 6A presents a flow chart of a method of recreating a page of content according to an embodiment of the invention. Once the content serialization engine locks the page, the system retrieves the existing page from the data store, along with a list of all annotations and indications related to the content of the page, step 399. The system determines if any elements of the page remain to be processed, step 401. For example, content generally comprises various XML tag elements corresponding to user selections and other content related to annotations. In one embodiment, the CSE organizes elements into a list corresponding to their location on the page. If no further elements remain to be processed, control proceeds to step 408 and the routine ends.
[0100] Otherwise, the system determines whether the next element in the list is associated with a content identifier, step 403. For example, in one embodiment, the system determines whether the element has a sharedbk XML tag identifier. If the element does not have an identifier, then it is generally not associated with an annotation and recreation of the element is generally not required, and thus control passes to step 407 and the system proceeds to process the next element in the list.
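Returning briefly to the index of step 395 above, such an index of identified subjects could be kept, for example, as a simple mapping from subject to visual element identifiers; the following sketch is illustrative only, with hypothetical ids and subjects:

    // Illustrative in-memory index: subject name -> set of visual element ids (step 395).
    const subjectIndex = new Map();

    function indexSubjects(visualElementId, subjects) {
      for (const subject of subjects) {
        if (!subjectIndex.has(subject)) subjectIndex.set(subject, new Set());
        subjectIndex.get(subject).add(visualElementId);
      }
    }

    function findVisualElementsBySubject(subject) {
      return Array.from(subjectIndex.get(subject) || []);
    }

    // Usage (hypothetical ids and subjects):
    // indexSubjects('photo_0042', ['Aunt Carol', 'Grand Canyon']);
    // findVisualElementsBySubject('Aunt Carol');   // -> ['photo_0042']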
[0101] Otherwise, the system checks for annotations related to the element, step 405, and recreates the element, step 406, as further described herein. For example, in some embodiments, elements associated with annotations are associated with unique content identifiers. Thus, an element and all its related annotations might share the same or related content identifiers according to embodiments of the invention. After the system determines which annotations relate to the current element, the system recreates the element, step 406, inserting any necessary indications, rollovers, or other items as further described herein.
[0102] Fig. 6B presents a flow chart of a method of processing an element during page creation according to an embodiment of the invention. After the system determines that an element should be recreated (or in some embodiments originally created), the system orders all annotations associated with the element into a list according to their location, step 409. Thus, for a particular sentence, paragraph, page, etc., the system creates an ordered list of all annotations using the offsets and location metadata stored with the annotations. If no further annotations remain to be processed, control passes to step 417 and the routine exits.
Otherwise, the system processes the location metadata associated with the annotation to determine the location in the first portion of the display to place an indication or icon corresponding to the annotation in the second portion of the display, step 411.
[0103] The system processes the annotations in the list to determine whether there are multiple annotations associated with the same location, step 412. If there are multiple annotations, then the CSE creates the XHTML code or other code, inserting a multiple annotation indication or icon, step 413. If there are not multiple annotations, then the CSE creates the XHTML code or other code, inserting a single annotation indication or icon, step 414. For example, in some embodiments, certain indications indicate that they correspond only to a single annotation. An image indication corresponds to an image annotation, an audio indication corresponds to an audio annotation, etc. In other embodiments, if multiple annotations are made at the same location in the original content in the first portion of the display, the system embeds or otherwise places a multiple annotation indication which indicates that more than one annotation has been made at a particular place in the original content.
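An illustrative sketch of steps 409 and 412 through 414, grouping an element's annotations by location and choosing between single-annotation and multiple-annotation icons, might look as follows (icon file names and data fields are placeholders):

    // Illustrative: order an element's annotations by offset, group those sharing a
    // location, and pick a single-type icon or a multiple-annotation icon.
    function buildIndications(annotations) {
      const byLocation = new Map();
      for (const ann of annotations) {
        const key = ann.paragraphId + '@' + ann.offset;
        if (!byLocation.has(key)) byLocation.set(key, { offset: ann.offset, items: [] });
        byLocation.get(key).items.push(ann);
      }
      return Array.from(byLocation.values())
        .sort(function (a, b) { return a.offset - b.offset; })      // step 409 ordering
        .map(function (group) {
          return {
            offset: group.offset,
            // Placeholder icon names; a real system maps annotation types to real assets.
            icon: group.items.length > 1 ? 'multi.gif' : group.items[0].type + '.gif',
            annotationIds: group.items.map(function (a) { return a.id; })
          };
        });
    }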
[0104] In some embodiments, the CSE also creates XHTML code or other code, generating a rollover action associated with the indication, step 415. For example, the CSE retrieves metadata associated with the annotation(s) for a particular location and indication, which lists a title for the annotation, the annotation's author(s), etc. The system then proceeds to process the next element, step 416, and control returns to step 410.
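A rollover of this kind could be attached on the client as in the following sketch; the 'detail_' element id convention and the metadata fields are assumptions made for the example:

    // Illustrative rollover: reveal the annotation's title and author while the
    // mouse cursor is over the indication icon.
    function attachRollover(iconElement, itemMeta) {
      iconElement.title = itemMeta.title + ' by ' + itemMeta.author;
      const detail = document.getElementById('detail_' + itemMeta.id);  // assumed id convention
      if (!detail) return;
      iconElement.addEventListener('mouseover', function () { detail.style.display = 'block'; });
      iconElement.addEventListener('mouseout', function () { detail.style.display = 'none'; });
    }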
[0105] Fig. 7 presents a flow chart of a method of providing a customized document related to a shared annotation system according to an embodiment of the present invention. A user may view a book on home repair in which the main document content of the book provides chapters on framing, wiring, plumbing, etc. Within each chapter, other users may have provided annotations related to various tasks described, etc. One user might indicate a particular brand of pipe that they found useful in completing a certain project or a particular type of light fixture well-suited to certain applications. Another user might provide additional photographs of their project with additional text comments, etc. to supplement the information of the original book. Thus, users may wish to view, purchase, or otherwise obtain customized documents including these related annotations, as well as other items such as tools required to complete certain projects, etc.
[0106] For example, as shown in Fig. 8, a block diagram of a sample page 455 from a customized document according to an embodiment of the present invention is presented. The sample page 455 includes the document content 460 corresponding to the content of the document presented in the first portion of the display. In some embodiments, the page 455 also includes annotations and other comments related to the document content 460 such as textual annotations 465, picture annotations 466, audio or video annotations 467, annotations related to discussion group content 468, advertisements 469, links to related merchandise 470, and other information. Those skilled in the art will appreciate that this information could be presented in a variety of manners or layouts. For example, as shown in Fig. 8, the document content 460 is centrally displayed and surrounded by related annotations including callouts to indications contained in the content 460 and other visual cues.
[0107] Thus, returning to Fig. 7, the user selects a particular book edition, step 420. A user may select among a number of different books or documents containing content related to a desired subject, or a user may only select certain chapters within a book. For example, a user may consult a home repair manual, but only be interested in the chapter on plumbing or on wiring and not wish to be provided with the entire book.
[0108] The user also determines and selects the annotations they wish to have provided with their customized document/book, step 425. A user may wish to be provided with all annotations related to the desired content, only annotations authored by a particular user, or only a specific annotation containing certain information the user finds useful, such as supplemental photos, video, other types of annotations, etc. In some embodiments, the system also offers the user a promotion or other offer associated with the content and the user determines whether or not to accept the promotion, step 430. Thus, a user purchasing a home repair manual chapter related to dry walling might also be presented with the option to purchase items and merchandise related to the project such as hammers, nails, screws, plaster, tape, drywall, or even other books or information related to the project. For example, the system may also offer a video of how to complete a sample project for an additional premium.
[0109] If the user accepts the promotion or offer, the user selects the related merchandise or otherwise responds to and accepts the offer, step 435. Otherwise, control passes directly to step 440 and the user selects a particular format for the customized document. For example, a user may want a hardcopy paper version of a customized document, or they may prefer to receive the document electronically, or some combination thereof. As necessary, the user also selects a delivery method, such as via mail, express mail, download, etc., step 445. The user also inputs any necessary payment information, personal information, registration information, license information, or other information required to complete and process the transaction, step 450.
[0110] Fig. 8A presents a screenshot of an exemplary article page of a memory book according to an embodiment of the present invention. A memory book generally comprises a customized printout of content and related annotations. In some embodiments, memory books are compiled and bound according to user preferences.
[0111] For example, in some embodiments, users create a memory book by customizing existing content provided by content creators. Thus, for example, a content provider might use the system to post an original article to the Web containing text, photos, and other multimedia elements recounting or otherwise related to an event such as a Harley Davidson rally or a Britney Spears concert. The original article also generally contains indications and corresponding annotations input by various users responding to the original article. A user can then create any number of custom memory books from the original article by uploading additional multimedia elements and selecting specific annotations to include in their personal memory book. For example, a user attending the Harley Davidson rally can create a memory book containing photos, annotations, and other elements related to that user's own personal experience at the Harley Davidson rally. As another example, a user attending the Britney Spears concert creates a memory book related to their own personal concert experience with their own photos from before, during, and after the show, related annotations, the user's own textual inputs, etc.
[0112] For example, a user who went to the Harley Davidson rally uploads their own pictures taken at the rally to replace or supplement the pictures in the original article posted by the content provider. In some embodiments, a user also uses pictures posted as annotations by other users to replace or supplement pictures of the original article, or they use additional pictures provided by the content provider or other content providers.
[0113] Users also select custom annotations to include with the memory book by filtering or otherwise selecting annotations from the set of annotations posted by other users regarding the original article. In one embodiment, a user automatically selects annotations from a list of friends who post annotations. In other embodiments, users select annotations individually or based on criteria such as ratings from other users, annotation type, etc.
[0114] Thus, a user creates their own personal memory book from the original article. The personal memory book generally contains the text and other content of the original article including additional pictures, text, videos, and related annotations selected or otherwise input by the user. As further described herein, the user then has the option to print out the memory book and have it bound or otherwise preserved, for example as a souvenir.
[0115] An article page of a memory book generally includes article text of the original content along with embedded photos with captions, embedded indications, and other items as further described herein. Generally, the presentation of the article page is formatted as closely as possible to the view a user would be presented with online. In some embodiments, however, the pagination is different since the content is now being produced on a printed page as opposed to on a display. Indications and other content elements, however, are generally presented in the same location within the content as they are presented in a display, thus enabling users to quickly reference between online and printed versions.
[0116] The article page includes one or more of the following: a header 471, embedded images 472, image captions 473, embedded icons or indications 474, and a footer 475. The header 471 generally remains consistent across pages throughout a memory book, thus unifying content presentation, etc. In some embodiments, the header includes a graphic, such as a logo, and heading text which may be used by the system to create a table of contents, an index, etc. Embedded images 472 include images originally presented in the original content as well as images selected by a user for inclusion in the memory book. For example, a user creating a memory book of a trip might select only particular photos from a set of photos for inclusion within the memory book. In some embodiments, images 472 also contain an image caption 473 which may include the poster's username, the date the photo was posted, a title for the image 472, etc. Embedded icons or indications 474 generally appear in the same location of the content as they do when presented in a display. In some embodiments, however, icons 474 are renumbered for each individual page (e.g., starting from 1 for the first indication 474 on each page) and thus the numbering scheme for indications 474 may differ from the online version of the book. In some embodiments, the article page also contains a footer 475 containing the book's title, page number, publisher information, etc.
[0117] Fig. 8B presents a screenshot of an exemplary comments page of a memory book according to an embodiment of the present invention. The comments page of a memory book generally includes comments and other annotations input by users online and is generally included on one or more separate pages falling after the article page, as opposed to on the same page as the article text itself. The article comments page includes one or more of the following: a header 476, a sub-header 477, a comment or reply icon 478, a comment title 479, a username and date of post 480, a comment text or other annotation content 481, one or more replies 482, and comments by various types of members 483.
[0118] The header 476 of the article comments page is generally a graphic and corresponds to the header of the article page of the memory book. Sub-headers 477 indicate the printed page in the memory book which contains the article to which the annotations are related. Comment or reply icons 478 are generally graphics indicating a type of comment. For example, a text comment might have a balloon with text in it as an icon 478 and an audio comment might have a musical note as an icon 478. Comment titles 479 indicate any heading a user inputs to associate with their comment. In some embodiments, comment titles are printed in different colors according to the type of user. For example, comments by regular members might be printed in black, comments by moderators 482 in red, etc. In some embodiments, comment text 481 is also displayed in varying colors according to user types. In some embodiments, a username and date of post 480 are also displayed for each annotation. Replies 482 associated with comments may also be presented.
[0119] Fig. 8C presents a screenshot of an exemplary dynamic print page according to an embodiment of the invention. Dynamic print pages are generally formatted to include comments and other annotations just below the text to which they refer. As shown, the page includes the original text 484 including inline indications corresponding to the first portion of the display. The page also includes annotations such as text comments, images, etc. as would be presented online in the second portion of the display.
[0120] Fig. 9 shows a method of presenting a selected multimedia element while navigating a document in a shared annotation system according to an embodiment of the present invention. When viewing a multi-page document or viewing several documents, users may wish to visually retain presentation of a multimedia element, such as a chart, a table, a picture, etc., from one page while viewing content on another, different page. For example, a user viewing several pages of a document related to a particular company's financial outlook might find it useful to retain a chart of the stock price or a table of pro forma income projections from one page while viewing information on a second page. In some embodiments, the system achieves this goal by allowing users to select a multimedia element and then floating the selected element on top of, or integrating the selected element with, subsequent pages that are viewed.
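One purely illustrative way to float a selected element on the client is to clone it and pin the clone with fixed positioning, as in the sketch below; sending the element identifier to the content server so it can be embedded in subsequently requested pages, as described in the steps that follow, is the approach the disclosure itself emphasizes:

    // Illustrative: "float" a selected multimedia element so it remains visible while
    // the user navigates to other pages of the document.
    function floatSelectedElement(elementId) {
      const original = document.getElementById(elementId);
      const floating = original.cloneNode(true);
      floating.id = elementId + '_floating';
      floating.style.position = 'fixed';     // stays in place during scrolling/navigation
      floating.style.right = '20px';
      floating.style.bottom = '20px';
      floating.style.zIndex = '1000';
      document.body.appendChild(floating);
      return floating.id;
    }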
[0121] Thus, the user selects a multimedia element in a first page, such as a picture, using various input means previously described herein, step 495. The selected element is identified in the content database, step 500, and floated or otherwise displayed in the browser window, step 505. For example, the user client communicates the selected element identifier to the content server, which retrieves another instance of the element and floats the element in the browser window containing the original content or displays the selected element in a new window or frame. In some embodiments, the system recreates the first page, removing the selected element, and floats or otherwise displays the selected element over the location in the content where the selected element previously resided. In other embodiments, the system does not immediately float or otherwise display the selected element, but instead only identifies the selected element and only floats the selected element when the system receives input to navigate to a second page, step 510. In some embodiments, the system retrieves the original version of the second page stored in the database, step 515, and creates a new second page to display by modifying the second page and embedding the selected element from the first page, step 520. The modified second page is then presented with the original second page content now including the selected element, step 525.
[0122] Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.
[0123] While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention, and the invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within the scope of the invention.

Claims

WHAT IS CLAIMED IS:
1. A method for automatically navigating a document in a display having at least a first portion and a second portion, the method comprising: receiving an annotation related to the document, the annotation generated by a user at a first client; associating the annotation with a first indication in the document; receiving, from a user at a second client, an input to navigate a first portion of a display at the second client, the input causing the first indication to be displayed in the first portion of the display; and in response to the input, automatically displaying the annotation in a second portion of the display at the second client.
2. The method of claim 1, wherein the display comprises a browser window.
3. The method of claim 2, wherein the browser comprises an Internet browser.
4. The method of claim 1, wherein the document comprises an electronic book.
5. The method of claim 1, wherein the document comprises a digital photo album containing one or more digital photos.
6. The method of claim 1, wherein the document comprises a web page.
7. The method of claim 1, wherein the document comprises a text document.
8. The method of claim 1, wherein the document comprises a multimedia document.
9. The method of claim 1, wherein the annotation comprises a text annotation.
10. The method of claim 9, wherein the text annotation comprises a comment related to the document.
11. The method of claim 1, wherein the annotation comprises a graphical annotation.
12. The method of claim 11, wherein the graphical annotation comprises a photograph.
13. The method of claim 1, wherein the annotation comprises an audio annotation.
14. The method of claim 1, wherein the annotation comprises a video annotation.
15. The method of claim 1, wherein the annotation comprises a multimedia annotation.
16. The method of claim 1, wherein the annotation comprises a discussion group related to the document.
17. The method of claim 1, wherein the input comprises an input to scroll the first portion of the display.
18. The method of claim 1, wherein the input comprises an input to navigate to a portion of the document containing the first indication.
19. The method of claim 1, wherein the first indication comprises a graphical indication.
20. The method of claim 1, wherein the first indication comprises an icon.
21. The method of claim 1, wherein receiving an annotation comprises receiving form data submitted by the user at the first client.
22. The method of claim 21, wherein receiving form data comprises receiving HTML form data.
23. The method of claim 1, wherein associating the annotation with a first indication in the document comprises: identifying a portion of the document to which the annotation relates; and associating the first indication with the portion of the document to which the annotation relates.
24. The method of claim 23, wherein the annotation comprises a discussion group related to the portion of the document.
25. The method of claim 23, the method further comprising adding the annotation to a data structure stored in memory, the data structure comprising a list of annotations relating to portions of one or more documents.
26. The method of claim 25, wherein the list of annotations comprises a list of bookmarks.
27. The method of claim 26, wherein selecting an annotation from the list of bookmarks displays, in the first portion of the display, at least a portion of a document to which the annotation is related and displays at least the selected annotation in the second portion of the display.
28. The method of claim 23, wherein associating the first indication comprises embedding the first indication in the portion of the document to which the annotation relates.
29. The method of claim 28, wherein embedding the first indication comprises: receiving location data related to the portion of the document; processing the location data to determine a first location within the document relative to a location of the portion within the document; and generating a new version of the document, the new version of the document containing the first indication embedded at the first location.
30. The method of claim 29, wherein the location data comprises one or more from the group comprising: a document identifier, a section identifier, a chapter identifier, a bookmark identifier, a portion length, and a portion offset.
31. The method of claim 29, the method further comprising replacing a first version of the document stored in memory with the new version of the document.
32. The method of claim 31, wherein replacing a first version of the document comprises overwriting a first version of the document.
33. The method of claim 1, wherein receiving an annotation comprises receiving an annotation related to an image contained in the document.
34. The method of claim 33, wherein receiving an annotation related to an image comprises receiving information identifying one or more subjects of the image.
35. The method of claim 34, the method further comprising associating the one or more subjects with the image.
36. The method of claim 35, wherein associating the one or more subjects with the image comprises updating a data structure stored in memory, the data structure storing associations between one or more images and one or more subjects of the one or more images.
37. The method of claim 1, wherein the annotation comprises a commercial offer.
38. The method of claim 37, wherein the commercial offer comprises an offer to purchase a product related to the document.
39. The method of claim 38, the method further comprising processing a request by a user at a client to purchase the product.
40. The method of claim 39, the method further comprising transmitting the product and the document to the user.
41. The method of claim 1, the method further comprising communicating, to a user at a client, an offer to purchase the document and a set of annotations related to the document.
42. The method of claim 41, wherein the set of annotations related to the document comprises a set of annotations selected by the user.
43. The method of claim 41, the method further comprising processing a user request to purchase the document and the set of annotations.
44. The method of claim 43, wherein processing the user request comprises printing the document and the set of annotations.
45. The method of claim 44, comprising, for each annotation related to a portion of the document, printing the annotation and the related portion of the document on the same page.
46. The method of claim 43, wherein processing the user request comprises transmitting the document and the set of annotations to the user.
47. The method of claim 1, the method further comprising: authenticating the user at a first client and authorizing the user at the first client to provide the annotation; and authenticating the user at the second client and authorizing the user at the second client to navigate the document.
48. A method for creating a custom book, the method comprising: storing original content on a server; receiving annotations to the original content from a plurality of users accessing the content through remote client devices, the annotations including indications of portions of the original content and annotation content, and storing the annotations in association with the original content on the server; allowing a first user to select from the original content and received annotations a set of material to be organized in a book; and at the first user's request, printing and binding the selected set of material to create the custom book.
49. The method of claim 48, wherein the original content comprises content generated by a party other than the first user or the plurality of users.
50. The method of claim 48, wherein the original content includes additional content elements uploaded to the server by a user.
51. The method of claim 50, wherein the additional content elements include pictures.
52. The method of claim 50, wherein the additional content elements replace part of the original content.
53. The method of claim 48, wherein allowing the first user to select comprises allowing the first user to select each received annotation individually.
54. The method of claim 48, wherein allowing the first user to select comprises allowing the first user to select criteria for selection of original content or the annotations.
55. The method of claim 54, wherein allowing the user to select criteria comprises allowing the user to select annotations based on the user or users from which they were received.
56. The method of claim 54, wherein allowing the user to select criteria comprises allowing the user to select annotations based on a type of annotation.
57. The method of claim 48, wherein receiving annotations from the plurality of users comprises receiving annotations over the internet.
58. The method of claim 57, wherein storing the original content comprises storing the content on a web server in the form of one or more web pages, and wherein receiving annotations comprises receiving annotations input by the plurality of users using web browsers on their remote client devices.
59. The method of claim 48, wherein receiving annotations comprises receiving one or more text annotations representing comments on the original content.
60. The method of claim 59, wherein receiving annotations comprises receiving one or more annotations related to an image contained in the original content.
61. The method of claim 60, wherein receiving one or more annotations related to an image comprises receiving information identifying one or more subjects of the image.
62. The method of claim 61, comprising associating the one or more identified subjects with the image.
63. The method of claim 48, wherein receiving annotations comprises receiving multimedia annotations including graphical data, audio data, or video data.
64. The method of claim 48, comprising communicating to the first user an offer to purchase the original content.
65. The method of claim 64, comprising processing a request by the first user to purchase the original content.
66. The method of claim 48, wherein printing comprises, for each annotation related to a portion of the original content, printing the annotation and related portion on the same page of the book.
67. The method of claim 48, comprising authenticating each of the plurality of users before receiving an annotation from that user.
68. A custom book created according to the method of claim 48.
69. The method of claim 48, wherein the original content includes one or more photos, and wherein the method comprises allowing the first user to select a photo to replace a photo from the original content.
70. The method of claim 69, wherein allowing the first user to select a photo comprises allowing the first user to select a photo supplied by the party providing the original content.
71. The method of claim 69, wherein allowing the first user to select a photo comprises allowing the first user to select a photo uploaded to the server by a user.
72. The method of claim 71, wherein the photo is uploaded to the server by the first party.
73. A method for allowing users of web browsers to annotate an arbitrary portion of a web page, the method comprising: allowing a user to select the arbitrary portion of the web page using a selection tool; capturing as metadata location data describing the user's selected portion; transmitting the metadata to a web server; and at the server, updating the web page to include an indication of the user's selected portion.
74. The method of claim 73, wherein the step of capturing is performed by JavaScript downloaded with the web page.
75. The method of claim 73, wherein allowing a user to select comprises allowing the user to select an arbitrary string of characters from text in the web page.
76. The method of claim 75, wherein capturing location data comprises capturing a paragraph identifier tag for a paragraph within the web page in which the user's selected character string is located.
77. The method of claim 75, wherein capturing location data comprises: capturing a starting point value representing an offset of the start of the user's selected character string from the start of a section of the web page; and capturing an ending point value representing an offset of the end of the user's selected character string from the start of the section of the web page or from the starting point value.
78. The method of claim 77, wherein capturing the starting and ending point values comprises capturing offsets as numbers of characters from the start of the section.
79. The method of claim 77, wherein capturing location data comprises capturing mouse clicks representing a beginning of the character string, a coordinate change in the character string, and an ending of the character string.
80. The method of claim 77, wherein the section is a paragraph represented in the web page by a paragraph identifier tag.
81. The method of claim 73, wherein allowing a user to select comprises allowing the user to select a portion of an image embedded in the web page.
82. The method of claim 81, wherein allowing the user to select comprises allowing the user to select using a cropping tool.
83. The method of claim 81, wherein capturing location data comprises capturing x,y coordinates of the selected portion of the image.
84. The method of claim 83, wherein allowing the user to select comprises allowing the user to select using a rectangular cropping tool, and wherein capturing x,y coordinates comprises capturing x,y coordinates of the corners of a rectangle created using the rectangular cropping tool.
85. The method of claim 73, wherein transmitting the metadata to the web server comprises uploading the metadata to the web server as an attribute of an input form.
86. The method of claim 85, comprising allowing a user to input content for the annotation in the input form.
87. The method of claim 73, comprising transmitting with the metadata, to the web server, content provided by the user for an annotation.
88. The method of claim 73, comprising automatically modifying the captured location data for an annotation in accordance with pre-set criteria.
89. The method of claim 88, wherein modifying the location data comprises expanding the location data.
90. The method of claim 89, wherein allowing a user to select comprises allowing the user to select an arbitrary string of characters from text in the web page, and wherein expanding the location data comprises expanding location data to include whole words in which characters from the word are included in the user's selected character string.
91. The method of claim 88, wherein modifying the location data comprises limiting the number of annotations allowed from multiple users related to the same selected portion of the web page.
92. The method of claim 73, wherein capturing location data comprises capturing a paragraph identifier tag for a paragraph within the web page in which the user's selected portion is located.
93. The method of claim 73, wherein capturing location data comprises: capturing a starting point value representing an offset of the start of the user's selected portion from the start of a section of the web page; and capturing an ending point value representing an offset of the end of the user's selected portion from the start of the section of the web page or from the starting point value.
94. The method of claim 93, wherein capturing the starting and ending point values comprises capturing offsets as numbers of characters from the start of the section.
95. The method of claim 93, wherein capturing the location data comprises capturing mouse clicks representing a beginning of the selected portion, a coordinate change in the selected portion, and an ending of the selected portion.
96. The method of claim 93, wherein the section is a paragraph represented in the web page by a paragraph identifier tag.
PCT/US2005/031966 2004-09-08 2005-09-07 Creating an annotated web page WO2006029259A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05794986A EP1800222A4 (en) 2004-09-08 2005-09-07 Shared annotation system and method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US10/936,788 US20070118794A1 (en) 2004-09-08 2004-09-08 Shared annotation system and method
US10/936,788 2004-09-08
US11/099,768 US20060053364A1 (en) 2004-09-08 2005-04-06 System and method for arbitrary annotation of web pages copyright notice
US11/099,817 US7506246B2 (en) 2004-09-08 2005-04-06 Printing a custom online book and creating groups of annotations made by various users using annotation identifiers before the printing
US11/099,768 2005-04-06
US11/099,817 2005-04-06

Publications (2)

Publication Number Publication Date
WO2006029259A2 true WO2006029259A2 (en) 2006-03-16
WO2006029259A3 WO2006029259A3 (en) 2006-10-26

Family

ID=36036997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/031966 WO2006029259A2 (en) 2004-09-08 2005-09-07 Creating an annotated web page

Country Status (2)

Country Link
EP (1) EP1800222A4 (en)
WO (1) WO2006029259A2 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008031625A3 (en) * 2006-09-15 2008-12-11 Exbiblio Bv Capture and display of annotations in paper and electronic documents
US7975215B2 (en) 2007-05-14 2011-07-05 Microsoft Corporation Sharing editable ink annotated images with annotation-unaware applications
WO2012016505A1 (en) * 2010-08-02 2012-02-09 联想(北京)有限公司 File processing method and file processing device
US8261094B2 (en) 2004-04-19 2012-09-04 Google Inc. Secure data gathering from rendered documents
US8346620B2 (en) 2004-07-19 2013-01-01 Google Inc. Automatic modification of web pages
US8442331B2 (en) 2004-02-15 2013-05-14 Google Inc. Capturing text from rendered documents using supplemental information
WO2013070422A1 (en) * 2011-11-07 2013-05-16 Thomson Reuters Global Resources Systems, methods, and interfaces for providing electronic book versions within an access device
US8489624B2 (en) 2004-05-17 2013-07-16 Google, Inc. Processing techniques for text capture from a rendered document
WO2015002585A1 (en) * 2013-07-03 2015-01-08 Telefonaktiebolaget L M Ericsson (Publ) Providing an electronic book to a user equipment
US9030699B2 (en) 2004-04-19 2015-05-12 Google Inc. Association of a portable scanner with input/output and storage devices
US9075779B2 (en) 2009-03-12 2015-07-07 Google Inc. Performing actions based on capturing information from rendered documents, such as documents under copyright
US9081799B2 (en) 2009-12-04 2015-07-14 Google Inc. Using gestalt information to identify locations in printed information
US9680697B2 (en) 2013-12-17 2017-06-13 International Business Machines Corporation Dynamic product installation based on user feedback
US10769431B2 (en) 2004-09-27 2020-09-08 Google Llc Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
CN113867867A (en) * 2021-09-28 2021-12-31 北京达佳互联信息技术有限公司 Interface processing method and device, electronic equipment and computer readable storage medium

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7707039B2 (en) 2004-02-15 2010-04-27 Exbiblio B.V. Automatic modification of web pages
US10635723B2 (en) 2004-02-15 2020-04-28 Google Llc Search engines and systems with handheld document data capture devices
US20060041484A1 (en) 2004-04-01 2006-02-23 King Martin T Methods and systems for initiating application processes by data capture from rendered documents
US7812860B2 (en) 2004-04-01 2010-10-12 Exbiblio B.V. Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
US7894670B2 (en) 2004-04-01 2011-02-22 Exbiblio B.V. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US8793162B2 (en) 2004-04-01 2014-07-29 Google Inc. Adding information or functionality to a rendered document via association with an electronic counterpart
US8146156B2 (en) 2004-04-01 2012-03-27 Google Inc. Archive of text captures from rendered documents
US8621349B2 (en) 2004-04-01 2013-12-31 Google Inc. Publishing techniques for adding value to a rendered document
US9143638B2 (en) 2004-04-01 2015-09-22 Google Inc. Data capture from rendered documents using handheld device
US9116890B2 (en) 2004-04-01 2015-08-25 Google Inc. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US20060081714A1 (en) 2004-08-23 2006-04-20 King Martin T Portable scanning device
WO2008028674A2 (en) 2006-09-08 2008-03-13 Exbiblio B.V. Optical scanners, such as hand-held optical scanners
US20070300142A1 (en) 2005-04-01 2007-12-27 King Martin T Contextual dynamic advertising based upon captured rendered text
US20080313172A1 (en) 2004-12-03 2008-12-18 King Martin T Determining actions involving captured information and electronic content associated with rendered documents
US8713418B2 (en) 2004-04-12 2014-04-29 Google Inc. Adding value to a rendered document
US8620083B2 (en) 2004-12-03 2013-12-31 Google Inc. Method and system for character recognition
US8874504B2 (en) 2004-12-03 2014-10-28 Google Inc. Processing techniques for visual capture data from a rendered document
CN105930311B (en) 2009-02-18 2018-10-09 谷歌有限责任公司 Execute method, mobile device and the readable medium with the associated action of rendered document
WO2010105245A2 (en) 2009-03-12 2010-09-16 Exbiblio B.V. Automatically providing content associated with captured information, such as information captured in real-time
US9323784B2 (en) 2009-12-09 2016-04-26 Google Inc. Image search using text-based elements within the contents of images

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5202828A (en) * 1991-05-15 1993-04-13 Apple Computer, Inc. User interface system having programmable user interface elements
US6081829A (en) * 1996-01-31 2000-06-27 Silicon Graphics, Inc. General purpose web annotations without modifying browser
JP3085245B2 (en) * 1997-05-14 2000-09-04 日本電気株式会社 Document management method, document management system, and machine-readable recording medium recording program
US6687878B1 (en) * 1999-03-15 2004-02-03 Real Time Image Ltd. Synchronizing/updating local client notes with annotations previously made by other clients in a notes database
US6507865B1 (en) * 1999-08-30 2003-01-14 Zaplet, Inc. Method and system for group content collaboration
US6859909B1 (en) * 2000-03-07 2005-02-22 Microsoft Corporation System and method for annotating web-based documents
US20020026398A1 (en) * 2000-08-24 2002-02-28 Sheth Beerud D. Storefront for an electronic marketplace for services

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP1800222A4 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8442331B2 (en) 2004-02-15 2013-05-14 Google Inc. Capturing text from rendered documents using supplemental information
US8261094B2 (en) 2004-04-19 2012-09-04 Google Inc. Secure data gathering from rendered documents
US9030699B2 (en) 2004-04-19 2015-05-12 Google Inc. Association of a portable scanner with input/output and storage devices
US8489624B2 (en) 2004-05-17 2013-07-16 Google, Inc. Processing techniques for text capture from a rendered document
US8346620B2 (en) 2004-07-19 2013-01-01 Google Inc. Automatic modification of web pages
US10769431B2 (en) 2004-09-27 2020-09-08 Google Llc Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
WO2008031625A3 (en) * 2006-09-15 2008-12-11 Exbiblio Bv Capture and display of annotations in paper and electronic documents
US7975215B2 (en) 2007-05-14 2011-07-05 Microsoft Corporation Sharing editable ink annotated images with annotation-unaware applications
US9075779B2 (en) 2009-03-12 2015-07-07 Google Inc. Performing actions based on capturing information from rendered documents, such as documents under copyright
US9081799B2 (en) 2009-12-04 2015-07-14 Google Inc. Using gestalt information to identify locations in printed information
WO2012016505A1 (en) * 2010-08-02 2012-02-09 联想(北京)有限公司 File processing method and file processing device
US10210148B2 (en) 2010-08-02 2019-02-19 Lenovo (Beijing) Limited Method and apparatus for file processing
US8977952B2 (en) 2011-11-07 2015-03-10 Thomson Reuters Global Resources Electronic book version and annotation maintenance
CN103988197B (en) * 2011-11-07 2018-06-26 汤姆森路透社全球资源公司 For providing system, method and the interface of e-book version in access mechanism
CN103988197A (en) * 2011-11-07 2014-08-13 汤姆森路透社全球资源公司 Systems, methods, and interfaces for providing electronic book versions within an access device
WO2013070422A1 (en) * 2011-11-07 2013-05-16 Thomson Reuters Global Resources Systems, methods, and interfaces for providing electronic book versions within an access device
WO2015002585A1 (en) * 2013-07-03 2015-01-08 Telefonaktiebolaget L M Ericsson (Publ) Providing an electronic book to a user equipment
US9680697B2 (en) 2013-12-17 2017-06-13 International Business Machines Corporation Dynamic product installation based on user feedback
US10594550B2 (en) 2013-12-17 2020-03-17 International Business Machines Corporation Dynamic product installation based on user feedback
US11502899B2 (en) 2013-12-17 2022-11-15 International Business Machines Corporation Dynamic product installation based on user feedback
CN113867867A (en) * 2021-09-28 2021-12-31 北京达佳互联信息技术有限公司 Interface processing method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
EP1800222A2 (en) 2007-06-27
EP1800222A4 (en) 2009-08-05
WO2006029259A3 (en) 2006-10-26

Similar Documents

Publication Publication Date Title
US7506246B2 (en) Printing a custom online book and creating groups of annotations made by various users using annotation identifiers before the printing
WO2006029259A2 (en) Creating an annotated web page
JP6095596B2 (en) Rendering the visual column of the document with supplemental information content
US10853560B2 (en) Providing annotations of a digital work
US7747941B2 (en) Webpage generation tool and method
US8131647B2 (en) Method and system for providing annotations of a digital work
US10346525B2 (en) Electronic newspaper
US9275021B2 (en) System and method for providing a two-part graphic design and interactive document application
US9372835B2 (en) System and method for presentation creation
US20080092054A1 (en) Method and system for displaying photos, videos, rss and other media content in full-screen immersive view and grid-view using a browser feature
US20180165255A1 (en) System and method to facilitate content distribution
KR100955750B1 (en) System and method for providing multiple renditions of document content
KR100496981B1 (en) A PDF Document Providing Method Using XML
JP2011103035A (en) Written information management system, written information management method, and written information management program
JP2005208768A (en) Image page preparation support device
WO2001052032A1 (en) Method and apparatus for displaying, retrieving, filing and organizing various kinds of data and images

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2005794986

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2005794986

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2005794986

Country of ref document: EP