US8509563B2 - Generation of documents from images - Google Patents

Generation of documents from images

Info

Publication number: US8509563B2
Application number: US11/275,908
Other versions: US20070177183A1
Authority: US (United States)
Prior art keywords: document, images, image, module, composite image
Inventors: Merle Michael Robinson, Matthieu T. Uyttendaele, Zhengyou Zhang, Patrice Y. Simard
Original assignee: Microsoft Corp.
Current assignee: Microsoft Technology Licensing, LLC
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities

Definitions

  • The document type module 240 identifies the type of hard copy document, or the types of different portions of the hard copy document. It may work with the element recognition module 245 to enable the element recognition module to better recognize text and elements, and to enable the document generation system 210 to more accurately recreate a soft copy document from a hard copy document.
  • The document type module 240 may use multiple features or patterns of the image, images, or composite image to identify the type of document, or the types of sections of the document. Recognized types could include, without limitation, some or all of the following: word processor documents; spreadsheets; presentation documents; and forms, including forms like facsimile transmittal forms and business cards, and user-defined forms, for example for use with surveys. For additional information about the different kinds of recognized elements, refer to the following discussion of the element recognition module 245.
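The patent leaves the classification method open. Purely as an illustration of using "multiple features or patterns" to guess a document type, the sketch below derives a few layout statistics from OCR word boxes and applies simple rules; the feature set, thresholds, and the classify_document_type helper are invented for this example and are not the patented approach.

```python
# Toy sketch: guess a document type from OCR layout features.
# The features, thresholds, and labels are invented for illustration.
import pytesseract
from PIL import Image


def classify_document_type(image_path: str) -> str:
    data = pytesseract.image_to_data(
        Image.open(image_path), output_type=pytesseract.Output.DICT
    )
    words = [w for w in data["text"] if w.strip()]
    if not words:
        return "unknown"

    numeric = sum(w.replace(",", "").replace(".", "").isdigit() for w in words)
    heights = [h for h, w in zip(data["height"], data["text"]) if w.strip()]
    mean_height = sum(heights) / len(heights)

    if numeric / len(words) > 0.5:             # mostly numbers: printed spreadsheet
        return "spreadsheet"
    if len(words) < 60 and mean_height > 40:   # few, large words: presentation slide
        return "presentation"
    return "word processor document"           # default: ordinary page of text
```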
  • The element recognition module 245 recognizes a wide variety of elements in the image or composite image. These elements can then be used by the document generation system 210 to generate a soft copy document containing some or all of the recognized elements.
  • The element recognition module 245 may simply recognize text, perhaps by implementing one or more optical character recognition (OCR) algorithms.
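As a concrete illustration of this simplest case, the following sketch runs an off-the-shelf OCR engine over a page image. It assumes the Tesseract engine and its pytesseract wrapper are installed; the patent does not specify any particular OCR implementation.

```python
# Minimal sketch: extract plain text from a page image with Tesseract OCR.
import pytesseract
from PIL import Image

page = Image.open("composite_page.png")   # hypothetical composite page image
text = pytesseract.image_to_string(page)  # plain text only, no "super-elements"
print(text)
```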
  • The element recognition module 245 may also recognize additional "super-elements," aside from text.
  • As used here, a "super-element" is an element that has some additional meaning or structure, beyond simple text.
  • A super-element may be composed, in whole or in part, of text, but is not composed solely of text.
  • Some examples of super-elements are images with metadata, such as images with captions; bulleted lists; equations; annotations; tables; charts; and forms.
  • In some cases, the element recognition module may recognize a super-element solely from the image data contained in the image or composite image.
  • In other cases, the element recognition module may also use other information, including information generated by other modules, such as the document type module 240.
  • For example, if the document type module 240 identifies the images as containing slides in a presentation, the element recognition module may use this information to recognize text and graphics as elements common to presentations, like slides or charts, and not just as a series of unrelated text and images.
  • Similarly, if the identified document type indicates tabular data, the element recognition module may use this information to recognize numbers and text as part of a table, and not just as a series of unrelated numbers, text, and white space.
  • The element recognition module 245 may recognize numerous super-elements in addition to text and simple images. The following paragraphs discuss some of these super-elements in detail.
  • The element recognition module 245 may recognize a graphic or image and a caption associated with the graphic or image.
  • The recognized super-element in this case might be a captioned graphic.
  • The recognized graphic or image is a part of the image used for soft copy document generation.
  • For example, an image of a magazine page provided by the image acquisition module 220 might contain text as well as graphics or images, such as photographs printed on the magazine page.
  • Some of the super-elements in this case might be the photographs on the magazine page, with any associated captions.
  • Tables or spreadsheets might be recognized, for example and without limitation, in cases when images of a printed spreadsheet are used. They might also be recognized in other cases, such as when an image of a magazine page that contains tables is used.
  • The element recognition module 245 may generate soft copy tables or spreadsheets, perhaps according to a user choice. For example, if a printed spreadsheet is photographed, the user may choose to generate a soft copy spreadsheet document. Alternatively, if a magazine page with tables is photographed, the user may choose to generate a soft copy word processor document with embedded tables. Beyond recognition of tables or spreadsheets, the element recognition module may be able to impute logic to the soft copy spreadsheet, table, or other element it generates.
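One way (of many) to approximate table recognition is to group OCR word boxes into rows by vertical position, which can then be written into a spreadsheet or a word processor table. The sketch below is a rough, hypothetical illustration: it uses pytesseract's word-level output, and the row tolerance is an arbitrary value.

```python
# Rough sketch: group OCR word boxes into rows, as a first step toward
# rebuilding a table. The row tolerance is an arbitrary illustrative value.
import pytesseract
from PIL import Image


def words_to_rows(image_path: str, row_tol: int = 10) -> list[list[str]]:
    d = pytesseract.image_to_data(
        Image.open(image_path), output_type=pytesseract.Output.DICT
    )
    boxes = [
        (d["top"][i], d["left"][i], d["text"][i])
        for i in range(len(d["text"]))
        if d["text"][i].strip()
    ]
    boxes.sort()                                  # top-to-bottom, then left-to-right
    rows, current, last_top = [], [], None
    for top, left, word in boxes:
        if last_top is not None and abs(top - last_top) > row_tol:
            rows.append([w for _, w in sorted(current)])  # flush previous row
            current = []
        current.append((left, word))
        last_top = top
    if current:
        rows.append([w for _, w in sorted(current)])
    return rows   # each inner list holds the words of one table row, in order
```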
  • In this context, a "form" is part of a document that contains some known fields or structure.
  • When a hard copy document contains a form, the element recognition module 245 may be able to recognize specific elements of the document. When these specific elements are recognized, the document generation system 210 may use them to generate a soft copy document, like a word processor document. Recognized elements may also be used to, for example, update a database.
  • Because the fields of a form are known, the element recognition module may also have the ability to improve recognition accuracy by ignoring text or other markings outside the context of the known data on the form.
  • Forms may follow any format and contain any fields. Some examples of forms are facsimile transmittal forms, business cards, and the like, as well as custom or user-defined forms.
  • For a facsimile transmittal form, for example, the element recognition module 245 may look specifically for fields such as "To," "From," "Fax Number," "Subject," and so on.
  • For a business card, the element recognition module may look for text that contains the name of a person, the name of a position (such as "President"), the name of a company, and so on.
  • A database (for example, a database of sent or received facsimiles, or a database of contacts like that maintained by personal organizer or customer relationship management software) may then be updated with the data recognized in the hard copy document.
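For a known form such as a facsimile transmittal sheet, field extraction over the OCR text can be as simple as pattern matching. The sketch below is a hypothetical illustration, not the patent's method; the field patterns and the extract_fax_fields helper are invented.

```python
# Sketch: pull known fields ("To", "From", "Fax Number", "Subject")
# out of OCR text from a fax transmittal form. Patterns are illustrative.
import re

FIELD_PATTERNS = {
    "to": r"^\s*To\s*[:\-]\s*(.+)$",
    "from": r"^\s*From\s*[:\-]\s*(.+)$",
    "fax_number": r"^\s*Fax\s*(?:Number|No\.?)\s*[:\-]\s*(.+)$",
    "subject": r"^\s*(?:Subject|Re)\s*[:\-]\s*(.+)$",
}


def extract_fax_fields(ocr_text: str) -> dict:
    fields = {}
    for line in ocr_text.splitlines():
        for name, pattern in FIELD_PATTERNS.items():
            m = re.match(pattern, line, flags=re.IGNORECASE)
            if m and name not in fields:
                fields[name] = m.group(1).strip()
    return fields  # e.g. {"to": "Jane Doe", "fax_number": "555-0100", ...}
```

The extracted values could then be used to update, for example, a database of sent or received facsimiles, as described above.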
  • The element recognition module may also provide the ability to recognize the fields or structure of custom or user-defined forms.
  • In such cases, a user may provide a template that denotes information like the name and location of specified fields.
  • The element recognition module 245 may then use this template to recognize elements in associated hard copy documents.
  • One situation in which this type of functionality may be useful is in digitizing survey information that exists in hard copy form.
  • A survey company, for example, might print paper copies of a survey with elements like checkboxes, multiple choice questions, and free text entry questions. It might also create a template, in a form required by the element recognition module, which names the questions, specifies the format of the questions, and so on. Given this template, the element recognition module might then interpret images of completed hard copy survey forms and, perhaps in concert with the document generation module 260, update a database with the answers to the survey questions.
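A user-defined template might be encoded as little more than a mapping from field names to page regions and field types. The sketch below shows one hypothetical encoding: it crops each region from an already-deskewed form image, OCRs text fields, and treats a checkbox as ticked when enough of its region is dark. The template layout, fill threshold, and helper names are assumptions, not details from the patent.

```python
# Sketch: template-driven extraction from a completed survey form.
# Template format, threshold, and names are invented for illustration.
import numpy as np
import pytesseract
from PIL import Image

# Hypothetical template: field name -> (left, top, right, bottom, kind)
TEMPLATE = {
    "respondent_name": (100, 120, 600, 160, "text"),
    "q1_yes":          (100, 300, 130, 330, "checkbox"),
    "q2_free_text":    (100, 400, 700, 520, "text"),
}


def read_form(image_path: str, fill_threshold: float = 0.2) -> dict:
    form = Image.open(image_path).convert("L")    # grayscale, deskewed page
    answers = {}
    for field, (l, t, r, b, kind) in TEMPLATE.items():
        region = form.crop((l, t, r, b))
        if kind == "checkbox":
            dark = np.asarray(region) < 128       # dark pixels = ink
            answers[field] = dark.mean() > fill_threshold
        else:
            answers[field] = pytesseract.image_to_string(region).strip()
    return answers   # could then be written to a survey-results database
```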
  • The element recognition module 245 may recognize a set of text as a bulleted list and enable the document generation system to generate, for example, a word processor document that contains a bulleted list, rather than simply rows of text.
  • As a result, the word processor can manipulate the bulleted list as a bulleted list, instead of, for example, as rows of text preceded by a graphic that looks like a text bullet.
  • The element recognition module 245 may also recognize other super-elements including, without limitation, equations and charts, in addition to super-elements like tables and spreadsheets, which can be embedded in a soft copy document.
  • For example, an image might contain an organizational chart that shows the reporting relationship of people in a company. This chart might be composed of squares containing text and lines that join the squares. In some instances, the element recognition module might recognize this chart as a single atomic graphic and do no further recognition. In other instances, the element recognition module might recognize the chart as a drawing with squares, lines, and text, and enable the editing of the resulting soft copy drawing in, for example, an application with graphics editing functionality. In yet other instances, the element recognition module might recognize the chart as an organizational chart, and enable editing of the resulting soft copy chart in an application with specific functionality directed toward the creation and editing of organizational charts.
  • The element recognition module 245 may also be able to recognize handwritten text or other figures. In some instances, such handwritten elements of the hard copy may be recognized as graphics or images and not further interpreted. In other instances, handwritten content may be recognized as text. In either case, the recognized super-element may be output as part of the generated document as an annotation or note, or as part of the document itself. For example, when the element recognition module interprets an image of a page of printed text with some handwritten notes, it may incorporate the recognized handwritten text into the document itself. In other cases, it may interpret the handwritten text but keep it separate from the main flow of the document—for example, by using "Note" or "Annotation" functionality in the generated soft copy document. Handwritten drawings may be maintained as graphics or images and not further interpreted, or may be interpreted, for example, in the same way a printed chart or other super-element is recognized.
  • The element recognition module 245 may also maintain specific features of the recognized elements.
  • For text, the element recognition module may recognize the following features, without limitation: font name, font style (bold, underline, and so on), font size, text color, and so on.
  • For other elements, such as graphics, the element recognition module may recognize features, again without limitation, like color and shape.
  • In some cases, the element recognition module 245 may not be able to recognize some elements automatically with a sufficient or defined level of certainty.
  • In such cases, the recognition choices module 250 may be used to present the user with a choice between multiple possible elements. For example and without limitation, if the element recognition module determines that there is a 60% chance that a particular element should be recognized as one element, and a 40% chance that it should be recognized as another element, the recognition choices module may enable the user to choose between the two likely elements.
  • The recognition choices module 250 may display a dialog that shows the relevant part of the image being processed by the element recognition module, as well as the elements that have been identified by the element recognition module as possible matches, perhaps with the most likely element selected by default. The user may then choose between the presented options, and the document generation system 210 may use the selected option to determine which element to include in the generated soft copy document.
  • In some implementations, any operation of the recognition choices module 250 (or, for that matter, any operation of a module that requires user input) may be automated so that no user input is required.
  • In such implementations, the most likely candidate element may be chosen automatically, even if there is only, for example, a 51% chance that the most likely candidate element is the correct element.
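The behaviour described above amounts to a confidence gate: accept the top candidate automatically when its score clears a threshold (or when running unattended), otherwise ask the user with the best candidate preselected. A minimal sketch, with an invented prompt_user callback and an assumed threshold:

```python
# Sketch: choose a recognized element automatically or defer to the user.
# The threshold value and the prompt_user callback are illustrative assumptions.
from typing import Callable, Sequence, Tuple

Candidate = Tuple[str, float]   # (element description, probability)


def resolve_element(
    candidates: Sequence[Candidate],
    prompt_user: Callable[[Sequence[Candidate]], Candidate],
    threshold: float = 0.9,
    unattended: bool = False,
) -> Candidate:
    best = max(candidates, key=lambda c: c[1])
    # Unattended mode: always take the most likely candidate (even at 51%).
    if unattended or best[1] >= threshold:
        return best
    # Otherwise show the likely candidates (e.g. 60% vs 40%) and let the
    # user pick, with the most likely one listed first / preselected.
    return prompt_user(sorted(candidates, key=lambda c: -c[1]))
```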
  • The background module 235 identifies and manipulates background images or data present in the images being processed. For example, each slide in a hard copy presentation with multiple slides might have the same background.
  • The background module may identify this background so that it can be manipulated separately from the text or other elements in the foreground of the slide. For example, it might identify and remove any background image or images before using modules like the element recognition module, and may then add the background image or images to the generated soft copy document after element recognition is complete.
  • The background module may identify the background image or images by using multiple images, by, for example, deducing that similar elements occurring in all of the images are part of the background.
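One common way to deduce the shared background is a per-pixel median across aligned slide images: foreground text varies from slide to slide and is rejected, while the common background survives. The sketch below assumes the slide images are already aligned and equally sized; it illustrates the idea and is not the patented method.

```python
# Sketch: estimate a shared slide background as the per-pixel median of
# several aligned, same-size slide images, then suppress it. Illustrative only.
import cv2
import numpy as np


def estimate_background(image_paths: list[str]) -> np.ndarray:
    slides = [cv2.imread(p) for p in image_paths]           # BGR uint8 images
    stack = np.stack(slides).astype(np.float32)             # shape (N, H, W, 3)
    return np.median(stack, axis=0).astype(np.uint8)


def remove_background(slide: np.ndarray, background: np.ndarray) -> np.ndarray:
    # Keep only pixels that differ noticeably from the background, so element
    # recognition sees the foreground text and graphics. Threshold is assumed.
    diff = cv2.absdiff(slide, background)
    mask = (diff.max(axis=2) > 30).astype(np.uint8) * 255
    return cv2.bitwise_and(slide, slide, mask=mask)
```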
  • The document generation module 260 generates one or more soft copy documents, of one or more document formats.
  • For example, the document generation module may generate one or more word processor documents, spreadsheet documents, presentation documents, text documents, portable document format documents, and so on.
  • A word processor document may be, for example and without limitation, a Microsoft Word document, suitable for use in the Microsoft Word application, which is produced by Microsoft Corporation of Redmond, Wash. Such a word processor document might also be suitable for use in any other application that can view or edit Microsoft Word documents.
  • A spreadsheet document may be, for example and without limitation, a Microsoft Excel document, suitable for use in the Microsoft Excel application, also produced by Microsoft Corporation of Redmond, Wash.
  • A presentation document may be a Microsoft PowerPoint document, and a portable document format document might be a Portable Document Format (PDF) document, suitable for use with an Adobe Acrobat application, produced by Adobe Systems Incorporated of San Jose, Calif.
  • The particular document format or formats used by the document generation module may be determined in various ways. For example, they may be determined automatically by the document generation system 210, or may be the result of user choice. In some implementations, the document generation system 210 may default to generating a word processor document when the recognized elements include text. In other implementations, the user may choose the type or types of documents to be generated.
  • The document generation module 260 may generate a single soft copy document with multiple pages, or multiple documents each with a single page, or multiple documents with one or more pages, depending on the nature of the images from which the documents are generated, user choice, or other input.
  • Generated soft copy documents may be the same as any other soft copy documents of the same particular format. Such documents may be manipulated, transmitted, or otherwise used in any manner, in the same fashion as soft copy documents generated, for example, from scratch using an appropriate application.
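As one concrete (non-patent) illustration of this last step, the sketch below uses the python-docx library to emit a Word-compatible document from already-recognized elements: paragraphs of text, a table, and a captioned image. The intermediate elements structure is an invented representation for the example; a spreadsheet or presentation could be produced analogously with libraries such as openpyxl or python-pptx.

```python
# Sketch: write recognized elements into a .docx file with python-docx.
# The `elements` structure is an invented intermediate representation.
from docx import Document
from docx.shared import Inches


def generate_docx(elements: list[dict], out_path: str = "generated.docx") -> None:
    doc = Document()
    for el in elements:
        if el["kind"] == "text":
            doc.add_paragraph(el["text"])
        elif el["kind"] == "table":
            rows = el["rows"]                      # list of lists of cell strings
            table = doc.add_table(rows=len(rows), cols=len(rows[0]))
            for r, row in enumerate(rows):
                for c, cell in enumerate(row):
                    table.cell(r, c).text = cell
        elif el["kind"] == "image":
            doc.add_picture(el["path"], width=Inches(4))
            if el.get("caption"):
                doc.add_paragraph(el["caption"])   # caption kept with its image
    doc.save(out_path)
```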
  • The document generation module 260 might also take other actions with the information produced by the document generation system 210.
  • For example, the document generation module might update a database, send email, or take any other action based on the nature of the recognized information.
  • Turning now to FIG. 3, illustrated therein is an exemplary generalized operational flow 300 including various operations that may be performed when generating documents.
  • The following description of FIG. 3 is made with reference to the system 200 of FIG. 2.
  • However, the operational flow described with respect to FIG. 3 is not intended to be limited to use with the elements of the system 200.
  • In addition, although the exemplary operational flow 300 indicates a particular order of operation execution, in one or more alternative implementations the operations may be ordered differently.
  • Furthermore, while the exemplary operational flow contains multiple discrete steps, it should be recognized that in some implementations some or all of these operations may be combined or executed contemporaneously.
  • In the acquire images operation 310, the image acquisition module 220 acquires one or more images of the hard copy document or documents from which the soft copy document or documents are to be generated.
  • In some implementations, this operation involves taking one or more photographs of the hard copy document or documents; in others, it involves scanning hard copy documents using a scanner, and so on. If the image acquisition module cannot acquire single images with sufficient quality for document generation, this operation may require acquiring multiple images of the hard copy document, which may then be further manipulated in succeeding operations, like operation 314, where multiple images may be combined to form a single composite image.
  • While images are being acquired, the image feedback module 255 may provide feedback to the user. For example, if a user takes multiple photographs of a single page of a hard copy document using a digital camera, the image feedback module may indicate to the user which parts of the hard copy page have not yet been photographed, or have not yet been photographed with sufficient detail to enable document generation.
  • In the process images operation 312, the image processing module 225 processes the images acquired in operation 310 to improve the quality of the images for use in generating soft copy documents.
  • This image processing may include removal of keystone effects, removal or minimization of shadows, and so on.
  • In some implementations, this operation may be performed using the composite image or images generated by operation 314; in such implementations, operation 312 may be performed after operation 314, or may be performed, at least in part, both before and after operation 314.
  • In operation 314, the image creation module 230 may combine multiple images acquired during operation 310 into one or more composite images that better represent a page or section of a hard copy document. For example, when multiple images of a single page have been acquired in operation 310 to ensure that the resolution of the images is high enough to enable document generation, this operation may combine the images and create a single composite image. In instances where a single image acquired in operation 310 has sufficient resolution to enable document generation, this operation may generate a composite image that is the same as the acquired or the processed image. In an alternative embodiment, when a single image is sufficient for document generation, this operation may not be necessary and subsequent operations may work with the image acquired in the acquire images operation 310 or the process images operation 312. Further, unless otherwise noted, the composite image or images generated by this operation may be used interchangeably with images acquired in the acquire images operation 310, or processed by the process images operation 312.
  • In the identify background operation 316, the background module 235 may use one or more images resulting from operations 310, 312, and 314 to identify background graphics, images, or other content.
  • Operation 316 may also remove the identified background from the images to be used by some subsequent operations, like the recognize elements operation 320.
  • The background may then be added to the soft copy documents, for example, in the add background operation 324.
  • The document type module 240 may recognize the type or types of hard copy documents or sections represented by the images. Information about the type or types of documents or sections may be used in subsequent operations, including, for example, the recognize elements operation 320.
  • In the recognize elements operation 320, the element recognition module 245 recognizes text and other elements, including "super-elements," as explained previously with reference to the element recognition module 245.
  • This operation may recognize, for example and without limitation, text, simple images, images with captions or other additional data, bulleted lists, equations, annotations, tables, spreadsheets, presentations, charts, forms, and so on.
  • In some implementations, the recognition choices module 250 may present one or more choices to the user for elements that cannot be recognized automatically with a sufficient level of certainty, to enable the user to choose the most appropriate element. In other implementations, such as those where large numbers of hard copy documents are processed without human intervention, this operation may not be executed.
  • In the add background operation 324, background graphics, images, or other content that was identified and removed, for example in the identify background operation 316, may be added by the background module 235 to the soft copy document or documents to be generated.
  • Finally, the document generation module 260 may generate one or more soft copy documents using the information acquired in previous operations.
  • The generated soft copy documents may be of many forms.
  • The document generation module may also perform actions aside from generating traditional documents, including, for example, updating databases using the information identified in previous operations.
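Putting the flow together, a host application might drive the modules roughly as sketched below. The module implementations are passed in as callables because the patent leaves them open; every name here is a hypothetical placeholder, and only the operation numbers (310, 312, 314, 316, 320, 324) come from the flow described above.

```python
# Sketch of operational flow 300 as a driver. Module implementations are
# injected as callables; nothing here is the patented algorithm itself.
from dataclasses import dataclass
from typing import Any, Callable, Sequence


@dataclass
class Modules:
    acquire: Callable[[Sequence[str]], list]        # image acquisition 220, op 310
    process: Callable[[Any], Any]                   # image processing 225, op 312
    compose: Callable[[list], list]                 # image creation 230, op 314
    find_background: Callable[[list], Any]          # background module 235, op 316
    strip_background: Callable[[Any, Any], Any]
    doc_type: Callable[[list], str]                 # document type module 240
    recognize: Callable[[Any, str], list]           # element recognition 245, op 320
    resolve: Callable[[list], list]                 # recognition choices 250
    generate: Callable[[list, str, Any], list]      # doc generation 260 (+ op 324)


def run_flow_300(sources: Sequence[str], m: Modules) -> list:
    images = [m.process(im) for im in m.acquire(sources)]       # ops 310, 312
    pages = m.compose(images)                                   # op 314
    background = m.find_background(pages)                       # op 316
    pages = [m.strip_background(p, background) for p in pages]
    kind = m.doc_type(pages)
    elements = m.resolve(
        [e for p in pages for e in m.recognize(p, kind)]        # op 320
    )
    return m.generate(elements, kind, background)               # op 324 + generation
```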

Abstract

A system for generating soft copy (digital) versions of hard copy documents uses images of the hard copy documents. The images may be captured using a device suitable for capturing images, like a camera phone. Once available, the images may be processed to improve their suitability for document generation. The images may then be processed to recognize and generate soft copy versions of the documents represented by the images.

Description

BACKGROUND
Information contained in hard copy documents—for example, in magazine articles, printed photographs, books, newspapers, and so on—cannot easily be used in a digital environment—for example, with a desktop or laptop computer, a phone or personal digital assistant (PDA), or a data network like the Internet. While some documents can be scanned or otherwise digitized to create a soft copy or digital document, the use of present devices, like scanners or photocopy machines, can be difficult and time-consuming enough that many people choose instead to re-enter data contained in hard copy documents by hand.
SUMMARY
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and does not identify key or critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
Described herein are various technologies and techniques directed to generating soft copy (digital) documents using images of hard copy documents. More particularly, described herein are, among other things, systems, methods, and data structures that facilitate generation of soft copy documents from images.
Some implementations of the generation system described herein may include functionality for acquiring images, like, for example, a digital camera, including a mobile telephone that incorporates a camera. Once the necessary images have been acquired, the system may combine or improve the suitability of the images for later use through one or more techniques. The system may also determine the type of document or documents represented by the image or images. The system may recognize a variety of elements in the images, including text, tables, diagrams, and so on. Given multiple possibilities for recognized entities, the system may present a user with selectable choices, so the user can influence the nature of the elements that are recognized. Ultimately, the system may generate one or more types of soft copy documents, such as word processor documents, spreadsheets, and so on.
DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration of an exemplary computing device in which the various technologies described herein may be implemented.
FIG. 2 is an illustration of an exemplary system in which generation of documents may be carried out.
FIG. 3 is an illustration of an exemplary operational flow that includes various operations that may be performed when generating documents.
DETAILED DESCRIPTION
Described herein are various technologies and techniques directed to generating soft copy (digital) documents using images of hard copy documents. More particularly, described herein are, among other things, systems, methods, and data structures that facilitate generation of soft copy documents from images.
In one embodiment, a user might take a number of photographs of a magazine article using a camera phone. The document generation system might then process the acquired digital images to, for example, remove shadows that could make document generation more difficult. The system might then combine the images to create a single composite image for each page of the magazine article, and then recognize elements in each image. In the case of a magazine article, the recognized elements might include text, graphics, tabular data, and photographs with captions. Using the recognized elements, the system might generate a single word processor document with multiple pages containing text, graphics, tables, and photographs. Furthermore, the recognized elements might be represented as distinct elements themselves, rather than just as text. For example, tabular data might be represented using a word processor table rather than just as text separated by white space.
Example Computing Environment
FIG. 1 and the related discussion are intended to provide a brief, general description of an exemplary computing environment in which the various technologies described herein may be implemented. Although not required, the technologies are described herein, at least in part, in the general context of computer-executable instructions, such as program modules that are executed by a controller, processor, personal computer, or other computing device, such as the computing device 100 illustrated in FIG. 1.
Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Tasks performed by the program modules are described below with the aid of one or more block diagrams and operational flowcharts.
Those skilled in the art can implement the description, block diagrams, and flowcharts in the form of computer-executable instructions, which may be embodied in one or more forms of computer-readable media. As used herein, computer-readable media may be any media that can store or embody information that is encoded in a form that can be accessed and understood by a computer. Typical forms of computer-readable media include, without limitation, both volatile and nonvolatile memory, data storage devices, including removable and/or non-removable media, and communications media.
Communication media embodies computer-readable information in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communications media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
Turning now to FIG. 1, in its most basic configuration, the computing device 100 includes at least one processing unit 102 and memory 104. Depending on the exact configuration and type of computing device, the memory 104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 1 by dashed line 106. Additionally, the computing device 100 may also have additional features and functionality. For example, the computing device 100 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 1 by the removable storage 108 and the non-removable storage 110.
The computing device 100 may also contain one or more communications connection(s) 112 that allow the computing device 100 to communicate with other devices. The computing device 100 may also have one or more input device(s) 114 such as keyboard, mouse, pen, voice input device, touch input device, image input device (like a camera or scanner), etc. One or more output device(s) 116 such as a display, speakers, printer, etc. may also be included in the computing device 100.
Those skilled in the art will appreciate that the technologies described herein may be practiced with computing devices other than the computing device 100 illustrated in FIG. 1. For example, and without limitation, the technologies described herein may likewise be practiced in hand-held devices including mobile telephones and PDAs, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.
The technologies described herein may also be implemented in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
While described herein as being implemented in software, it will be appreciated that the technologies described herein may alternatively be implemented all or in part as hardware, firmware, or various combinations of software, hardware, and/or firmware.
Turning now to FIG. 2, illustrated therein is a system 200 in which generation of documents may be performed. Included in the system are a document generation system 210, an image acquisition module 220, an image processing module 225, an image creation module 230, an image feedback module 255, an element recognition module 245, a recognition choices module 250, a document type module 240, a document generation module 260, and a background module 235.
The document generation system 210 contains various modules, discussed below, that perform a variety of tasks to generate soft copy documents from digital images. It should be understood that while the document generation system 210 contains various modules, in one or more alternative implementations, a single module may perform more than one of the tasks associated with the modules in the system 210. For example, and without limitation, a single image module might perform the tasks associated with the image acquisition module 220, the image processing module 225, the image creation module 230 and the image feedback module 255. Similarly, in one or more alternative implementations, the modules may perform additional tasks not shown or discussed. Furthermore, in one or more alternative implementations, the modules may reside on more than one device or computer system. For example, and without limitation, the image acquisition module 220 and the image feedback module 255 might reside on one device, perhaps on a device with a camera, while the remaining modules reside on another device, perhaps on a device with a greater amount of computer processing ability. When more than one device is involved, the communication between the devices may be accomplished using a variety of methods, including by using a wireless connection of some kind, including Wi-Fi or cellular data such as GPRS, EV-DO, EDGE, HSDPA, or the like; by using a wired connection, including wired Ethernet, Firewire, and USB; or by any other communication mechanism with which computing devices may communicate. In addition, not all of the modules may be necessary to generate soft copy documents. For example, the document generation system may not provide feedback to the user while images are being acquired, and accordingly, the image feedback module 255 may not be required.
The image acquisition module 220 acquires one or more digital images of the hard copy document or documents from which the soft copy document or documents are to be generated. Generation of soft copy documents, including recognizing elements in images, requires sufficient resolution and image quality. Therefore, the number of images that must be acquired to generate documents depends generally on the resolution and quality of the images produced by the image acquisition module.
In some implementations, the image acquisition module 220 may include a digital camera. For example, the image acquisition module could be a mobile telephone with camera functionality, a PDA with camera functionality, or any other device that includes camera functionality. In the context of this application, “mobile telephone” should be understood to encompass any mobile device that includes telephony functionality, including devices traditionally known as mobile telephones, as well as PDAs, pagers, laptop computers, and the like, that include telephony functionality.
To acquire images, a user may take one or more photographs of each desired page or section of the hard copy document. Some camera devices may have sufficient resolution to enable element recognition with a single photograph of, say, a single page of the hard copy document. Other camera devices—including those included with many mobile telephones, and/or those without adjustable focus lenses—may generate a single image with insufficient detail to enable document generation. In such a case, the user may take multiple photographs, perhaps each of a different part of the hard copy document, so that all of the images, when considered together, provide enough detail to enable document generation. In general, it may be necessary for the captured images to display the text and other elements of the hard copy document in such detail that a person, while viewing the image, can read and understand the text and other elements. During testing, a camera with a resolution of two megapixels and an adjustable focus lens generated images of an 8.5″×11″ page sufficient for further use. In some implementations, cameras with lower resolutions and/or without adjustable focus lenses may require multiple images of the same page or section. Other modules, including the image processing module 225 and the image creation module 230, may further process the images acquired by the image acquisition module so that other modules, including the element recognition module 245 and the document type module 240, may perform their associated tasks.
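To make the resolution point concrete: a two-megapixel camera produces roughly a 1600×1200-pixel image, which spread across an 11-inch page edge is only about 145 pixels per inch, so lower-resolution cameras quickly fall below what element recognition can tolerate and need several overlapping shots per page. The helper below does that arithmetic; the 140-dpi floor is an assumed illustrative threshold chosen to match the two-megapixel example, not a figure from the patent.

```python
# Sketch: estimate how many photographs a page needs for a given camera
# resolution. The minimum-dpi figure is an assumed illustrative threshold.
import math


def shots_needed(cam_w_px: int, cam_h_px: int,
                 page_w_in: float = 8.5, page_h_in: float = 11.0,
                 min_dpi: int = 140) -> int:
    # Map the camera's long edge along the page's long edge.
    cam_long, cam_short = max(cam_w_px, cam_h_px), min(cam_w_px, cam_h_px)
    page_long, page_short = max(page_w_in, page_h_in), min(page_w_in, page_h_in)
    dpi = min(cam_long / page_long, cam_short / page_short)
    if dpi >= min_dpi:
        return 1
    # Otherwise tile the page so each tile meets the dpi floor
    # (ignoring the overlap needed later for stitching).
    tiles_long = math.ceil(page_long * min_dpi / cam_long)
    tiles_short = math.ceil(page_short * min_dpi / cam_short)
    return tiles_long * tiles_short


# shots_needed(1600, 1200) -> 1  (about 141 dpi, roughly the 2 MP test case)
# shots_needed(1280, 960)  -> 4  (a lower-resolution camera phone)
```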
In other implementations, the image acquisition module 220 may be an image scanner, photocopy machine, facsimile machine, or other device that can create digital images. For example, an image scanner or photocopy machine may generate digital images by scanning pages of a hard copy document. A facsimile machine may similarly generate an image from a transmitted or received facsimile document. Depending on the quality and level of detail of the images generated by the scanner or other device, more than one image may be required to enable document generation, similar to how multiple images may be required with some digital camera devices, as explained previously.
The image acquisition module 220 may also be a device that can acquire images originally created on another device. For example, the image acquisition module might comprise a computer connected to a digital camera via a USB cable or a computer connected over a network to another computer containing a library of images.
The image acquisition module may be used to acquire images of multiple pages or sections of the same hard copy document, or of different hard copy documents, independent of whether multiple images are acquired of the same page or section (for purposes of enabling document generation, as explained previously).
The image processing module 225 may perform various image enhancement processing operations on the images acquired using the image acquisition module 220. This processing may generally be directed toward improving the suitability of the images for document generation, including processing that increases the ability of or makes more efficient the operation of other modules, like the element recognition module 245 and the document type module 240.
In some implementations, including those where the image acquisition module 220 includes a digital camera, the images may have been created when the camera was not aligned correctly with the hard copy document. This misalignment may result in an image with “keystone effects,” which in this context are image artifacts where angles in the image are different from angles in the hard copy. The image processing module 225 may remove keystone effects, and thereby improve the operation of other modules in the system, including the element recognition module 245.
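Keystone (perspective) distortion can be undone with a projective warp once the four page corners are known, whether from a corner-detection step or from the user. The sketch below shows that standard technique using OpenCV; it is one possible method, not necessarily the one used by the system, and the corner detection itself is assumed to happen elsewhere.

```python
# Sketch: remove keystone distortion by warping the detected page corners
# onto an upright rectangle. Corner detection is assumed done elsewhere.
import cv2
import numpy as np


def correct_keystone(image: np.ndarray, corners: np.ndarray,
                     out_w: int = 1700, out_h: int = 2200) -> np.ndarray:
    """corners: 4x2 float32 array ordered TL, TR, BR, BL in the photo.
    1700x2200 is roughly an 8.5" x 11" page at 200 dpi (illustrative)."""
    target = np.float32([[0, 0], [out_w - 1, 0],
                         [out_w - 1, out_h - 1], [0, out_h - 1]])
    M = cv2.getPerspectiveTransform(np.float32(corners), target)
    return cv2.warpPerspective(image, M, (out_w, out_h))
```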
In the same or other implementations, parts of the images may be in shadow because of, for example, uneven lighting when the image was created. The image processing module 225 may remove or lessen shadows in the images to improve the operation of other modules in the system. Removing keystone effects and shadows in images may be accomplished using various methods.
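A simple, widely used way to lessen uneven lighting and soft shadows on a page photo is to estimate the slowly varying illumination (for example, by dilating away the text and median-blurring the result) and divide the image by it. The sketch below shows that approach as one of the various possible methods; the kernel sizes are arbitrary illustrative choices.

```python
# Sketch: flatten uneven lighting and soft shadows by dividing the page
# image by an estimated illumination field. Kernel sizes are illustrative.
import cv2
import numpy as np


def reduce_shadows(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Estimate the slowly varying illumination: dilate away the text,
    # then smooth what remains (mostly paper plus shadow).
    illumination = cv2.medianBlur(cv2.dilate(gray, np.ones((7, 7), np.uint8)), 21)
    # Dividing by the illumination leaves text dark on a uniform background.
    return cv2.divide(gray, illumination, scale=255)
```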
The image processing module 225 may also perform other image processing that improves the quality of images for use in other modules of the system, as are recognized in the art. Furthermore, some image processing may operate with one or more of the other modules. For example, some processing may be more efficiently or successfully performed when working with the image creation module 230, discussed below, or with the composite images that may be generated by the image creation module.
The image creation module 230 creates a composite digital image that represents a page or section of the hard copy document, or the entirety of the hard copy document. In some instances, for example when multiple images are acquired, the image creation module may join or “stitch together” the acquired images to create a single composite image. For example, in an implementation where the image acquisition module 220 includes a digital camera, a user may have taken multiple photographs of a single hard copy page so that text and other elements are readable in the images generated by the camera. In such a case, the image creation module may identify and stitch together the images that make up the same page, creating a single composite image of the entire hard copy page. Joining or stitching together the acquired images may be accomplished using various methods.
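As one possible sketch of the stitching step, OpenCV's high-level stitcher can be used in its scans mode, which suits mostly flat subjects such as document pages; it stands in here for whichever of the various known methods an implementation actually uses.

    import cv2

    def stitch_page_photos(photos):
        # photos: a list of overlapping photographs (NumPy arrays) of one page.
        stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
        status, composite = stitcher.stitch(photos)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("stitching failed with status %d" % status)
        return composite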
If images of multiple hard copy pages or sections have been captured using the image acquisition module, the image creation module 230 may recognize the hard copy page or section with which the image is associated and generate multiple composite images, one for each hard copy page or section.
In cases where a single acquired image is sufficient to enable element recognition, the composite image created by the image creation module 230 may be the same as the single acquired image, or the same as the acquired image suitably processed by the image processing module 225. Alternatively, in such cases, the image creation module may not be necessary, and other modules in the system may work directly with the acquired or processed image.
In a preferred embodiment, the operation of the image creation module 230 may be performed without user input. However, in some implementations and with some images, the image creation module may not be able to correctly link an image with a composite image. In such a case, the image creation module may present to the user of the system multiple choices and enable the user to link the image with the composite image, for example, by specifying where in the composite image the particular image should be used, or with which composite image the particular image should be associated.
It should be understood that, unless noted or understood otherwise, the modules and other parts of the invention described herein may work with images acquired directly from the image acquisition module 220, with images processed by the image processing module 225, or with one or more composite images generated by the image creation module 230.
The image feedback module 255 provides feedback to the user while images are being acquired. In some implementations, including those where the image acquisition module 220 includes a digital camera, the image feedback module may indicate that images of some portions of the hard copy document have not been acquired. For example, where the left side of a hard copy page has been captured but the right side has not, the image feedback module may indicate that the user must still take one or more photographs of the right side of the page.
The image feedback module may provide feedback in multiple ways. In some implementations, the image feedback module may show graphically, for example on an LCD screen that is part of a digital camera or camera phone, which parts of the hard copy page or section have not been captured at all, or have not been captured with sufficient resolution. It may do so by displaying shaded regions, overlaid “on top” of the image of the hard copy document, that show the portions of the hard copy document that should still be captured.
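A minimal sketch of such feedback, assuming each acquired photo has already been registered to page coordinates by a homography, might accumulate a coverage mask and shade whatever the photos have not yet reached:

    import cv2
    import numpy as np

    def coverage_preview(page_h, page_w, homographies, photo_shapes):
        # Accumulate, in page coordinates, the footprint of every photo taken
        # so far; anything still uncovered has not been captured.
        covered = np.zeros((page_h, page_w), dtype=np.uint8)
        for H, (h, w) in zip(homographies, photo_shapes):
            footprint = np.full((h, w), 255, dtype=np.uint8)
            warped = cv2.warpPerspective(footprint, H, (page_w, page_h))
            covered = cv2.bitwise_or(covered, warped)
        preview = np.full((page_h, page_w), 255, dtype=np.uint8)
        preview[covered < 128] = 128   # shaded regions still need photographs
        return preview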
The document type module 240 identifies the type of hard copy document, or the type of different portions of the hard copy document. It may work with the element recognition module 245 to enable the element recognition module to better recognize text and elements, and to enable the document generation system 210 to more accurately recreate a soft copy document from a hard copy document.
The document type module 240 may use multiple features or patterns of the image, images, or composite image to identify the type of document, or the types of sections of the document. Recognized types could include, without limitation, some or all of the following: word processor documents; spreadsheets; presentation documents; and forms, including facsimile transmittal forms, business cards, and user-defined forms, for example forms used with surveys. For additional information about the different kinds of recognized elements, refer to the following discussion of the element recognition module 245.
The element recognition module 245 recognizes a wide variety of elements in the image or composite image. These elements can then be used by the document generation system 210 to generate a soft copy document containing some or all of the recognized elements.
In one embodiment, the element recognition module 245 may simply recognize text, perhaps by implementing one or more optical character recognition (OCR) algorithms.
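In such an embodiment, the text recognition step might be sketched as a call into the open-source Tesseract engine, used here purely as a stand-in for whatever OCR implementation is chosen:

    import pytesseract
    from PIL import Image

    def recognize_text(composite_image_path):
        # Run OCR over the composite page image and return its plain text.
        return pytesseract.image_to_string(Image.open(composite_image_path))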
In the same or other embodiments, the element recognition module 245 may recognize additional "super-elements," aside from text. In this context, a "super-element" is an element that has some additional meaning or structure beyond simple text. A super-element may consist wholly or partly of text, but it is never merely text, because it always carries that additional meaning or structure. Some examples of super-elements, without limitation, are images with metadata, such as images with captions; bulleted lists; equations; annotations; tables; charts; and forms.
In some cases, the element recognition module may recognize a super-element solely from the image data contained in the image or composite image. In some cases, the element recognition module may also use other information, including information generated by other modules, such as the document type module 240. For example and without limitation, and as explained in more detail below, if the document type module 240 identifies the images as containing slides in a presentation, the element recognition module may use this information to recognize text and graphics as elements common to presentations, like slides or charts, and not just as a series of unrelated text and images. As another example and again without limitation, if the document type module 240 identifies part of the image as a spreadsheet or table, the element recognition module may use this information to recognize numbers and text as part of a table, not just as a series of unrelated numbers, text, and white space.
The element recognition module 245 may recognize numerous super-elements in addition to text and simple images. The following paragraphs discuss some of these super-elements in detail.
Some possible super-elements are graphics or images that contain metadata. For example, the element recognition module 245 may recognize a graphic or image and a caption associated with the graphic or image. The recognized super-element in this case might be a captioned graphic. In this context, the recognized graphic or image is a part of the image used for soft copy document generation. For example, an image of a magazine page provided by the image acquisition module 220 might contain text as well as graphics or images, such as photographs printed on the magazine page. Some of the super-elements in this case might be the photographs on the magazine page, with any associated captions.
Another possible super-element might be a table or spreadsheet. Tables or spreadsheets might be recognized, for example and without limitation, in cases when images of a printed spreadsheet are used. They might also be recognized in other cases, such as when an image of a magazine page that contains tables is used. Where the element recognition module 245 identifies tables or spreadsheets, it may generate soft copy tables or spreadsheets, perhaps according to a user choice. For example, if a printed spreadsheet is photographed, the user may choose to generate a soft copy spreadsheet document. Alternatively, if a magazine page with tables is photographed, the user may choose to generate a soft copy word processor document with embedded tables. Beyond recognition of tables or spreadsheets, the element recognition module may be able to impute logic in the soft copy spreadsheet, table, or other element it generates. For example, a photograph of a spreadsheet with cells containing the numbers 1, 1, and 2 may not simply be recognized as a spreadsheet with three cells containing the literal values 1, 1, and 2. Instead, the third cell—the cell that contains 2—may be generated so that it contains a formula summing the previous two cells. For example, assuming the first two cells are named A1 and B1, the cells in the generated soft copy spreadsheet might contain 1, 1, and a formula like “=SUM(A1:B1)”.
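A toy version of that imputation step, assuming the recognized cell values arrive as rows of numbers, could check whether the last cell of each row equals the sum of the cells before it and, if so, substitute a formula:

    def impute_sum_formulas(rows):
        out = []
        for r, row in enumerate(rows, start=1):
            *head, last = row
            # If the trailing cell repeats the sum of the preceding cells,
            # emit a formula instead of the literal value.
            if head and sum(head) == last:
                last_col = chr(ord("A") + len(head) - 1)   # works up to column Z
                row = head + ["=SUM(A%d:%s%d)" % (r, last_col, r)]
            out.append(list(row))
        return out

    # impute_sum_formulas([[1, 1, 2]])  ->  [[1, 1, '=SUM(A1:B1)']]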
Another possible super-element might be a portion of a form. In this context, a “form” is part of a document that contains some known fields or structure. Using these known fields or structure, the element recognition module 245 may be able to recognize specific elements of a hard copy document. When these specific elements are recognized, the document generation system 210 may use them to generate a soft copy document, like a word processor document. Recognized elements may also be used to, for example, update a database. The element recognition module may also have the ability to improve recognition accuracy by ignoring text or other markings outside the context of the known data on the form.
Forms may follow any format and contain any fields. Some examples of forms are facsimile transmittal forms, business cards, and the like, as well as custom or user-defined forms. In the case of a facsimile transmittal form, the element recognition module 245 may look specifically for fields such as "To," "From," "Fax Number," "Subject," and so on. In the case of a business card, the element recognition module may look for text that contains the name of a person, the name of a position (such as "President"), the name of a company, and so on. In both examples, in addition to or in lieu of generating a soft copy document, a database—for example, a database of sent or received facsimiles, or a database of contacts like that maintained by personal organizer or customer relationship management software—may be updated with the data recognized in the hard copy document.
In addition, the element recognition module may provide the ability to recognize the fields or structure of custom or user-defined forms. In such a system, a user may provide a template that denotes information like the name and location of specified fields. The element recognition module 245 may use this template to recognize elements in associated hard copy documents. One situation in which this type of functionality may be useful is in digitizing survey information that exists in hard copy form. A survey company, for example, might print paper copies of a survey with elements like checkboxes, multiple choice questions, and free text entry questions. It might also create a template, in the format required by the element recognition module, that names the questions, specifies the format of the questions, and so on. Given this template, the element recognition module might then interpret images of completed hard copy survey forms and, perhaps in concert with the document generation module 260, update a database with the answers to the survey questions.
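For illustration only, a user-defined template could be as simple as a mapping from field names to pixel regions on the composite image, with each region recognized in isolation; the field names, coordinates, and use of Tesseract below are all hypothetical choices.

    import pytesseract
    from PIL import Image

    # Hypothetical survey template: field name -> (left, top, right, bottom)
    # region on the composite image of the completed form.
    SURVEY_TEMPLATE = {
        "respondent_name": (120, 200, 900, 260),
        "q1_free_text": (120, 400, 1100, 520),
    }

    def extract_form_fields(composite_image_path, template=SURVEY_TEMPLATE):
        page = Image.open(composite_image_path)
        # Crop each templated region and recognize its contents separately,
        # ignoring any markings outside the known fields.
        return {name: pytesseract.image_to_string(page.crop(box)).strip()
                for name, box in template.items()}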
Many types of documents might also contain a number of different super-elements that do not, in and of themselves, exist as a single document. As with other super-elements, these super-elements may be made part of the generated soft copy document in such a way that they can be manipulated in an application as a super-element. For example, the element recognition module 245 may recognize a set of text as a bulleted list and enable the document generation system to generate, for example, a word processor document that contains a bulleted list, rather than simply rows of text. When such a word processor document is edited in a word processor application, the word processor can manipulate the bulleted list as a bulleted list, instead of, for example, as rows of text preceded by a graphic that looks like a text bullet.
Similar to how a bulleted list is recognized and output in a generated soft copy document, the element recognition module 245 may also recognize other super-elements including, without limitation, equations and charts, in addition to other super-elements, like tables and spreadsheets, which can be embedded in a soft copy document.
In the case of a chart or graphic, it may be possible for the element recognition module 245 to impute additional meaning to the elements of the chart or graphic. For example, an image might contain an organizational chart that shows the reporting relationship of people in a company. This chart might be comprised of squares containing text and lines that join the squares. In some instances, the element recognition module might recognize this chart as a single atomic graphic and do no further recognition. In other instances, the element recognition module might recognize the chart as a drawing with squares, lines, and text, and enable the editing of the resulting soft copy drawing in, for example, an application with graphics editing functionality. In yet other instances, the element recognition module might recognize the chart as an organizational chart, and enable editing of the resulting soft copy chart in an application with specific functionality directed toward the creation and editing of organizational charts.
The element recognition module 245 may also be able to recognize handwritten text or other handwritten figures. In some instances, such handwritten elements of the hard copy may be recognized as graphics or images and not further interpreted. In other instances, handwritten content may be recognized as text. In either case, the recognized super-element may be output as part of the generated document as an annotation or note, or as part of the document itself. For example, when the element recognition module interprets an image of a page of printed text with some handwritten notes, it may incorporate the recognized handwritten text into the document itself. In other cases, it may interpret the handwritten text but keep it separate from the main flow of the document—for example, by using "Note" or "Annotation" functionality in the generated soft copy document. Handwritten drawings may be maintained as graphics or images and not further interpreted, or may be interpreted, for example, in the same way a printed chart or other super-element is recognized.
Throughout the recognition of elements, the element recognition module 245 may maintain specific features of the recognized elements. For example, the element recognition module may recognize the following features of text, without limitation: font name, font style (bold, underline, and so on), font size, text color, and so on. For elements in general, the element recognition module may recognize features, again without limitation, like color and shape.
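The retained features might be carried in a small record alongside the recognized text; the field names below are illustrative rather than taken from the invention itself.

    from dataclasses import dataclass

    @dataclass
    class RecognizedText:
        # One recognized run of text plus the formatting features kept with it.
        text: str
        font_name: str = "unknown"
        font_size_pt: float = 0.0
        bold: bool = False
        underline: bool = False
        color_rgb: tuple = (0, 0, 0)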
In some instances, the element recognition module 245 may not be able to recognize some elements automatically with a sufficient or defined level of certainty. In such instances, the recognition choices module 250 may be used to present the user with a choice between multiple possible elements. For example and without limitation, if the element recognition module determines that there is a 60% chance that a particular element should be recognized as one element, and a 40% chance that the element should be recognized as another element, the recognition choices module may enable the user to choose between the two likely elements. In one implementation, the recognition choices module 250 may display a dialog that shows the relevant part of the image being processed by the element recognition module, as well as the elements that have been identified by the element recognition module as possible matches, perhaps with the most likely element selected by default. The user may then choose between the presented options, and the document generation system 210 may use the selected option to determine which element to include in the generated soft copy document.
If the document generation system 210 is used in a batch or automated fashion—for example, to generate soft copy documents from a large set of images—any operation of the recognition choices module 250 or, for that matter, any operation of a module that requires user input, may be automated so that no user input is required. In one implementation, in the case of element recognition, instead of using the recognition choices module, the most likely candidate element may be chosen automatically, even if there is only, for example, a 51% chance that the most likely candidate element is the correct element.
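Both behaviors, prompting when confidence is low and auto-selecting when running unattended, can be sketched with a single helper; the candidates list of (element, probability) pairs and the ask_user callback are hypothetical names for whatever the implementation provides.

    def resolve_element(candidates, threshold=0.9, batch_mode=False, ask_user=None):
        # candidates: (element, probability) pairs from the recognizer.
        ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
        best, confidence = ranked[0]
        # Take the top candidate automatically when it is confident enough,
        # or when running unattended; otherwise let the user decide.
        if batch_mode or confidence >= threshold or ask_user is None:
            return best
        return ask_user(ranked)   # e.g. show the 60%/40% options, best preselected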
The background module 235 identifies and manipulates background images or data present in the images being processed. For example, each slide in a hard copy presentation with multiple slides might have the same background. The background module may identify this background so that it can be manipulated separately from the text or other elements in the foreground of the slide. For example, it might identify and remove any background image or images before modules like the element recognition module are used, and may then add the background image or images to the generated soft copy document after element recognition is complete. When multiple images have the same background image or images, the background module may identify the background image or images by using multiple images, for example, by deducing that similar elements occurring in all of the images are part of the background.
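When several aligned slide images share a backdrop, one illustrative way to recover it is a per-pixel median across the stack: text that changes from slide to slide is voted out, and the common background remains.

    import numpy as np

    def estimate_shared_background(aligned_slide_images):
        # aligned_slide_images: same-sized arrays of slides sharing one backdrop.
        stack = np.stack(aligned_slide_images).astype(np.float32)
        background = np.median(stack, axis=0)   # the part common to all slides
        return background.astype(np.uint8)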
Given the operations of other modules, the document generation module 260 generates one or more soft copy documents, of one or more document formats. For example, the document generation module may generate one or more word processor documents, spreadsheet documents, presentation documents, text documents, portable document format documents, and so on. A word processor document may be, for example and without limitation, a Microsoft Word document, suitable for use in the Microsoft Word application, which is produced by Microsoft Corporation of Redmond, Wash. Such a word processor document might also be suitable for use in any other application which can view or edit Microsoft Word documents. A spreadsheet document may be, for example and without limitation, a Microsoft Excel document, suitable for use in the Microsoft Excel application, also produced by Microsoft Corporation of Redmond, Wash. Similarly, a presentation document may be a Microsoft PowerPoint document, and a portable document format document might be a Portable Document Format (PDF) document, suitable for use with the Adobe Acrobat application, produced by Adobe Systems Incorporated of San Jose, Calif.
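As a concrete but purely illustrative sketch, recognized elements could be written out with the open-source python-docx and openpyxl libraries, which produce files that the Microsoft Word and Microsoft Excel applications can open:

    from docx import Document        # python-docx
    from openpyxl import Workbook    # openpyxl

    def generate_word_document(paragraphs, path):
        # Write each recognized text element as a paragraph of a Word-compatible
        # document (fonts, lists, tables, and images are omitted in this sketch).
        doc = Document()
        for text in paragraphs:
            doc.add_paragraph(text)
        doc.save(path)

    def generate_spreadsheet(rows, path):
        # Rows may mix literal values and formula strings such as "=SUM(A1:B1)".
        wb = Workbook()
        for row in rows:
            wb.active.append(row)
        wb.save(path)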
The particular document format or formats used by the document generation module may be determined in various ways. For example, they may be determined automatically by the document generation system 210, or may be the result of user choice. In some implementations, the document generation system 210 may default to generating a word processor document when the recognized elements include text. In other implementations, the user may choose the type or types of documents to be generated.
The document generation module 260 may generate a single soft copy document with multiple pages, or multiple documents each with a single page, or multiple documents with one or more pages, depending on the nature of the images from which the documents are generated, user choice, or other input.
Generated soft copy documents may be the same as any other soft copy documents of the same particular format. Such documents may be manipulated, transmitted, or otherwise used in any manner, in the same fashion as soft copy documents generated, for example, from scratch using an appropriate application.
In addition to generating traditional soft copy documents, in some implementations the document generation module 260 might also take other actions with the information produced by the document generation system 210. For example, instead of or in addition to generating a word processor document, the document generation module might also update a database, send email, or take any other action based on the nature of the recognized information.
Turning now to FIG. 3, illustrated therein is an exemplary generalized operational flow 300 including various operations that may be performed when generating documents. The following description of FIG. 3 is made with reference to the system 200 of FIG. 2. However, it should be understood that the operational flow described with respect to FIG. 3 is not intended to be limited to be used with the elements of the system 200. In addition, it should be understood that, while the exemplary operational flow 300 indicates a particular order of operation execution, in one or more alternative implementations the operations may be ordered differently. Furthermore, while the exemplary operational flow contains multiple discrete steps, it should be recognized that in some implementations, some or all of these operations may be combined or executed contemporaneously.
As shown, in one implementation of operation 310, the image acquisition module 220 acquires one or more images of the hard copy document or documents from which the soft copy document or documents are to be generated. In some implementations, this operation involves taking one or more photographs of the hard copy document or documents. In other implementations, this operation involves scanning hard copy documents using a scanner, and so on. If the image acquisition module cannot acquire single images with sufficient quality for document generation, this operation may require acquiring multiple images of the hard copy document which may then be further manipulated in succeeding operations, like operation 314, where multiple images may be combined to form a single composite image.
In some implementations of operation 310, the image feedback module 255 may provide feedback to the user while images are acquired. For example, if a user takes multiple photographs of a single page of a hard copy document using a digital camera, the image feedback module may indicate to the user which parts of the hard copy page have not yet been photographed, or have not yet been photographed with sufficient detail to enable document generation.
In one implementation of operation 312, the image processing module 225 processes the images acquired in operation 310 to improve the quality of the images for use in generating soft copy documents. This image processing may include removal of keystone effects, removal or minimization of shadows, and so on. In other implementations, this operation may be performed using the composite image or images generated by operation 314—in such implementations, operation 312 may be performed after operation 314, or may be performed, at least in part, both before and after operation 314.
In one implementation of operation 314, the image creation module 230 may combine multiple images acquired during operation 310 into one or more composite images that better represent a page or section of a hard copy document. For example, when multiple images of a single page have been acquired in operation 310 to ensure that the resolution of the images is high enough to enable document generation, this operation may combine the images and create a single composite image. In instances where a single image acquired in operation 310 has sufficient resolution to enable document generation, this operation may generate a composite image that is the same as the acquired or the processed image. In an alternative embodiment, when a single image is sufficient for document generation, this operation may not be necessary and subsequent operations may work with the image acquired in the acquire images operation 310 or the process images operation 312. Further, unless otherwise noted, the composite image or images generated by this operation may be used interchangeably with images acquired in the acquire images operation 310, or processed by the process images operation 312.
In an implementation of operation 316, the background module 235 may use one or more images resulting from operations 310, 312, and 314 to identify background graphics, images, or other content. In at least one implementation, operation 316 may also remove the identified background from the images to be used by some subsequent operations, like the recognize elements operation 320. In such an implementation, the background may then be added to the soft copy documents, for example, in the add background operation 324.
In one implementation of operation 318, the document type module 240 may recognize the type of or types of hard copy documents or sections represented by the images. Information about the type or types of documents or sections may be used in subsequent operations, including, for example, the recognize elements operation 320.
In an implementation of operation 320, the element recognition module 245 recognizes text and other elements, including “super-elements,” all as explained previously with reference to the element recognition module 245. This operation may recognize, for example and without limitation, text, simple images, images with captions or other additional data, bulleted lists, equations, annotations, tables, spreadsheets, presentations, charts, forms, and so on.
In one implementation of operation 322, the recognition choices module 250 may present one or more choices to the user for elements which cannot be recognized automatically with some sufficient level of certainty, to enable the user to choose the most appropriate element. In other implementations, such as implementations where large numbers of hard copy documents are processed without human intervention, this operation may not be executed.
In an implementation of operation 324, background graphics, images, or other content that was identified and removed, for example in identify background operation 316, may be added by the background module 235 to the soft copy document or documents to be generated.
Finally, in one implementation of operation 326, the document generation module 260 may generate one or more soft copy documents using the information acquired in previous operations. As explained previously with reference to the document generation module 260, the generated soft copy documents may be of many forms. Furthermore, also as explained previously, the document generation module may also perform actions aside from generating traditional documents, including, for example, updating databases using the information identified in previous operations.
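Pulling the flow together for the simplest case, the following sketch stitches the acquired photographs (operations 310 and 314), recognizes the text (operation 320), and writes a word processor document (operation 326); processing, document typing, background handling, and user choices are deliberately omitted, and the library choices are again only assumptions.

    import cv2
    import pytesseract
    from PIL import Image
    from docx import Document

    def photos_to_word_document(photo_paths, output_path):
        photos = [cv2.imread(p) for p in photo_paths]
        status, composite = cv2.Stitcher_create(cv2.Stitcher_SCANS).stitch(photos)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("stitching failed")
        # OCR the composite page and write one paragraph per recognized line.
        rgb = cv2.cvtColor(composite, cv2.COLOR_BGR2RGB)
        text = pytesseract.image_to_string(Image.fromarray(rgb))
        doc = Document()
        for line in text.splitlines():
            if line.strip():
                doc.add_paragraph(line)
        doc.save(output_path)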
Although some particular implementations of systems and methods have been illustrated in the accompanying drawings and described in the foregoing Detailed Description, it will be understood that the systems and methods shown and described are not limited to the particular implementations described, but are capable of numerous rearrangements, modifications and substitutions without departing from the spirit set forth and defined by the following claims.

Claims (18)

We claim:
1. A method for generating a soft copy document from a hard copy document, comprising:
acquiring via photographs taken by a user at least two images of a single page of the hard copy document;
providing feedback to the user indicating any parts of the single page not photographed and further indicating any of the at least two images that are of insufficient quality for generating the soft copy document;
processing the acquired images to be better suited for soft copy document generation;
combining the processed images resulting in a composite image that represents the single page of the hard copy document;
recognizing one or more particular text elements in the composite image;
identifying, by a document type module, one or more document types of the composite image, wherein the document type module is configured for identifying a plurality of document types including a word processor document type, a spreadsheet document type, a presentation document type, and form types of forms including business cards, facsimiles, and a form complying with a pre-defined template;
recognizing further particular text elements in the composite image according to characteristics of the identified one or more document types;
generating the soft copy document as at least one of the identified one or more document types, the soft copy document including the recognized one or more particular text elements and the recognized further particular text elements; and
wherein the method is performed by one or more computing devices.
2. The method of claim 1, wherein the processing step further comprises one or more of the following operations:
removing keystone effects from the two or more images; and
removing shadowing from the two or more images.
3. The method of claim 1, wherein the processing step further comprises increasing a resolution of at least one of the images.
4. The method of claim 1, wherein the recognizing particular elements step further comprises:
if a particular element cannot be identified with a defined level of certainty, presenting to a user one or more selectable choices for the particular element;
receiving a selected choice of the one or more selectable choices; and
recognizing the particular element as the selected choice.
5. The method of claim 1, further comprising:
identifying one or more background images in the composite image;
removing the background images before recognizing elements; and
adding the background images to the soft copy document after recognizing elements.
6. The method of claim 1, wherein the steps of processing the images, creating the composite image, recognizing elements, and generating the soft copy document are performed automatically without human intervention.
7. The method of claim 1, wherein the step of acquiring the images further comprises taking two or more photographs of the one or more hard copy documents using a digital camera.
8. The method of claim 7, wherein the digital camera further comprises a camera integrated with a mobile telephone.
9. The method of claim 1, wherein the step of acquiring the images further comprises scanning the one or more hard copy documents using a device with document scanning functionality.
10. A device, comprising:
a memory;
a document generation module implemented at least in part in the memory and configured for generating a soft copy document from a hard copy document;
an image acquisition module configured for acquiring via photographs taken by a user at least two images of a single page of the hard copy document;
an image feedback module configured for providing feedback to the user indicating any parts of the single page not photographed and further indicating any of the at least two images that are of insufficient quality for generating the soft copy document;
an image processing module configured for processing the acquired images to be better suited for soft copy document generation;
an image creation module configured for combining the processed images resulting in a composite image that represents the single page of the hard copy document;
a document type module configured for identifying one or more document types of the composite image, wherein the document type module is configured for identifying a plurality of document types including a word processor document type, a spreadsheet document type, a presentation document type, and form types of forms including business cards, facsimiles, and a form complying with a pre-defined template;
an element recognition module configured for recognizing one or more particular text elements in the composite image according to characteristics of the identified one or more document types; and
wherein at least one of the modules is configured to operate on the device.
11. The device of claim 10, wherein the device further comprises a mobile telephone.
12. The device of claim 10, wherein the image acquisition module is further configured to acquire images generated by a second device.
13. The device of claim 12, wherein the second device includes document scanning functionality.
14. The device of claim 10, further comprising an image feedback module configured to provide feedback to a user acquiring the images, said feedback indicating that an image of at least a portion of the hard copy document has not been satisfactorily acquired.
15. One or more data storage devices that are not signal or carrier wave comprising computer-executable instructions that, when executed by a computer, cause the computer to perform a method comprising:
acquiring via photographs taken by a user at least two images of a single page of a hard copy document;
providing feedback to the user indicating any parts of the single page not photographed and further indicating any of the at least two images that are of insufficient quality for generating the soft copy document;
processing the acquired images to be better suited for soft copy document generation;
combining the processed images resulting in a composite image that represents the single page of the hard copy document;
recognizing particular text elements in the composite image;
identifying, by a document type module, one or more document types of the composite image, wherein the document type module is configured for identifying a plurality of document types including a word processor document type, a spreadsheet document type, a presentation document type, and form types of forms including business cards, facsimiles, and a form complying with a pre-defined template;
recognizing further particular text elements in the composite image according to characteristics of the identified one or more document types; and
generating the soft copy document as at least one of the identified one or more document types, the generated soft copy document including the recognized particular text elements and the recognized further particular text elements.
16. The one or more data storage devices of claim 15, wherein the processing step further comprises one or more of the following operations:
removing keystone effects; and
removing shadowing.
17. The one or more data storage devices of claim 15, wherein the processing step further comprises increasing the resolution of at least one of the images.
18. The one or more data storage devices of claim 15, wherein the creating step further comprises joining two or more images to create the composite image.
US11/275,908 2006-02-02 2006-02-02 Generation of documents from images Expired - Fee Related US8509563B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/275,908 US8509563B2 (en) 2006-02-02 2006-02-02 Generation of documents from images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/275,908 US8509563B2 (en) 2006-02-02 2006-02-02 Generation of documents from images

Publications (2)

Publication Number Publication Date
US20070177183A1 US20070177183A1 (en) 2007-08-02
US8509563B2 true US8509563B2 (en) 2013-08-13

Family

ID=38321780

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/275,908 Expired - Fee Related US8509563B2 (en) 2006-02-02 2006-02-02 Generation of documents from images

Country Status (1)

Country Link
US (1) US8509563B2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137207A1 (en) * 2010-11-29 2012-05-31 Heinz Christopher J Systems and methods for converting a pdf file
WO2016028827A1 (en) * 2014-08-21 2016-02-25 Microsoft Technology Licensing, Llc Enhanced interpretation of character arrangements
US9397723B2 (en) 2014-08-26 2016-07-19 Microsoft Technology Licensing, Llc Spread spectrum wireless over non-contiguous channels
US9513671B2 (en) 2014-08-01 2016-12-06 Microsoft Technology Licensing, Llc Peripheral retention device
US9705637B2 (en) 2014-08-19 2017-07-11 Microsoft Technology Licensing, Llc Guard band utilization for wireless data communication
US9805483B2 (en) 2014-08-21 2017-10-31 Microsoft Technology Licensing, Llc Enhanced recognition of charted data
US10156889B2 (en) 2014-09-15 2018-12-18 Microsoft Technology Licensing, Llc Inductive peripheral retention device
US10191986B2 (en) 2014-08-11 2019-01-29 Microsoft Technology Licensing, Llc Web resource compatibility with web applications
US20210168290A1 (en) * 2017-05-22 2021-06-03 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and non-transitory storage medium

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101291195B1 (en) * 2007-11-22 2013-07-31 삼성전자주식회사 Apparatus and method for recognizing characters
US20110265005A1 (en) * 2010-04-22 2011-10-27 Research In Motion Limited Method, system and apparatus for managing message attachments
JP2012203784A (en) * 2011-03-28 2012-10-22 Fuji Xerox Co Ltd Image processing apparatus and program
JP2013070212A (en) * 2011-09-22 2013-04-18 Fuji Xerox Co Ltd Image processor and image processing program
US20130275451A1 (en) * 2011-10-31 2013-10-17 Christopher Scott Lewis Systems And Methods For Contract Assurance
EP2807604A1 (en) 2012-01-23 2014-12-03 Microsoft Corporation Vector graphics classification engine
CN104094282B (en) * 2012-01-23 2017-11-21 微软技术许可有限责任公司 Rimless form detecting and alarm
US9953008B2 (en) 2013-01-18 2018-04-24 Microsoft Technology Licensing, Llc Grouping fixed format document elements to preserve graphical data semantics after reflow by manipulating a bounding box vertically and horizontally
CN104346615B (en) * 2013-08-08 2019-02-19 北大方正集团有限公司 The extraction element and extracting method of composite diagram in format document
US20170220858A1 (en) * 2016-02-01 2017-08-03 Microsoft Technology Licensing, Llc Optical recognition of tables
US10990814B2 (en) 2018-09-21 2021-04-27 Microsoft Technology Licensing, Llc Converting an image into a structured table
US11520465B2 (en) * 2019-05-06 2022-12-06 Apple Inc. Curated media library
US20240103646A1 (en) * 2022-09-22 2024-03-28 Microsoft Technology Licensing, Llc Universal highlighter for contextual notetaking

Citations (146)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1030881A (en) 1911-02-24 1912-07-02 Allen S Erquhart Pipe-wrench.
US4040009A (en) * 1975-04-11 1977-08-02 Hitachi, Ltd. Pattern recognition system
US5020112A (en) 1989-10-31 1991-05-28 At&T Bell Laboratories Image recognition method using two-dimensional stochastic grammars
US5235650A (en) 1989-02-02 1993-08-10 Samsung Electronics Co. Ltd. Pattern classifier for character recognition
US5373566A (en) 1992-12-24 1994-12-13 Motorola, Inc. Neural network-based diacritical marker recognition system and method
US5432868A (en) 1992-08-07 1995-07-11 Nippondenso Co., Ltd. Information medium recognition device
US5440662A (en) 1992-12-11 1995-08-08 At&T Corp. Keyword/non-keyword classification in isolated word speech recognition
US5442715A (en) 1992-04-06 1995-08-15 Eastman Kodak Company Method and apparatus for cursive script recognition
US5479523A (en) 1994-03-16 1995-12-26 Eastman Kodak Company Constructing classification weights matrices for pattern recognition systems using reduced element feature subsets
US5511148A (en) * 1993-04-30 1996-04-23 Xerox Corporation Interactive copying system
US5579436A (en) 1992-03-02 1996-11-26 Lucent Technologies Inc. Recognition unit model training based on competing word and word string models
US5594676A (en) 1994-12-22 1997-01-14 Genesis Microchip Inc. Digital image warping system
US5625748A (en) 1994-04-18 1997-04-29 Bbn Corporation Topic discriminator using posterior probability or confidence scores
US5625707A (en) 1993-04-29 1997-04-29 Canon Inc. Training a neural network using centroid dithering by randomly displacing a template
US5627942A (en) 1989-12-22 1997-05-06 British Telecommunications Public Limited Company Trainable neural network having short-term memory for altering input layer topology during training
US5649032A (en) 1994-11-14 1997-07-15 David Sarnoff Research Center, Inc. System for automatically aligning images to form a mosaic image
US5680479A (en) * 1992-04-24 1997-10-21 Canon Kabushiki Kaisha Method and apparatus for character recognition
US5687286A (en) 1992-11-02 1997-11-11 Bar-Yam; Yaneer Neural networks with subdivision
US5749066A (en) 1995-04-24 1998-05-05 Ericsson Messaging Systems Inc. Method and apparatus for developing a neural network for phoneme recognition
US5787194A (en) 1994-11-08 1998-07-28 International Business Machines Corporation System and method for image processing using segmentation of images and classification and merging of image segments using a cost function
US5818977A (en) 1996-03-12 1998-10-06 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Supply And Services And Of Public Works Photometric measurement apparatus
US5832435A (en) 1993-03-19 1998-11-03 Nynex Science & Technology Inc. Methods for controlling the generation of speech from text representing one or more names
US5920477A (en) * 1991-12-23 1999-07-06 Hoffberg; Steven M. Human factored interface incorporating adaptive pattern recognition based controller apparatus
US5930746A (en) 1996-03-20 1999-07-27 The Government Of Singapore Parsing and translating natural language sentences automatically
US5943443A (en) 1996-06-26 1999-08-24 Fuji Xerox Co., Ltd. Method and apparatus for image based document processing
US5960397A (en) 1997-05-27 1999-09-28 At&T Corp System and method of recognizing an acoustic environment to adapt a set of based recognition models to the current acoustic environment for subsequent speech recognition
US5987171A (en) 1994-11-10 1999-11-16 Canon Kabushiki Kaisha Page analysis system
US6011558A (en) 1997-09-23 2000-01-04 Industrial Technology Research Institute Intelligent stitcher for panoramic image-based virtual worlds
US6011634A (en) * 1992-11-27 2000-01-04 Kabushiki Kaisha Toshiba Portable facsimile equipment
US6041299A (en) 1997-03-11 2000-03-21 Atr Interpreting Telecommunications Research Laboratories Apparatus for calculating a posterior probability of phoneme symbol, and speech recognition apparatus
US6076057A (en) 1997-05-21 2000-06-13 At&T Corp Unsupervised HMM adaptation based on speech-silence discrimination
US6097854A (en) 1997-08-01 2000-08-01 Microsoft Corporation Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping
US6157747A (en) 1997-08-01 2000-12-05 Microsoft Corporation 3-dimensional image rotation method and apparatus for producing image mosaics
US6178398B1 (en) 1997-11-18 2001-01-23 Motorola, Inc. Method, device and system for noise-tolerant language understanding
US6246413B1 (en) 1998-08-17 2001-06-12 Mgi Software Corporation Method and system for creating panoramas
US6259826B1 (en) 1997-06-12 2001-07-10 Hewlett-Packard Company Image processing method and device
US20010014229A1 (en) * 1999-12-20 2001-08-16 Hironobu Nakata Digital image forming apparatus
US20010019636A1 (en) * 2000-03-03 2001-09-06 Hewlett-Packard Company Image capture systems
US20020010719A1 (en) * 1998-01-30 2002-01-24 Julian M. Kupiec Method and system for generating document summaries with location information
US6363171B1 (en) 1994-01-13 2002-03-26 Stmicroelectronics S.R.L. Apparatus for recognizing alphanumeric characters
US6377704B1 (en) 1998-04-30 2002-04-23 Xerox Corporation Method for inset detection in document layout analysis
JP2002133389A (en) 2000-10-26 2002-05-10 Nippon Telegr & Teleph Corp <Ntt> Data classification learning method, data classification method, data classification learner, data classifier, storage medium with data classification learning program recorded, and recording medium with data classification program recorded
US6393054B1 (en) 1998-04-20 2002-05-21 Hewlett-Packard Company System and method for automatically detecting shot boundary and key frame from a compressed video data
US20020099542A1 (en) * 1996-09-24 2002-07-25 Allvoice Computing Plc. Method and apparatus for processing the output of a speech recognition engine
US20020106124A1 (en) * 1998-12-30 2002-08-08 Shin-Ywan Wang Block selection of table features
US20020113805A1 (en) 2001-01-02 2002-08-22 Jiang Li Image-based walkthrough system and process employing spatial video streaming
JP2002269499A (en) 2001-03-07 2002-09-20 Masakazu Suzuki Numerical expression recognizing device and numerical expression recognizing method, and character recognizing device and character recognizing method
US20020140829A1 (en) 1999-12-31 2002-10-03 Stmicroelectronics, Inc. Still picture format for subsequent picture stitching for forming a panoramic image
US20020172425A1 (en) 2001-04-24 2002-11-21 Ramarathnam Venkatesan Recognizer of text-based work
US20020198909A1 (en) * 2000-06-06 2002-12-26 Microsoft Corporation Method and system for semantically labeling data and providing actions based on semantically labeled data
US20030010992A1 (en) 2001-07-16 2003-01-16 Motorola, Inc. Semiconductor structure and method for implementing cross-point switch functionality
US6542635B1 (en) 1999-09-08 2003-04-01 Lucent Technologies Inc. Method for document comparison and classification using document image layout
US6559846B1 (en) 2000-07-07 2003-05-06 Microsoft Corporation System and process for viewing panoramic video
US20030103670A1 (en) 2001-11-30 2003-06-05 Bernhard Schoelkopf Interactive images
US20030110150A1 (en) 2001-11-30 2003-06-12 O'neil Patrick Eugene System and method for relational representation of hierarchical data
US6587587B2 (en) 1993-05-20 2003-07-01 Microsoft Corporation System and methods for spacing, storing and recognizing electronic representations of handwriting, printing and drawings
US6597801B1 (en) 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
US20030171915A1 (en) 2002-01-14 2003-09-11 Barklund Par Jonas System for normalizing a discourse representation structure and normalized data structure
US20030169925A1 (en) 2002-03-11 2003-09-11 Jean-Pierre Polonowski Character recognition system and method
US20030194149A1 (en) 2002-04-12 2003-10-16 Irwin Sobel Imaging apparatuses, mosaic image compositing methods, video stitching methods and edgemap generation methods
US6636216B1 (en) 1997-07-15 2003-10-21 Silverbrook Research Pty Ltd Digital image warping system
US20030210817A1 (en) 2002-05-10 2003-11-13 Microsoft Corporation Preprocessing of multi-line rotated electronic ink
US6650774B1 (en) 1999-10-01 2003-11-18 Microsoft Corporation Locally adapted histogram equalization
US20030215139A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Handwriting layout analysis of freeform digital ink input
US20030215145A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Classification analysis of freeform digital ink input
US20030215138A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Incremental system for real time digital ink analysis
US20030222984A1 (en) 2002-06-03 2003-12-04 Zhengyou Zhang System and method for calibrating a camera with one-dimensional objects
US20030234772A1 (en) 2002-06-19 2003-12-25 Zhengyou Zhang System and method for whiteboard and audio capture
US20040006742A1 (en) 2002-05-20 2004-01-08 Slocombe David N. Document structure identifier
US6677981B1 (en) 1999-12-31 2004-01-13 Stmicroelectronics, Inc. Motion play-back of still pictures comprising a panoramic view for simulating perspective
US6678415B1 (en) 2000-05-12 2004-01-13 Xerox Corporation Document image decoding using an integrated stochastic language model
US6687400B1 (en) 1999-06-16 2004-02-03 Microsoft Corporation System and process for improving the uniformity of the exposure and tone of a digital image
US6701030B1 (en) 2000-07-07 2004-03-02 Microsoft Corporation Deghosting panoramic video
US6714249B2 (en) 1998-12-31 2004-03-30 Eastman Kodak Company Producing panoramic digital images by digital camera systems
US20040111408A1 (en) 2001-01-18 2004-06-10 Science Applications International Corporation Method and system of ranking and clustering for document indexing and retrieval
US20040114799A1 (en) 2001-12-12 2004-06-17 Xun Xu Multiple thresholding for video frame segmentation
US6766320B1 (en) 2000-08-24 2004-07-20 Microsoft Corporation Search engine with natural language-based robust parsing for user query and relevance feedback learning
US20040141648A1 (en) 2003-01-21 2004-07-22 Microsoft Corporation Ink divider and associated application program interface
RU2234126C2 (en) 2002-09-09 2004-08-10 Аби Софтвер Лтд. Method for recognition of text with use of adjustable classifier
RU2234734C1 (en) 2002-12-17 2004-08-20 Аби Софтвер Лтд. Method for multi-stage analysis of information of bitmap image
US6782505B1 (en) 1999-04-19 2004-08-24 Daniel P. Miranker Method and system for generating structured data from semi-structured data sources
US20040167778A1 (en) 2003-02-20 2004-08-26 Zica Valsan Method for recognizing speech
US20040165786A1 (en) 2003-02-22 2004-08-26 Zhengyou Zhang System and method for converting whiteboard content into an electronic document
US20040181749A1 (en) 2003-01-29 2004-09-16 Microsoft Corporation Method and apparatus for populating electronic forms from scanned documents
US20040186714A1 (en) 2003-03-18 2004-09-23 Aurilab, Llc Speech recognition improvement through post-processsing
US20040189674A1 (en) 2003-03-31 2004-09-30 Zhengyou Zhang System and method for whiteboard scanning to obtain a high resolution image
US6813391B1 (en) 2000-07-07 2004-11-02 Microsoft Corp. System and method for exposure compensation
US20040218799A1 (en) * 2003-05-02 2004-11-04 International Business Machines Corporation Background data recording and use with document processing
US20040233274A1 (en) 2000-07-07 2004-11-25 Microsoft Corporation Panoramic video
US20050015251A1 (en) 2001-05-08 2005-01-20 Xiaobo Pi High-order entropy error functions for neural classifiers
US20050013501A1 (en) 2003-07-18 2005-01-20 Kang Sing Bing System and process for generating high dynamic range images from multiple exposures of a moving scene
US20050036681A1 (en) * 1999-03-18 2005-02-17 Choicepoint Asset Company System and method for the secure data entry from document images
US20050044106A1 (en) 2003-08-21 2005-02-24 Microsoft Corporation Electronic ink processing
US20050055641A1 (en) * 1999-04-30 2005-03-10 Canon Kabushiki Kaisha Data processing apparatus, data processing method, and storage medium storing computer-readable program
US20050111737A1 (en) 2002-12-12 2005-05-26 Eastman Kodak Company Method for generating customized photo album pages and prints based on people and gender profiles
US20050125746A1 (en) 2003-12-04 2005-06-09 Microsoft Corporation Processing an electronic document for information extraction
US20050154979A1 (en) 2004-01-14 2005-07-14 Xerox Corporation Systems and methods for converting legacy and proprietary documents into extended mark-up language format
US20050168623A1 (en) 2004-01-30 2005-08-04 Stavely Donald J. Digital image production method and apparatus
US20050169555A1 (en) * 2003-11-07 2005-08-04 Yuichi Hasegawa Image processing apparatus and method, and computer program
US6930703B1 (en) 2000-04-29 2005-08-16 Hewlett-Packard Development Company, L.P. Method and apparatus for automatically capturing a plurality of images during a pan
US20050185047A1 (en) 2004-02-19 2005-08-25 Hii Desmond Toh O. Method and apparatus for providing a combined image
US20050200921A1 (en) 2004-03-09 2005-09-15 Microsoft Corporation System and process for automatic color and exposure correction in an image
US20050201634A1 (en) 2004-03-09 2005-09-15 Microsoft Corporation System and process for automatic exposure correction in an image
US20050210046A1 (en) * 2004-03-18 2005-09-22 Zenodata Corporation Context-based conversion of language to data systems and methods
US6950753B1 (en) 1999-04-15 2005-09-27 The Trustees Of The Columbia University In The City Of New York Methods for extracting information on interactions between biological entities from natural language text data
US20050259866A1 (en) 2004-05-20 2005-11-24 Microsoft Corporation Low resolution OCR for camera acquired documents
US6985620B2 (en) 2000-03-07 2006-01-10 Sarnoff Corporation Method of pose estimation and model refinement for video representation of a three dimensional scene
US6996295B2 (en) 2002-01-10 2006-02-07 Siemens Corporate Research, Inc. Automatic document reading system for technical drawings
US20060033963A1 (en) * 2004-08-10 2006-02-16 Hirobumi Nishida Image processing device, image processing method, image processing program, and recording medium
US20060041605A1 (en) * 2004-04-01 2006-02-23 King Martin T Determining actions involving captured information and electronic content associated with rendered documents
US20060039610A1 (en) * 2004-08-19 2006-02-23 Nextace Corporation System and method for automating document search and report generation
US20060045337A1 (en) 2004-08-26 2006-03-02 Microsoft Corporation Spatial recognition and grouping of text and graphics
US7010158B2 (en) 2001-11-13 2006-03-07 Eastman Kodak Company Method and apparatus for three-dimensional scene modeling and reconstruction
US20060053097A1 (en) * 2004-04-01 2006-03-09 King Martin T Searching and accessing documents on private networks for use with captures from rendered documents
US20060050969A1 (en) 2004-09-03 2006-03-09 Microsoft Corporation Freeform digital ink annotation recognition
US20060085740A1 (en) 2004-10-20 2006-04-20 Microsoft Corporation Parsing hierarchical lists and outlines
US20060088214A1 (en) 2004-10-22 2006-04-27 Xerox Corporation System and method for identifying and labeling fields of text associated with scanned business documents
US20060095248A1 (en) 2004-11-04 2006-05-04 Microsoft Corporation Machine translation system incorporating syntactic dependency treelets into a statistical framework
US7046401B2 (en) * 2001-06-01 2006-05-16 Hewlett-Packard Development Company, L.P. Camera-based document scanning system using multiple-pass mosaicking
US20060120624A1 (en) 2004-12-08 2006-06-08 Microsoft Corporation System and method for video browsing using a cluster index
US20060177150A1 (en) 2005-02-01 2006-08-10 Microsoft Corporation Method and system for combining multiple exposure images having scene and camera motion
US7107207B2 (en) 2002-06-19 2006-09-12 Microsoft Corporation Training machine learning by sequential conditional generalized iterative scaling
US20060224610A1 (en) 2003-08-21 2006-10-05 Microsoft Corporation Electronic Inking Process
US20060230004A1 (en) 2005-03-31 2006-10-12 Xerox Corporation Systems and methods for electronic document genre classification using document grammars
US7126630B1 (en) 2001-02-09 2006-10-24 Kujin Lee Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
US20060245641A1 (en) 2005-04-29 2006-11-02 Microsoft Corporation Extracting data from semi-structured information utilizing a discriminative context free grammar
US20060245654A1 (en) 2005-04-29 2006-11-02 Microsoft Corporation Utilizing grammatical parsing for structured layout analysis
US20060253273A1 (en) 2004-11-08 2006-11-09 Ronen Feldman Information extraction using a trainable grammar
US20060280370A1 (en) 2005-06-13 2006-12-14 Microsoft Corporation Application of grammatical parsing to visual recognition tasks
US20070003147A1 (en) 2005-07-01 2007-01-04 Microsoft Corporation Grammatical parsing of document visual structures
US20070016514A1 (en) * 2005-07-15 2007-01-18 Al-Abdulqader Hisham A System, program product, and methods for managing contract procurement
US20070018858A1 (en) 2002-10-30 2007-01-25 Nbt Technology, Inc., (A Delaware Corporation) Content-based segmentation scheme for data compression in storage and transmission including hierarchical segment representation
US20070031062A1 (en) 2005-08-04 2007-02-08 Microsoft Corporation Video registration and image sequence stitching
US7184091B2 (en) * 2000-11-07 2007-02-27 Minolta Co., Ltd. Method for connecting split images and image shooting apparatus
US20070055662A1 (en) 2004-08-01 2007-03-08 Shimon Edelman Method and apparatus for learning, recognizing and generalizing sequences
US20070061415A1 (en) 2001-02-16 2007-03-15 David Emmett Automatic display of web content to smaller display devices: improved summarization and navigation
US7197497B2 (en) 2003-04-25 2007-03-27 Overture Services, Inc. Method and apparatus for machine learning a document relevance function
US20070150387A1 (en) * 2005-02-25 2007-06-28 Michael Seubert Consistent set of interfaces derived from a business object model
US20070177195A1 (en) * 2005-10-31 2007-08-02 Treber Rebert Queue processor for document servers
US20070222770A1 (en) 1999-05-25 2007-09-27 Silverbrook Research Pty Ltd Recording and Communication of Handwritten Information
US7343049B2 (en) * 2002-03-07 2008-03-11 Marvell International Technology Ltd. Method and apparatus for performing optical character recognition (OCR) and text stitching
US20080147790A1 (en) * 2005-10-24 2008-06-19 Sanjeev Malaney Systems and methods for intelligent paperless document management
US7471830B2 (en) * 2003-03-15 2008-12-30 Samsung Electronics Co., Ltd. Preprocessing device and method for recognizing image characters
US20100033765A1 (en) * 2008-08-05 2010-02-11 Xerox Corporation Document type classification for scanned bitmaps
US20110066424A1 (en) * 2004-04-02 2011-03-17 K-Nfb Reading Technology, Inc. Text Stitching From Multiple Images
US9479523B2 (en) 2013-04-28 2016-10-25 Verint Systems Ltd. System and method for automated configuration of intrusion detection systems

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040019636A1 (en) * 2002-07-24 2004-01-29 Sun Microsystems, Inc. System and method for dynamically routing web procedure calls
KR100527002B1 (en) * 2003-02-26 2005-11-08 한국전자통신연구원 Apparatus and method of that consider energy distribution characteristic of speech signal

Patent Citations (152)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1030881A (en) 1911-02-24 1912-07-02 Allen S Erquhart Pipe-wrench.
US4040009A (en) * 1975-04-11 1977-08-02 Hitachi, Ltd. Pattern recognition system
US5235650A (en) 1989-02-02 1993-08-10 Samsung Electronics Co. Ltd. Pattern classifier for character recognition
US5020112A (en) 1989-10-31 1991-05-28 At&T Bell Laboratories Image recognition method using two-dimensional stochastic grammars
US5627942A (en) 1989-12-22 1997-05-06 British Telecommunications Public Limited Company Trainable neural network having short-term memory for altering input layer topology during training
US5920477A (en) * 1991-12-23 1999-07-06 Hoffberg; Steven M. Human factored interface incorporating adaptive pattern recognition based controller apparatus
US5579436A (en) 1992-03-02 1996-11-26 Lucent Technologies Inc. Recognition unit model training based on competing word and word string models
US5442715A (en) 1992-04-06 1995-08-15 Eastman Kodak Company Method and apparatus for cursive script recognition
US5680479A (en) * 1992-04-24 1997-10-21 Canon Kabushiki Kaisha Method and apparatus for character recognition
US5432868A (en) 1992-08-07 1995-07-11 Nippondenso Co., Ltd. Information medium recognition device
US5687286A (en) 1992-11-02 1997-11-11 Bar-Yam; Yaneer Neural networks with subdivision
US6011634A (en) * 1992-11-27 2000-01-04 Kabushiki Kaisha Toshiba Portable facsimile equipment
US5440662A (en) 1992-12-11 1995-08-08 At&T Corp. Keyword/non-keyword classification in isolated word speech recognition
US5373566A (en) 1992-12-24 1994-12-13 Motorola, Inc. Neural network-based diacritical marker recognition system and method
US5832435A (en) 1993-03-19 1998-11-03 Nynex Science & Technology Inc. Methods for controlling the generation of speech from text representing one or more names
US5625707A (en) 1993-04-29 1997-04-29 Canon Inc. Training a neural network using centroid dithering by randomly displacing a template
US5511148A (en) * 1993-04-30 1996-04-23 Xerox Corporation Interactive copying system
US6587587B2 (en) 1993-05-20 2003-07-01 Microsoft Corporation System and methods for spacing, storing and recognizing electronic representations of handwriting, printing and drawings
US6363171B1 (en) 1994-01-13 2002-03-26 Stmicroelectronics S.R.L. Apparatus for recognizing alphanumeric characters
US5479523A (en) 1994-03-16 1995-12-26 Eastman Kodak Company Constructing classification weights matrices for pattern recognition systems using reduced element feature subsets
US5625748A (en) 1994-04-18 1997-04-29 Bbn Corporation Topic discriminator using posterior probability or confidence scores
US5787194A (en) 1994-11-08 1998-07-28 International Business Machines Corporation System and method for image processing using segmentation of images and classification and merging of image segments using a cost function
US5987171A (en) 1994-11-10 1999-11-16 Canon Kabushiki Kaisha Page analysis system
US5649032A (en) 1994-11-14 1997-07-15 David Sarnoff Research Center, Inc. System for automatically aligning images to form a mosaic image
US5594676A (en) 1994-12-22 1997-01-14 Genesis Microchip Inc. Digital image warping system
US5749066A (en) 1995-04-24 1998-05-05 Ericsson Messaging Systems Inc. Method and apparatus for developing a neural network for phoneme recognition
US5818977A (en) 1996-03-12 1998-10-06 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Supply And Services And Of Public Works Photometric measurement apparatus
US5930746A (en) 1996-03-20 1999-07-27 The Government Of Singapore Parsing and translating natural language sentences automatically
US5943443A (en) 1996-06-26 1999-08-24 Fuji Xerox Co., Ltd. Method and apparatus for image based document processing
US20020099542A1 (en) * 1996-09-24 2002-07-25 Allvoice Computing Plc. Method and apparatus for processing the output of a speech recognition engine
US6041299A (en) 1997-03-11 2000-03-21 Atr Interpreting Telecommunications Research Laboratories Apparatus for calculating a posterior probability of phoneme symbol, and speech recognition apparatus
US6076057A (en) 1997-05-21 2000-06-13 At&T Corp Unsupervised HMM adaptation based on speech-silence discrimination
US5960397A (en) 1997-05-27 1999-09-28 At&T Corp System and method of recognizing an acoustic environment to adapt a set of based recognition models to the current acoustic environment for subsequent speech recognition
US6259826B1 (en) 1997-06-12 2001-07-10 Hewlett-Packard Company Image processing method and device
US6636216B1 (en) 1997-07-15 2003-10-21 Silverbrook Research Pty Ltd Digital image warping system
US6157747A (en) 1997-08-01 2000-12-05 Microsoft Corporation 3-dimensional image rotation method and apparatus for producing image mosaics
US6097854A (en) 1997-08-01 2000-08-01 Microsoft Corporation Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping
US6011558A (en) 1997-09-23 2000-01-04 Industrial Technology Research Institute Intelligent stitcher for panoramic image-based virtual worlds
US6178398B1 (en) 1997-11-18 2001-01-23 Motorola, Inc. Method, device and system for noise-tolerant language understanding
US20020010719A1 (en) * 1998-01-30 2002-01-24 Julian M. Kupiec Method and system for generating document summaries with location information
US6393054B1 (en) 1998-04-20 2002-05-21 Hewlett-Packard Company System and method for automatically detecting shot boundary and key frame from a compressed video data
US6377704B1 (en) 1998-04-30 2002-04-23 Xerox Corporation Method for inset detection in document layout analysis
US6246413B1 (en) 1998-08-17 2001-06-12 Mgi Software Corporation Method and system for creating panoramas
US20020106124A1 (en) * 1998-12-30 2002-08-08 Shin-Ywan Wang Block selection of table features
US6714249B2 (en) 1998-12-31 2004-03-30 Eastman Kodak Company Producing panoramic digital images by digital camera systems
US20050036681A1 (en) * 1999-03-18 2005-02-17 Choicepoint Asset Company System and method for the secure data entry from document images
US6950753B1 (en) 1999-04-15 2005-09-27 The Trustees Of The Columbia University In The City Of New York Methods for extracting information on interactions between biological entities from natural language text data
US6782505B1 (en) 1999-04-19 2004-08-24 Daniel P. Miranker Method and system for generating structured data from semi-structured data sources
US20050055641A1 (en) * 1999-04-30 2005-03-10 Canon Kabushiki Kaisha Data processing apparatus, data processing method, and storage medium storing computer-readable program
US20070222770A1 (en) 1999-05-25 2007-09-27 Silverbrook Research Pty Ltd Recording and Communication of Handwritten Information
US6687400B1 (en) 1999-06-16 2004-02-03 Microsoft Corporation System and process for improving the uniformity of the exposure and tone of a digital image
US6542635B1 (en) 1999-09-08 2003-04-01 Lucent Technologies Inc. Method for document comparison and classification using document image layout
US6597801B1 (en) 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
US6650774B1 (en) 1999-10-01 2003-11-18 Microsoft Corporation Locally adapted histogram equalization
US20010014229A1 (en) * 1999-12-20 2001-08-16 Hironobu Nakata Digital image forming apparatus
US20020140829A1 (en) 1999-12-31 2002-10-03 Stmicroelectronics, Inc. Still picture format for subsequent picture stitching for forming a panoramic image
US6677981B1 (en) 1999-12-31 2004-01-13 Stmicroelectronics, Inc. Motion play-back of still pictures comprising a panoramic view for simulating perspective
US20010019636A1 (en) * 2000-03-03 2001-09-06 Hewlett-Packard Company Image capture systems
US6985620B2 (en) 2000-03-07 2006-01-10 Sarnoff Corporation Method of pose estimation and model refinement for video representation of a three dimensional scene
US6930703B1 (en) 2000-04-29 2005-08-16 Hewlett-Packard Development Company, L.P. Method and apparatus for automatically capturing a plurality of images during a pan
US6678415B1 (en) 2000-05-12 2004-01-13 Xerox Corporation Document image decoding using an integrated stochastic language model
US20020198909A1 (en) * 2000-06-06 2002-12-26 Microsoft Corporation Method and system for semantically labeling data and providing actions based on semantically labeled data
US6813391B1 (en) 2000-07-07 2004-11-02 Microsoft Corp. System and method for exposure compensation
US6559846B1 (en) 2000-07-07 2003-05-06 Microsoft Corporation System and process for viewing panoramic video
US20040233274A1 (en) 2000-07-07 2004-11-25 Microsoft Corporation Panoramic video
US6701030B1 (en) 2000-07-07 2004-03-02 Microsoft Corporation Deghosting panoramic video
US6766320B1 (en) 2000-08-24 2004-07-20 Microsoft Corporation Search engine with natural language-based robust parsing for user query and relevance feedback learning
JP2002133389A (en) 2000-10-26 2002-05-10 Nippon Telegr & Teleph Corp <Ntt> Data classification learning method, data classification method, data classification learner, data classifier, storage medium with data classification learning program recorded, and recording medium with data classification program recorded
US7184091B2 (en) * 2000-11-07 2007-02-27 Minolta Co., Ltd. Method for connecting split images and image shooting apparatus
US20020113805A1 (en) 2001-01-02 2002-08-22 Jiang Li Image-based walkthrough system and process employing spatial video streaming
US20040111408A1 (en) 2001-01-18 2004-06-10 Science Applications International Corporation Method and system of ranking and clustering for document indexing and retrieval
US7126630B1 (en) 2001-02-09 2006-10-24 Kujin Lee Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
US20070061415A1 (en) 2001-02-16 2007-03-15 David Emmett Automatic display of web content to smaller display devices: improved summarization and navigation
JP2002269499A (en) 2001-03-07 2002-09-20 Masakazu Suzuki Numerical expression recognizing device and numerical expression recognizing method, and character recognizing device and character recognizing method
US20020172425A1 (en) 2001-04-24 2002-11-21 Ramarathnam Venkatesan Recognizer of text-based work
US20050015251A1 (en) 2001-05-08 2005-01-20 Xiaobo Pi High-order entropy error functions for neural classifiers
US7046401B2 (en) * 2001-06-01 2006-05-16 Hewlett-Packard Development Company, L.P. Camera-based document scanning system using multiple-pass mosaicking
US20030010992A1 (en) 2001-07-16 2003-01-16 Motorola, Inc. Semiconductor structure and method for implementing cross-point switch functionality
US7010158B2 (en) 2001-11-13 2006-03-07 Eastman Kodak Company Method and apparatus for three-dimensional scene modeling and reconstruction
US20030103670A1 (en) 2001-11-30 2003-06-05 Bernhard Schoelkopf Interactive images
US20030110150A1 (en) 2001-11-30 2003-06-12 O'neil Patrick Eugene System and method for relational representation of hierarchical data
US20040114799A1 (en) 2001-12-12 2004-06-17 Xun Xu Multiple thresholding for video frame segmentation
US6996295B2 (en) 2002-01-10 2006-02-07 Siemens Corporate Research, Inc. Automatic document reading system for technical drawings
US20030171915A1 (en) 2002-01-14 2003-09-11 Barklund Par Jonas System for normalizing a discourse representation structure and normalized data structure
US7343049B2 (en) * 2002-03-07 2008-03-11 Marvell International Technology Ltd. Method and apparatus for performing optical character recognition (OCR) and text stitching
US20030169925A1 (en) 2002-03-11 2003-09-11 Jean-Pierre Polonowski Character recognition system and method
US7327883B2 (en) 2002-03-11 2008-02-05 Imds Software Inc. Character recognition system and method
US20030194149A1 (en) 2002-04-12 2003-10-16 Irwin Sobel Imaging apparatuses, mosaic image compositing methods, video stitching methods and edgemap generation methods
US20030210817A1 (en) 2002-05-10 2003-11-13 Microsoft Corporation Preprocessing of multi-line rotated electronic ink
US20030215139A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Handwriting layout analysis of freeform digital ink input
US20030215145A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Classification analysis of freeform digital ink input
US20030215138A1 (en) 2002-05-14 2003-11-20 Microsoft Corporation Incremental system for real time digital ink analysis
US20040006742A1 (en) 2002-05-20 2004-01-08 Slocombe David N. Document structure identifier
US20030222984A1 (en) 2002-06-03 2003-12-04 Zhengyou Zhang System and method for calibrating a camera with one-dimensional objects
US20030234772A1 (en) 2002-06-19 2003-12-25 Zhengyou Zhang System and method for whiteboard and audio capture
US7107207B2 (en) 2002-06-19 2006-09-12 Microsoft Corporation Training machine learning by sequential conditional generalized iterative scaling
RU2234126C2 (en) 2002-09-09 2004-08-10 Аби Софтвер Лтд. Method for recognition of text with use of adjustable classifier
US20070018858A1 (en) 2002-10-30 2007-01-25 Nbt Technology, Inc., (A Delaware Corporation) Content-based segmentation scheme for data compression in storage and transmission including hierarchical segment representation
US20050111737A1 (en) 2002-12-12 2005-05-26 Eastman Kodak Company Method for generating customized photo album pages and prints based on people and gender profiles
RU2234734C1 (en) 2002-12-17 2004-08-20 Аби Софтвер Лтд. Method for multi-stage analysis of information of bitmap image
US20040141648A1 (en) 2003-01-21 2004-07-22 Microsoft Corporation Ink divider and associated application program interface
US20040181749A1 (en) 2003-01-29 2004-09-16 Microsoft Corporation Method and apparatus for populating electronic forms from scanned documents
US20040167778A1 (en) 2003-02-20 2004-08-26 Zica Valsan Method for recognizing speech
US20040165786A1 (en) 2003-02-22 2004-08-26 Zhengyou Zhang System and method for converting whiteboard content into an electronic document
US7471830B2 (en) * 2003-03-15 2008-12-30 Samsung Electronics Co., Ltd. Preprocessing device and method for recognizing image characters
US20040186714A1 (en) 2003-03-18 2004-09-23 Aurilab, Llc Speech recognition improvement through post-processing
US20050104902A1 (en) 2003-03-31 2005-05-19 Microsoft Corporation System and method for whiteboard scanning to obtain a high resolution image
US20040189674A1 (en) 2003-03-31 2004-09-30 Zhengyou Zhang System and method for whiteboard scanning to obtain a high resolution image
US7197497B2 (en) 2003-04-25 2007-03-27 Overture Services, Inc. Method and apparatus for machine learning a document relevance function
US20040218799A1 (en) * 2003-05-02 2004-11-04 International Business Machines Corporation Background data recording and use with document processing
US7142723B2 (en) 2003-07-18 2006-11-28 Microsoft Corporation System and process for generating high dynamic range images from multiple exposures of a moving scene
US20050013501A1 (en) 2003-07-18 2005-01-20 Kang Sing Bing System and process for generating high dynamic range images from multiple exposures of a moving scene
US20060224610A1 (en) 2003-08-21 2006-10-05 Microsoft Corporation Electronic Inking Process
US20050044106A1 (en) 2003-08-21 2005-02-24 Microsoft Corporation Electronic ink processing
US20050169555A1 (en) * 2003-11-07 2005-08-04 Yuichi Hasegawa Image processing apparatus and method, and computer program
US20050125746A1 (en) 2003-12-04 2005-06-09 Microsoft Corporation Processing an electronic document for information extraction
US20050154979A1 (en) 2004-01-14 2005-07-14 Xerox Corporation Systems and methods for converting legacy and proprietary documents into extended mark-up language format
US20050168623A1 (en) 2004-01-30 2005-08-04 Stavely Donald J. Digital image production method and apparatus
US20050185047A1 (en) 2004-02-19 2005-08-25 Hii Desmond Toh O. Method and apparatus for providing a combined image
US20050200921A1 (en) 2004-03-09 2005-09-15 Microsoft Corporation System and process for automatic color and exposure correction in an image
US20050201634A1 (en) 2004-03-09 2005-09-15 Microsoft Corporation System and process for automatic exposure correction in an image
US20050210046A1 (en) * 2004-03-18 2005-09-22 Zenodata Corporation Context-based conversion of language to data systems and methods
US20060041605A1 (en) * 2004-04-01 2006-02-23 King Martin T Determining actions involving captured information and electronic content associated with rendered documents
US20060053097A1 (en) * 2004-04-01 2006-03-09 King Martin T Searching and accessing documents on private networks for use with captures from rendered documents
US20110066424A1 (en) * 2004-04-02 2011-03-17 K-Nfb Reading Technology, Inc. Text Stitching From Multiple Images
US20050259866A1 (en) 2004-05-20 2005-11-24 Microsoft Corporation Low resolution OCR for camera acquired documents
US20070055662A1 (en) 2004-08-01 2007-03-08 Shimon Edelman Method and apparatus for learning, recognizing and generalizing sequences
US20060033963A1 (en) * 2004-08-10 2006-02-16 Hirobumi Nishida Image processing device, image processing method, image processing program, and recording medium
US20060039610A1 (en) * 2004-08-19 2006-02-23 Nextace Corporation System and method for automating document search and report generation
US20060045337A1 (en) 2004-08-26 2006-03-02 Microsoft Corporation Spatial recognition and grouping of text and graphics
US20060050969A1 (en) 2004-09-03 2006-03-09 Microsoft Corporation Freeform digital ink annotation recognition
US20060085740A1 (en) 2004-10-20 2006-04-20 Microsoft Corporation Parsing hierarchical lists and outlines
US20060088214A1 (en) 2004-10-22 2006-04-27 Xerox Corporation System and method for identifying and labeling fields of text associated with scanned business documents
US20060095248A1 (en) 2004-11-04 2006-05-04 Microsoft Corporation Machine translation system incorporating syntactic dependency treelets into a statistical framework
US20060253273A1 (en) 2004-11-08 2006-11-09 Ronen Feldman Information extraction using a trainable grammar
US20060120624A1 (en) 2004-12-08 2006-06-08 Microsoft Corporation System and method for video browsing using a cluster index
US20060177150A1 (en) 2005-02-01 2006-08-10 Microsoft Corporation Method and system for combining multiple exposure images having scene and camera motion
US7239805B2 (en) 2005-02-01 2007-07-03 Microsoft Corporation Method and system for combining multiple exposure images having scene and camera motion
US20070150387A1 (en) * 2005-02-25 2007-06-28 Michael Seubert Consistent set of interfaces derived from a business object model
US20060230004A1 (en) 2005-03-31 2006-10-12 Xerox Corporation Systems and methods for electronic document genre classification using document grammars
US20060245641A1 (en) 2005-04-29 2006-11-02 Microsoft Corporation Extracting data from semi-structured information utilizing a discriminative context free grammar
US20060245654A1 (en) 2005-04-29 2006-11-02 Microsoft Corporation Utilizing grammatical parsing for structured layout analysis
US20060280370A1 (en) 2005-06-13 2006-12-14 Microsoft Corporation Application of grammatical parsing to visual recognition tasks
US20070003147A1 (en) 2005-07-01 2007-01-04 Microsoft Corporation Grammatical parsing of document visual structures
US8249344B2 (en) 2005-07-01 2012-08-21 Microsoft Corporation Grammatical parsing of document visual structures
US20070016514A1 (en) * 2005-07-15 2007-01-18 Al-Abdulqader Hisham A System, program product, and methods for managing contract procurement
US7460730B2 (en) 2005-08-04 2008-12-02 Microsoft Corporation Video registration and image sequence stitching
US20070031062A1 (en) 2005-08-04 2007-02-08 Microsoft Corporation Video registration and image sequence stitching
US20080147790A1 (en) * 2005-10-24 2008-06-19 Sanjeev Malaney Systems and methods for intelligent paperless document management
US20070177195A1 (en) * 2005-10-31 2007-08-02 Treber Rebert Queue processor for document servers
US20100033765A1 (en) * 2008-08-05 2010-02-11 Xerox Corporation Document type classification for scanned bitmaps
US9479523B2 (en) 2013-04-28 2016-10-25 Verint Systems Ltd. System and method for automated configuration of intrusion detection systems

Non-Patent Citations (136)

* Cited by examiner, † Cited by third party
Title
"ABBYY FineReader 8.0 Redefines OCR Accuracy and Performance; Intelligent Recognition Enhancements," Aug. 29, 2005, Market Wire.
"ambiguous grammar", Feb. 2, 2001.
"Hidden Markov Models", printed Sep. 23, 2004.
"Photovista Panorama-The world's leading 360° panorama software," retrieved Sep. 7, 2005, Iseemedia, http://www.iseemedia.com/main/products/photovista/panorama.
"Product Features: (PanoWorx Only Features)," retrieved Sep. 7, 2005, http://www.vrtoolbox.com/vrpanoworx.html.
Adobe Acrobat 7.0 professional, Jan. 2005. *
Agarwala, "Interactive Digital Photomontage", in ACM Transactions on Graphics (TOG), vol. 23, No. 3, Aug. 8-12, 2004.
Allan, "Challenges in Information Retrieval and Language Modeling", Report of a Workshop held at the Center for intelligent information Retrieval, University of Massachusetts Amherst, Sep. 2002.
Altun, "Discriminative Learning for Label Sequences via Boosting", Advances in Neural Information Processing Systems, Dec. 9-14, 2002.
Artieres, "Poorly Structured Handwritten Documents Segmentation using Continuous Probabilistic Feature Grammars", Proceedings of the Third International Workshop on Document Layout Interpretation and its Applications, Aug. 2, 2003.
Baecker, "A Principled Design for Scalable Internet Visual Communications with Rich Media, Interactivity, and Structured Archives", Proceedings of the 2003 conference of the Centre for Advanced Studies on Collaborative Research, Oct. 6-9, 2003.
Bergen, "Hierarchical Model-Based Motion Estimation", Second European Conference on Computer Vision (ECCV'92), May 19-22, 1992.
Block, "scanR lets you copy, scan with your cameraphone," Jan. 5, 2006, Engadget.com, http://engadget.com/2006/01/05/scanr-lets-you-copy-scan-with-your-cameraphone.
Blostein, "Applying Compiler Techniques to Diagram Recognition", Aug. 11-15, 2002.
Bogoni, "Extending Dynamic Range of Monochrome and Color Images through Fusion", Proceedings of the 15th International Conference on Pattern Recognition (ICPR 2000), vol. 3, Sep. 3-7, 2000.
Borkar, "Automatically Extracting Structure from Free Text Addresses", Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, Dec. 2000.
Bouckaert, "Low level information extraction: a Bayesian network based approach", In Proceedings of TextML, Jul. 2002.
Boykov, "Fast Approximate Energy Minimization via Graph Cuts", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 23, No. 11, Nov. 2001.
Breuel, "High Performance Document Layout Analysis", Symposium on Document Image Understanding Technology (SDIUT03), Apr. 9-11, 2003.
Brown, "Multi-Image Matching using Multi-Scale Oriented Patches", Technical Report MSR-TR-2004-133, Dec. 2004.
Brown, "Recognising Panoramas"; Proceedings of the Ninth IEEE International Conference on Computer Vision, Oct. 13-16, 2003.
Cardie, "Proposal for an Interactive Environment for Information Extraction", Cornell University Computer Science Technical Report TR98-1702, Jul. 6, 1998.
Caruana, "High Precision Information Extraction", KDD-2000 Workshop on Text Mining, Aug. 20-23, 2000.
Chan, "Mathematical expression recognition: a survey", International Journal on Document Analysis and Recognition, Jun. 12, 2000.
Charniak, "Edge-Based Best-First Chart Parsing*", May 1, 1998.
Charniak, "Statistical Techniques for Natural Language Parsing", Aug. 7, 1997.
Chen, "Detecting and Reading Text in Natural Scenes", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, Jun. 27-Jul. 2, 2004.
Chou, "Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar", SPIE vol. 1199 Visual Communications and Image Processing IV, Nov. 1, 1989.
CN Notice on the Second Office Action for Application No. 200680031501.X, Apr. 6, 2011.
CN Notice on the Third Office Action for Application No. 200680031501.X, Aug. 22, 2011.
Collins, "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms", Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Jul. 2002.
Conway, "Page Grammars and Page Parsing: A Syntactic Approach to Document Layout Recognition", Proceedings of the Second International Conference on Document Analysis and Recognition (ICDAR '93), Oct. 20-22, 1993.
Cortes, "Support-Vector Networks", Machine Learning, Mar. 8, 1995.
Davis, "Mosaics of Scenes with Moving Objects", IEEE Conference on Computer Vision and Pattern Recognition (CVPR'98), Jun. 23-25, 1998.
Debevec, "Recovering High Dynamic Range Radiance Maps from Photographs", Proceedings of the 24th Annual Conference on Computer Graphics (SIGGRAPH '97), Aug. 3-8, 1997.
Durand, "Fast Bilateral Filtering for the Display of High-Dynamic-Range Images", in ACM Transactions on Graphics (TOG), vol. 21, No. 3, Jul. 2002.
Faloutsos, "FastMap: A Fast Algorithm for Indexing, Data-Mining and Visualization of Traditional and Multimedia Datasets", Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, May 22-25, 1995.
Fattal, "Gradient Domain High Dynamic Range Compression", in ACM Transactions on Graphics (TOG), vol. 21, No. 3, Jul. 2002.
Freund, "A decision-theoretic generalization of on-line learning and an application to boosting", Appearing in the proceedings of the Second European Conference on Computational Learning Theory, Mar. 1995.
Freund, "Experiments with a New Boosting Algorithm", Machine Learning: Proceedings of the Thirteenth International Conference, Jul. 3-6, 1996.
Freund, "Large Margin Classification Using the Perceptron Algorithm", Machine Learning, Dec. 1999.
Golfarelli, "A Methodological Framework for Data Warehouse Design", DOLAP '98, ACM First International Workshop on Data Warehousing and OLAP, Nov. 7, 1998.
Graham-Rowe, "Camera phones will be high-precision scanners," Sep. 14, 2005, NewScientist.com News Service, http://www.newscientist.com.
Hartigan, "The K-Means Algorithm", Clustering Algorithms, John Wiley & Sons, pp. 84-107, 1975.
He, "Multiscale Conditional Random Fields for Image Labeling", Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, Jun. 27-Jul. 2, 2004.
Hull, "Recognition of Mathematics Using a Two-Dimensional Trainable Context-free Grammar", Jun. 11, 1996.
Irani, "Improving Resolution by Image Registration", CVGIP: Graphical Models and Image Processing, vol. 53, No. 3, May 1991.
Irani, "Mosaic Based Representations of Video Sequences and Their Applications", Proceedings of the Fifth international Conference on Computer Vision (ICCV '95), Jun. 20-23, 1995.
Irani, "Recovery of Ego-Motion Using Image Stabilization", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Jun. 21-23, 1994.
Irani, "Video Indexing Based on Mosaic Representations"; Proceedings of the IEEE, May 1998.
Jain, "Structure in On-line Documents", 6th International Conference on Document Analysis and Recognition, Sep. 10-13, 2001.
JP Notice of Rejection for Application No. 2008-520352, Dec. 2, 2011.
Kang, "Characterization of Errors in Compositing Panoramic Images", Appears in Conference on Computer Vision and Pattern Recognition (CVPR '97), Jun. 1996.
Kang, "High Dynamic Range Video", ACM Transactions on Graphics, vol. 22, No. 3, Papers from SIGGRAPH, Jul. 28-31, 2003.
Kanungo, "Stochastic Language Models for Style-Directed Layout Analysis of Document Images", IEEE Transactions on Image Processing, vol. 12, No. 5, May 2003.
Kay, "Algorithm Schemata and Data Structures in Syntactic Processing", Oct. 1980.
Kay, "Chart Generation", Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Jun. 26, 1996.
Kim, "Automated Labeling in Document Images", Proc. SPIE vol. 4307, Document Recognition and Retrieval VIII, Jan. 20, 2001.
Klein, "A* Parsing: Fast Exact Viterbi Parse Selection", Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, May 27-Jun. 1, 2003.
Koch, "Calibration of Hand-held Camera Sequences for Plenoptic Modeling", Proceedings of the 7th International Conference on Computer Vision (ICCV '99), Sep. 20-25, 1999.
Kolmogorov, "What Energy Functions Can Be Minimized via Graph Cuts?", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 26, No. 2, Feb. 2004.
Krishnamoorthy, "Syntactic Segmentation and Labeling of Digitized Pages from Technical Journals", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 7, Jul. 1993.
Kristjansson, "Interactive Information Extraction with Constrained Conditional Random Fields", American Association for Artificial Intelligence, Mar. 22-24, 2004.
Lafferty, "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Jun. 28-Jul. 1, 2001.
Langley, "Learning Context-Free Grammars with a Simplicity Bias", Proceedings of the Eleventh European Conference on Machine Learning, May 31-Jun. 2, 2000.
Levin, "Seamless Image Stitching in the Gradient Domain", Eighth European Conference on Computer Vision (ECCV 2004), vol. 4, May 11-14, 2004.
Li, "Improved Video Mosaic Construction by Selecting a Suitable Subset of Video Images", Proceedings of the 27th Australasian Conference on Computer Science, Jan. 2004.
Liang, "An Optimization Methodology for Document Structure Extraction on Latin Character Documents", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 7, Jul. 2001.
Lucas, "An Iterative Image Registration Technique with an Application to Stereo Vision", Proceedings of the 14th DARPA Imaging Understanding Workshop, Apr. 21-23, 1981.
Mann, "On being 'undigital' with digital cameras: Extending Dynamic Range by Combining Differently Exposed Pictures", IS&T's 48th Annual Conference, May 7-11, 1995.
Mann, "Painting with Looks: Photographic: images from Video Using Quantimetric Processing", Proceedings of the 10th ACM International Conference on Multimedia 2002, Dec. 1-6, 2002.
Mann, "Virtual bellows: Constructing High Quality Stills from Video", In the First IEEE International Conference on Image Processing (ICIP '94), vol. 1, Nov. 13-16, 1994.
Manning, "Foundations of Statistical Natural Language Processing", The MIT Press, 1999.
Mao, "Document Structure Analysis Algorithms: A Literature Survey", Proc. SPIE vol. 5010, Document Recognition and Retrieval X, Jan. 20, 2003.
Marcus, "The Penn Treebank: Annotating Predicate Argument Structure", Proceedings of the workshop on Human Language Technology, Mar. 8-11, 1994.
Massey, "Salient stills: Process and practice", IBM Systems Journal, vol. 35, Nos. 3&4, 1996.
Matsakis, "Recognition of Handwritten Mathematical Expressions" May 21, 1999.
McCallum, "Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons", Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL, May 27-Jun. 1, 2003.
McCallum, "Efficiently Inducing Features of Conditional Random Fields", Proceedings of the 19th Conference in Uncertainty in Artificial Intelligence, Aug. 7-10 2003.
McCallum, "Maximum Entropy Markov Models for Information Extraction and Segmentation", Proceedings of the Seventeenth International Conference on Machine Learning, Jun. 29-Jul. 2, 2000.
Microsoft Press® Computer Dictionary, Second Edition, The Comprehensive Standard for Business, School, Library, and Home, 1994.
Miller, "Ambiguity and Constraint in Mathematical Expression Recognition", Proceedings of the Fifteenth National Conference on Artificial Intelligence, Jul. 26-30, 1998.
Mitsunaga, "Radiometric Self Calibration", IEEE Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 2, Session 2-C, Jun. 23-25, 1999.
Mittendorf, "Applying Probabilistic Term Weighting to OCR Text in the Case of a Large Alphabetic Library Catalogue", Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 9-13, 1995.
Moran, "Pen-Based Interaction Techniques for Organizing Material on an Electronic Whiteboard", Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, Oct. 14-17, 1997.
Munson, "System Study of a Dial-Up Text Reading Service for the Blind", Proceedings of the ACM annual conference, vol. 1, Aug. 1972.
Namboodiri, "Robust Segmentation of Unconstrained Online Handwritten Documents", Proceedings of the Fourth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP), Dec. 16-18, 2004.
Navarro, "A Guided Tour to Approximate String Matching", ACM Computing Surveys, vol. 33, No. 1, Mar. 2001.
Nayar, "High Dynamic Range Imaging: Spatially Varying Pixel Exposures", Conference on Computer Vision and Pattern Recognition (CVPR 2000), vol. 1, Jun. 13-15, 2000.
Nistér, "Frame Decimation for Structure and Motion", Second European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (SMILE 2000), Jul. 12, 2000.
Nigam, "Using Maximum Entropy for Text Classification", Machine Learning for Information Filtering at IJCAI, Aug. 1, 1999.
Niyogi, "Knowledge-Based Derivation of Document Logical Structure", Third International Conference on Document Analysis and Recognition, ICDAR, vol. 1, Aug. 14-15, 1995.
Pal, "Probability Models for High Dynamic Range Imaging", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, Jun. 27-Jul. 2, 2004.
Patterson, "Features of Samsung MM-A800", Jul. 20, 2005, Cnet.com, http://reviews.cnet.com.
PCT International Search Report and Written Opinion for Application No. PCT/US06/26140, Reference 313210.05 WO, Jul. 23, 2007.
Peleg, "Panoramic Mosaics by Manifold Projection", Conference on Computer Vision and Pattern Recognition (CVPR '97), Jun. 17-19, 1997.
Perez, "Poisson Image Editing", in ACM Transactions on Graphics (TOG), vol. 22, No. 3, Papers from SIGGRAPH 2003, Jul. 28-31, 2003.
Phillips, "CD-ROM Document Database Standard", Second International Conference on Document Analysis and Recognition, ICDAR, Oct. 20-22, 1993.
Pinto, "Table Extraction Using Conditional Random Fields", Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul. 28-Aug. 1, 2003.
Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, vol. 77, No. 2, Feb. 1989.
Reinhard, "Photographic Tone Reproduction for Digital Images", in ACM Transactions on Graphics (TOG), vol. 21, No. 3, Jul. 2002.
Rozenknop, "Discriminative Models of SCFG and STSG", Seventh International Conference on Text, Speech and Dialogue, Sep. 8-11, 2004.
Ruthruff, "End-User Software Visualizations for Fault Localization", ACM Symposium on Software Visualization, Jun. 11-13, 2003.
Sawhney, "Robust Video Mosaicing through Topology Inference and Local to Global Alignment", 5th European Conference on Computer Vision (ECCV '98), vol. 2, Jun. 2-6, 1998.
Sawhney, "VideoBrush™: Experiences with Consumer Video Mosaicing", Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV '98), Oct. 19-21, 1998.
Sazaklis, "Geometric Decision Trees for Optical Character Recognition", Proceedings of the Thirteenth Annual Symposium on Computational Geometry, Jun. 4-6, 1997.
Scheffer, "Active Hidden Markov Models for Information Extraction", Proceedings of the Fourth International Symposium on Intelligent Data Analysis, Sep. 13-15, 2001.
Schilit, "Beyond Paper: Supporting Active Reading with Free Form Digital Ink Annotations", Proceeding of the CHI'98 Conference on Human Factors in Computing Systems, Apr. 18-23, 1998.
Schodl, "Video Textures", Proceedings of the 27th Annual Conference on Computer Graphics (SIGGRAPH2000), Jul. 23-28, 2000.
Sha, "Shallow Parsing with Conditional Random Fields", Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL, May 27-Jun. 1, 2003.
Shilman, "Discerning Structure from Freeform Handwritten Notes", Proceedings of the Seventh International Conference on Document Analysis and Recognition, vol. 1, Aug. 3-6, 2003.
Shilman, "Recognition and Grouping of Handwritten Text in Diagrams and Equations", Ninth International Workshop on Frontiers in Handwriting Recognition, IWFFIR, Oct. 26-29, 2004.
Shilman, "Spatial Recognition and Grouping of Text and Graphics", Eurographics Workshop on Sketch-Based interfaces and Modeling, Aug. 30-31, 2004.
Shilman, "Statistical Visual Language Models for Ink Parsing", In the 2002 AAAI Spring Symposium, Mar. 25-27, 2002.
Stylos, "Citrine: Providing Intelligent Copy-and-Paste", Proceedings of the 17th annual ACM Symposium on User Interface Software and Technology, UIST, Oct. 24-27, 2004.
Sutton, "Conditional Probabilistic Context-Free Grammars", Mar. 13, 2004.
Szeliski, "Creating Full View Panoramic Image Mosaics and Environment Maps", Proceedings of the 24th Annual Conference on Computer Graphics (SIGGRAPH '97), Aug., 3-8, 1997.
Szeliski, "Image Alignment and Stitching: A Tutorial", Technical Report MSR-TR-2004-92, Sep. 27, 2004.
Szeliski, "Video Mosaics for Virtual Environments", IEEE Computer Graphics and Applications, vol. 16, No. 2, Mar. 1996.
Taskar, "Max-Margin Markov Networks", In Advances in Neural Information Processing Systems 16, Dec. 8-13, 2003.
Taskar, "Max-Margin Parsing", Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, EMNLP, Jul. 25-26, 2004.
Tokuyasu, "Turbo recognition: a statistical approach to layout analysis", Proc. SPIE vol. 4307, Document Recognition and Retrieval VIII, Jan. 20, 2001.
Triggs, "Bundle Adjustment—A Modern Synthesis", Vision Algorithms: Theory and Practice, International Workshop on Vision Algorithms (ICCV '99), Sep. 21-22, 1999.
Tsin, "Statistical Calibration of CCD Imaging Process", IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, Jul. 7-14, 2001.
Uyttendaele, "Eliminating Ghosting and Exposure Artifacts in Image Mosaics", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2, Dec. 8-14, 2001.
Viola, "Rapid Object Detection using a Boosted Cascade of Simple Features", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Dec. 8-14, 2001.
Viswanathan, "Document Recognition: An Attribute Grammar Approach", Proc. SPIE vol. 2660, Document Recognition III, Mar. 1996.
Voss, "Concept Indexing", International Conference on Supporting Group Work, Nov. 14-17, 1999.
Wallach, "Conditional Random Fields: An Introduction", University of Pennsylvania CIS Technical Report MS-CIS-04-21, Feb. 24, 2004.
Whittaker, "SCAN: Designing and evaluating user interfaces to support retrieval from speech archives", Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Aug. 15-19, 1999.
www.archive.org archived web page for http://www.panoramio.com/help on Feb. 6, 2006. *
Ye, "Learning to Parse Hierarchical Lists and Outlines using Conditional Random Fields", Proceedings of the 9th International Workshop on Frontiers in Handwriting Recognition, Oct. 26-29, 2004.
Yoshida, "Development of an Optical Character Film Reader", Review of the Electrical Communication Laboratories, vol. 23, Nos. 11-12, Nov.-Dec. 1975.
Zanibbi, "A Survey of Table Recognition: Models, Observations, Transformations, and Inferences", Oct. 24, 2003.
Zhang and He, "Notetaking with a Camera: Whiteboard Scanning and Image Enhancement," Proceedings of IEEE ICASSP, 2004, http://research.microsoft.com/users/lhe/.
Zhu, "Stereo Mosaics from a Moving Video Camera for Environmental Monitoring", First International Workshop on Digital and Computational Video, Dec. 10, 1999.

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9251123B2 (en) * 2010-11-29 2016-02-02 Hewlett-Packard Development Company, L.P. Systems and methods for converting a PDF file
US20120137207A1 (en) * 2010-11-29 2012-05-31 Heinz Christopher J Systems and methods for converting a pdf file
US9513671B2 (en) 2014-08-01 2016-12-06 Microsoft Technology Licensing, Llc Peripheral retention device
US10191986B2 (en) 2014-08-11 2019-01-29 Microsoft Technology Licensing, Llc Web resource compatibility with web applications
US9705637B2 (en) 2014-08-19 2017-07-11 Microsoft Technology Licensing, Llc Guard band utilization for wireless data communication
WO2016028827A1 (en) * 2014-08-21 2016-02-25 Microsoft Technology Licensing, Llc Enhanced interpretation of character arrangements
US9524429B2 (en) 2014-08-21 2016-12-20 Microsoft Technology Licensing, Llc Enhanced interpretation of character arrangements
US9805483B2 (en) 2014-08-21 2017-10-31 Microsoft Technology Licensing, Llc Enhanced recognition of charted data
US9824269B2 (en) 2014-08-21 2017-11-21 Microsoft Technology Licensing, Llc Enhanced interpretation of character arrangements
US9397723B2 (en) 2014-08-26 2016-07-19 Microsoft Technology Licensing, Llc Spread spectrum wireless over non-contiguous channels
US10129883B2 (en) 2014-08-26 2018-11-13 Microsoft Technology Licensing, Llc Spread spectrum wireless over non-contiguous channels
US10156889B2 (en) 2014-09-15 2018-12-18 Microsoft Technology Licensing, Llc Inductive peripheral retention device
US20210168290A1 (en) * 2017-05-22 2021-06-03 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and non-transitory storage medium
US11627255B2 (en) * 2017-05-22 2023-04-11 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and non-transitory storage medium

Also Published As

Publication number Publication date
US20070177183A1 (en) 2007-08-02

Similar Documents

Publication Title
US8509563B2 (en) Generation of documents from images
US20040139391A1 (en) Integration of handwritten annotations into an electronic original
US9710704B2 (en) Method and apparatus for finding differences in documents
US9922247B2 (en) Comparing documents using a trusted source
US9514103B2 (en) Effective system and method for visual document comparison using localized two-dimensional visual fingerprints
US7840092B2 (en) Medium processing method, copying apparatus, and data filing apparatus
JP4533273B2 (en) Image processing apparatus, image processing method, and program
US8965125B2 (en) Image processing device, method and storage medium for storing and displaying an electronic document
JP4854491B2 (en) Image processing apparatus and control method thereof
US20100128922A1 (en) Automated generation of form definitions from hard-copy forms
WO2014086287A1 (en) Text image automatic dividing method and device, method for automatically dividing handwriting entries
US8953228B1 (en) Automatic assignment of note attributes using partial image recognition results
CN105335453B (en) Image file dividing method
US20210081660A1 (en) Information processing apparatus and non-transitory computer readable medium
JP2008022159A (en) Document processing apparatus and document processing method
US20110170788A1 (en) Method for capturing data from mobile and scanned images of business cards
US8300952B2 (en) Electronic document comparison system and method
JP6694587B2 (en) Image reading device and program
US20160188612A1 (en) Objectification with deep searchability
JP2010003218A (en) Document review support device and method, program and storage medium
CN116343210A (en) File digitization management method and device
CN114821623A (en) Document processing method and device, electronic equipment and storage medium
CN114863459A (en) Out-of-order document sorting method and device and electronic equipment
CN113835598A (en) Information acquisition method and device and electronic equipment
JP5657401B2 (en) Document processing apparatus and document processing program

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROBINSON, MERLE MICHAEL;UYTTENDAELE, MATTHEIU T.;ZHANG, ZHENGYOU;AND OTHERS;SIGNING DATES FROM 20060126 TO 20060202;REEL/FRAME:017488/0089

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROBINSON, MERLE MICHAEL;UYTTENDAELE, MATTHEIU T.;ZHANG, ZHENGYOU;AND OTHERS;REEL/FRAME:017488/0089;SIGNING DATES FROM 20060126 TO 20060202

AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SPELLING OF MATTHEIU T. UYTTENDAELE. PREVIOUSLY RECORDED ON REEL 017488 FRAME 0089. ASSIGNOR(S) HEREBY CONFIRMS THE NAME IS SPELLED AS FOLLOWS ON THE RECORDED ASSIGNMENT: MATTHIEU T. UYTTENDAELE;ASSIGNORS:ROBINSON, MERLE MICHAEL;UYTTENDAELE, MATTHEIU T.;ZHANG, ZHENGYOU;AND OTHERS;SIGNING DATES FROM 20060126 TO 20060202;REEL/FRAME:030612/0344

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034543/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210813