WO2008095226A1 - Bar code reading method - Google Patents

Bar code reading method

Info

Publication number
WO2008095226A1
Authority
WO
WIPO (PCT)
Prior art keywords
waveform
pen
netpage
tag
sensing device
Prior art date
Application number
PCT/AU2008/000046
Other languages
French (fr)
Inventor
Jonathon Leigh Napper
Paul Lapstun
Kia Silverbrook
Original Assignee
Silverbrook Research Pty Ltd
Priority date
Filing date
Publication date
Application filed by Silverbrook Research Pty Ltd filed Critical Silverbrook Research Pty Ltd
Priority to CA002675689A priority Critical patent/CA2675689A1/en
Publication of WO2008095226A1 publication Critical patent/WO2008095226A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545Pens or stylus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G06F3/0317Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
    • G06F3/0321Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface by optically sensing the absolute position with respect to a regularly patterned surface forming a passive digitiser, e.g. pen optically detecting position indicative tags printed on a paper sheet
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712Fixed beam scanning
    • G06K7/10722Photodetector array or CCD scanning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/033Indexing scheme relating to G06F3/033
    • G06F2203/0337Status LEDs integrated in the mouse to provide visual feedback to the user about the status of the input device, the PC, or the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/038Indexing scheme relating to G06F3/038
    • G06F2203/0384Wireless input, i.e. hardware and software details of wireless interface arrangements for pointing devices

Definitions

  • the present invention relates to a method and system for reading bar codes disposed on a surface. It has been developed primarily to enable acquisition of linear bar codes using a camera with a field of view smaller than the bar code.
  • the Applicant has previously described a method of enabling users to access information from a computer system via a printed substrate e.g. paper.
  • the substrate has coded data printed thereon, which is read by an optical sensing device when the user interacts with the substrate using the sensing device.
  • a computer receives interaction data from the sensing device and uses this data to determine what action is being requested by the user. For example, a user may make handwritten input onto a form or make a selection gesture around a printed item. This input is interpreted by the computer system with reference to a page description corresponding to the printed substrate.
  • the present invention provides a method of recovering a waveform representing a linear bar code, the method including the steps of: moving a sensing device relative to the bar code, said sensing device having a two-dimensional image sensor; capturing, using the image sensor, a plurality of two-dimensional partial images of said bar code during said movement; determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code; determining, substantially along the direction, a waveform fragment corresponding to each captured image; determining an alignment between each pair of successive waveform fragments; and recovering, from the aligned waveform fragments, the waveform.
  • a field of view of the image sensor is smaller than the length of the bar code.
  • each partial two-dimensional image of said bar code contains a plurality of bars.
  • a method further comprising the step of: determining a product code by decoding the waveform.
  • a method further comprising the step of: low-pass filtering the captured images in a direction substantially parallel to the bars.
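As an illustrative sketch (the claims do not specify a particular filter), low-pass filtering parallel to the bars can be approximated by averaging pixel columns of an image that has been rotated so the bars run vertically; the helper name `waveform_fragment` is our own, not the patent's:

```python
import numpy as np

def waveform_fragment(img):
    """Low-pass filter in the direction parallel to the bars by averaging
    each pixel column; assumes the image has already been rotated so the
    bars are vertical. Returns a 1-D waveform fragment."""
    return np.asarray(img, dtype=float).mean(axis=0)

# A synthetic image of vertical bars averages down to a clean square wave.
bars = np.tile((np.arange(32) // 4 % 2) * 255.0, (16, 1))
frag = waveform_fragment(bars)
```

Averaging along the bars suppresses noise and print defects without blurring the bar edges that carry the waveform.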
  • the direction is determined using a Hough transform for identifying an orientation of edges in the at least one image.
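A minimal sketch of this orientation step, using a weighted histogram of gradient orientations as a simplified stand-in for the Hough transform named in the claim (all function and variable names here are our own assumptions):

```python
import numpy as np

def bar_direction(img, n_bins=180):
    """Estimate the direction perpendicular to the bars as the dominant
    gradient orientation of the image, via a magnitude-weighted histogram
    (a simplified stand-in for an edge-orientation Hough transform)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)                       # edge strength
    ang = np.arctan2(gy, gx) % np.pi             # orientation mod 180 degrees
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])       # dominant direction, radians

# Vertical bars: intensity varies along x, so the dominant gradient
# (perpendicular to the bars) lies along the x-axis (angle near 0).
bars = np.tile((np.arange(64) // 4 % 2) * 255.0, (32, 1))
theta = bar_direction(bars)
```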
  • the alignment between each pair of successive waveform fragments is determined by performing one or more normalized cross-correlations between each pair.
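The pairwise alignment can be sketched as an exhaustive search over candidate shifts, scoring each by the normalized cross-correlation of the overlapping samples (an illustrative 1-D implementation, not the patent's own code; `min_overlap` is an assumed guard against spurious matches on very short overlaps):

```python
import numpy as np

def align_offset(prev, curr, min_overlap=10):
    """Estimate the shift of `curr` relative to `prev` by maximising the
    normalized cross-correlation over the overlapping samples; assumes
    equal-length 1-D waveform fragments."""
    n = len(prev)
    best_ncc, best_shift = -np.inf, 0
    for shift in range(0, n - min_overlap + 1):
        a = prev[shift:] - prev[shift:].mean()
        b = curr[:n - shift] - curr[:n - shift].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue                              # flat overlap, no evidence
        ncc = float(np.dot(a, b)) / denom
        if ncc > best_ncc:
            best_ncc, best_shift = ncc, shift
    return best_shift

# Two fragments cut from the same smooth random waveform, 15 samples apart.
rng = np.random.default_rng(0)
w = rng.standard_normal(300).cumsum()
prev, curr = w[:80], w[15:95]
shift = align_offset(prev, curr)
```

Mean-subtraction and norm-division make the score insensitive to illumination offset and gain between successive frames, which is the usual motivation for using the normalized (rather than raw) cross-correlation.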
  • the waveform is recovered from the aligned waveform fragments by appending each fragment to a previous fragment, and skipping a region overlapping with said previous fragment.
  • the waveform is recovered from the aligned waveform fragments by: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
  • the average value is a weighted average, whereby sample values captured from a centre portion of each image have a higher weight than sample values captured from an edge portion of each image.
  • the sample values for each image are weighted in accordance with a Gaussian window for said image.
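A sketch of the Gaussian-weighted recovery, assuming the fragment offsets have already been determined by the alignment step (the window width `sigma_frac` is an assumed parameter, not taken from the patent):

```python
import numpy as np

def merge_fragments(fragments, offsets, sigma_frac=0.25):
    """Recover the full waveform by Gaussian-weighted averaging of the
    aligned fragments, so samples near each image centre count more than
    samples near the (more distorted) edges. offsets[i] is the absolute
    start position of fragments[i] in the recovered waveform."""
    n = max(off + len(f) for f, off in zip(fragments, offsets))
    acc, wsum = np.zeros(n), np.zeros(n)
    for frag, off in zip(fragments, offsets):
        m = len(frag)
        x = np.arange(m) - (m - 1) / 2.0
        win = np.exp(-0.5 * (x / (sigma_frac * m)) ** 2)  # Gaussian window
        acc[off:off + m] += win * np.asarray(frag, dtype=float)
        wsum[off:off + m] += win
    return acc / np.maximum(wsum, 1e-12)

# Three overlapping slices of a known ramp reassemble into the ramp.
full = np.linspace(0.0, 1.0, 40)
recovered = merge_fragments([full[0:20], full[10:30], full[20:40]], [0, 10, 20])
```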
  • the waveform is recovered from the aligned waveform fragments by: aligning a current waveform fragment with a partially-constructed waveform constructed using all waveform fragments up to the current fragment.
  • said method is performed only in the absence of a location-indicating tag in a field of view of the image sensor.
  • the present invention provides a sensing device for recovering a waveform representing a linear bar code, said sensing device comprising: a two-dimensional image sensor for capturing a plurality of partial two-dimensional images of said bar code during movement of said sensing device relative to said bar code; and a processor configured for: determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code; determining, substantially along the direction, a waveform fragment corresponding to each captured image; determining an alignment between each pair of successive waveform fragments; and recovering, from the aligned waveform fragments, the waveform.
  • a field of view of the image sensor is smaller than the length of the bar code.
  • a field of view of the image sensor is sufficiently large for capturing an image of a plurality of bars.
  • the processor is further configured for: determining the alignment between each pair of successive waveform fragments by performing one or more normalized cross-correlations between each pair.
  • the processor is further configured for: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
  • a sensing device further comprising: communication means for communicating the waveform to a computer system.
  • said image sensor has a field of view sufficiently large for capturing an image of a whole location-indicating tag disposed on a surface
  • said processor is configured for determining a position of the sensing device relative to the surface using the imaged tag.
  • Figure 1 shows an embodiment of basic netpage architecture
  • Figure 2 is a schematic of the relationship between a sample printed netpage and its online page description
  • Figure 3 shows an embodiment of basic netpage architecture with various alternatives for the relay device
  • Figure 3A illustrates a collection of netpage servers, Web terminals, printers and relays interconnected via a network
  • Figure 4 is a schematic view of a high-level structure of a printed netpage and its online page description
  • Figure 5A is a plan view showing a structure of a netpage tag
  • Figure 5B is a plan view showing a relationship between a set of the tags shown in Figure 5a and a field of view of a netpage sensing device in the form of a netpage pen;
  • Figure 6A is a plan view showing an alternative structure of a netpage tag;
  • Figure 6B is a plan view showing a relationship between a set of the tags shown in Figure 6a and a field of view of a netpage sensing device in the form of a netpage pen
  • Figure 6C is a plan view showing an arrangement of nine of the tags shown in Figure 6a where targets are shared between adjacent tags;
  • Figure 6D is a plan view showing the interleaving and rotation of the symbols of the four codewords of the tag shown in Figure 6a;
  • Figure 7 is a flowchart of a tag image processing and decoding algorithm;
  • Figure 8 is a perspective view of a netpage pen and its associated tag-sensing field-of-view cone
  • Figure 9 is a perspective exploded view of the netpage pen shown in Figure 8.
  • Figure 10 is a schematic block diagram of a pen controller for the netpage pen shown in Figures 8 and 9;
  • Figure 11 is a schematic view of a pen class diagram
  • Figure 12 is a schematic view of a document and page description class diagram
  • Figure 13 is a schematic view of a document and page ownership class diagram
  • Figure 14 is a schematic view of a terminal element specialization class diagram
  • Figure 15 shows a typical EAN-13 bar code symbol
  • Figure 16 shows two successive frames from a bar code scan
  • Figure 17 shows the cross-correlation between the two frames shown in Figure 16.
  • Figure 18 shows the optimal alignment of the two frames shown in Figure 16.
  • Memjet™ is a trade mark of Silverbrook Research Pty Ltd, Australia.
  • the invention is configured to work with the netpage networked computer system, a detailed overview of which follows. It will be appreciated that not every implementation will necessarily embody all or even most of the specific details and extensions discussed below in relation to the basic system. However, the system is described in its most complete form to reduce the need for external reference when attempting to understand the context in which the preferred embodiments and aspects of the present invention operate.
  • the preferred form of the netpage system employs a computer interface in the form of a mapped surface, that is, a physical surface which contains references to a map of the surface maintained in a computer system. The map references can be queried by an appropriate sensing device.
  • the map references may be encoded visibly or invisibly, and defined in such a way that a local query on the mapped surface yields an unambiguous map reference both within the map and among different maps.
  • the computer system can contain information about features on the mapped surface, and such information can be retrieved based on map references supplied by a sensing device used with the mapped surface. The information thus retrieved can take the form of actions which are initiated by the computer system on behalf of the operator in response to the operator's interaction with the surface features.
  • the netpage system relies on the production of, and human interaction with, netpages. These are pages of text, graphics and images printed on ordinary paper, but which work like interactive webpages. Information is encoded on each page using ink which is substantially invisible to the unaided human eye.
  • the ink can be sensed by an optically imaging sensing device and transmitted to the netpage system.
  • the sensing device may take the form of a clicker (for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface).
  • references to a pen or “netpage pen” are provided by way of example only. It will, of course, be appreciated that the pen may take the form of any of the sensing devices described above.
  • active buttons and hyperlinks on each page can be clicked with the sensing device to request information from the network or to signal preferences to a network server.
  • text written by hand on a netpage is automatically recognized and converted to computer text in the netpage system, allowing forms to be filled in.
  • signatures recorded on a netpage are automatically verified, allowing e-commerce transactions to be securely authorized.
  • text on a netpage may be clicked or gestured to initiate a search based on keywords indicated by the user.
  • a printed netpage 1 can represent an interactive form which can be filled in by the user both physically, on the printed page, and "electronically", via communication between the pen and the netpage system.
  • the example shows a "Request” form containing name and address fields and a submit button.
  • the netpage consists of graphic data 2 printed using visible ink, and coded data 3 printed as a collection of tags 4 using invisible ink.
  • the corresponding page description 5, stored on the netpage network, describes the individual elements of the netpage. In particular, it describes the type and spatial extent (zone) of each interactive element (i.e. a text field or button in the example), to allow the netpage system to correctly interpret input via the netpage.
  • the submit button 6, for example, has a zone 7 which corresponds to the spatial extent of the corresponding graphic 8.
  • a netpage sensing device 101 works in conjunction with a netpage relay device 601, which is an Internet-connected device for home, office or mobile use.
  • the pen is wireless and communicates securely with the netpage relay device 601 via a short-range radio link 9.
  • the netpage pen 101 utilises a wired connection, such as a USB or other serial connection, to the relay device 601.
  • the relay device 601 performs the basic function of relaying interaction data to a page server 10, which interprets the interaction data.
  • the relay device 601 may, for example, take the form of a personal computer 601a, a netpage printer 601b or some other relay 601c.
  • the netpage printer 601b is able to deliver, periodically or on demand, personalized newspapers, magazines, catalogs, brochures and other publications, all printed at high quality as interactive netpages.
  • the netpage printer is an appliance which can be, for example, wall-mounted adjacent to an area where the morning news is first consumed, such as in a user's kitchen, near a breakfast table, or near the household's point of departure for the day. It also comes in tabletop, desktop, portable and miniature versions. Netpages printed on-demand at their point of consumption combine the ease-of-use of paper with the timeliness and interactivity of an interactive medium.
  • the netpage relay device 601 may be a portable device, such as a mobile phone or PDA, a laptop or desktop computer, or an information appliance connected to a shared display, such as a TV. If the relay device 601 is not a netpage printer 601b which prints netpages digitally and on demand, the netpages may be printed by traditional analog printing presses, using such techniques as offset lithography, flexography, screen printing, relief printing and rotogravure, as well as by digital printing presses, using techniques such as drop-on-demand inkjet, continuous inkjet, dye transfer, and laser printing.
  • the netpage sensing device 101 interacts with the coded data on a printed netpage 1, or other printed substrate such as a label of a product item 251, and communicates, via a short-range radio link 9, the interaction to the relay 601.
  • the relay 601 sends corresponding interaction data to the relevant netpage page server 10 for interpretation.
  • Raw data received from the sensing device 101 may be relayed directly to the page server 10 as interaction data.
  • the interaction data may be encoded in the form of an interaction URI and transmitted to the page server 10 via a user's web browser.
  • the relay device 601 may, for example, be a mobile phone.
  • Interpretation of the interaction data by the page server 10 may result in direct access to information requested by the user.
  • This information may be sent from the page server 10 to, for example, a user's display device (e.g. a display device associated with the relay device 601).
  • the information sent to the user may be in the form of a webpage constructed by the page server 10 and the webpage may be constructed using information from external web services 200 (e.g. search engines) or local web resources accessible by the page server 10.
  • the page server 10 may access application computer software running on a netpage application server 13.
  • a two-step information retrieval process may be employed. Interaction data is sent from the sensing device 101 to the relay device 601 in the usual way. The relay device 601 then sends the interaction data to the page server 10 for interpretation with reference to the relevant page description 5. Then, the page server 10 forms a request (typically in the form of a request URI) and sends this request URI back to the user's relay device 601. A web browser running on the relay device 601 then sends the request URI to a netpage web server 201, which interprets the request. The netpage web server 201 may interact with local web resources and external web services 200 to interpret the request and construct a webpage.
  • Once the webpage has been constructed by the netpage web server 201, it is transmitted to the web browser running on the user's relay device 601, which typically displays the webpage.
  • This system architecture is particularly useful for performing searching via netpages, as described in our earlier US Patent Application No. 11/672,950 filed on February 8, 2007 (the contents of which are incorporated by reference).
  • the request URI may encode search query terms, which are searched via the netpage web server 201.
  • the netpage relay device 601 can be configured to support any number of sensing devices, and a sensing device can work with any number of netpage relays.
  • each netpage sensing device 101 has a unique identifier. This allows each user to maintain a distinct profile with respect to a netpage page server 10 or application server 13.
  • Digital, on-demand delivery of netpages 1 may be performed by the netpage printer 601b, which exploits the growing availability of broadband Internet access.
  • Netpage publication servers 14 on the netpage network are configured to deliver print- quality publications to netpage printers. Periodical publications are delivered automatically to subscribing netpage printers via pointcasting and multicasting Internet protocols. Personalized publications are filtered and formatted according to individual user profiles.
  • a netpage pen may be registered with a netpage registration server 11 and linked to one or more payment card accounts. This allows e-commerce payments to be securely authorized using the netpage pen.
  • the netpage registration server compares the signature captured by the netpage pen with a previously registered signature, allowing it to authenticate the user's identity to an e-commerce server. Other biometrics can also be used to verify identity.
  • One version of the netpage pen includes fingerprint scanning, verified in a similar way by the netpage registration server.
  • Each model that follows is described as a Unified Modeling Language (UML) class diagram.
  • a class diagram consists of a set of object classes connected by relationships, and two kinds of relationships are of interest here: associations and generalizations.
  • An association represents some kind of relationship between objects, i.e. between instances of classes.
  • a generalization relates actual classes, and can be understood in the following way: if a class is thought of as the set of all objects of that class, and class A is a generalization of class B, then B is simply a subset of A.
  • the UML does not directly support second-order modelling - i.e. classes of classes.
  • Each class is drawn as a rectangle labelled with the name of the class. It contains a list of the attributes of the class, separated from the name by a horizontal line, and a list of the operations of the class, separated from the attribute list by a horizontal line. In the class diagrams which follow, however, operations are never modelled.
  • An association is drawn as a line joining two classes, optionally labelled at either end with the multiplicity of the association.
  • the default multiplicity is one.
  • An asterisk (*) indicates a multiplicity of "many", i.e. zero or more.
  • Each association is optionally labelled with its name, and is also optionally labelled at either end with the role of the corresponding class.
  • An open diamond indicates an aggregation association ("is-part-of"), and is drawn at the aggregator end of the association line.
  • a generalization relationship (“is-a") is drawn as a solid line joining two classes, with an arrow (in the form of an open triangle) at the generalization end.
  • Netpages are the foundation on which a netpage network is built. They provide a paper-based user interface to published information and interactive services.
  • a netpage consists of a printed page (or other surface region) invisibly tagged with references to an online description of the page.
  • the online page description is maintained persistently by the netpage page server 10.
  • the page description describes the visible layout and content of the page, including text, graphics and images. It also describes the input elements on the page, including buttons, hyperlinks, and input fields.
  • a netpage allows markings made with a netpage pen on its surface to be simultaneously captured and processed by the netpage system.
  • each netpage may be assigned a unique page identifier. This page ID has sufficient precision to distinguish between a very large number of netpages.
  • Each reference to the page description is encoded in a printed tag. The tag identifies the unique page on which it appears, and thereby indirectly identifies the page description. The tag also identifies its own position on the page. Characteristics of the tags are described in more detail below.
  • Tags are typically printed in infrared-absorptive ink on any substrate which is infrared-reflective, such as ordinary paper, or in infrared fluorescing ink. Near-infrared wavelengths are invisible to the human eye but are easily sensed by a solid-state image sensor with an appropriate filter.
  • a tag is sensed by a 2D area image sensor in the netpage sensing device, and the tag data is transmitted to the netpage system via the nearest netpage relay device.
  • the pen is wireless and communicates with the netpage relay device via a short-range radio link.
  • Tags are sufficiently small and densely arranged that the sensing device can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless. Tags are error-correctably encoded to make them partially tolerant to surface damage.
  • the netpage page server 10 maintains a unique page instance for each unique printed netpage, allowing it to maintain a distinct set of user-supplied values for input fields in the page description for each printed netpage.
  • the relationship between the page description, the page instance, and the printed netpage is shown in Figure 4.
  • the printed netpage may be part of a printed netpage document 45.
  • the page instance may be associated with both the netpage printer which printed it and, if known, the netpage user who requested it.
  • each tag identifies the region in which it appears, and the location of that tag within the region and an orientation of the tag relative to a substrate on which the tag is printed.
  • a tag may also contain flags which relate to the region as a whole or to the tag.
  • One or more flag bits may, for example, signal a tag sensing device to provide feedback indicative of a function associated with the immediate area of the tag, without the sensing device having to refer to a description of the region.
  • a netpage pen may, for example, illuminate an "active area" LED when in the zone of a hyperlink.
  • each tag typically contains an easily recognized invariant structure which aids initial detection, and which assists in minimizing the effect of any warp induced by the surface or by the sensing process.
  • the tags preferably tile the entire page, and are sufficiently small and densely arranged that the pen can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless.
  • the region to which a tag refers coincides with an entire page, and the region ID encoded in the tag is therefore synonymous with the page ID of the page on which the tag appears.
  • the region to which a tag refers can be an arbitrary subregion of a page or other surface. For example, it can coincide with the zone of an interactive element, in which case the region ID can directly identify the interactive element.
  • Each tag contains 120 bits of information, typically allocated as shown in Table 1. Assuming a maximum tag density of 64 per square inch, a 16-bit tag ID supports a region size of up to 1024 square inches. Larger regions can be mapped continuously without increasing the tag ID precision simply by using abutting regions and maps. The 100-bit region ID allows 2^100 (~10^30, or a million trillion trillion) different regions to be uniquely identified.

2.2 Tag Data Encoding
  • the 120 bits of tag data are redundantly encoded using a (15, 5) Reed-Solomon code. This yields 360 encoded bits consisting of 6 codewords of 15 4-bit symbols each.
  • the (15, 5) code allows up to 5 symbol errors to be corrected per codeword, i.e. it is tolerant of a symbol error rate of up to 33% per codeword.
  • Each 4-bit symbol is represented in a spatially coherent way in the tag, and the symbols of the six codewords are interleaved spatially within the tag. This ensures that a burst error (an error affecting multiple spatially adjacent bits) damages a minimum number of symbols overall and a minimum number of symbols in any one codeword, thus maximising the likelihood that the burst error can be fully corrected.
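A 1-D round-robin interleaving illustrates the principle (the actual tag interleaves symbols spatially in two dimensions, as shown in Figure 6D; this layout and the function names are assumptions for illustration):

```python
def interleave(codewords):
    """Round-robin interleave the symbols of six 15-symbol codewords so
    that any burst of up to six adjacent symbols damages at most one
    symbol in each codeword (1-D illustration only)."""
    assert len(codewords) == 6 and all(len(cw) == 15 for cw in codewords)
    # Stream position i holds symbol i // 6 of codeword i % 6.
    return [codewords[i % 6][i // 6] for i in range(90)]

def deinterleave(stream):
    """Invert interleave(), recovering the six codewords."""
    return [[stream[j * 6 + i] for j in range(15)] for i in range(6)]

# Six dummy codewords of 4-bit symbols (values 0-15).
cws = [[(7 * i + j) % 16 for j in range(15)] for i in range(6)]
stream = interleave(cws)
```

Because consecutive stream positions cycle through all six codewords, a burst confined to six adjacent symbols costs each codeword at most one of its five correctable symbol errors.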
  • Any suitable error-correcting code can be used in place of a (15, 5) Reed-Solomon code, for example a Reed-Solomon code with more or less redundancy, with the same or different symbol and codeword sizes; another block code; or a different kind of code, such as a convolutional code (see, for example, Stephen B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall 1995, the contents of which are herein incorporated by cross-reference).
  • the physical representation of the tag, shown in Figure 5a, includes fixed target structures 15, 16, 17 and variable data areas 18.
  • the fixed target structures allow a sensing device such as the netpage pen to detect the tag and infer its three-dimensional orientation relative to the sensor.
  • the data areas contain representations of the individual bits of the encoded tag data.
  • the tag is rendered at a resolution of 256x256 dots. When printed at 1600 dots per inch this yields a tag with a diameter of about 4 mm.
  • the tag is designed to be surrounded by a "quiet area" of radius 16 dots. Since the quiet area is also contributed by adjacent tags, it only adds 16 dots to the effective diameter of the tag.
  • the tag may include a plurality of target structures.
  • a detection ring 15 allows the sensing device to initially detect the tag. The ring is easy to detect because it is rotationally invariant and because a simple correction of its aspect ratio removes most of the effects of perspective distortion.
  • An orientation axis 16 allows the sensing device to determine the approximate planar orientation of the tag due to the yaw of the sensor. The orientation axis is skewed to yield a unique orientation.
  • Four perspective targets 17 allow the sensing device to infer an accurate two-dimensional perspective transform of the tag and hence an accurate three-dimensional position and orientation of the tag relative to the sensor.
  • In order to support "single-click" interaction with a tagged region via a sensing device, the sensing device must be able to see at least one entire tag in its field of view, no matter where in the region or at what orientation it is positioned.
  • the required diameter of the field of view of the sensing device is therefore a function of the size and spacing of the tags.
  • the tag image processing and decoding performed by a sensing device such as the netpage pen is shown in Figure 7. While a captured image is being acquired from the image sensor, the dynamic range of the image is determined (at 20). The center of the range is then chosen as the binary threshold for the image 21. The image is then thresholded and segmented into connected pixel regions (i.e. shapes 23) (at 22). Shapes which are too small to represent tag target structures are discarded. The size and centroid of each shape is also computed. Binary shape moments 25 are then computed (at 24) for each shape, and these provide the basis for subsequently locating target structures. Central shape moments are by their nature invariant of position, and can be easily made invariant of scale, aspect ratio and rotation.
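The front end of the pipeline described above can be sketched as follows. This is an illustrative reimplementation, not the patent's code: the image is thresholded at the centre of its dynamic range, segmented into connected pixel regions, and each sufficiently large shape's size, centroid and second-order central moments are computed.

```python
import numpy as np

def process_frame(image):
    """Threshold, segment, and compute per-shape moments (sketch only)."""
    lo, hi = int(image.min()), int(image.max())
    threshold = (lo + hi) // 2                    # centre of dynamic range
    binary = image < threshold                    # dark marks on light paper
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    shapes = []
    for sy in range(h):
        for sx in range(w):
            if not binary[sy, sx] or seen[sy, sx]:
                continue
            # flood-fill one 4-connected pixel region
            stack, pixels = [(sy, sx)], []
            seen[sy, sx] = True
            while stack:
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            if len(pixels) < 4:                   # too small to be a target
                continue
            ys, xs = np.array(pixels, float).T
            cy, cx = ys.mean(), xs.mean()         # centroid
            dy, dx = ys - cy, xs - cx
            # central moments are invariant of position and underpin the
            # later aspect/rotation normalization of candidate targets
            shapes.append({'size': len(pixels), 'centroid': (cx, cy),
                           'moments': ((dx * dx).mean(), (dy * dy).mean(),
                                       (dx * dy).mean())})
    return threshold, shapes
```

The minimum-size cutoff (4 pixels here) is an invented placeholder; in practice it would depend on the sensor resolution and the smallest target structure.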
  • the ring target structure 15 is the first to be located (at 26).
  • a ring has the advantage of being very well behaved when perspective-distorted. Matching proceeds by aspect-normalizing and rotation-normalizing each shape's moments. Once its second-order moments are normalized the ring is easy to recognize even if the perspective distortion was significant. The ring's original aspect and rotation 27 together provide a useful approximation of the perspective transform.
  • the axis target structure 16 is the next to be located (at 28). Matching proceeds by applying the ring's normalizations to each shape's moments, and rotation-normalizing the resulting moments. Once its second-order moments are normalized the axis target is easily recognized. Note that one third order moment is required to disambiguate the two possible orientations of the axis.
  • the shape is deliberately skewed to one side to make this possible. Note also that it is only possible to rotation-normalize the axis target after it has had the ring's normalizations applied, since the perspective distortion can hide the axis target's axis.
  • the axis target's original rotation provides a useful approximation of the tag's rotation due to pen yaw 29.
  • the four perspective target structures 17 are the last to be located (at 30). Good estimates of their positions are computed based on their known spatial relationships to the ring and axis targets, the aspect and rotation of the ring, and the rotation of the axis. Matching proceeds by applying the ring's normalizations to each shape's moments. Once their second-order moments are normalized the circular perspective targets are easy to recognize, and the target closest to each estimated position is taken as a match.
  • the original centroids of the four perspective targets are then taken to be the perspective-distorted corners 31 of a square of known size in tag space, and an eight-degree-of-freedom perspective transform 33 is inferred (at 32) based on solving the well-understood equations relating the four tag-space and image-space point pairs (see Heckbert, P., Fundamentals of Texture Mapping and Image Warping, Masters Thesis, Dept. of EECS, U. of California at Berkeley, Technical Report No. UCB/CSD 89/516, June 1989, the contents of which are herein incorporated by cross-reference).
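The well-understood equations referred to above are the standard direct linear formulation of a homography from four point correspondences; a minimal sketch (function names invented):

```python
import numpy as np

def perspective_transform(tag_pts, image_pts):
    """Solve the eight-degree-of-freedom perspective transform mapping
    four tag-space points to four observed image-space points (sketch)."""
    A, b = [], []
    for (x, y), (u, v) in zip(tag_pts, image_pts):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), and similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, x, y):
    """Apply the homography to a tag-space point."""
    u, v, w = H @ (x, y, 1.0)
    return u / w, v / w
```

Four non-collinear correspondences give eight equations in the eight unknowns, so the transform is determined exactly.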
  • the inferred tag-space to image-space perspective transform is used to project (at 36) each known data bit position in tag space into image space where the real-valued position is used to bilinearly interpolate (at 36) the four relevant adjacent pixels in the input image.
  • the previously computed image threshold 21 is used to threshold the result to produce the final bit value 37.
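The projection-and-sampling step described in the two points above can be sketched as follows (illustrative only; `H` is assumed to be the tag-space to image-space homography as a 3x3 matrix):

```python
import numpy as np

def sample_bit(image, H, tag_x, tag_y, threshold):
    """Project a known data-bit position from tag space into image space,
    bilinearly interpolate the four adjacent pixels at the real-valued
    position, and threshold the result to a bit value (sketch only)."""
    u, v, w = H @ (tag_x, tag_y, 1.0)
    x, y = u / w, v / w                      # real-valued image position
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p = image[y0:y0 + 2, x0:x0 + 2].astype(float)
    value = (p[0, 0] * (1 - fx) * (1 - fy) + p[0, 1] * fx * (1 - fy) +
             p[1, 0] * (1 - fx) * fy + p[1, 1] * fx * fy)
    return 1 if value < threshold else 0     # dark dot -> one bit
```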
  • each of the six 60-bit Reed-Solomon codewords is decoded (at 38) to yield 20 decoded bits 39, or 120 decoded bits in total. Note that the codeword symbols are sampled in codeword order, so that codewords are implicitly de-interleaved during the sampling process.
  • the ring target 15 is only sought in a subarea of the image whose relationship to the image guarantees that the ring, if found, is part of a complete tag. If a complete tag is not found and successfully decoded, then no pen position is recorded for the current frame.
  • an alternative strategy involves seeking another tag in the current image.
  • the obtained tag data indicates the identity of the region containing the tag and the position of the tag within the region.
  • An accurate position 35 of the pen nib in the region, as well as the overall orientation 35 of the pen, is then inferred (at 34) from the perspective transform 33 observed on the tag and the known spatial relationship between the image sensor (containing the optical axis of the pen) and the nib (which typically contains the physical axis of the pen).
  • the image sensor is usually offset from the nib.
  • the tag structure described above is designed to support the tagging of non-planar surfaces where a regular tiling of tags may not be possible.
  • a regular tiling of tags is, however, possible on planar surfaces, i.e. surfaces such as sheets of paper and the like.
  • more efficient tag structures can be used which exploit the regular nature of the tiling.
  • Figure 6a shows a square tag 4 with four perspective targets 17.
  • the tag represents sixty 4-bit Reed-Solomon symbols 47, for a total of 240 bits.
  • the tag represents each one bit as a dot 48, and each zero bit by the absence of the corresponding dot.
  • the perspective targets are designed to be shared between adjacent tags, as shown in Figures 6b and 6c.
  • Figure 6b shows a square tiling of 16 tags and the corresponding minimum field of view 193, which must span the diagonals of two tags.
  • Figure 6c shows a square tiling of nine tags, containing all one bits for illustration purposes.
  • 112 bits of tag data are redundantly encoded to produce 240 encoded bits.
  • the four codewords are interleaved spatially within the tag to maximize resilience to burst errors. Assuming a 16-bit tag ID as before, this allows a region ID of up to 92 bits.
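The bit budget stated above can be checked arithmetically. With 240 encoded bits split equally across four codewords of 4-bit symbols, each codeword holds 15 symbols; 112 data bits then imply 7 data symbols per codeword (i.e. a (15, 7) code, an inference from the figures above rather than an explicit statement in the text):

```python
# Consistency check of the square-tag figures (names are illustrative).
SYMBOLS, BITS_PER_SYMBOL, CODEWORDS = 60, 4, 4

encoded_bits = SYMBOLS * BITS_PER_SYMBOL                          # 240
data_bits = 112                                                   # as stated
data_symbols_per_cw = data_bits // CODEWORDS // BITS_PER_SYMBOL   # 7 of 15
region_id_bits = data_bits - 16    # 16-bit tag ID leaves 96 bits;
                                   # "up to 92 bits" of region ID leaves
                                   # 4 bits spare (e.g. for flags)
```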
  • the data-bearing dots 48 of the tag are designed to not overlap their neighbors, so that groups of tags cannot produce structures which resemble targets. This also saves ink.
  • the perspective targets therefore allow detection of the tag, so further targets are not required.
  • Tag image processing proceeds as described in section 1.2.4 above, with the exception that steps 26 and 28 are omitted.
  • the tag may contain an orientation feature to allow disambiguation of the four possible orientations of the tag relative to the sensor.
  • orientation data can be embedded in the tag data.
  • the four codewords can be arranged so that each tag orientation contains one codeword placed at that orientation, as shown in Figure 6d, where each symbol is labelled with the number of its codeword (1-4) and the position of the symbol within the codeword (A-O).
  • Tag decoding then consists of decoding one codeword at each orientation.
  • Each codeword can either contain a single bit indicating whether it is the first codeword, or two bits indicating which codeword it is.
  • the latter approach has the advantage that if, say, the data content of only one codeword is required, then at most two codewords need to be decoded to obtain the desired data. This may be the case if the region ID is not expected to change within a stroke and is thus only decoded at the start of a stroke. Within a stroke only the codeword containing the tag ID is then desired. Furthermore, since the rotation of the sensing device changes slowly and predictably within a stroke, only one codeword typically needs to be decoded per frame.
  • each bit value (or multi-bit value) is typically represented by an explicit glyph, i.e. no bit value is represented by the absence of a glyph.
  • the tag data must then contain marker patterns, and these must be redundantly encoded to allow reliable detection.
  • the overhead of such marker patterns is similar to the overhead of explicit perspective targets.
  • One such scheme uses dots positioned at various points relative to grid vertices to represent different glyphs and hence different multi-bit values (see Anoto Technology Description, Anoto April 2000).
  • Decoding a tag typically results in a region ID, a tag ID, and a tag-relative pen transform. Before the tag ID and the tag-relative pen location can be translated into an absolute location within the tagged region, the location of the tag within the region must be known. This is given by a tag map, a function which maps each tag ID in a tagged region to a corresponding location.
  • the tag map class diagram is shown in Figure 22, as part of the netpage printer class diagram.
  • a tag map reflects the scheme used to tile the surface region with tags, and this can vary according to surface type. When multiple tagged regions share the same tiling scheme and the same tag numbering scheme, they can also share the same tag map.
  • the tag map for a region must be retrievable via the region ID. Thus, given a region ID, a tag ID and a pen transform, the tag map can be retrieved, the tag ID can be translated into an absolute tag location within the region, and the tag-relative pen location can be added to the tag location to yield an absolute pen location within the region.
  • the tag ID may have a structure which assists translation through the tag map. It may, for example, encode Cartesian (x-y) coordinates or polar coordinates, depending on the surface type on which it appears.
  • the tag ID structure is dictated by and known to the tag map, and tag IDs associated with different tag maps may therefore have different structures.
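The tag-map translation described above can be sketched for the simple case of a regular square tiling with row-major tag ID allocation. The class and parameter names are invented for illustration; real tag maps vary by surface type as the text notes:

```python
class SquareTagMap:
    """Tag map for a regular square tiling (illustrative sketch)."""

    def __init__(self, columns, tag_pitch_mm):
        self.columns = columns      # tags per row in the region
        self.pitch = tag_pitch_mm   # centre-to-centre tag spacing

    def tag_location(self, tag_id):
        """Translate a tag ID into an absolute tag location in the region."""
        row, col = divmod(tag_id, self.columns)
        return col * self.pitch, row * self.pitch

def absolute_pen_location(tag_map, tag_id, tag_relative_xy):
    """Add the tag-relative pen location to the tag location."""
    tx, ty = tag_map.tag_location(tag_id)
    rx, ry = tag_relative_xy
    return tx + rx, ty + ry
```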
  • the tags usually function in cooperation with associated visual elements on the netpage. These function as user interactive elements in that a user can interact with the printed page using an appropriate sensing device in order for tag data to be read by the sensing device and for an appropriate response to be generated in the netpage system. Additionally (or alternatively), decoding a tag may be used to provide orientation data indicative of the yaw of the pen relative to the surface. The orientation data may be determined using, for example, the orientation axis 16 described above (Section 2.3) or orientation data embedded in the tag data (Section 2.5).
  • A preferred embodiment of a document and page description class diagram is shown in Figures 12 and 13.
  • a document is described at three levels. At the most abstract level the document 836 has a hierarchical structure whose terminal elements 839 are associated with content objects 840 such as text objects, text style objects, image objects, etc.
  • the document is paginated and otherwise formatted. Formatted terminal elements 835 will in some cases be associated with content objects which are different from those associated with their corresponding terminal elements, particularly where the content objects are style-related.
  • Each printed instance of a document and page is also described separately, to allow input captured through a particular page instance 830 to be recorded separately from input captured through other instances of the same page description.
  • a formatted document 834 consists of a set of formatted page descriptions 5, each of which consists of a set of formatted terminal elements 835. Each formatted element has a spatial extent or zone 58 on the page. This defines the active area of input elements such as hyperlinks and input fields.
  • a document instance 831 corresponds to a formatted document 834. It consists of a set of page instances 830, each of which corresponds to a page description 5 of the formatted document. Each page instance 830 describes a single unique printed netpage 1, and records the page ID 50 of the netpage.
  • a page instance is not part of a document instance if it represents a copy of a page requested in isolation.
  • a page instance consists of a set of terminal element instances 832.
  • An element instance only exists if it records instance-specific information.
  • a hyperlink instance exists for a hyperlink element because it records a transaction ID 55 which is specific to the page instance, and a field instance exists for a field element because it records input specific to the page instance.
  • An element instance does not exist, however, for static elements such as textflows.
  • a terminal element 839 can be a visual element or an input element.
  • a visual element can be a static element 843 or a dynamic element 846.
  • An input element may be, for example, a hyperlink element 844 or a field element 845, as shown in Figure 14. Other types of input element are of course possible, such as input elements which select a particular mode of the pen 101.
  • a page instance has a background field 833 which is used to record any digital ink captured on the page which does not apply to a specific input element.
  • a tag map 811 is associated with each page instance to allow tags on the page to be translated into locations on the page.
  • a netpage network consists of a distributed set of netpage page servers 10, netpage registration servers 11, netpage ID servers 12, netpage application servers 13, and netpage relay devices 601 connected via a network 19 such as the Internet, as shown in Figure 3.
  • the netpage registration server 11 is a server which records relationships between users, pens, printers and applications, and thereby authorizes various network activities. It authenticates users and acts as a signing proxy on behalf of authenticated users in application transactions. It also provides handwriting recognition services.
  • a netpage page server 10 maintains persistent information about page descriptions and page instances.
  • the netpage network includes any number of page servers, each handling a subset of page instances. Since a page server also maintains user input values for each page instance, clients such as netpage relays 601 send netpage input directly to the appropriate page server. The page server interprets any such input relative to the description of the corresponding page.
  • a netpage ID server 12 allocates document IDs 51 on demand, and provides load-balancing of page servers via its ID allocation scheme.
  • a netpage relay 601 uses the Internet Distributed Name System (DNS), or similar, to resolve a netpage page ID 50 into the network address of the netpage page server 10 handling the corresponding page instance.
  • a netpage application server 13 is a server which hosts interactive netpage applications.
  • Netpage servers can be hosted on a variety of network server platforms from manufacturers such as IBM, Hewlett-Packard, and Sun. Multiple netpage servers can run concurrently on a single host, and a single server can be distributed over a number of hosts. Some or all of the functionality provided by netpage servers, and in particular the functionality provided by the ID server and the page server, can also be provided directly in a netpage appliance such as a netpage printer, in a computer workstation, or on a local network.
  • the active sensing device of the netpage system may take the form of a clicker.
  • a pen 101 uses its embedded controller 134 to capture and decode netpage tags from a page via an image sensor.
  • the image sensor is a solid-state device provided with an appropriate filter to permit sensing at only near-infrared wavelengths.
  • the system is able to sense when the nib is in contact with the surface, and the pen is able to sense tags at a sufficient rate to capture human handwriting (i.e. at 200 dpi or greater and 100 Hz or faster).
  • Information captured by the pen may be encrypted and wirelessly transmitted to the printer (or base station), the printer or base station interpreting the data with respect to the (known) page structure.
  • the preferred embodiment of the netpage pen 101 operates both as a normal marking ink pen and as a non-marking stylus (i.e. as a pointer).
  • the marking aspect is not necessary for using the netpage system as a browsing system, such as when it is used as an Internet interface.
  • Each netpage pen is registered with the netpage system and has a unique pen ID 61.
  • Figure 11 shows the netpage pen class diagram, reflecting pen- related information maintained by a registration server 11 on the netpage network.
  • the nib determines its position and orientation relative to the page.
  • the nib is attached to a force sensor, and the force on the nib is interpreted relative to a threshold to indicate whether the pen is "up" or "down".
  • the force may be captured as a continuous value to allow, say, the full dynamics of a signature to be verified.
  • the pen determines the position and orientation of its nib on the netpage by imaging, in the infrared spectrum, an area 193 of the page in the vicinity of the nib. It decodes the nearest tag and computes the position of the nib relative to the tag from the observed perspective distortion on the imaged tag and the known geometry of the pen optics. Although the position resolution of the tag may be low, because the tag density on the page is inversely proportional to the tag size, the adjusted position resolution is quite high, exceeding the minimum resolution required for accurate handwriting recognition. Pen actions relative to a netpage are captured as a series of strokes. A stroke consists of a sequence of time-stamped pen positions on the page, initiated by a pen-down event and completed by the subsequent pen-up event.
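The stroke model described above can be sketched as a minimal data structure. The class and method names are invented; this simply records time-stamped positions between a pen-down event and the subsequent pen-up event:

```python
class StrokeCapture:
    """Capture pen actions as strokes (illustrative sketch)."""

    def __init__(self, page_id):
        self.page_id = page_id   # page ID 50 of the netpage
        self.stroke = None       # stroke currently in progress, if any
        self.strokes = []        # completed strokes

    def pen_down(self, x, y, t):
        """A pen-down event initiates a new stroke."""
        self.stroke = {'page_id': self.page_id, 'points': [(t, x, y)]}

    def pen_move(self, x, y, t):
        """Each frame appends a time-stamped pen position."""
        if self.stroke is not None:
            self.stroke['points'].append((t, x, y))

    def pen_up(self):
        """The subsequent pen-up event completes the stroke."""
        if self.stroke is not None:
            self.strokes.append(self.stroke)
            self.stroke = None
```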
  • a stroke is also tagged with the page ID 50 of the netpage whenever the page ID changes, which, under normal circumstances, is at the commencement of the stroke.
  • Each netpage pen has a current selection 826 associated with it, allowing the user to perform copy and paste operations etc.
  • the selection is timestamped to allow the system to discard it after a defined time period.
  • the current selection describes a region of a page instance. It consists of the most recent digital ink stroke captured through the pen relative to the background area of the page. It is interpreted in an application-specific manner once it is submitted to an application via a selection hyperlink activation.
  • Each pen has a current nib 824. This is the nib last notified by the pen to the system. In the case of the default netpage pen described above, either the marking black ink nib or the non-marking stylus nib is current.
  • Each pen also has a current nib style 825. This is the nib style last associated with the pen by an application, e.g. in response to the user selecting a color from a palette.
  • the default nib style is the nib style associated with the current nib. Strokes captured through a pen are tagged with the current nib style. When the strokes are subsequently reproduced, they are reproduced in the nib style with which they are tagged.
  • the pen 101 may have one or more buttons 209. As described in US Application
  • the button(s) may be used to determine a mode or behavior of the pen, which, in turn, determines how a stroke or, more generally, interaction data is interpreted by the page server 10.
  • a sequence of captured strokes is referred to as digital ink.
  • Digital ink forms the basis for the digital exchange of drawings and handwriting, for online recognition of handwriting, and for online verification of signatures.
  • the pen is typically wireless and transmits digital ink to the relay device 601 via a short-range radio link.
  • the transmitted digital ink is encrypted for privacy and security and packetized for efficient transmission, but is always flushed on a pen-up event to ensure timely handling in the printer.
  • When the pen is out-of-range of a relay device 601, it buffers digital ink in internal memory, which has a capacity of over ten minutes of continuous handwriting. When the pen is once again within range of a relay device, it transfers any buffered digital ink.
  • a pen can be registered with any number of relay devices, but because all state data resides in netpages both on paper and on the network, it is largely immaterial which relay device a pen is communicating with at any particular time.
  • the netpage relay device 601 receives data relating to a stroke from the pen 101 when the pen is used to interact with a netpage 1.
  • the coded data 3 of the tags 4 is read by the pen when it is used to execute a movement, such as a stroke.
  • the data allows the identity of the particular page to be determined and an indication of the positioning of the pen relative to the page to be obtained.
  • Interaction data, typically comprising the page ID 50 and at least one position of the pen, is transmitted to the relay device 601, which resolves, via the DNS, the page ID 50 of the stroke into the network address of the netpage page server 10 which maintains the corresponding page instance 830. It then transmits the stroke to the page server.
  • Each netpage consists of a compact page layout maintained persistently by a netpage page server (see below).
  • the page layout refers to objects such as images, fonts and pieces of text, typically stored elsewhere on the netpage network.
  • the page server When the page server receives the stroke from the pen, it retrieves the page description to which the stroke applies, and determines which element of the page description the stroke intersects. It is then able to interpret the stroke in the context of the type of the relevant element.
  • a “click” is a stroke where the distance and time between the pen down position and the subsequent pen up position are both less than some small maximum.
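The "click" criterion above can be sketched directly; the distance and time maxima here are invented placeholders, since the text only says "some small maximum":

```python
import math

MAX_CLICK_DISTANCE_MM = 0.5   # illustrative threshold
MAX_CLICK_TIME_S = 0.3        # illustrative threshold

def is_click(down_xy, down_t, up_xy, up_t):
    """A stroke is a click if both the distance and the time between
    pen-down and pen-up are below small maxima (sketch only)."""
    distance = math.dist(down_xy, up_xy)
    return (distance <= MAX_CLICK_DISTANCE_MM and
            (up_t - down_t) <= MAX_CLICK_TIME_S)
```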
  • An object which is activated by a click typically requires a click to be activated, and accordingly, a longer stroke is ignored.
  • the failure of a pen action, such as a "sloppy" click, to register may be indicated by the lack of response from the pen's "ok" LED.
  • Hyperlinks and form fields are two kinds of input elements, which may be contained in a netpage page description. Input through a form field can also trigger the activation of an associated hyperlink. These types of input elements are described in further detail in the above-identified patents and patent applications, the contents of which are herein incorporated by cross-reference.
  • a housing 102 in the form of a plastics moulding having walls 103 defining an interior space 104 for mounting the pen components.
  • Mode selector buttons 209 are provided on the housing 102.
  • the pen top 105 is in operation rotatably mounted at one end 106 of the housing 102.
  • a semi-transparent cover 107 is secured to the opposite end 108 of the housing 102.
  • the cover 107 is also of moulded plastics, and is formed from semi-transparent material in order to enable the user to view the status of the LED mounted within the housing 102.
  • the cover 107 includes a main part 109 which substantially surrounds the end 108 of the housing 102 and a projecting portion 110 which projects back from the main part 109 and fits within a corresponding slot 111 formed in the walls 103 of the housing 102.
  • a radio antenna 112 is mounted behind the projecting portion 110, within the housing 102.
  • Screw threads 113 surrounding an aperture 113A on the cover 107 are arranged to receive a metal end piece 114, including corresponding screw threads 115.
  • the metal end piece 114 is removable to enable ink cartridge replacement.
  • a tri-color status LED 116 is mounted within the cover 107.
  • the antenna 112 is also mounted on the flex PCB 117.
  • the status LED 116 is mounted at the top of the pen 101 for good all-around visibility.
  • the pen can operate both as a normal marking ink pen and as a non-marking stylus.
  • An ink pen cartridge 118 with nib 119 and a stylus 120 with stylus nib 121 are mounted side by side within the housing 102. Either the ink cartridge nib 119 or the stylus nib 121 can be brought forward through open end 122 of the metal end piece 114, by rotation of the pen top 105.
  • Respective slider blocks 123 and 124 are mounted to the ink cartridge 118 and stylus 120, respectively.
  • a rotatable cam barrel 125 is secured to the pen top 105 in operation and arranged to rotate therewith.
  • the cam barrel 125 includes a cam 126 in the form of a slot within the walls 181 of the cam barrel.
  • Cam followers 127 and 128 projecting from slider blocks 123 and 124 fit within the cam slot 126.
  • the slider blocks 123 or 124 move relative to each other to project either the pen nib 119 or stylus nib 121 out through the hole 122 in the metal end piece 114.
  • the pen 101 has three states of operation. By turning the top 105 through 90° steps, the three states are:
  • a second flex PCB 129 is mounted on an electronics chassis 130 which sits within the housing 102.
  • the second flex PCB 129 mounts an infrared LED 131 for providing infrared radiation for projection onto the surface.
  • An image sensor 132 is provided mounted on the second flex PCB 129 for receiving reflected radiation from the surface.
  • the second flex PCB 129 also mounts a radio frequency chip 133, which includes an RF transmitter and RF receiver, and a controller chip 134 for controlling operation of the pen 101.
  • An optics block 135 (formed from moulded clear plastics) sits within the cover 107 and projects an infrared beam onto the surface and receives images onto the image sensor 132.
  • Power supply wires 136 connect the components on the second flex PCB 129 to battery contacts 137 which are mounted within the cam barrel 125.
  • a terminal 138 connects to the battery contacts 137 and the cam barrel 125.
  • a three volt rechargeable battery 139 sits within the cam barrel 125 in contact with the battery contacts.
  • An induction charging coil 140 is mounted about the second flex PCB 129 to enable recharging of the battery 139 via induction.
  • the second flex PCB 129 also mounts an infrared LED 143 and infrared photodiode 144 for detecting displacement in the cam barrel 125 when either the stylus 120 or the ink cartridge 118 is used for writing, in order to enable a determination of the force being applied to the surface by the pen nib 119 or stylus nib 121.
  • the IR photodiode 144 detects light from the IR LED 143 via reflectors (not shown) mounted on the slider blocks 123 and 124.
  • top 105 also includes a clip 142 for clipping the pen 101 to a pocket.
  • the pen 101 is arranged to determine the position of its nib (stylus nib 121 or ink cartridge nib 119) by imaging, in the infrared spectrum, an area of the surface in the vicinity of the nib. It records the location data from the nearest location tag, and is arranged to calculate the distance of the nib 121 or 119 from the location tag utilising optics 135 and controller chip 134.
  • the controller chip 134 calculates the orientation (yaw) of the pen using an orientation indicator in the imaged tag, and the nib-to-tag distance from the perspective distortion observed on the imaged tag.
  • the pen 101 can transmit the digital ink data (which is encrypted for security and packaged for efficient transmission) to the computing system.
  • the digital ink data is transmitted as it is formed.
  • digital ink data is buffered within the pen 101 (the pen 101 circuitry includes a buffer arranged to store digital ink data for approximately 12 minutes of the pen motion on the surface) and can be transmitted later.
  • the controller 134 notifies the system of the pen ID, nib ID 175, current absolute time 176, and the last absolute time it obtained from the system prior to going offline.
  • the pen ID allows the computing system to identify the pen when there is more than one pen being operated with the computing system.
  • the nib ID allows the computing system to identify which nib (stylus nib 121 or ink cartridge nib 119) is presently being used.
  • the computing system can vary its operation depending upon which nib is being used. For example, if the ink cartridge nib 119 is being used the computing system may defer producing feedback output because immediate feedback is provided by the ink markings made on the surface. Where the stylus nib 121 is being used, the computing system may produce immediate feedback output. Since a user may change the nib 119, 121 between one stroke and the next, the pen 101 optionally records a nib ID for a stroke 175. This becomes the nib ID implicitly associated with later strokes.
  • Cartridges having particular nib characteristics may be interchangeable in the pen.
  • the pen controller 134 may interrogate a cartridge to obtain the nib ID 175 of the cartridge.
  • the nib ID 175 may be stored in a ROM or a barcode on the cartridge.
  • the controller 134 notifies the system of the nib ID whenever it changes. The system is thereby able to determine the characteristics of the nib used to produce a stroke 175, and is thereby subsequently able to reproduce the characteristics of the stroke itself.
  • the controller chip 134 is mounted on the second flex PCB 129 in the pen 101.
  • Figure 10 is a block diagram illustrating in more detail the architecture of the controller chip 134.
  • Figure 10 also shows representations of the RF chip 133, the image sensor 132, the tri-color status LED 116, the IR illumination LED 131, the IR force sensor LED 143, and the force sensor photodiode 144.
  • the pen controller chip 134 includes a controlling processor 145.
  • Bus 146 enables the exchange of data between components of the controller chip 134.
  • Flash memory 147 and a 512 KB DRAM 148 are also included.
  • An analog-to-digital converter 149 is arranged to convert the analog signal from the force sensor photodiode 144 to a digital signal.
  • An image sensor interface 152 interfaces with the image sensor 132.
  • a transceiver controller 153 and base band circuit 154 are also included to interface with the RF chip 133 which includes an RF circuit 155 and RF resonators and inductors 156 connected to the antenna 112.
  • the controlling processor 145 captures and decodes location data from tags from the surface via the image sensor 132, monitors the force sensor photodiode 144, controls the LEDs 116, 131 and 143, and handles short-range radio communication via the radio transceiver 153. It is a medium-performance (~40 MHz) general-purpose RISC processor.
  • the processor 145, digital transceiver components (transceiver controller 153 and baseband circuit 154), image sensor interface 152, flash memory 147 and 512KB DRAM 148 are integrated in a single controller ASIC.
  • Analog RF components (RF circuit 155 and RF resonators and inductors 156) are provided in the separate RF chip.
  • the image sensor is a 215x215 pixel CCD (such a sensor is produced by
  • the controller ASIC 134 enters a quiescent state after a period of inactivity when the pen 101 is not in contact with a surface. It incorporates a dedicated circuit 150 which monitors the force sensor photodiode 144 and wakes up the controller 134 via the power manager 151 on a pen-down event.
  • the radio transceiver communicates in the unlicensed 900MHz band normally used by cordless telephones, or alternatively in the unlicensed 2.4GHz industrial, scientific and medical (ISM) band, and uses frequency hopping and collision detection to provide interference-free communication.
  • the pen incorporates an Infrared Data Association (IrDA) interface for short-range communication with a base station or netpage printer.
  • the pen 101 includes a pair of orthogonal accelerometers mounted in a plane normal to the axis of the pen 101.
  • the accelerometers 190 are shown in Figures 9 and 10 in ghost outline, although it will be appreciated that other alternative motion sensors may be used instead of the accelerometers 190.
  • each location tag ID can then identify an object of interest rather than a position on the surface. For example, if the object is a user interface input element (e.g. a command button), then the tag ID of each location tag within the area of the input element can directly identify the input element.
  • the acceleration measured by the accelerometers in each of the x and y directions is integrated with respect to time to produce an instantaneous velocity and position.
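As a rough illustration of this double integration (not part of the original disclosure; the sampling interval and function names are assumed for the sketch):

```python
# Illustrative sketch: recovering pen velocity and position by numerically
# integrating 2-axis accelerometer samples.

def integrate_motion(accel_samples, dt):
    """accel_samples: iterable of (ax, ay) accelerations, sampled every
    dt seconds. Returns the list of (x, y) positions after each sample."""
    vx = vy = 0.0  # instantaneous velocity
    px = py = 0.0  # instantaneous position
    path = []
    for ax, ay in accel_samples:
        vx += ax * dt  # integrate acceleration -> velocity
        vy += ay * dt
        px += vx * dt  # integrate velocity -> position
        py += vy * dt
        path.append((px, py))
    return path
```

In practice such dead reckoning drifts over time, which is one reason the pen relies primarily on imaged tags for absolute position.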
  • the Netpage pen 101 would be capable of reading bar codes, including linear bar codes and 2D bar codes, as well as Netpage tags 4.
  • the most obvious such function is the ability to read the UPC/EAN bar codes that appear on consumer packaged goods.
  • the utility of a barcode reading pen is discussed in our earlier US Patent Application No. 10/815,647 filed on April 2, 2004, the contents of which is incorporated herein by reference. It would be particularly desirable for the pen to be capable of reading both Netpage tags 4 and barcodes without any significant design modifications or a requirement to be placed in a special barcode-reading mode.
  • To support the reading of bar coded trade items world-wide, the pen 101 must support the following symbologies: EAN-13, UPC-A, EAN-8 and UPC-E.
  • Figure 15 shows a sample EAN-13 bar code symbol.
  • Each bar code symbol in the EAN/UPC family (with the exception of the simplified UPC-E) consists of the following components: a left quiet zone, a normal guard bar pattern, a left half of symbol characters, a centre guard bar pattern, a right half of symbol characters, a second normal guard bar pattern, and a right quiet zone.
  • Each symbol character encodes a digit between 0 and 9, and consists of two bars and two spaces, each between one and four modules wide, for a fixed total of seven modules per character. Symbol characters are self-checking.
  • the nominal width of a module is 0.33mm. It can be printed with an actual width ranging from 75% to 200% of the nominal width, i.e. from 0.25mm to 0.66mm, but must have a consistent width within a single symbol instance.
  • An EAN-13 bar code symbol directly encodes 12 characters, i.e. six per half. It encodes a thirteenth character in the parities of its six left-half characters.
  • a UPC-A symbol encodes 12 characters.
  • An EAN-8 symbol encodes 8 characters.
  • a UPC-E symbol encodes 6 characters, without a centre guard bar pattern and with a special right guard bar pattern.
  • the nominal width of an EAN-13 and UPC-A bar code is 109 modules (including the left and right quiet zones), or about 36mm. It may be printed with an actual width ranging from 27mm to 72mm.
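Although the calculation is not spelled out above, the thirteenth digit of an EAN-13 code is the standard weighted modulo-10 check digit; a sketch of its computation (illustrative, using the well-known GS1 scheme):

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check digit from the first 12 digits (string).
    Digits in odd positions (1st, 3rd, ...) are weighted 1, digits in
    even positions are weighted 3; the check digit brings the weighted
    total up to a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits12))
    return (10 - total % 10) % 10
```

For example, the 12 digits 590123412345 yield check digit 7, giving the full code 5901234123457.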
  • EAN/UPC bar codes are designed to be imaged under narrowband 670nm illumination, with spaces being generally reflective (light) and bars being generally non-reflective (dark). Since most bar codes are traditionally printed using a broadband-absorptive black ink on a broadband-reflective white substrate, other illumination wavelengths, such as wavelengths in the near infrared, allow most bar codes to be acquired.
  • a Netpage pen 101, which images under near-infrared 810nm illumination, has a native ability to image most bar codes. However, since some bar codes are printed using near-infrared-transparent black, green and blue inks, near-infrared imaging may not be fully adequate in all circumstances, and 670nm imaging is therefore important. Accordingly, the Netpage pen 101 may be supplemented with an additional light source, if required.
  • each Netpage tag 4 typically has a maximum dimension of about 4 mm and the Netpage pen 101 is designed primarily for capturing images of Netpage tags. Accordingly, the image sensor 132 does not have a sufficiently large field of view to acquire an entire bar code from a single image, since the field of view is only about 6 mm when in contact with a surface. Single-image acquisition is therefore an unsatisfactory strategy, because it would require significant design modifications of the pen 101 by incorporation of a separate barcode-reading sensor having a larger field of view. This would inevitably increase the size and adversely affect the ergonomics of the pen.
  • Another strategy for acquiring a linear bar code is to capture a dense series of point samples of the bar code as the reader is swiped across the bar code.
  • a light source from the tip of a barcode reading pen focuses a dot of light onto the bar code.
  • the pen is swiped across the bar code in a steady even motion and a waveform of the barcode is constructed by a photodiode measuring the intensity of light reflected back from the bar code.
  • the dot of light should be equal to or slightly smaller than the narrowest bar width. If the dot is wider than the width of the narrowest bar or space, then the dot will overlap two or more bars at a time so that clear transitions between bars and spaces cannot be distinguished.
  • the dot of light for reading standard bar codes has a diameter of 0.33 mm or less. Since the light source of the Netpage pen 101 illuminates a relatively large area (about 6 mm in diameter) to read Netpage tags 4, then point sampling is an unsatisfactory means for acquiring a linear bar code. It would require incorporation of a separate barcode-reading system in the Netpage pen 101. Moreover, point sampling is a generally unreliable means for acquiring linear bar codes, because it requires a steady swiping motion of constant velocity.
  • An alternative strategy for acquiring a linear bar code is to capture a series of overlapping partial 2D images of the bar code by swiping a Netpage pen 101 across a surface.
  • Frame-based barcode scanning can be used to decode barcodes using a 2D image sensor where the barcode is larger than the field of view of the imaging system.
  • multiple images of the barcode are generated by scanning the image sensor across the barcode and capturing images at a constant rate. These regularly-sampled images are used to generate a set of one-dimensional (1D) frames (or waveform fragments) that represent the sequence of bars visible in the image. The frames (waveform fragments) are then aligned to generate a waveform that represents the entire barcode, which is then used to decode the barcode. Obviously, the entire barcode must be scanned for decoding to be successful.
  • If the maximum scan velocity is exceeded during a scan, the barcode will typically fail to decode.
  • the maximum scan velocity can be increased by increasing the sampling rate, increasing the field of view, or decreasing the required minimum overlap between frames (although this may lead to errors in frame alignment and waveform reconstruction).
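The trade-off just described can be expressed as a simple constraint. The following sketch is illustrative only (the disclosure gives no explicit formula, and the parameter names are assumed): between two successive frames the pen may travel at most the non-overlapping portion of the field of view.

```python
def max_scan_velocity(fov_mm, frame_rate_hz, min_overlap):
    """Maximum scan velocity (mm/s) at which two successive frames still
    overlap by at least min_overlap (a fraction of the field of view)."""
    # distance allowed per frame interval, times frames per second
    return fov_mm * (1.0 - min_overlap) * frame_rate_hz
```

For example, a 6 mm field of view sampled at 100 frames per second with a 50% minimum overlap permits scanning at up to 300 mm/s.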
  • Image Equalization is performed to increase the signal-to-noise ratio (SNR) of the captured images of the barcode.
  • the equalization filter is a band-pass filter that combines a low-pass characteristic for noise suppression (essentially a matched filter) and a high-pass component to attenuate the distortion caused by optical effects and non-uniform illumination.
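One way such a band-pass equalization filter might be realised is as a difference of Gaussians; this is an assumed design chosen for illustration, not the filter actually specified by the disclosure:

```python
import math

def dog_kernel(sigma_noise, sigma_background, radius):
    """Illustrative band-pass (difference-of-Gaussians) kernel: a narrow
    Gaussian low-pass suppresses high-frequency noise, and subtracting a
    wide Gaussian removes slow illumination and optical variation."""
    def gaussian(sigma):
        k = [math.exp(-(x * x) / (2.0 * sigma * sigma))
             for x in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]  # normalize to unit sum
    narrow = gaussian(sigma_noise)
    wide = gaussian(sigma_background)
    # subtracting two unit-sum kernels yields a zero-DC (high-pass) response
    return [a - b for a, b in zip(narrow, wide)]
```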
  • the orientation of the barcode within an image must be estimated, since the imaging system may be arbitrarily oriented with respect to the barcode when an image is captured.
  • the process uses a decimated version of the equalized image.
  • the image is first filtered using an edge-enhancement filter (e.g. Laplacian) to emphasise the edges between adjacent bars.
  • the edge-enhanced image is then processed using a Hough transform, which is used to identify the orientation of the edges in the image.
  • the Hough transform output is first rectified (i.e. each bin is replaced with the absolute value of that bin) to ensure that both positive and negative extrema values generated by the edges during edge enhancement contribute to the maxima calculation.
  • a profile of the transform space along the axis representing the quantized angle is then projected. This profile is smoothed, and the barcode orientation is estimated by finding the bin containing the maximum value.
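A simplified sketch of this orientation estimate follows. The disclosure uses a Hough transform on an edge-enhanced, decimated image; the sketch below substitutes a plain gradient-orientation histogram, which captures the same idea of accumulating rectified edge energy per quantized angle and taking the maximum, but is not the actual implementation:

```python
import math

def estimate_orientation(image, n_angles=180):
    """Estimate the dominant edge orientation (whole degrees, 0-179) of a
    grayscale image given as a list of rows: accumulate rectified gradient
    magnitude per quantized angle and return the angle with the maximum."""
    hist = [0.0] * n_angles
    h, w = len(image), len(image[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]  # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            mag = abs(gx) + abs(gy)  # rectified edge strength
            if mag > 0:
                # the gradient points across the bars; fold the angle into
                # [0, 180) since the barcode is bilaterally symmetric
                angle = math.degrees(math.atan2(gy, gx)) % 180.0
                hist[int(angle * n_angles / 180.0) % n_angles] += mag
    return max(range(n_angles), key=lambda a: hist[a])
```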
  • the estimated orientation of the barcode is in the range 0° to (180° − the quantization step), since the barcode is bilaterally symmetric through its centre axis. This means that it is possible for the orientation to "reverse direction" during successive frames.
  • For example, the orientation may jump from 2° to 178°, a change in direction of 176°, instead of the more likely 4° change in direction to -2° (or 358°).
  • the orientation is constrained to change by less than 90° between successive frames by adding or subtracting increments of 180°.
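The 180° unwrapping rule just stated can be sketched as follows (illustrative only):

```python
def unwrap_orientation(angles):
    """Constrain successive orientation estimates (degrees, each in
    [0, 180)) to change by less than 90 degrees between frames, by adding
    or subtracting multiples of 180."""
    out = [angles[0]]
    for a in angles[1:]:
        prev = out[-1]
        # shift by multiples of 180 until within 90 degrees of the
        # previous frame's orientation
        while a - prev > 90:
            a -= 180
        while a - prev < -90:
            a += 180
        out.append(a)
    return out
```

For example, the sequence [2, 178] unwraps to [2, -2]: a small change rather than an apparent 176° jump.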
  • the barcode orientation is used to generate a 1D frame from the full-resolution equalized image. To do this, a strip of the image oriented in the direction of the barcode is extracted, with the profile of this strip used as the 1D frame. Since the strip is arbitrarily oriented within the image, sub-pixel interpolation (e.g. bi-linear interpolation) must be used to extract the pixel values.
  • the length of the strip is typically the size of the effective field of view within the image, and the width determines the level of smoothing applied to the profile. If the strip is too narrow, the profile will not be sufficiently smoothed, whilst if the strip is too wide, the barcode edges may be blurred due to noise and quantization error in the orientation estimation.
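The strip extraction with bilinear interpolation, together with the frame normalization described next, might be sketched as follows. All names and parameters are illustrative, and the smoothing step is omitted for brevity:

```python
import math

def extract_frame(image, cx, cy, angle_deg, length, width):
    """Extract a 1D frame: sample `length` positions along the barcode
    direction through (cx, cy), averaging `width` bilinearly interpolated
    samples taken across the strip at each position."""
    def bilinear(img, x, y):
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        fx, fy = x - x0, y - y0
        return (img[y0][x0] * (1 - fx) * (1 - fy)
                + img[y0][x0 + 1] * fx * (1 - fy)
                + img[y0 + 1][x0] * (1 - fx) * fy
                + img[y0 + 1][x0 + 1] * fx * fy)
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    nx, ny = -dy, dx  # unit vector across the strip
    frame = []
    for i in range(length):
        t = i - length / 2.0
        vals = [bilinear(image, cx + t * dx + s * nx, cy + t * dy + s * ny)
                for s in range(-(width // 2), width // 2 + 1)]
        frame.append(sum(vals) / len(vals))
    return frame

def normalize_frame(frame):
    """Normalize a frame to zero mean and unit peak so successive frames
    carry approximately equal energy."""
    mean = sum(frame) / len(frame)
    centred = [v - mean for v in frame]
    peak = max(abs(v) for v in centred) or 1.0
    return [v / peak for v in centred]
```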
  • Extracted frames must be normalized to ensure accurate frame alignment. To do this, the signal is smoothed to reduce noise, and any illumination variation is removed. The signal is then normalized to a fixed range, with a zero mean to ensure the energy in successive frames is approximately equal.
Frame Alignment
  • the individual frames must be aligned. If the maximum scan velocity has not been exceeded, the signal in each frame will overlap that of the preceding frame by at least the minimum overlap. Thus, two frames can be aligned by finding the sections of the frames that are similar.
  • the standard method of measuring the similarity of two sequences is cross-correlation.
  • a number of normalized cross-correlations are performed between the two frames, with the frames successively offset in the range 0 to (1 - minimum overlap) * frame size samples.
  • the offset that produces the maximum cross-correlation is selected as the optimal alignment.
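The alignment search described above might be sketched as follows; the function and parameter names are illustrative, and the minimum-overlap fraction is an assumed value:

```python
import math

def best_offset(prev_frame, cur_frame, min_overlap=0.25):
    """Find the shift of cur_frame relative to prev_frame that maximizes
    the normalized cross-correlation over their overlapping sections,
    searching offsets from 0 to (1 - min_overlap) * frame size."""
    n = len(prev_frame)
    best, best_score = 0, float("-inf")
    for off in range(int(n * (1.0 - min_overlap)) + 1):
        a = prev_frame[off:]     # tail of the previous frame
        b = cur_frame[:n - off]  # head of the current frame
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        if na > 0 and nb > 0:
            score = dot / (na * nb)  # normalized cross-correlation
            if score > best_score:
                best, best_score = off, score
    return best
```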
  • Figure 16 shows two successive frames from a barcode scan.
  • Figure 17 shows the cross-correlation between the two frames shown in Figure 16.
  • the graph shown in Figure 18 shows the optimal alignment of the two frames based on the maximum value of the cross-correlations.
  • the offset is dependent on the scan speed, with a slow scan generating small offsets, and a fast scan generating large offsets.
  • the cross-correlations between the frames can generate multiple maxima, each of which represents a possible frame alignment.
  • linear prediction using previous (and possibly subsequent) frame offsets can be used to estimate the most likely offset within a frame, allowing the ambiguity of multiple cross-correlation maxima to be resolved.
Waveform Reconstruction
  • Once the optimal alignment of the frames has been found, the waveform must be reconstructed by piecing the individual frames together into a single, continuous signal. A simple way to do this is to append each frame to the waveform, skipping the region that overlaps with the previous frame. However, this approach is not optimal and often produces discontinuities in the waveform at frame boundaries.
  • An alternative approach is to use the average value of all the sample values in all frames that overlap a sample position within the waveform. Thus, the samples in each frame are simply added to the waveform using the appropriate alignment, and a count of the number of frame samples that contributed to each waveform sample is used to calculate the average sample value once all the frames have been added.
  • This process can be further improved by observing that the quality of the frame data is better near the centre of the frame, due to the effects of illumination and optical distortion in the captured images.
  • the simple average can be replaced with a weighted average that emphasizes the samples near the centre of each frame (e.g. a Gaussian window).
  • a final improvement is to align each frame with the partially reconstructed waveform (i.e. constructed using all the frames up to the current frame) rather than with the previous frame. This reduces the degradation caused by noisy frames and limits the cumulative effect of alignment error caused by quantization and noise.
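The averaging scheme described above might be sketched as follows (illustrative only; the optional weight function corresponds to, for example, a Gaussian window emphasizing frame centres):

```python
def reconstruct(frames, offsets, weight=None):
    """Reconstruct the waveform as an (optionally weighted) average of all
    frame samples covering each waveform position. offsets[i] is the
    absolute start position of frame i in the waveform; weight(j, n)
    optionally weights sample j of an n-sample frame."""
    length = max(off + len(f) for f, off in zip(frames, offsets))
    acc = [0.0] * length   # weighted sum of contributions per sample
    wsum = [0.0] * length  # total weight per sample
    for f, off in zip(frames, offsets):
        n = len(f)
        for j, v in enumerate(f):
            w = weight(j, n) if weight else 1.0
            acc[off + j] += w * v
            wsum[off + j] += w
    return [a / w if w else 0.0 for a, w in zip(acc, wsum)]
```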
  • the bar code can be decoded in the usual way, e.g. to yield a product code

Abstract

A method of recovering a waveform representing a linear bar code, the method including the steps of: moving a sensing device relative to the barcode, said sensing device having a two-dimensional image sensor; capturing, using the image sensor, a plurality of two-dimensional partial images of said bar code during said movement; determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code; determining, substantially along the direction, a waveform fragment corresponding to each captured image; determining an alignment between each pair of successive waveform fragments; and recovering, from the aligned waveform fragments, the waveform.

Description

BAR CODE READING METHOD
FIELD OF INVENTION
The present invention relates to a method and system for reading barcodes disposed on a surface. It has been developed primarily to enable acquisition of linear barcodes using a camera with a field of view smaller than the barcode.
BACKGROUND
The Applicant has previously described a method of enabling users to access information from a computer system via a printed substrate, e.g. paper. The substrate has coded data printed thereon, which is read by an optical sensing device when the user interacts with the substrate using the sensing device. A computer receives interaction data from the sensing device and uses this data to determine what action is being requested by the user. For example, a user may make handwritten input onto a form or make a selection gesture around a printed item. This input is interpreted by the computer system with reference to a page description corresponding to the printed substrate.
It would be desirable to enable the sensing device to read standard linear barcodes without special modifications or selection of a special barcode-reading mode by the user.
SUMMARY OF INVENTION
In a first aspect the present invention provides a method of recovering a waveform representing a linear bar code, the method including the steps of: moving a sensing device relative to the barcode, said sensing device having a two-dimensional image sensor; capturing, using the image sensor, a plurality of two-dimensional partial images of said bar code during said movement; determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code; determining, substantially along the direction, a waveform fragment corresponding to each captured image; determining an alignment between each pair of successive waveform fragments; and recovering, from the aligned waveform fragments, the waveform.
Optionally, a field of view of the image sensor is smaller than the length of the bar code.
Optionally, each partial two-dimensional image of said bar code contains a plurality of bars.
In a further aspect there is provided a method further comprising the step of: determining a product code by decoding the waveform.
In a further aspect there is provided a method further comprising the step of: low-pass filtering the captured images in a direction substantially parallel to the bars.
Optionally, the direction is determined using a Hough transform for identifying an orientation of edges in the at least one image.
Optionally, the alignment between each pair of successive waveform fragments is determined by performing one or more normalized cross-correlations between each pair.
Optionally, the waveform is recovered from the aligned waveform fragments by appending each fragment to a previous fragment, and skipping a region overlapping with said previous fragment.
Optionally, the waveform is recovered from the aligned waveform fragments by: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
Optionally, the average value is a weighted average, whereby sample values captured from a centre portion of each image have a higher weight than sample values captured from an edge portion of each image. Optionally, the sample values for each image are weighted in accordance with a Gaussian window for said image.
Optionally, the waveform is recovered from the aligned waveform fragments by: aligning a current waveform fragment with a partially-constructed waveform constructed using all waveform fragments up to the current fragment.
Optionally, said method is performed only in the absence of a location-indicating tag in a field of view of the image sensor.
In another aspect the present invention provides a sensing device for recovering a waveform representing a linear bar code, said sensing device comprising: a two-dimensional image sensor for capturing a plurality of partial two-dimensional images of said bar code during movement of said sensing device relative to said bar code; and a processor configured for: determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code; determining, substantially along the direction, a waveform fragment corresponding to each captured image; determining an alignment between each pair of successive waveform fragments; and recovering, from the aligned waveform fragments, the waveform.
Optionally, a field of view of the image sensor is smaller than the length of the bar code.
Optionally, a field of view of the image sensor is sufficiently large for capturing an image of a plurality of bars.
Optionally, the processor is further configured for: determining the alignment between each pair of successive waveform fragments by performing one or more normalized cross-correlations between each pair. Optionally, the processor is further configured for: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
In a further aspect there is provided a sensing device further comprising: communication means for communicating the waveform to a computer system.
Optionally, said image sensor has a field of view sufficiently large for capturing an image of a whole location-indicating tag disposed on a surface, and said processor is configured for determining a position of the sensing device relative to the surface using the imaged tag.
BRIEF DESCRIPTION OF DRAWINGS
Preferred and other embodiments of the invention will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Figure 1 shows an embodiment of basic netpage architecture;
Figure 2 is a schematic of the relationship between a sample printed netpage and its online page description;
Figure 3 shows an embodiment of basic netpage architecture with various alternatives for the relay device;
Figure 3A illustrates a collection of netpage servers, Web terminals, printers and relays interconnected via a network;
Figure 4 is a schematic view of a high-level structure of a printed netpage and its online page description;
Figure 5 A is a plan view showing a structure of a netpage tag;
Figure 5B is a plan view showing a relationship between a set of the tags shown in Figure 5A and a field of view of a netpage sensing device in the form of a netpage pen;
Figure 6A is a plan view showing an alternative structure of a netpage tag;
Figure 6B is a plan view showing a relationship between a set of the tags shown in Figure 6A and a field of view of a netpage sensing device in the form of a netpage pen;
Figure 6C is a plan view showing an arrangement of nine of the tags shown in Figure 6A where targets are shared between adjacent tags;
Figure 6D is a plan view showing the interleaving and rotation of the symbols of the four codewords of the tag shown in Figure 6A;
Figure 7 is a flowchart of a tag image processing and decoding algorithm;
Figure 8 is a perspective view of a netpage pen and its associated tag-sensing field-of-view cone;
Figure 9 is a perspective exploded view of the netpage pen shown in Figure 8;
Figure 10 is a schematic block diagram of a pen controller for the netpage pen shown in Figures 8 and 9;
Figure 11 is a schematic view of a pen class diagram;
Figure 12 is a schematic view of a document and page description class diagram;
Figure 13 is a schematic view of a document and page ownership class diagram;
Figure 14 is a schematic view of a terminal element specialization class diagram;
Figure 15 shows a typical EAN-13 bar code symbol;
Figure 16 shows two successive frames from a bar code scan;
Figure 17 shows the cross-correlation between the two frames shown in Figure 16; and
Figure 18 shows the optimal alignment of the two frames shown in Figure 16.
DETAILED DESCRIPTION OF PREFERRED AND OTHER EMBODIMENTS
Note: Memjet™ is a trade mark of Silverbrook Research Pty Ltd, Australia.
In the preferred embodiment, the invention is configured to work with the netpage networked computer system, a detailed overview of which follows. It will be appreciated that not every implementation will necessarily embody all or even most of the specific details and extensions discussed below in relation to the basic system. However, the system is described in its most complete form to reduce the need for external reference when attempting to understand the context in which the preferred embodiments and aspects of the present invention operate. In brief summary, the preferred form of the netpage system employs a computer interface in the form of a mapped surface, that is, a physical surface which contains references to a map of the surface maintained in a computer system. The map references can be queried by an appropriate sensing device. Depending upon the specific implementation, the map references may be encoded visibly or invisibly, and defined in such a way that a local query on the mapped surface yields an unambiguous map reference both within the map and among different maps. The computer system can contain information about features on the mapped surface, and such information can be retrieved based on map references supplied by a sensing device used with the mapped surface. The information thus retrieved can take the form of actions which are initiated by the computer system on behalf of the operator in response to the operator's interaction with the surface features.
In its preferred form, the netpage system relies on the production of, and human interaction with, netpages. These are pages of text, graphics and images printed on ordinary paper, but which work like interactive webpages. Information is encoded on each page using ink which is substantially invisible to the unaided human eye. The ink, however, and thereby the coded data, can be sensed by an optically imaging sensing device and transmitted to the netpage system. The sensing device may take the form of a clicker (for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface). References herein to "pen" or "netpage pen" are provided by way of example only. It will, of course, be appreciated that the pen may take the form of any of the sensing devices described above. In one embodiment, active buttons and hyperlinks on each page can be clicked with the sensing device to request information from the network or to signal preferences to a network server. In one embodiment, text written by hand on a netpage is automatically recognized and converted to computer text in the netpage system, allowing forms to be filled in. In other embodiments, signatures recorded on a netpage are automatically verified, allowing e-commerce transactions to be securely authorized. In other embodiments, text on a netpage may be clicked or gestured to initiate a search based on keywords indicated by the user.
As illustrated in Figure 2, a printed netpage 1 can represent an interactive form which can be filled in by the user both physically, on the printed page, and "electronically", via communication between the pen and the netpage system. The example shows a "Request" form containing name and address fields and a submit button. The netpage consists of graphic data 2 printed using visible ink, and coded data 3 printed as a collection of tags 4 using invisible ink. The corresponding page description 5, stored on the netpage network, describes the individual elements of the netpage. In particular it describes the type and spatial extent (zone) of each interactive element (i.e. text field or button in the example), to allow the netpage system to correctly interpret input via the netpage. The submit button 6, for example, has a zone 7 which corresponds to the spatial extent of the corresponding graphic 8.
As illustrated in Figures 1 and 3, a netpage sensing device 101, such as the pen shown in Figures 8 and 9 and described in more detail below, works in conjunction with a netpage relay device 601, which is an Internet-connected device for home, office or mobile use. The pen is wireless and communicates securely with the netpage relay device 601 via a short-range radio link 9. In an alternative embodiment, the netpage pen 101 utilises a wired connection, such as a USB or other serial connection, to the relay device 601.
The relay device 601 performs the basic function of relaying interaction data to a page server 10, which interprets the interaction data. As shown in Figure 3, the relay device 601 may, for example, take the form of a personal computer 601a, a netpage printer 601b or some other relay 601c.
The netpage printer 601b is able to deliver, periodically or on demand, personalized newspapers, magazines, catalogs, brochures and other publications, all printed at high quality as interactive netpages. Unlike a personal computer, the netpage printer is an appliance which can be, for example, wall-mounted adjacent to an area where the morning news is first consumed, such as in a user's kitchen, near a breakfast table, or near the household's point of departure for the day. It also comes in tabletop, desktop, portable and miniature versions. Netpages printed on-demand at their point of consumption combine the ease-of-use of paper with the timeliness and interactivity of an interactive medium. Alternatively, the netpage relay device 601 may be a portable device, such as a mobile phone or PDA, a laptop or desktop computer, or an information appliance connected to a shared display, such as a TV. If the relay device 601 is not a netpage printer 601b which prints netpages digitally and on demand, the netpages may be printed by traditional analog printing presses, using such techniques as offset lithography, flexography, screen printing, relief printing and rotogravure, as well as by digital printing presses, using techniques such as drop-on-demand inkjet, continuous inkjet, dye transfer, and laser printing. As shown in Figure 3, the netpage sensing device 101 interacts with the coded data on a printed netpage 1, or other printed substrate such as a label of a product item 251, and communicates, via a short-range radio link 9, the interaction to the relay 601. The relay 601 sends corresponding interaction data to the relevant netpage page server 10 for interpretation. Raw data received from the sensing device 101 may be relayed directly to the page server 10 as interaction data. Alternatively, the interaction data may be encoded in the form of an interaction URI and transmitted to the page server 10 via a user's web browser. Of course, the relay device 601 (e.g.
mobile phone) may incorporate a web browser and a display device. Interpretation of the interaction data by the page server 10 may result in direct access to information requested by the user. This information may be sent from the page server 10 to, for example, a user's display device (e.g. a display device associated with the relay device 601). The information sent to the user may be in the form of a webpage constructed by the page server 10 and the webpage may be constructed using information from external web services 200 (e.g. search engines) or local web resources accessible by the page server 10. In some circumstances, the page server 10 may access application computer software running on a netpage application server 13.
Alternatively, and as shown explicitly in Figure 1, a two-step information retrieval process may be employed. Interaction data is sent from the sensing device 101 to the relay device 601 in the usual way. The relay device 601 then sends the interaction data to the page server 10 for interpretation with reference to the relevant page description 5. Then, the page server 10 forms a request (typically in the form of a request URI) and sends this request URI back to the user's relay device 601. A web browser running on the relay device 601 then sends the request URI to a netpage web server 201, which interprets the request. The netpage web server 201 may interact with local web resources and external web services 200 to interpret the request and construct a webpage. Once the webpage has been constructed by the netpage web server 201, it is transmitted to the web browser running on the user's relay device 601, which typically displays the webpage. This system architecture is particularly useful for performing searching via netpages, as described in our earlier US Patent Application No. 11/672,950 filed on February 8, 2007 (the contents of which is incorporated by reference). For example, the request URI may encode search query terms, which are searched via the netpage web server 201.
The netpage relay device 601 can be configured to support any number of sensing devices, and a sensing device can work with any number of netpage relays. In the preferred implementation, each netpage sensing device 101 has a unique identifier. This allows each user to maintain a distinct profile with respect to a netpage page server 10 or application server 13. Digital, on-demand delivery of netpages 1 may be performed by the netpage printer 601b, which exploits the growing availability of broadband Internet access. Netpage publication servers 14 on the netpage network are configured to deliver print- quality publications to netpage printers. Periodical publications are delivered automatically to subscribing netpage printers via pointcasting and multicasting Internet protocols. Personalized publications are filtered and formatted according to individual user profiles.
A netpage pen may be registered with a netpage registration server 11 and linked to one or more payment card accounts. This allows e-commerce payments to be securely authorized using the netpage pen. The netpage registration server compares the signature captured by the netpage pen with a previously registered signature, allowing it to authenticate the user's identity to an e-commerce server. Other biometrics can also be used to verify identity. One version of the netpage pen includes fingerprint scanning, verified in a similar way by the netpage registration server.
NETPAGE SYSTEM ARCHITECTURE

Each object model in the system is described using a Unified Modeling Language
(UML) class diagram. A class diagram consists of a set of object classes connected by relationships, and two kinds of relationships are of interest here: associations and generalizations. An association represents some kind of relationship between objects, i.e. between instances of classes. A generalization relates actual classes, and can be understood in the following way: if a class is thought of as the set of all objects of that class, and class A is a generalization of class B, then B is simply a subset of A. The UML does not directly support second-order modelling - i.e. classes of classes.
Each class is drawn as a rectangle labelled with the name of the class. It contains a list of the attributes of the class, separated from the name by a horizontal line, and a list of the operations of the class, separated from the attribute list by a horizontal line. In the class diagrams which follow, however, operations are never modelled.
An association is drawn as a line joining two classes, optionally labelled at either end with the multiplicity of the association. The default multiplicity is one. An asterisk (*) indicates a multiplicity of "many", i.e. zero or more. Each association is optionally labelled with its name, and is also optionally labelled at either end with the role of the corresponding class. An open diamond indicates an aggregation association ("is-part-of"), and is drawn at the aggregator end of the association line.
A generalization relationship ("is-a") is drawn as a solid line joining two classes, with an arrow (in the form of an open triangle) at the generalization end.
When a class diagram is broken up into multiple diagrams, any class which is duplicated is shown with a dashed outline in all but the main diagram which defines it. It is shown with attributes only where it is defined.

1 NETPAGES
Netpages are the foundation on which a netpage network is built. They provide a paper-based user interface to published information and interactive services.
A netpage consists of a printed page (or other surface region) invisibly tagged with references to an online description of the page. The online page description is maintained persistently by the netpage page server 10. The page description describes the visible layout and content of the page, including text, graphics and images. It also describes the input elements on the page, including buttons, hyperlinks, and input fields. A netpage allows markings made with a netpage pen on its surface to be simultaneously captured and processed by the netpage system.
Multiple netpages (for example, those printed by analog printing presses) can share the same page description. However, to allow input through otherwise identical pages to be distinguished, each netpage may be assigned a unique page identifier. This page ID has sufficient precision to distinguish between a very large number of netpages. Each reference to the page description is encoded in a printed tag. The tag identifies the unique page on which it appears, and thereby indirectly identifies the page description. The tag also identifies its own position on the page. Characteristics of the tags are described in more detail below.
Tags are typically printed in infrared-absorptive ink on any substrate which is infrared-reflective, such as ordinary paper, or in infrared fluorescing ink. Near-infrared wavelengths are invisible to the human eye but are easily sensed by a solid-state image sensor with an appropriate filter.
A tag is sensed by a 2D area image sensor in the netpage sensing device, and the tag data is transmitted to the netpage system via the nearest netpage relay device. The pen is wireless and communicates with the netpage relay device via a short-range radio link. Tags are sufficiently small and densely arranged that the sensing device can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless. Tags are error-correctably encoded to make them partially tolerant to surface damage.
The netpage page server 10 maintains a unique page instance for each unique printed netpage, allowing it to maintain a distinct set of user-supplied values for input fields in the page description for each printed netpage. The relationship between the page description, the page instance, and the printed netpage is shown in Figure 4. The printed netpage may be part of a printed netpage document 45. The page instance may be associated with both the netpage printer which printed it and, if known, the netpage user who requested it.
2 NETPAGE TAGS
2.1 Tag Data Content
In a preferred form, each tag identifies the region in which it appears, and the location of that tag within the region and an orientation of the tag relative to a substrate on which the tag is printed. A tag may also contain flags which relate to the region as a whole or to the tag. One or more flag bits may, for example, signal a tag sensing device to provide feedback indicative of a function associated with the immediate area of the tag, without the sensing device having to refer to a description of the region. A netpage pen may, for example, illuminate an "active area" LED when in the zone of a hyperlink.
As will be more clearly explained below, in a preferred embodiment, each tag typically contains an easily recognized invariant structure which aids initial detection, and which assists in minimizing the effect of any warp induced by the surface or by the sensing process. The tags preferably tile the entire page, and are sufficiently small and densely arranged that the pen can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless.
In a preferred embodiment, the region to which a tag refers coincides with an entire page, and the region ID encoded in the tag is therefore synonymous with the page ID of the page on which the tag appears. In other embodiments, the region to which a tag refers can be an arbitrary subregion of a page or other surface. For example, it can coincide with the zone of an interactive element, in which case the region ID can directly identify the interactive element.
Table 1 - Tag data
Each tag contains 120 bits of information, typically allocated as shown in Table 1. Assuming a maximum tag density of 64 per square inch, a 16-bit tag ID supports a region size of up to 1024 square inches. Larger regions can be mapped continuously without increasing the tag ID precision simply by using abutting regions and maps. The 100-bit region ID allows 2^100 (~10^30, or a million trillion trillion) different regions to be uniquely identified.

2.2 Tag Data Encoding
The 120 bits of tag data are redundantly encoded using a (15, 5) Reed-Solomon code. This yields 360 encoded bits consisting of 6 codewords of 15 4-bit symbols each. The (15, 5) code allows up to 5 symbol errors to be corrected per codeword, i.e. it is tolerant of a symbol error rate of up to 33% per codeword.
Each 4-bit symbol is represented in a spatially coherent way in the tag, and the symbols of the six codewords are interleaved spatially within the tag. This ensures that a burst error (an error affecting multiple spatially adjacent bits) damages a minimum number of symbols overall and a minimum number of symbols in any one codeword, thus maximising the likelihood that the burst error can be fully corrected.
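The encoding parameters and the interleaving property can be sketched as follows. This is a minimal illustration: the round-robin symbol order is an assumption used to show the burst-error behaviour, not the actual spatial placement of symbols within the tag.

```python
# (15, 5) Reed-Solomon parameters: 6 codewords of 15 four-bit symbols.
SYMBOL_BITS, N, K, CODEWORDS = 4, 15, 5, 6
assert CODEWORDS * K * SYMBOL_BITS == 120     # tag data bits
assert CODEWORDS * N * SYMBOL_BITS == 360     # encoded bits

def interleave(codewords):
    # Round-robin: symbol i of every codeword before symbol i + 1 of any.
    return [cw[i] for i in range(N) for cw in codewords]

def deinterleave(symbols):
    return [[symbols[i * CODEWORDS + j] for i in range(N)]
            for j in range(CODEWORDS)]

cws = [[(c, i) for i in range(N)] for c in range(CODEWORDS)]
assert deinterleave(interleave(cws)) == cws
# A burst spanning up to 6 adjacent symbols hits each codeword at most
# once, so each codeword's correction capacity is spent on few symbols.
```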
Any suitable error-correcting code can be used in place of a (15, 5) Reed-Solomon code, for example a Reed-Solomon code with more or less redundancy, with the same or different symbol and codeword sizes; another block code; or a different kind of code, such as a convolutional code (see, for example, Stephen B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall 1995, the contents of which are herein incorporated by cross-reference).

2.3 Physical Tag Structure
The physical representation of the tag, shown in Figure 5a, includes fixed target structures 15, 16, 17 and variable data areas 18. The fixed target structures allow a sensing device such as the netpage pen to detect the tag and infer its three-dimensional orientation relative to the sensor. The data areas contain representations of the individual bits of the encoded tag data.
To achieve proper tag reproduction, the tag is rendered at a resolution of 256x256 dots. When printed at 1600 dots per inch this yields a tag with a diameter of about 4 mm.
At this resolution the tag is designed to be surrounded by a "quiet area" of radius 16 dots. Since the quiet area is also contributed by adjacent tags, it only adds 16 dots to the effective diameter of the tag.
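The stated dimensions can be verified by a quick arithmetic check (an editorial aid, not part of the specification):

```python
# Arithmetic check of the tag dimensions given above.
DOTS = 256            # tag rendered at 256x256 dots
QUIET = 16            # quiet-area radius in dots, shared with neighbours
DPI = 1600
MM_PER_INCH = 25.4

tag_mm = DOTS / DPI * MM_PER_INCH                  # about 4.06 mm
effective_mm = (DOTS + QUIET) / DPI * MM_PER_INCH  # quiet area adds 16 dots
print(round(tag_mm, 2), round(effective_mm, 2))    # 4.06 4.32
```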
The tag may include a plurality of target structures. A detection ring 15 allows the sensing device to initially detect the tag. The ring is easy to detect because it is rotationally invariant and because a simple correction of its aspect ratio removes most of the effects of perspective distortion. An orientation axis 16 allows the sensing device to determine the approximate planar orientation of the tag due to the yaw of the sensor. The orientation axis is skewed to yield a unique orientation. Four perspective targets 17 allow the sensing device to infer an accurate two-dimensional perspective transform of the tag and hence an accurate three-dimensional position and orientation of the tag relative to the sensor.
All target structures are redundantly large to improve their immunity to noise.
In order to support "single-click" interaction with a tagged region via a sensing device, the sensing device must be able to see at least one entire tag in its field of view no matter where in the region or at what orientation it is positioned. The required diameter of the field of view of the sensing device is therefore a function of the size and spacing of the tags.
Thus, if a tag has a circular shape, the minimum diameter of the sensor field of view is obtained when the tags are tiled on an equilateral triangular grid, as shown in Figure 5b.

2.4 Tag Image Processing and Decoding
The tag image processing and decoding performed by a sensing device such as the netpage pen is shown in Figure 7. While a captured image is being acquired from the image sensor, the dynamic range of the image is determined (at 20). The center of the range is then chosen as the binary threshold for the image 21. The image is then thresholded and segmented into connected pixel regions (i.e. shapes 23) (at 22). Shapes which are too small to represent tag target structures are discarded. The size and centroid of each shape is also computed. Binary shape moments 25 are then computed (at 24) for each shape, and these provide the basis for subsequently locating target structures. Central shape moments are by their nature invariant of position, and can be easily made invariant of scale, aspect ratio and rotation.
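The central shape moments mentioned above can be sketched in a few lines (assuming NumPy); central moments are translation-invariant by construction, which is why they are a convenient basis for locating target structures:

```python
import numpy as np

def central_moment(mask, p, q):
    # mask: boolean image of one segmented shape
    ys, xs = np.nonzero(mask)          # pixel coordinates of the shape
    xbar, ybar = xs.mean(), ys.mean()  # centroid
    return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()

shape = np.zeros((8, 8), dtype=bool)
shape[2:6, 3:5] = True                 # a small rectangular blob
m00 = central_moment(shape, 0, 0)      # zeroth moment = area (8 pixels)
mu11 = central_moment(shape, 1, 1)     # zero for an axis-aligned rectangle
```

Scale, aspect and rotation invariance are then obtained by normalising these moments, as the matching steps in the following paragraphs describe.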
The ring target structure 15 is the first to be located (at 26). A ring has the advantage of being very well behaved when perspective-distorted. Matching proceeds by aspect-normalizing and rotation-normalizing each shape's moments. Once its second-order moments are normalized the ring is easy to recognize even if the perspective distortion was significant. The ring's original aspect and rotation 27 together provide a useful approximation of the perspective transform. The axis target structure 16 is the next to be located (at 28). Matching proceeds by applying the ring's normalizations to each shape's moments, and rotation-normalizing the resulting moments. Once its second-order moments are normalized the axis target is easily recognized. Note that one third order moment is required to disambiguate the two possible orientations of the axis. The shape is deliberately skewed to one side to make this possible. Note also that it is only possible to rotation-normalize the axis target after it has had the ring's normalizations applied, since the perspective distortion can hide the axis target's axis. The axis target's original rotation provides a useful approximation of the tag's rotation due to pen yaw 29.
The four perspective target structures 17 are the last to be located (at 30). Good estimates of their positions are computed based on their known spatial relationships to the ring and axis targets, the aspect and rotation of the ring, and the rotation of the axis. Matching proceeds by applying the ring's normalizations to each shape's moments. Once their second-order moments are normalized the circular perspective targets are easy to recognize, and the target closest to each estimated position is taken as a match. The original centroids of the four perspective targets are then taken to be the perspective- distorted corners 31 of a square of known size in tag space, and an eight-degree-of- freedom perspective transform 33 is inferred (at 32) based on solving the well-understood equations relating the four tag-space and image-space point pairs (see Heckbert, P., Fundamentals of Texture Mapping and Image Warping, Masters Thesis, Dept. of EECS, U. of California at Berkeley, Technical Report No. UCB/CSD 89/516, June 1989, the contents of which are herein incorporated by cross-reference).
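The eight-degree-of-freedom perspective transform can be sketched via the standard direct linear system relating four point pairs (assuming NumPy; the corner coordinates below are illustrative, not taken from the patent):

```python
import numpy as np

def homography(src, dst):
    # src, dst: four (x, y) point pairs; h33 is fixed to 1, leaving
    # eight unknowns for the eight equations.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1).reshape(3, 3)

def project(H, x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # tag-space unit square
corners = [(10.0, 12.0), (52.0, 15.0), (49.0, 58.0), (8.0, 55.0)]
H = homography(square, corners)                    # maps tag to image space
assert np.allclose(project(H, 1, 1), corners[2])
```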
The inferred tag-space to image-space perspective transform is used to project (at 36) each known data bit position in tag space into image space where the real-valued position is used to bilinearly interpolate (at 36) the four relevant adjacent pixels in the input image. The previously computed image threshold 21 is used to threshold the result to produce the final bit value 37.
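The bit-sampling step can be sketched as follows (a minimal illustration; the image values and threshold are assumed):

```python
# Sample one data bit: bilinearly interpolate the four pixels adjacent to
# a real-valued image position, then apply the precomputed threshold.
def bilinear(img, x, y):
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def sample_bit(img, x, y, threshold):
    return 1 if bilinear(img, x, y) > threshold else 0

img = [[0, 0, 255], [0, 255, 255], [255, 255, 255]]
assert bilinear(img, 1.5, 1.0) == 255.0
assert sample_bit(img, 0.5, 0.5, 128) == 0   # mean of 0, 0, 0, 255 = 63.75
```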
Once all 360 data bits 37 have been obtained in this way, each of the six 60-bit Reed-Solomon codewords is decoded (at 38) to yield 20 decoded bits 39, or 120 decoded bits in total. Note that the codeword symbols are sampled in codeword order, so that codewords are implicitly de-interleaved during the sampling process.
The ring target 15 is only sought in a subarea of the image whose relationship to the image guarantees that the ring, if found, is part of a complete tag. If a complete tag is not found and successfully decoded, then no pen position is recorded for the current frame.
Given adequate processing power and ideally a non-minimal field of view 193, an alternative strategy involves seeking another tag in the current image.
The obtained tag data indicates the identity of the region containing the tag and the position of the tag within the region. An accurate position 35 of the pen nib in the region, as well as the overall orientation 35 of the pen, is then inferred (at 34) from the perspective transform 33 observed on the tag and the known spatial relationship between the image sensor (containing the optical axis of the pen) and the nib (which typically contains the physical axis of the pen). The image sensor is usually offset from the nib.
2.5 Alternative Tag Structures
The tag structure described above is designed to support the tagging of non- planar surfaces where a regular tiling of tags may not be possible. In the more usual case of planar surfaces where a regular tiling of tags is possible, i.e. surfaces such as sheets of paper and the like, more efficient tag structures can be used which exploit the regular nature of the tiling.
Figure 6a shows a square tag 4 with four perspective targets 17. The tag represents sixty 4-bit Reed-Solomon symbols 47, for a total of 240 bits. The tag represents each one bit as a dot 48, and each zero bit by the absence of the corresponding dot. The perspective targets are designed to be shared between adjacent tags, as shown in Figures 6b and 6c. Figure 6b shows a square tiling of 16 tags and the corresponding minimum field of view 193, which must span the diagonals of two tags. Figure 6c shows a square tiling of nine tags, containing all one bits for illustration purposes. Using a (15, 7) Reed-Solomon code, 112 bits of tag data are redundantly encoded to produce 240 encoded bits. The four codewords are interleaved spatially within the tag to maximize resilience to burst errors. Assuming a 16-bit tag ID as before, this allows a region ID of up to 92 bits.
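The bit budget of this square tag works out as follows. The four-bit flag count is an assumption carried over from the 120-bit tag described earlier (it makes the stated 92-bit region ID capacity come out exactly, but the patent does not restate it here):

```python
SYMBOL_BITS = 4
N, K = 15, 7                      # (15, 7) Reed-Solomon code
CODEWORDS = 4

encoded_bits = CODEWORDS * N * SYMBOL_BITS    # 60 symbols in the tag
data_bits = CODEWORDS * K * SYMBOL_BITS
TAG_ID_BITS, FLAG_BITS = 16, 4    # flag width is an assumption
region_id_bits = data_bits - TAG_ID_BITS - FLAG_BITS
assert (encoded_bits, data_bits, region_id_bits) == (240, 112, 92)
```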
The data-bearing dots 48 of the tag are designed to not overlap their neighbors, so that groups of tags cannot produce structures which resemble targets. This also saves ink. The perspective targets therefore allow detection of the tag, so further targets are not required. Tag image processing proceeds as described in section 2.4 above, with the exception that steps 26 and 28 are omitted.
Although the tag may contain an orientation feature to allow disambiguation of the four possible orientations of the tag relative to the sensor, it is also possible to embed orientation data in the tag data. For example, the four codewords can be arranged so that each tag orientation contains one codeword placed at that orientation, as shown in Figure 6d, where each symbol is labelled with the number of its codeword (1-4) and the position of the symbol within the codeword (A-O). Tag decoding then consists of decoding one codeword at each orientation. Each codeword can either contain a single bit indicating whether it is the first codeword, or two bits indicating which codeword it is. The latter approach has the advantage that if, say, the data content of only one codeword is required, then at most two codewords need to be decoded to obtain the desired data. This may be the case if the region ID is not expected to change within a stroke and is thus only decoded at the start of a stroke. Within a stroke only the codeword containing the tag ID is then desired. Furthermore, since the rotation of the sensing device changes slowly and predictably within a stroke, only one codeword typically needs to be decoded per frame.
It is possible to dispense with perspective targets altogether and instead rely on the data representation being self-registering. In this case each bit value (or multi-bit value) is typically represented by an explicit glyph, i.e. no bit value is represented by the absence of a glyph. This ensures that the data grid is well-populated, and thus allows the grid to be reliably identified and its perspective distortion detected and subsequently corrected during data sampling. To allow tag boundaries to be detected, each tag's data must contain a marker pattern, and these must be redundantly encoded to allow reliable detection. The overhead of such marker patterns is similar to the overhead of explicit perspective targets. One such scheme uses dots positioned at various points relative to grid vertices to represent different glyphs and hence different multi-bit values (see Anoto Technology Description, Anoto April 2000).
Additional tag structures are disclosed in US Patent 6929186 ("Orientation- indicating machine-readable coded data") filed by the applicant or assignee of the present invention, the contents of which are herein incorporated by reference.
2.6 Tag Map
Decoding a tag typically results in a region ID, a tag ID, and a tag-relative pen transform. Before the tag ID and the tag-relative pen location can be translated into an absolute location within the tagged region, the location of the tag within the region must be known. This is given by a tag map, a function which maps each tag ID in a tagged region to a corresponding location. The tag map class diagram is shown in Figure 22, as part of the netpage printer class diagram.
A tag map reflects the scheme used to tile the surface region with tags, and this can vary according to surface type. When multiple tagged regions share the same tiling scheme and the same tag numbering scheme, they can also share the same tag map. The tag map for a region must be retrievable via the region ID. Thus, given a region ID, a tag ID and a pen transform, the tag map can be retrieved, the tag ID can be translated into an absolute tag location within the region, and the tag-relative pen location can be added to the tag location to yield an absolute pen location within the region.
The tag ID may have a structure which assists translation through the tag map. It may, for example, encode cartesian (x-y) coordinates or polar coordinates, depending on the surface type on which it appears. The tag ID structure is dictated by and known to the tag map, and tag IDs associated with different tag maps may therefore have different structures.
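The translation chain described in this section can be sketched as follows. The class name, row-major tag numbering, and the example pitch are illustrative assumptions; only the region ID → tag map → absolute location flow comes from the text above:

```python
# Region ID -> tag map; tag ID -> tag location; plus tag-relative offset.
class CartesianTagMap:
    def __init__(self, columns, tag_pitch_mm):
        self.columns, self.pitch = columns, tag_pitch_mm

    def tag_location(self, tag_id):
        # Row-major tag numbering assumed for this cartesian map.
        return (tag_id % self.columns * self.pitch,
                tag_id // self.columns * self.pitch)

# The tag map for a region is retrievable via the region ID.
tag_maps = {0xCAFE: CartesianTagMap(columns=50, tag_pitch_mm=4.0)}

def pen_location(region_id, tag_id, rel_x, rel_y):
    tx, ty = tag_maps[region_id].tag_location(tag_id)
    return tx + rel_x, ty + rel_y    # absolute pen location in the region

assert pen_location(0xCAFE, 51, 1.5, 0.5) == (5.5, 4.5)
```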
With the tagging scheme described above, the tags usually function in cooperation with associated visual elements on the netpage. These function as user interactive elements in that a user can interact with the printed page using an appropriate sensing device in order for tag data to be read by the sensing device and for an appropriate response to be generated in the netpage system. Additionally (or alternatively), decoding a tag may be used to provide orientation data indicative of the yaw of the pen relative to the surface. The orientation data may be determined using, for example, the orientation axis 16 described above (Section 2.3) or orientation data embedded in the tag data (Section 2.5).
3 DOCUMENT AND PAGE DESCRIPTIONS
A preferred embodiment of a document and page description class diagram is shown in Figures 12 and 13.
In the netpage system a document is described at three levels. At the most abstract level the document 836 has a hierarchical structure whose terminal elements 839 are associated with content objects 840 such as text objects, text style objects, image objects, etc. Once the document is printed on a printer with a particular page size, the document is paginated and otherwise formatted. Formatted terminal elements 835 will in some cases be associated with content objects which are different from those associated with their corresponding terminal elements, particularly where the content objects are style-related. Each printed instance of a document and page is also described separately, to allow input captured through a particular page instance 830 to be recorded separately from input captured through other instances of the same page description.
The presence of the most abstract document description on the page server allows a copy of a document to be printed without being forced to accept the source document's specific format. The user or a printing press may be requesting a copy for a printer with a different page size, for example. Conversely, the presence of the formatted document description on the page server allows the page server to efficiently interpret user actions on a particular printed page. A formatted document 834 consists of a set of formatted page descriptions 5, each of which consists of a set of formatted terminal elements 835. Each formatted element has a spatial extent or zone 58 on the page. This defines the active area of input elements such as hyperlinks and input fields.
A document instance 831 corresponds to a formatted document 834. It consists of a set of page instances 830, each of which corresponds to a page description 5 of the formatted document. Each page instance 830 describes a single unique printed netpage 1, and records the page ID 50 of the netpage. A page instance is not part of a document instance if it represents a copy of a page requested in isolation. A page instance consists of a set of terminal element instances 832. An element instance only exists if it records instance-specific information. Thus, a hyperlink instance exists for a hyperlink element because it records a transaction ID 55 which is specific to the page instance, and a field instance exists for a field element because it records input specific to the page instance. An element instance does not exist, however, for static elements such as textflows.
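The instance hierarchy above can be sketched minimally (the Python names are assumed; the point illustrated is that an element instance exists only when it records instance-specific input):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FieldInstance:
    value: str = ""                   # input specific to one page instance

@dataclass
class PageInstance:
    page_id: int                      # page ID 50 of the printed netpage
    field_instances: Dict[str, FieldInstance] = field(default_factory=dict)

@dataclass
class DocumentInstance:
    page_instances: List[PageInstance] = field(default_factory=list)

doc = DocumentInstance([PageInstance(page_id=1), PageInstance(page_id=2)])
# Input on page 1 creates a field instance there; page 2 stays untouched.
doc.page_instances[0].field_instances["name"] = FieldInstance("Alice")
assert "name" not in doc.page_instances[1].field_instances
```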
A terminal element 839 can be a visual element or an input element. A visual element can be a static element 843 or a dynamic element 846. An input element may be, for example, a hyperlink element 844 or a field element 845, as shown in Figure 14. Other types of input element are of course possible, such as input elements which select a particular mode of the pen 101.
A page instance has a background field 833 which is used to record any digital ink captured on the page which does not apply to a specific input element.
In the preferred form of the invention, a tag map 811 is associated with each page instance to allow tags on the page to be translated into locations on the page.
4 THE NETPAGE NETWORK
In one embodiment, a netpage network consists of a distributed set of netpage page servers 10, netpage registration servers 11, netpage ID servers 12, netpage application servers 13, and netpage relay devices 601 connected via a network 19 such as the Internet, as shown in Figure 3.
The netpage registration server 11 is a server which records relationships between users, pens, printers and applications, and thereby authorizes various network activities. It authenticates users and acts as a signing proxy on behalf of authenticated users in application transactions. It also provides handwriting recognition services. As described above, a netpage page server 10 maintains persistent information about page descriptions and page instances. The netpage network includes any number of page servers, each handling a subset of page instances. Since a page server also maintains user input values for each page instance, clients such as netpage relays 601 send netpage input directly to the appropriate page server. The page server interprets any such input relative to the description of the corresponding page.
A netpage ID server 12 allocates document IDs 51 on demand, and provides load-balancing of page servers via its ID allocation scheme. A netpage relay 601 uses the Internet Domain Name System (DNS), or similar, to resolve a netpage page ID 50 into the network address of the netpage page server 10 handling the corresponding page instance.
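The resolution step can be sketched as follows. The lookup function and the server naming are mock placeholders standing in for a DNS (or similar) query; the relay-side cache reflects the caching behaviour described later in section 6:

```python
resolved: dict = {}                    # relay-side cache of page servers

def lookup(page_id):
    # Stand-in for a DNS-style query; the naming scheme is invented here.
    return f"page-server-{page_id >> 16}.netpage.net"

def resolve(page_id):
    if page_id not in resolved:
        resolved[page_id] = lookup(page_id)
    return resolved[page_id]

addr = resolve(0x0001_0042)
assert addr == "page-server-1.netpage.net"
assert resolve(0x0001_0042) is addr    # second call served from the cache
```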
A netpage application server 13 is a server which hosts interactive netpage applications.
Netpage servers can be hosted on a variety of network server platforms from manufacturers such as IBM, Hewlett-Packard, and Sun. Multiple netpage servers can run concurrently on a single host, and a single server can be distributed over a number of hosts. Some or all of the functionality provided by netpage servers, and in particular the functionality provided by the ID server and the page server, can also be provided directly in a netpage appliance such as a netpage printer, in a computer workstation, or on a local network.
5 THE NETPAGE PEN

The active sensing device of the netpage system may take the form of a clicker
(for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface). A pen 101 is described herein, although it will be appreciated that clickers and pointers may have similar features. The pen 101 uses its embedded controller 134 to capture and decode netpage tags from a page via an image sensor. The image sensor is a solid-state device provided with an appropriate filter to permit sensing at only near-infrared wavelengths. As described in more detail below, the system is able to sense when the nib is in contact with the surface, and the pen is able to sense tags at a sufficient rate to capture human handwriting (i.e. at 200 dpi or greater and 100 Hz or faster). Information captured by the pen may be encrypted and wirelessly transmitted to the printer (or base station), the printer or base station interpreting the data with respect to the (known) page structure.
The preferred embodiment of the netpage pen 101 operates both as a normal marking ink pen and as a non-marking stylus (i.e. as a pointer). The marking aspect, however, is not necessary for using the netpage system as a browsing system, such as when it is used as an Internet interface. Each netpage pen is registered with the netpage system and has a unique pen ID 61. Figure 11 shows the netpage pen class diagram, reflecting pen- related information maintained by a registration server 11 on the netpage network. When the nib is in contact with a netpage, the pen determines its position and orientation relative to the page. The nib is attached to a force sensor, and the force on the nib is interpreted relative to a threshold to indicate whether the pen is "up" or "down". This allows an interactive element on the page to be "clicked" by pressing with the pen nib, in order to request, say, information from a network. Furthermore, the force may be captured as a continuous value to allow, say, the full dynamics of a signature to be verified.
The pen determines the position and orientation of its nib on the netpage by imaging, in the infrared spectrum, an area 193 of the page in the vicinity of the nib. It decodes the nearest tag and computes the position of the nib relative to the tag from the observed perspective distortion on the imaged tag and the known geometry of the pen optics. Although the position resolution of the tag may be low, because the tag density on the page is inversely proportional to the tag size, the adjusted position resolution is quite high, exceeding the minimum resolution required for accurate handwriting recognition. Pen actions relative to a netpage are captured as a series of strokes. A stroke consists of a sequence of time-stamped pen positions on the page, initiated by a pen-down event and completed by the subsequent pen-up event. A stroke is also tagged with the page ID 50 of the netpage whenever the page ID changes, which, under normal circumstances, is at the commencement of the stroke. Each netpage pen has a current selection 826 associated with it, allowing the user to perform copy and paste operations etc. The selection is timestamped to allow the system to discard it after a defined time period. The current selection describes a region of a page instance. It consists of the most recent digital ink stroke captured through the pen relative to the background area of the page. It is interpreted in an application-specific manner once it is submitted to an application via a selection hyperlink activation.
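The stroke model described above can be sketched as a simple data structure (the Python names are assumed; the substance — time-stamped positions bracketed by pen-down and pen-up, tagged with the page ID — follows the text):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    page_id: int                      # page ID 50, tagged when it changes
    samples: List[Tuple[float, float, float]] = field(default_factory=list)

    def pen_move(self, t, x, y):
        # One time-stamped pen position on the page.
        self.samples.append((t, x, y))

stroke = Stroke(page_id=0x1234)       # a pen-down event starts the stroke
stroke.pen_move(0.00, 10.0, 20.0)
stroke.pen_move(0.01, 10.2, 20.1)     # the subsequent pen-up completes it
assert len(stroke.samples) == 2
```

A sequence of such strokes is what the following paragraphs call digital ink.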
Each pen has a current nib 824. This is the nib last notified by the pen to the system. In the case of the default netpage pen described above, either the marking black ink nib or the non-marking stylus nib is current. Each pen also has a current nib style 825. This is the nib style last associated with the pen by an application, e.g. in response to the user selecting a color from a palette. The default nib style is the nib style associated with the current nib. Strokes captured through a pen are tagged with the current nib style. When the strokes are subsequently reproduced, they are reproduced in the nib style with which they are tagged. The pen 101 may have one or more buttons 209. As described in US Application
No. 11/672,950 filed on February 8, 2007 (the contents of which are herein incorporated by reference), the button(s) may be used to determine a mode or behavior of the pen, which, in turn, determines how a stroke or, more generally, interaction data is interpreted by the page server 10.
Whenever the pen is within range of a relay device 601 with which it can communicate, the pen slowly flashes its "online" LED. When the pen fails to decode a stroke relative to the page, it momentarily activates its "error" LED. When the pen succeeds in decoding a stroke relative to the page, it momentarily activates its "ok" LED. A sequence of captured strokes is referred to as digital ink. Digital ink forms the basis for the digital exchange of drawings and handwriting, for online recognition of handwriting, and for online verification of signatures.
The pen is typically wireless and transmits digital ink to the relay device 601 via a short-range radio link. The transmitted digital ink is encrypted for privacy and security and packetized for efficient transmission, but is always flushed on a pen-up event to ensure timely handling in the printer.
When the pen is out-of-range of a relay device 601 it buffers digital ink in internal memory, which has a capacity of over ten minutes of continuous handwriting. When the pen is once again within range of a relay device, it transfers any buffered digital ink.
A pen can be registered with any number of relay devices, but because all state data resides in netpages both on paper and on the network, it is largely immaterial which relay device a pen is communicating with at any particular time.
One embodiment of the pen is described in greater detail in Section 7 below, with reference to Figures 8 to 10.
6 NETPAGE INTERACTION
The netpage relay device 601 receives data relating to a stroke from the pen 101 when the pen is used to interact with a netpage 1. The coded data 3 of the tags 4 is read by the pen when it is used to execute a movement, such as a stroke. The data allows the identity of the particular page to be determined and an indication of the positioning of the pen relative to the page to be obtained. Interaction data, typically comprising the page ID 50 and at least one position of the pen, is transmitted to the relay device 601, where it resolves, via the DNS, the page ID 50 of the stroke into the network address of the netpage page server 10 which maintains the corresponding page instance 830. It then transmits the stroke to the page server. If the page was recently identified in an earlier stroke, then the relay device may already have the address of the relevant page server in its cache. Each netpage consists of a compact page layout maintained persistently by a netpage page server (see below). The page layout refers to objects such as images, fonts and pieces of text, typically stored elsewhere on the netpage network.
When the page server receives the stroke from the pen, it retrieves the page description to which the stroke applies, and determines which element of the page description the stroke intersects. It is then able to interpret the stroke in the context of the type of the relevant element.
A "click" is a stroke where the distance and time between the pen down position and the subsequent pen up position are both less than some small maximum. An object which is activated by a click typically requires a click to be activated, and accordingly, a longer stroke is ignored. The failure of a pen action, such as a "sloppy" click, to register may be indicated by the lack of response from the pen's "ok" LED.
Hyperlinks and form fields are two kinds of input elements, which may be contained in a netpage page description. Input through a form field can also trigger the activation of an associated hyperlink. These types of input elements are described in further detail in the above-identified patents and patent applications, the contents of which are herein incorporated by cross-reference.
7 DETAILED NETPAGE PEN DESCRIPTION
7.1 PEN MECHANICS
Referring to Figures 8 and 9, the pen, generally designated by reference numeral 101, includes a housing 102 in the form of a plastics moulding having walls 103 defining an interior space 104 for mounting the pen components. Mode selector buttons 209 are provided on the housing 102. The pen top 105 is in operation rotatably mounted at one end 106 of the housing 102. A semi-transparent cover 107 is secured to the opposite end 108 of the housing 102. The cover 107 is also of moulded plastics, and is formed from semi-transparent material in order to enable the user to view the status of the LED mounted within the housing 102. The cover 107 includes a main part 109 which substantially surrounds the end 108 of the housing 102 and a projecting portion 110 which projects back from the main part 109 and fits within a corresponding slot 111 formed in the walls 103 of the housing 102. A radio antenna 112 is mounted behind the projecting portion 110, within the housing 102. Screw threads 113 surrounding an aperture 113A on the cover 107 are arranged to receive a metal end piece 114, including corresponding screw threads 115. The metal end piece 114 is removable to enable ink cartridge replacement.
Also mounted within the cover 107 is a tri-color status LED 116 on a flex PCB 117. The antenna 112 is also mounted on the flex PCB 117. The status LED 116 is mounted at the top of the pen 101 for good all-around visibility.
The pen can operate both as a normal marking ink pen and as a non-marking stylus. An ink pen cartridge 118 with nib 119 and a stylus 120 with stylus nib 121 are mounted side by side within the housing 102. Either the ink cartridge nib 119 or the stylus nib 121 can be brought forward through open end 122 of the metal end piece 114, by rotation of the pen top 105. Respective slider blocks 123 and 124 are mounted to the ink cartridge 118 and stylus 120, respectively. A rotatable cam barrel 125 is secured to the pen top 105 in operation and arranged to rotate therewith. The cam barrel 125 includes a cam 126 in the form of a slot within the walls 181 of the cam barrel. Cam followers 127 and 128 projecting from slider blocks 123 and 124 fit within the cam slot 126. On rotation of the cam barrel 125, the slider blocks 123 or 124 move relative to each other to project either the pen nib 119 or stylus nib 121 out through the hole 122 in the metal end piece 114. The pen 101 has three states of operation. By turning the top 105 through 90° steps, the three states are:
• Stylus 120 nib 121 out;
• Ink cartridge 118 nib 119 out; and
• Neither ink cartridge 118 nib 119 out nor stylus 120 nib 121 out.
A second flex PCB 129 is mounted on an electronics chassis 130 which sits within the housing 102. The second flex PCB 129 mounts an infrared LED 131 for providing infrared radiation for projection onto the surface. An image sensor 132 is provided mounted on the second flex PCB 129 for receiving reflected radiation from the surface. The second flex PCB 129 also mounts a radio frequency chip 133, which includes an RF transmitter and RF receiver, and a controller chip 134 for controlling operation of the pen 101. An optics block 135 (formed from moulded clear plastics) sits within the cover 107 and projects an infrared beam onto the surface and receives images onto the image sensor 132. Power supply wires 136 connect the components on the second flex PCB 129 to battery contacts 137 which are mounted within the cam barrel 125. A terminal 138 connects to the battery contacts 137 and the cam barrel 125. A three volt rechargeable battery 139 sits within the cam barrel 125 in contact with the battery contacts. An induction charging coil 140 is mounted about the second flex PCB 129 to enable recharging of the battery 139 via induction. The second flex PCB 129 also mounts an infrared LED 143 and infrared photodiode 144 for detecting displacement in the cam barrel 125 when either the stylus 120 or the ink cartridge 118 is used for writing, in order to enable a determination of the force being applied to the surface by the pen nib 119 or stylus nib 121. The IR photodiode 144 detects light from the IR LED 143 via reflectors (not shown) mounted on the slider blocks 123 and 124.
Rubber grip pads 141 and 142 are provided towards the end 108 of the housing 102 to assist gripping the pen 101, and top 105 also includes a clip 142 for clipping the pen 101 to a pocket.
7.2 PEN CONTROLLER
The pen 101 is arranged to determine the position of its nib (stylus nib 121 or ink cartridge nib 119) by imaging, in the infrared spectrum, an area of the surface in the vicinity of the nib. It records the location data from the nearest location tag, and is arranged to calculate the distance of the nib 121 or 119 from the location tab utilising optics 135 and controller chip 134. The controller chip 134 calculates the orientation (yaw) of the pen using an orientation indicator in the imaged tag, and the nib-to-tag distance from the perspective distortion observed on the imaged tag.
Utilising the RF chip 133 and antenna 112 the pen 101 can transmit the digital ink data (which is encrypted for security and packaged for efficient transmission) to the computing system.
When the pen is in range of a relay device 601, the digital ink data is transmitted as it is formed. When the pen 101 moves out of range, digital ink data is buffered within the pen 101 (the pen 101 circuitry includes a buffer arranged to store digital ink data for approximately 12 minutes of the pen motion on the surface) and can be transmitted later. In Applicant's US Patent No. 6,870,966, the contents of which is incorporated herein by reference, a pen 101 having an interchangeable ink cartridge nib and stylus nib was described. Accordingly, and referring to Figure 27, when the pen 101 connects to the computing system, the controller 134 notifies the system of the pen ID, nib ID 175, current absolute time 176, and the last absolute time it obtained from the system prior to going offline. The pen ID allows the computing system to identify the pen when there is more than one pen being operated with the computing system.
The nib ID allows the computing system to identify which nib (stylus nib 121 or ink cartridge nib 119) is presently being used. The computing system can vary its operation depending upon which nib is being used. For example, if the ink cartridge nib 119 is being used the computing system may defer producing feedback output because immediate feedback is provided by the ink markings made on the surface. Where the stylus nib 121 is being used, the computing system may produce immediate feedback output. Since a user may change the nib 119, 121 between one stroke and the next, the pen 101 optionally records a nib ID for a stroke 175. This becomes the nib ID implicitly associated with later strokes.
Cartridges having particular nib characteristics may be interchangeable in the pen. The pen controller 134 may interrogate a cartridge to obtain the nib ID 175 of the cartridge. The nib ID 175 may be stored in a ROM or a barcode on the cartridge. The controller 134 notifies the system of the nib ID whenever it changes. The system is thereby able to determine the characteristics of the nib used to produce a stroke 175, and is thereby subsequently able to reproduce the characteristics of the stroke itself.
The controller chip 134 is mounted on the second flex PCB 129 in the pen 101. Figure 10 is a block diagram illustrating in more detail the architecture of the controller chip 134. Figure 10 also shows representations of the RF chip 133, the image sensor 132, the tri-color status LED 116, the IR illumination LED 131, the IR force sensor LED 143, and the force sensor photodiode 144.
The pen controller chip 134 includes a controlling processor 145. Bus 146 enables the exchange of data between components of the controller chip 134. Flash memory 147 and a 512 KB DRAM 148 are also included. An analog-to-digital converter
149 is arranged to convert the analog signal from the force sensor photodiode 144 to a digital signal.
An image sensor interface 152 interfaces with the image sensor 132. A transceiver controller 153 and base band circuit 154 are also included to interface with the RF chip 133 which includes an RF circuit 155 and RF resonators and inductors 156 connected to the antenna 112.
The controlling processor 145 captures and decodes location data from tags from the surface via the image sensor 132, monitors the force sensor photodiode 144, controls the LEDs 116, 131 and 143, and handles short-range radio communication via the radio transceiver 153. It is a medium-performance (~40 MHz) general-purpose RISC processor.
The processor 145, digital transceiver components (transceiver controller 153 and baseband circuit 154), image sensor interface 152, flash memory 147 and 512KB DRAM 148 are integrated in a single controller ASIC. Analog RF components (RF circuit 155 and RF resonators and inductors 156) are provided in the separate RF chip.
The image sensor is a 215x215 pixel CCD (such a sensor is produced by Matsushita Electronic Corporation, and is described in a paper by K. Itakura, T. Nobusada, N. Okusenya, R. Nagayoshi and M. Ozaki, "A 1mm 50k-Pixel IT CCD Image Sensor for Miniature Camera System", IEEE Transactions on Electron Devices, Vol. 47, No. 1, January 2000, which is incorporated herein by reference) with an IR filter.
The controller ASIC 134 enters a quiescent state after a period of inactivity when the pen 101 is not in contact with a surface. It incorporates a dedicated circuit 150 which monitors the force sensor photodiode 144 and wakes up the controller 134 via the power manager 151 on a pen-down event.
The radio transceiver communicates in the unlicensed 900MHz band normally used by cordless telephones, or alternatively in the unlicensed 2.4GHz industrial, scientific and medical (ISM) band, and uses frequency hopping and collision detection to provide interference-free communication.
In an alternative embodiment, the pen incorporates an Infrared Data Association (IrDA) interface for short-range communication with a base station or netpage printer.
7.3 ALTERNATIVE MOTION SENSOR
In a further embodiment, the pen 101 includes a pair of orthogonal accelerometers mounted in the normal plane of the pen 101 axis. The accelerometers 190 are shown in Figures 9 and 10 in ghost outline, although it will be appreciated that other alternative motion sensors may be used instead of the accelerometers 190.
The provision of the accelerometers enables this embodiment of the pen 101 to sense motion without reference to surface location tags. Each location tag ID can then identify an object of interest rather than a position on the surface. For example, if the object is a user interface input element (e.g. a command button), then the tag ID of each location tag within the area of the input element can directly identify the input element. The acceleration measured by the accelerometers in each of the x and y directions is integrated with respect to time to produce an instantaneous velocity and position.
Since the starting position of the stroke may not be known, only relative positions within a stroke are calculated. Although position integration accumulates errors in the sensed acceleration, accelerometers typically have high resolution, and the time duration of a stroke, over which errors accumulate, is short.
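The double integration described above can be sketched as follows. This is a minimal illustration assuming discrete acceleration samples at a fixed interval dt; the sample values and rate are illustrative and not taken from this specification:

```python
# Integrate the two orthogonal accelerometer outputs once to obtain velocity
# and again to obtain position, relative to the (unknown) stroke start point.

def integrate_stroke(ax, ay, dt):
    """Integrate per-axis accelerations into positions relative to pen-down."""
    vx = vy = 0.0   # velocity, assumed zero at the pen-down event
    x = y = 0.0     # position relative to the stroke start
    positions = [(0.0, 0.0)]
    for a_x, a_y in zip(ax, ay):
        vx += a_x * dt          # velocity: first integral of acceleration
        vy += a_y * dt
        x += vx * dt            # position: second integral of acceleration
        y += vy * dt
        positions.append((x, y))
    return positions

# Constant acceleration along x only: the x position grows quadratically
# while y remains zero.
path = integrate_stroke([1.0] * 4, [0.0] * 4, dt=0.01)
```

As the text notes, only positions relative to the stroke start are produced, and integration error grows with stroke duration.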
It will be appreciated that a number of alternative (or additional) motion sensors may be employed in a Netpage pen 101. These typically measure either absolute displacement or relative displacement. For example, an optical mouse that measures displacement relative to an external grid (see US 4,390,873 and US 4,521,772) measures absolute displacement, whereas a mechanical mouse that measures displacement via the movement of a wheel or ball in contact with the surface (see US 3,541,541 and US 4,464,652) measures relative displacement because measurement errors accumulate. An optical mouse that measures displacement relative to surface texture (see US 6,631,218, US 6,281,882, US 6,297,513 and US 4,794,384) measures relative displacement for the same reason. Motion sensors based on point interferometry (see US 6,246,482) or acceleration (see US 4,787,051) also measure relative displacement. The contents of all US Patents identified in the preceding paragraph relating to motion sensors are herein incorporated by reference.
7.4 BARCODE READING PEN
It would be desirable for the Netpage pen 101 to be capable of reading bar codes, including linear bar codes and 2D bar codes, as well as Netpage tags 4. The most obvious such function is the ability to read the UPC/EAN bar codes that appear on consumer packaged goods. The utility of a barcode reading pen is discussed in our earlier US Patent Application No. 10/815,647 filed on April 2, 2004, the contents of which is incorporated herein by reference. It would be particularly desirable for the pen to be capable of reading both Netpage tags 4 and barcodes without any significant design modifications or a requirement to be placed in a special barcode-reading mode.
7.4.1 BARCODE READING REQUIREMENTS
To support the reading of bar coded trade items world-wide, the pen 101 must support the following symbologies: EAN-13, UPC-A, EAN-8 and UPC-E. Figure 15 shows a sample EAN-13 bar code symbol.
Each bar code symbol in the EAN/UPC family (with the exception of the simplified UPC-E) consists of the following components:
• a left quiet zone
• a left normal guard bar pattern
• a fixed number of left half symbol characters
• a centre guard bar pattern
• a fixed number of right half symbol characters
• a right normal guard bar pattern
• a right quiet zone
Each symbol character encodes a digit between 0 and 9, and consists of two bars and two spaces, each between one and four modules wide, for a fixed total of seven modules per character. Symbol characters are self-checking. The nominal width of a module is 0.33mm. It can be printed with an actual width ranging from 75% to 200% of the nominal width, i.e. from 0.25mm to 0.66mm, but must have a consistent width within a single symbol instance.
An EAN-13 bar code symbol directly encodes 12 characters, i.e. six per half. It encodes a thirteenth character in the parities of its six left-half characters. A UPC-A symbol encodes 12 characters. An EAN-8 symbol encodes 8 characters. A UPC-E symbol encodes 6 characters, without a centre guard bar pattern and with a special right guard bar pattern.
The nominal width of an EAN-13 and UPC-A bar code is 109 modules (including the left and right quiet zones), or about 36mm. It may be printed with an actual width ranging from 27mm to 72mm.
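As an illustration of the seven-module character structure described above, the following sketch checks the left-half (odd parity) character encodings. The bit patterns are taken from the published EAN/UPC symbology rather than from this document (1 = dark module, i.e. bar; 0 = light module, i.e. space):

```python
# Left (odd-parity) EAN/UPC symbol character patterns, one per digit.
L_PATTERNS = {
    0: "0001101", 1: "0011001", 2: "0010011", 3: "0111101", 4: "0100011",
    5: "0110001", 6: "0101111", 7: "0111011", 8: "0110111", 9: "0001011",
}

def runs(pattern):
    """Collapse a module pattern into run lengths, e.g. '0001101' -> [3, 2, 1, 1]."""
    out, count = [], 1
    for prev, cur in zip(pattern, pattern[1:]):
        if cur == prev:
            count += 1
        else:
            out.append(count)
            count = 1
    out.append(count)
    return out

# Every character is seven modules: two spaces and two bars (four runs),
# each run between one and four modules wide, as stated in the text.
for digit, pat in L_PATTERNS.items():
    assert len(pat) == 7
    r = runs(pat)
    assert len(r) == 4 and all(1 <= w <= 4 for w in r)
```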
EAN/UPC bar codes are designed to be imaged under narrowband 670nm illumination, with spaces being generally reflective (light) and bars being generally non-reflective (dark). Since most bar codes are traditionally printed using a broadband-absorptive black ink on a broadband-reflective white substrate, other illumination wavelengths, such as wavelengths in the near infrared, allow most bar codes to be acquired. A Netpage pen 101, which images under near-infrared 810nm illumination, has a native ability to image most bar codes. However, since some bar codes are printed using near-infrared-transparent black, green and blue inks, near-infrared imaging may not be fully adequate in all circumstances, and 670nm imaging is therefore important. Accordingly, the Netpage pen 101 may be supplemented with an additional light source, if required.
7.4.2 TRADITIONAL BARCODE READING STRATEGIES
One strategy for acquiring a linear bar code is to capture a single image of the entire bar code. This technique is used in the majority of linear bar code readers, and is also used by existing hybrid linear/2D barcode readers. However, as discussed above, each Netpage tag 4 typically has a maximum dimension of about 4 mm and the Netpage pen 101 is designed primarily for capturing images of Netpage tags. Accordingly, the image sensor 132 does not have a sufficiently large field of view to acquire an entire bar code from a single image, since the field of view is only about 6 mm when in contact with a surface. This strategy is therefore unsatisfactory, because it would require significant design modifications of the pen 101 by incorporation of a separate barcode-reading sensor having a larger field of view. This would inevitably increase the size and adversely affect the ergonomics of the pen.
Another strategy for acquiring a linear bar code is to capture a dense series of point samples of the bar code as the reader is swiped across the bar code. Typically, a light source from the tip of a barcode reading pen focuses a dot of light onto the bar code. The pen is swiped across the bar code in a steady even motion, and a waveform of the barcode is constructed by a photodiode measuring the intensity of light reflected back from the bar code. The dot of light should be equal to or slightly smaller than the narrowest bar width. If the dot is wider than the width of the narrowest bar or space, then the dot will overlap two or more bars at a time so that clear transitions between bars and spaces cannot be distinguished. If the dot is too small, then any spots or voids in the bars can be misinterpreted as light areas, also making the bar code unreadable. Typically, the dot of light for reading standard bar codes has a diameter of 0.33 mm or less. Since the light source of the Netpage pen 101 illuminates a relatively large area (about 6 mm in diameter) to read Netpage tags 4, point sampling is an unsatisfactory means for acquiring a linear bar code. It would require incorporation of a separate barcode-reading system in the Netpage pen 101. Moreover, point sampling is a generally unreliable means for acquiring linear bar codes, because it requires a steady swiping motion of constant velocity.
7.4.3 FRAME-BASED BARCODE SCANNING
An alternative strategy for acquiring a linear bar code is to capture a series of overlapping partial 2D images of the bar code by swiping a Netpage pen 101 across a surface. To adopt this alternative strategy, the reader must be able to guarantee sufficient overlap between successive images to allow it to unambiguously align them. The faster the reader is swiped across the surface, the higher its temporal sampling rate must be to ensure sufficient overlap. Similarly, the larger the scale of the bar code, the larger the overlap needs to be, and so the higher the reader's temporal sampling rate must be.
Although linear bar code acquisition and processing is normally done in one dimension, the use of a two-dimensional image sensor 132 allows the vertical dimension of the bar code to be used as a source of redundancy.
7.4.4 RECONSTRUCTION OF A BARCODE WAVEFORM 7.4.4.1 OVERVIEW
Frame-based barcode scanning can be used to decode barcodes using a 2D image sensor where the barcode is larger than the field of view of the imaging system. To do this, multiple images of the barcode are generated by scanning the image sensor across the barcode and capturing images at a constant rate. These regularly-sampled images are used to generate a set of one-dimensional (1D) frames (or waveform fragments) that represent the sequence of bars visible in the image. The frames (waveform fragments) are then aligned to generate a waveform that represents the entire barcode, which is then used to decode the barcode. Obviously, the entire barcode must be scanned for decoding to be successful.
Unlike barcode processing methods that sample a single point during scanning, frame-based scanning is not sensitive to local variations in scan velocity. The method even allows the scan movement to stop and reverse during scanning (although processing is simplified if this is not allowed, as in the method discussed below). The method is also more robust than the single-point sampling technique, since imaging a large region of the barcode allows noise and distortion to be attenuated using filtering.
The alignment of the 1D frames requires a minimum overlap between successive frames, which imposes a maximum scan velocity constraint:

maximum scan velocity = (1 − minimum overlap) × sampling rate × field of view
As an example, if the minimum overlap required is 40%, the sampling rate is 100 Hz, and the field of view is 6 mm, then:

maximum scan velocity = (1 − 0.4) × 100 × 6 = 360 mm/sec
If the maximum scan velocity is exceeded during a scan, the barcode will typically fail to decode. Obviously, the maximum scan velocity can be increased by increasing the sampling rate, increasing the field of view, or decreasing the required minimum overlap between frames (although this may lead to errors in frame alignment and waveform reconstruction).
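The velocity constraint above can be expressed as a one-line helper. This is a sketch; the parameter values below simply reproduce the worked example in the text:

```python
# Maximum swipe velocity that still guarantees the minimum overlap between
# successive frames, per the constraint stated in the text.

def max_scan_velocity(min_overlap, sampling_rate_hz, field_of_view_mm):
    """Fastest swipe (mm/sec) preserving the required frame overlap."""
    return (1.0 - min_overlap) * sampling_rate_hz * field_of_view_mm

# 40% minimum overlap, 100 Hz sampling, 6 mm field of view, as in the example.
v = max_scan_velocity(min_overlap=0.4, sampling_rate_hz=100, field_of_view_mm=6)
# v == 360.0 mm/sec, matching the worked example
```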
7.4.4.2 PROCESSING
The following steps are performed during frame-based barcode scanning:
Image Equalization
Image equalization is performed to increase the signal-to-noise ratio (SNR) of the captured images of the barcode. The equalization filter is a band-pass filter that combines a low-pass characteristic for noise suppression (essentially a matched filter) and a high-pass component to attenuate the distortion caused by optical effects and non-uniform illumination.
Orientation Estimation
The orientation of the barcode within an image must be estimated, since the imaging system may be arbitrarily oriented with respect to the barcode when an image is captured. To reduce the amount of processing required for orientation estimation, the process uses a decimated version of the equalized image. To estimate the orientation, the image is first filtered using an edge-enhancement filter (e.g. Laplacian) to emphasise the edges between adjacent bars. The edge-enhanced image is then processed using a Hough transform, which is used to identify the orientation of the edges in the image. To do this, the Hough transform output is first rectified (i.e. each bin is replaced with the absolute value of that bin) to ensure that both positive and negative extrema values generated by the edges during edge enhancement contribute to the maxima calculation. A profile of the transform space is then projected along the axis representing the quantized angle. This profile is smoothed, and the barcode orientation is estimated by finding the bin containing the maximum value.
Note that the estimated orientation of the barcode is in the range 0° to (180 − quantization)°, since the barcode is bilaterally symmetric through its centre axis. This means that it is possible for the orientation to "reverse direction" during successive frames. For example, the orientation may jump from 2° to 178°, a change in direction of 176°, instead of the more likely 4° change in direction to −2° (or 358°). Thus, the orientation is constrained to change by less than 90° between successive frames by adding or subtracting increments of 180°.
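The orientation-estimation steps above (edge enhancement, Hough transform, rectification, profile projection and maximum search) can be sketched as follows. The image size, the 4° quantization and the 4-neighbour Laplacian kernel are illustrative assumptions, not the specification's exact parameters:

```python
import math

def estimate_orientation(img, step_deg=4):
    """Estimate the dominant edge orientation of a small grey-level image."""
    h, w = len(img), len(img[0])
    # Laplacian edge enhancement (4-neighbour kernel); borders left at zero.
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            edges[y][x] = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                           - img[y][x - 1] - img[y][x + 1])
    # Hough accumulator indexed by (angle bin, rho bin), signed votes.
    n_angles = 180 // step_deg
    diag = int(math.hypot(h, w)) + 1
    acc = [[0.0] * (2 * diag) for _ in range(n_angles)]
    for y in range(h):
        for x in range(w):
            if edges[y][x]:
                for a in range(n_angles):
                    th = math.radians(a * step_deg)
                    rho = int(round(x * math.cos(th) + y * math.sin(th))) + diag
                    acc[a][rho] += edges[y][x]
    # Rectify each bin, project a profile along the angle axis, take the max.
    profile = [sum(abs(v) for v in row) for row in acc]
    return max(range(n_angles), key=profile.__getitem__) * step_deg

# Vertical bars two pixels wide: opposite-signed edge votes stay in separate
# rho bins only at 0 degrees, so the rectified profile peaks there.
img = [[255 if x % 4 < 2 else 0 for x in range(16)] for _ in range(16)]
angle = estimate_orientation(img)
```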
Frame Extraction
The barcode orientation is used to generate a 1D frame from the full-resolution equalized image. To do this, a strip of the image oriented in the direction of the barcode is extracted, with the profile of this strip used as the 1D frame. Since the strip is arbitrarily oriented within the image, sub-pixel interpolation (e.g. bi-linear interpolation) must be used to extract the pixel values.
The length of the strip is typically the size of the effective field of view within the image, and the width determines the level of smoothing applied to the profile. If the strip is too narrow, the profile will not be sufficiently smoothed, whilst if the strip is too wide, the barcode edges may be blurred due to noise and quantization error in the orientation estimation.
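The frame-extraction step can be sketched as follows: a strip through the image centre is sampled along the estimated barcode direction with bilinear interpolation, and averaging across the strip width provides the smoothing discussed above. The strip length, width and centre are illustrative parameters:

```python
import math

def bilinear(img, x, y):
    """Bilinearly interpolated pixel value at fractional coordinates (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0][x0] + fx * (1 - fy) * img[y0][x0 + 1]
            + (1 - fx) * fy * img[y0 + 1][x0] + fx * fy * img[y0 + 1][x0 + 1])

def extract_frame(img, angle_deg, length, width, cx, cy):
    """1D profile of a strip through (cx, cy) oriented along angle_deg."""
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    nx, ny = -dy, dx   # unit normal to the strip, used to average across width
    frame = []
    for i in range(length):
        t = i - length / 2.0
        samples = []
        for j in range(width):
            s = j - (width - 1) / 2.0
            samples.append(bilinear(img, cx + t * dx + s * nx, cy + t * dy + s * ny))
        frame.append(sum(samples) / width)   # width-wise average smooths profile
    return frame

# A horizontal ramp image sampled along 0 degrees reproduces the ramp values.
img = [[float(x) for x in range(8)] for _ in range(8)]
profile = extract_frame(img, angle_deg=0, length=4, width=1, cx=4, cy=4)
# profile == [2.0, 3.0, 4.0, 5.0]
```

Widening the strip (width > 1) trades noise suppression against the edge blurring described above.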
Frame Filtering
Extracted frames must be normalized to ensure accurate frame alignment. To do this, the signal is smoothed to reduce noise, and any illumination variation is removed. The signal is then normalized to a fixed range, with a zero mean to ensure the energy in successive frames is approximately equal.
Frame Alignment
To generate the full waveform representation of the scanned barcode, the individual frames must be aligned. If the maximum scan velocity has not been exceeded, the signal in each frame will overlap that of the preceding frame by at least the minimum overlap. Thus, two frames can be aligned by finding the sections of the frames that are similar.
The standard method of measuring the similarity of two sequences is cross-correlation. To find the optimal alignment between the two frames (i.e. the offset between the two frames caused by the movement of the image sensor over the barcode), a number of normalized cross-correlations are performed between the two frames, with the frames successively offset in the range 0 to (1 − minimum overlap) × frame size samples. The offset that produces the maximum cross-correlation is selected as the optimal alignment.
As an example, the two graphs shown in Figure 16 show two successive frames from a barcode scan. Figure 17 shows the cross-correlation between the two frames shown in Figure 16. Finally, the graph shown in Figure 18 shows the optimal alignment of the two frames based on the maximum value of the cross-correlations.
Note the offset is dependent on the scan speed, with a slow scan generating small offsets, and a fast scan generating large offsets. In some cases, the cross-correlations between the frames can generate multiple maxima, each of which represents a possible frame alignment. By assuming the scanning speed does not change significantly between successive frames, linear prediction using previous (and possibly subsequent) frame offsets can be used to estimate the most likely offset within a frame, allowing the ambiguity of multiple cross-correlation maxima to be resolved.
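The cross-correlation search, and the prediction-based disambiguation suggested above, can be sketched as follows. The 40% minimum overlap matches the earlier example, and the simple recent-average predictor is an assumed stand-in for the linear prediction mentioned in the text:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_offset(prev, cur, min_overlap=0.4):
    """Offset of cur relative to prev that maximizes the correlation."""
    n = len(prev)
    max_offset = int((1.0 - min_overlap) * n)
    scores = {off: ncc(prev[off:], cur[:n - off]) for off in range(max_offset + 1)}
    return max(scores, key=scores.get)

def resolve_offset(candidates, previous_offsets):
    """Pick the candidate nearest the offset predicted from recent frames."""
    if not previous_offsets:
        return candidates[0]
    recent = previous_offsets[-3:]
    predicted = sum(recent) / len(recent)
    return min(candidates, key=lambda c: abs(c - predicted))

# A frame shifted by three samples aligns at offset 3.
prev = [0, 1, 2, 5, 1, 0, 3, 4, 2, 1]
cur = prev[3:] + [7, 6, 5]
off = best_offset(prev, cur)
# Two ambiguous maxima at offsets 4 and 19; recent offsets near 5 favour 4.
choice = resolve_offset([4, 19], [5, 6, 5])
```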
Waveform Reconstruction
Once the optimal alignment of the frames has been found, the waveform must be reconstructed by piecing the individual frames together into a single, continuous signal. A simple way to do this is to append each frame to the waveform, skipping the region that overlaps with the previous frame. However, this approach is not optimal and often produces discontinuities in the waveform at frame boundaries. An alternative approach is to use the average value of all the sample values in all frames that overlap a sample position within the waveform. Thus, the samples in each frame are simply added to the waveform using the appropriate alignment, and a count of the number of frame samples that contributed to each waveform sample is used to calculate the average sample value once all the frames have been added.
This process can be further improved by observing that the quality of the frame data is better near the centre of the frame, due to the effects of illumination and optical distortion in the captured images. Thus, the simple average can be replaced with a weighted average that emphasizes the samples near the centre of each frame (e.g. a Gaussian window).
A final improvement is to align each frame with the partially reconstructed waveform (i.e. constructed using all the frames up to the current frame) rather than with the previous frame. This reduces the degradation caused by noisy frames and limits the cumulative effect of alignment error caused by quantization and noise.
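The averaging reconstruction described above can be sketched as follows. The triangular centre-weighting used here is an illustrative stand-in for the suggested Gaussian window:

```python
# Accumulate each aligned frame's samples into the waveform with a
# centre-weighted window, then divide by the accumulated weights.

def reconstruct(frames, offsets):
    """frames: equal-length 1D frames; offsets: absolute start positions."""
    n = max(off + len(f) for f, off in zip(frames, offsets))
    total = [0.0] * n
    weight = [0.0] * n
    for frame, off in zip(frames, offsets):
        m = len(frame)
        for i, v in enumerate(frame):
            w = 1.0 - abs(i - (m - 1) / 2.0) / m   # triangular centre weighting
            total[off + i] += w * v
            weight[off + i] += w
    return [t / w for t, w in zip(total, weight)]

# Two frames overlapping by one sample reconstruct a five-sample waveform;
# the weighted average blends the overlapping sample values.
wave = reconstruct([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], [0, 2])
```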
Once the waveform corresponding to the linear bar code has been reconstructed, the bar code can be decoded in the usual way, e.g. to yield a product code.
The present invention has been described with reference to a preferred embodiment and a number of specific alternative embodiments. However, it will be appreciated by those skilled in the relevant fields that a number of other embodiments, differing from those specifically described, will also fall within the spirit and scope of the present invention. Accordingly, it will be understood that the invention is not intended to be limited to the specific embodiments described in the present specification, including documents incorporated by cross-reference as appropriate. The scope of the invention is only limited by the attached claims.

Claims

1. A method of recovering a waveform representing a linear bar code, the method including the steps of:
moving a sensing device relative to the bar code, said sensing device having a two-dimensional image sensor;
capturing, using the image sensor, a plurality of two-dimensional partial images of said bar code during said movement;
determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code;
determining, substantially along the direction, a waveform fragment corresponding to each captured image;
determining an alignment between each pair of successive waveform fragments; and
recovering, from the aligned waveform fragments, the waveform.
2. The method of claim 1, wherein a field of view of the image sensor is smaller than the length of the bar code.
3. The method of claim 1, wherein each partial two-dimensional image of said bar code contains a plurality of bars.
4. The method of claim 1, further comprising the step of: determining a product code by decoding the waveform.
5. The method of claim 1, further comprising the step of: low-pass filtering the captured images in a direction substantially parallel to the bars.
6. The method of claim 1, wherein the direction is determined using a Hough transform for identifying an orientation of edges in the at least one image.
7. The method of claim 1, wherein the alignment between each pair of successive waveform fragments is determined by performing one or more normalized cross-correlations between each pair.
8. The method of claim 1, wherein the waveform is recovered from the aligned waveform fragments by appending each fragment to a previous fragment, and skipping a region overlapping with said previous fragment.
9. The method of claim 1, wherein the waveform is recovered from the aligned waveform fragments by: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
10. The method of claim 9, wherein the average value is a weighted average, whereby sample values captured from a centre portion of each image have a higher weight than sample values captured from an edge portion of each image.
11. The method of claim 10, wherein the sample values for each image are weighted in accordance with a Gaussian window for said image.
12. The method of claim 1, wherein the waveform is recovered from the aligned waveform fragments by: aligning a current waveform fragment with a partially-constructed waveform constructed using all waveform fragments up to the current fragment.
13. The method of claim 1, wherein said method is performed only in the absence of a location-indicating tag in a field of view of the image sensor.
14. A sensing device for recovering a waveform representing a linear bar code, said sensing device comprising:
a two-dimensional image sensor for capturing a plurality of partial two-dimensional images of said bar code during movement of said sensing device relative to said bar code; and
a processor configured for:
determining, from at least one of the images, a direction substantially perpendicular to the bars of the bar code;
determining, substantially along the direction, a waveform fragment corresponding to each captured image;
determining an alignment between each pair of successive waveform fragments; and
recovering, from the aligned waveform fragments, the waveform.
15. The sensing device of claim 14, wherein a field of view of the image sensor is smaller than the length of the bar code.
16. The sensing device of claim 14, wherein a field of view of the image sensor is sufficiently large for capturing an image of a plurality of bars.
17. The sensing device of claim 14, wherein the processor is further configured for: determining the alignment between each pair of successive waveform fragments by performing one or more normalized cross-correlations between each pair.
18. The sensing device of claim 14, wherein the processor is further configured for: determining an average value for a plurality of sample values of the waveform, said sample values being contained in portions of the waveform contained in overlapping waveform fragments.
19. The sensing device of claim 14, further comprising: communication means for communicating the waveform to a computer system.
20. The sensing device of claim 14, wherein said image sensor has a field of view sufficiently large for capturing an image of a whole location-indicating tag disposed on a surface, and said processor is configured for determining a position of the sensing device relative to the surface using the imaged tag.
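The simplest recovery strategy recited in claim 8 — appending each aligned fragment while skipping the region that overlaps the previous fragment — can be sketched as follows. This is an illustrative sketch only; it assumes the per-pair overlap lengths are already known from the alignment step:

```python
def append_fragments(fragments, overlaps):
    """Recover the waveform by appending each fragment after skipping
    the samples that overlap the previous fragment (cf. claim 8)."""
    wave = list(fragments[0])
    for frag, overlap in zip(fragments[1:], overlaps):
        wave.extend(frag[overlap:])  # keep only the new, non-overlapping tail
    return wave
```

For example, fragments `[1,2,3,4]`, `[3,4,5,6]`, `[5,6,7,8]` with overlaps of two samples each reassemble into the full run `1..8`.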
PCT/AU2008/000046 2007-02-08 2008-01-17 Bar code reading method WO2008095226A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002675689A CA2675689A1 (en) 2007-02-08 2008-01-17 Bar code reading method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US88877507P 2007-02-08 2007-02-08
US60/888,775 2007-02-08

Publications (1)

Publication Number Publication Date
WO2008095226A1 (en) 2008-08-14

Family

ID=39681171

Family Applications (6)

Application Number Title Priority Date Filing Date
PCT/AU2008/000046 WO2008095226A1 (en) 2007-02-08 2008-01-17 Bar code reading method
PCT/AU2008/000047 WO2008095227A1 (en) 2007-02-08 2008-01-17 System for controlling movement of a cursor on a display device
PCT/AU2008/000048 WO2008095228A1 (en) 2007-02-08 2008-01-17 Method of sensing motion of a sensing device relative to a surface
PCT/AU2008/000124 WO2008095232A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising tags with x and y coordinate data divided into respective halves of each tag
PCT/AU2008/000123 WO2008095231A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising translation symbols for aligning cells with tags
PCT/AU2008/000125 WO2008122070A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising replicated and non-replicated coordinate data

Family Applications After (5)

Application Number Title Priority Date Filing Date
PCT/AU2008/000047 WO2008095227A1 (en) 2007-02-08 2008-01-17 System for controlling movement of a cursor on a display device
PCT/AU2008/000048 WO2008095228A1 (en) 2007-02-08 2008-01-17 Method of sensing motion of a sensing device relative to a surface
PCT/AU2008/000124 WO2008095232A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising tags with x and y coordinate data divided into respective halves of each tag
PCT/AU2008/000123 WO2008095231A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising translation symbols for aligning cells with tags
PCT/AU2008/000125 WO2008122070A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising replicated and non-replicated coordinate data

Country Status (7)

Country Link
US (28) US20080192004A1 (en)
EP (3) EP2118723A4 (en)
JP (1) JP4986186B2 (en)
CN (2) CN101636709A (en)
AU (2) AU2008213887B2 (en)
CA (2) CA2675689A1 (en)
WO (6) WO2008095226A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE1019809A3 (en) * 2011-06-17 2012-12-04 Inventive Designers Nv TOOL FOR DESIGNING A DOCUMENT FLOW PROCESS

Families Citing this family (62)

Publication number Priority date Publication date Assignee Title
US7213989B2 (en) * 2000-05-23 2007-05-08 Silverbrook Research Pty Ltd Ink distribution structure for a printhead
JP4556705B2 (en) * 2005-02-28 2010-10-06 富士ゼロックス株式会社 Two-dimensional coordinate identification apparatus, image forming apparatus, and two-dimensional coordinate identification method
CA2675689A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Bar code reading method
US20080256484A1 (en) * 2007-04-12 2008-10-16 Microsoft Corporation Techniques for aligning and positioning objects
CN101821749A (en) * 2007-09-21 2010-09-01 Silverbrook Research Pty Ltd Coding pattern comprising direction codes
US8823645B2 (en) 2010-12-28 2014-09-02 Panasonic Corporation Apparatus for remotely controlling another apparatus and having self-orientating capability
US7546694B1 (en) * 2008-04-03 2009-06-16 Il Poom Jeong Combination drawing/measuring pen
JP4385169B1 (en) * 2008-11-25 2009-12-16 健治 吉田 Handwriting input / output system, handwriting input sheet, information input system, information input auxiliary sheet
US20100084481A1 (en) * 2008-10-02 2010-04-08 Silverbrook Research Pty Ltd Coding pattern having merged data symbols
JP2010185692A (en) * 2009-02-10 2010-08-26 Hitachi High-Technologies Corp Device, system and method for inspecting disk surface
US8947400B2 (en) * 2009-06-11 2015-02-03 Nokia Corporation Apparatus, methods and computer readable storage mediums for providing a user interface
US8483448B2 (en) 2009-11-17 2013-07-09 Scanable, Inc. Electronic sales method
US20110128258A1 (en) * 2009-11-30 2011-06-02 Hui-Hu Liang Mouse Pen
US8276828B2 (en) * 2010-01-27 2012-10-02 Silverbrook Research Pty Ltd Method of decoding coding pattern comprising control symbols
US8276827B2 (en) * 2010-01-27 2012-10-02 Silverbrook Research Pty Ltd Coding pattern comprising control symbols
US20110180611A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Coding pattern comprising multi-ppm data symbols in a format identified by registration symbols
US20110182514A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of decoding coding pattern having self-encoded format
US20110180612A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Coding pattern comprising multi-ppm data symbols with minimal clustering of macrodots
US20110181916A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of encoding coding pattern to minimize clustering of macrodots
WO2011091465A1 (en) * 2010-01-27 2011-08-04 Silverbrook Research Pty Ltd Coding pattern comprising control symbols
US20110182521A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of decoding coding pattern with variable number of missing data symbols positioned outside imaging field-of-view
US20110180602A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of imaging coding pattern using variant registration codewords
US8292190B2 (en) * 2010-01-27 2012-10-23 Silverbrook Research Pty Ltd Coding pattern comprising registration codeword having variants corresponding to possible registrations
WO2011091464A1 (en) * 2010-01-27 2011-08-04 Silverbrook Research Pty Ltd Coding pattern comprising registration codeword having variants corresponding to possible registrations
KR101785010B1 (en) 2011-07-12 2017-10-12 삼성전자주식회사 Nonvolatile memory device
JP5589597B2 (en) * 2010-06-22 2014-09-17 コニカミノルタ株式会社 Image forming apparatus, operation control method, and control program
US9025850B2 (en) * 2010-06-25 2015-05-05 Cireca Theranostics, Llc Method for analyzing biological specimens by spectral imaging
US8415968B2 (en) * 2010-07-30 2013-04-09 The Board Of Regents Of The University Of Texas System Data tag control for quantum-dot cellular automata
US9483677B2 (en) 2010-09-20 2016-11-01 Hid Global Corporation Machine-readable symbols
WO2012040209A2 (en) * 2010-09-20 2012-03-29 Lumidigm, Inc. Machine-readable symbols
TWI467462B (en) * 2010-10-01 2015-01-01 Univ Nat Taiwan Science Tech Active browsing method
US10620754B2 (en) * 2010-11-22 2020-04-14 3M Innovative Properties Company Touch-sensitive device with electrodes having location pattern included therein
US8619267B2 (en) * 2011-07-08 2013-12-31 Avago Technologies General Ip (Singapore) Pte. Ltd. Proximity sensor with motion detection
US20130033460A1 (en) * 2011-08-03 2013-02-07 Silverbrook Research Pty Ltd Method of notetaking using optically imaging pen with source document referencing
JP2013105367A (en) * 2011-11-15 2013-05-30 Hitachi Ltd Thin client system and server apparatus
US9389679B2 (en) * 2011-11-30 2016-07-12 Microsoft Technology Licensing, Llc Application programming interface for a multi-pointer indirect touch input device
CN102710978B (en) * 2012-04-12 2016-06-29 深圳Tcl新技术有限公司 The cursor-moving method of television set and device
TWI485577B (en) * 2012-05-03 2015-05-21 Compal Electronics Inc Electronic apparatus and operating method thereof
JP5544609B2 (en) * 2012-10-29 2014-07-09 健治 吉田 Handwriting input / output system
US11287897B2 (en) * 2012-12-14 2022-03-29 Pixart Imaging Inc. Motion detecting system having multiple sensors
TW201423484A (en) * 2012-12-14 2014-06-16 Pixart Imaging Inc Motion detection system
US9104933B2 (en) 2013-01-29 2015-08-11 Honeywell International Inc. Covert bar code pattern design and decoding
US20140340423A1 (en) * 2013-03-15 2014-11-20 Nexref Technologies, Llc Marker-based augmented reality (AR) display with inventory management
US20150058753A1 (en) * 2013-08-22 2015-02-26 Citrix Systems, Inc. Sharing electronic drawings in collaborative environments
KR20150057422A (en) * 2013-11-19 2015-05-28 한국전자통신연구원 Method for transmitting and receiving data, display apparatus and pointing apparatus
US9489048B2 (en) 2013-12-13 2016-11-08 Immersion Corporation Systems and methods for optical transmission of haptic display parameters
WO2015099200A1 (en) * 2013-12-27 2015-07-02 グリッドマーク株式会社 Information input assistance sheet
US9582864B2 (en) 2014-01-10 2017-02-28 Perkinelmer Cellular Technologies Germany Gmbh Method and system for image correction using a quasiperiodic grid
TWI503707B (en) * 2014-01-17 2015-10-11 Egalax Empia Technology Inc Active stylus with switching function
EP3271862B1 (en) 2014-06-19 2020-08-05 Samsung Electronics Co., Ltd. Methods and apparatus for barcode reading and encoding
US20160188806A1 (en) * 2014-12-30 2016-06-30 Covidien Lp System and method for cytopathological and genetic data based treatment protocol identification and tracking
WO2016122567A1 (en) * 2015-01-30 2016-08-04 Hewlett-Packard Development Company, L.P. M-ary cyclic coding
WO2016164473A1 (en) 2015-04-07 2016-10-13 Gen-Probe Incorporated Systems and methods for reading machine-readable labels on sample receptacles
US10037149B2 (en) * 2016-06-17 2018-07-31 Seagate Technology Llc Read cache management
BR112020001818A2 (en) 2017-07-28 2020-07-21 The Coca-Cola Company method and apparatus for encoding and decoding circular symbolic codes
US10859363B2 (en) 2017-09-27 2020-12-08 Stanley Black & Decker, Inc. Tape rule assembly with linear optical encoder for sensing human-readable graduations of length
US11429201B2 (en) 2018-04-13 2022-08-30 Hewlett-Packard Development Company, L.P. Surfaces with information marks
CN109163743B (en) * 2018-07-13 2021-04-02 合肥工业大学 Coding and decoding algorithm of two-dimensional absolute position measuring sensor
WO2020091764A1 (en) * 2018-10-31 2020-05-07 Hewlett-Packard Development Company, L.P. Recovering perspective distortions
EP3672251A1 (en) * 2018-12-20 2020-06-24 Koninklijke KPN N.V. Processing video data for a video player apparatus
US11195172B2 (en) * 2019-07-24 2021-12-07 Capital One Services, Llc Training a neural network model for recognizing handwritten signatures based on different cursive fonts and transformations
TWI790783B (en) * 2021-10-20 2023-01-21 財團法人工業技術研究院 Encoded substrate, coordinate-positioning system and method thereof

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2006101437A1 (en) * 2005-03-21 2006-09-28 Anoto Ab Combined detection of position-coding pattern and bar codes

Family Cites Families (109)

Publication number Priority date Publication date Assignee Title
US237145A (en) * 1881-02-01 Francis h
AUPQ582900A0 (en) 2000-02-24 2000-03-16 Silverbrook Research Pty Ltd Printed media production
SE440721B (en) * 1982-04-02 1985-08-19 C G Folke Ericsson BACKGROUND MOTOR DEVICE
US4575581A (en) * 1984-02-06 1986-03-11 Edwin Langberg Digitizer and position encoders and calibration system for same
US4864618A (en) * 1986-11-26 1989-09-05 Wright Technologies, L.P. Automated transaction system with modular printhead having print authentication feature
US4924078A (en) * 1987-11-25 1990-05-08 Sant Anselmo Carl Identification symbol, system and method
US5979768A (en) * 1988-01-14 1999-11-09 Intermec I.P. Corp. Enhanced bar code resolution through relative movement of sensor and object
US5051736A (en) * 1989-06-28 1991-09-24 International Business Machines Corporation Optical stylus and passive digitizing tablet data input system
DE9013392U1 (en) * 1990-09-21 1991-04-25 Siemens Nixdorf Informationssysteme Ag, 4790 Paderborn, De
US5202552A (en) * 1991-04-22 1993-04-13 Macmillan Bloedel Limited Data with perimeter identification tag
US5412194A (en) * 1992-03-25 1995-05-02 Storage Technology Corporation Robust coding system
US5477012A (en) * 1992-04-03 1995-12-19 Sekendur; Oral F. Optical position determination
US5852434A (en) * 1992-04-03 1998-12-22 Sekendur; Oral F. Absolute optical position determination
CN1104791A (en) * 1993-12-30 1995-07-05 富冈信 Two dimensional code for processing data
JPH07225515A (en) * 1994-02-16 1995-08-22 Ricoh Co Ltd Developing device
US7387253B1 (en) * 1996-09-03 2008-06-17 Hand Held Products, Inc. Optical reader system comprising local host processor and optical reader
JP2788604B2 (en) * 1994-06-20 1998-08-20 インターナショナル・ビジネス・マシーンズ・コーポレイション Information display tag having two-dimensional information pattern, image processing method and image processing apparatus using the same
US5652412A (en) * 1994-07-11 1997-07-29 Sia Technology Corp. Pen and paper information recording system
US5661506A (en) * 1994-11-10 1997-08-26 Sia Technology Corporation Pen and paper information recording system using an imaging pen
JP2952170B2 (en) 1994-12-16 1999-09-20 オリンパス光学工業株式会社 Information reproduction system
US5852412A (en) * 1995-10-30 1998-12-22 Honeywell Inc. Differential ground station repeater
US6081261A (en) 1995-11-01 2000-06-27 Ricoh Corporation Manual entry interactive paper and electronic document handling and processing system
US5692073A (en) * 1996-05-03 1997-11-25 Xerox Corporation Formless forms and paper web using a reference-based mark extraction technique
US5862271A (en) 1996-12-20 1999-01-19 Xerox Corporation Parallel propagating embedded binary sequences for characterizing and parameterizing two dimensional image domain code patterns in N-dimensional address space
JP3856531B2 (en) * 1997-06-26 2006-12-13 山形カシオ株式会社 Coordinate data conversion method and apparatus
US6518950B1 (en) 1997-10-07 2003-02-11 Interval Research Corporation Methods and systems for providing human/computer interfaces
JPH11161731A (en) 1997-11-27 1999-06-18 Olympus Optical Co Ltd Reading auxiliary member having code pattern
JPH11201797A (en) 1998-01-12 1999-07-30 Aichi Tokei Denki Co Ltd Display device of service water meter
WO1999050736A1 (en) 1998-04-01 1999-10-07 Xerox Corporation Paper indexing of recordings
US6330976B1 (en) * 1998-04-01 2001-12-18 Xerox Corporation Marking medium area with encoded identifier for producing action through network
US6305608B1 (en) 1998-06-04 2001-10-23 Olympus Optical Co., Ltd. Pen type code reader
US6964374B1 (en) * 1998-10-02 2005-11-15 Lucent Technologies Inc. Retrieval and manipulation of electronically stored information via pointers embedded in the associated printed material
US6088482A (en) * 1998-10-22 2000-07-11 Symbol Technologies, Inc. Techniques for reading two dimensional code, including maxicode
US6980318B1 (en) * 1999-05-25 2005-12-27 Silverbrook Research Pty Ltd Method and system for delivery of a greeting card
US7793824B2 (en) 1999-05-25 2010-09-14 Silverbrook Research Pty Ltd System for enabling access to information
US7762453B2 (en) 1999-05-25 2010-07-27 Silverbrook Research Pty Ltd Method of providing information via a printed substrate with every interaction
US7105753B1 (en) 1999-05-25 2006-09-12 Silverbrook Research Pty Ltd Orientation sensing device
US7707082B1 (en) * 1999-05-25 2010-04-27 Silverbrook Research Pty Ltd Method and system for bill management
US7233320B1 (en) * 1999-05-25 2007-06-19 Silverbrook Research Pty Ltd Computer system interface surface with reference points
AU5263300A (en) 1999-05-28 2000-12-18 Anoto Ab Position determination
SE516522C2 (en) 1999-05-28 2002-01-22 Anoto Ab Position determining product for digitization of drawings or handwritten information, obtains displacement between symbol strings along symbol rows when symbol strings are repeated on symbol rows
AU2003900983A0 (en) * 2003-03-04 2003-03-20 Silverbrook Research Pty Ltd Methods, systems and apparatus (NPT023)
AU2002952259A0 (en) * 2002-10-25 2002-11-07 Silverbrook Research Pty Ltd Methods and apparatus
US7792298B2 (en) * 1999-06-30 2010-09-07 Silverbrook Research Pty Ltd Method of using a mobile device to authenticate a printed token and output an image associated with the token
AU2003900746A0 (en) 2003-02-17 2003-03-06 Silverbrook Research Pty Ltd Methods, systems and apparatus (NPS041)
SE517445C2 (en) * 1999-10-01 2002-06-04 Anoto Ab Position determination on a surface provided with a position coding pattern
US7015901B2 (en) 1999-10-25 2006-03-21 Silverbrook Research Pty Ltd Universal pen with code sensor
AU764601B2 (en) 1999-10-25 2003-08-21 Silverbrook Research Pty Ltd Category buttons on interactive paper
EP1096416B2 (en) * 1999-10-26 2017-11-22 Datalogic IP TECH S.r.l. Method for reconstructing a bar code through consecutive scans
EP1107064A3 (en) * 1999-12-06 2004-12-29 Olympus Optical Co., Ltd. Exposure apparatus
US6992655B2 (en) * 2000-02-18 2006-01-31 Anoto Ab Input unit arrangement
US7143952B2 (en) * 2000-03-21 2006-12-05 Anoto Ab Apparatus and methods relating to image coding
US6864880B2 (en) * 2000-03-21 2005-03-08 Anoto Ab Device and method for communication
JP4376425B2 (en) 2000-05-08 2009-12-02 株式会社ワコム Variable capacitor and position indicator
US6857571B2 (en) * 2000-06-30 2005-02-22 Silverbrook Research Pty Ltd Method for surface printing
EP1410281A2 (en) * 2000-07-10 2004-04-21 BMC Software, Inc. System and method of enterprise systems and business impact management
US6592039B1 (en) * 2000-08-23 2003-07-15 International Business Machines Corporation Digital pen using interferometry for relative and absolute pen position
AUPR440901A0 (en) * 2001-04-12 2001-05-17 Silverbrook Research Pty. Ltd. Error detection and correction
JP3523618B2 (en) 2001-08-02 2004-04-26 シャープ株式会社 Coordinate input system and coordinate pattern forming paper used for the coordinate input system
US7145556B2 (en) 2001-10-29 2006-12-05 Anoto Ab Method and device for decoding a position-coding pattern
US6964437B2 (en) * 2001-12-14 2005-11-15 Superba (Societa Anonyme) Process and device for knotting a yarn on a spool
US6966493B2 (en) * 2001-12-18 2005-11-22 Rf Saw Components, Incorporated Surface acoustic wave identification tag having enhanced data content and methods of operation and manufacture thereof
SE520748C2 (en) * 2001-12-27 2003-08-19 Anoto Ab Activation of products with embedded functionality in an information management system
US20030197878A1 (en) * 2002-04-17 2003-10-23 Eric Metois Data encoding and workpiece authentication using halftone information
AU2003228476A1 (en) * 2002-04-09 2003-10-27 The Escher Group, Ltd. Encoding and decoding data using angular symbology and beacons
US20040070616A1 (en) * 2002-06-02 2004-04-15 Hildebrandt Peter W. Electronic whiteboard
KR20050074961A (en) * 2002-10-08 2005-07-19 치팩, 인코포레이티드 Semiconductor stacked multi-package module having inverted second package
SE523931C2 (en) * 2002-10-24 2004-06-01 Anoto Ab Information processing system arrangement for printing on demand of position-coded base, allows application of graphic information and position data assigned for graphical object, to substrate for forming position-coded base
JP4294025B2 (en) 2002-10-25 2009-07-08 シルバーブルック リサーチ ピーティワイ リミテッド Method for generating interface surface and method for reading encoded data
US7502507B2 (en) * 2002-10-31 2009-03-10 Microsoft Corporation Active embedded interaction code
DE60236111D1 (en) 2002-12-03 2010-06-02 Silverbrook Res Pty Ltd ROTATION SYMMETRIC MARKINGS
SE0301143D0 (en) * 2003-04-17 2003-04-17 C Technologies Ab Method and device for loading data
JP4708186B2 (en) 2003-05-02 2011-06-22 豊 木内 2D code decoding program
US7637430B2 (en) * 2003-05-12 2009-12-29 Hand Held Products, Inc. Picture taking optical reader
US6833287B1 (en) * 2003-06-16 2004-12-21 St Assembly Test Services Inc. System for semiconductor package with stacked dies
US7364081B2 (en) * 2003-12-02 2008-04-29 Hand Held Products, Inc. Method and apparatus for reading under sampled bar code symbols
US7853193B2 (en) * 2004-03-17 2010-12-14 Leapfrog Enterprises, Inc. Method and device for audibly instructing a user to interact with a function
JP4301986B2 (en) * 2004-03-30 2009-07-22 アルパイン株式会社 Complex information processing device
US7342575B1 (en) * 2004-04-06 2008-03-11 Hewlett-Packard Development Company, L.P. Electronic writing systems and methods
US7048198B2 (en) * 2004-04-22 2006-05-23 Microsoft Corporation Coded pattern for an optical device and a prepared surface
CN101002217A (en) 2004-05-18 2007-07-18 西尔弗布鲁克研究有限公司 Pharmaceutical product tracking
TWI236239B (en) * 2004-05-25 2005-07-11 Elan Microelectronics Corp Remote controller
US7672532B2 (en) * 2004-07-01 2010-03-02 Exphand Inc. Dithered encoding and decoding information transference system and method
US7656395B2 (en) * 2004-07-15 2010-02-02 Microsoft Corporation Methods and apparatuses for compound tracking systems
GB0417069D0 (en) * 2004-07-30 2004-09-01 Hewlett Packard Development Co Methods, apparatus and software for validating entries made on a form
US20060028459A1 (en) 2004-08-03 2006-02-09 Silverbrook Research Pty Ltd Pre-loaded force sensor
US7166924B2 (en) * 2004-08-17 2007-01-23 Intel Corporation Electronic packages with dice landed on wire bonds
TWI276986B (en) * 2004-11-19 2007-03-21 Au Optronics Corp Handwriting input apparatus
JP4235167B2 (en) 2004-12-13 2009-03-11 新日本製鐵株式会社 Coil inner diameter holding jig and coil insertion method using the same
US20060139338A1 (en) * 2004-12-16 2006-06-29 Robrecht Michael J Transparent optical digitizer
US8094139B2 (en) * 2005-02-23 2012-01-10 Anoto Ab Method in electronic pen, computer program product, and electronic pen
JP4556705B2 (en) 2005-02-28 2010-10-06 富士ゼロックス株式会社 Two-dimensional coordinate identification apparatus, image forming apparatus, and two-dimensional coordinate identification method
US7729539B2 (en) 2005-05-31 2010-06-01 Microsoft Corporation Fast error-correcting of embedded interaction codes
GB2428124B (en) * 2005-07-07 2010-04-14 Hewlett Packard Development Co Data input apparatus and method
CN101248444A (en) 2005-07-25 2008-08-20 西尔弗布鲁克研究有限公司 Product item having first coded data and unique identifier
US7858307B2 (en) * 2005-08-09 2010-12-28 Maxwell Sensors, Inc. Light transmitted assay beads
US7622182B2 (en) 2005-08-17 2009-11-24 Microsoft Corporation Embedded interaction code enabled display
JP4586677B2 (en) * 2005-08-24 2010-11-24 富士ゼロックス株式会社 Image forming apparatus
US7605476B2 (en) * 2005-09-27 2009-10-20 Stmicroelectronics S.R.L. Stacked die semiconductor package
GB2431032A (en) * 2005-10-05 2007-04-11 Hewlett Packard Development Co Data encoding pattern comprising markings composed of coloured sub-markings
JP2007115201A (en) 2005-10-24 2007-05-10 Fuji Xerox Co Ltd Electronic document management system, medical information system, printing method of chart form and chart form
JP2007145317A (en) 2005-10-28 2007-06-14 Soriton Syst:Kk Taking off and landing device of flying body
GB2432341B (en) 2005-10-29 2009-10-14 Hewlett Packard Development Co Marking material
US7934660B2 (en) * 2006-01-05 2011-05-03 Hand Held Products, Inc. Data collection system having reconfigurable data collection terminal
US8179340B2 (en) * 2006-06-16 2012-05-15 Pioneer Corporation Two-dimensional code pattern, two-dimensional code pattern display device, and its reading device
JP4375377B2 (en) * 2006-09-19 2009-12-02 富士ゼロックス株式会社 WRITING INFORMATION PROCESSING SYSTEM, WRITING INFORMATION GENERATION DEVICE, AND PROGRAM
US7479236B2 (en) * 2006-09-29 2009-01-20 Lam Research Corporation Offset correction techniques for positioning substrates
US8291346B2 (en) * 2006-11-07 2012-10-16 Apple Inc. 3D remote control system employing absolute and relative position detection
CA2675689A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Bar code reading method

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2006101437A1 (en) * 2005-03-21 2006-09-28 Anoto Ab Combined detection of position-coding pattern and bar codes

Cited By (2)

Publication number Priority date Publication date Assignee Title
BE1019809A3 (en) * 2011-06-17 2012-12-04 Inventive Designers Nv TOOL FOR DESIGNING A DOCUMENT FLOW PROCESS
WO2012172079A1 (en) * 2011-06-17 2012-12-20 Inventive Designers Nv A tool for designing a document flow process

Also Published As

Publication number Publication date
US20080191040A1 (en) 2008-08-14
US7959081B2 (en) 2011-06-14
US20080191041A1 (en) 2008-08-14
AU2008213887B2 (en) 2010-04-22
EP2118723A4 (en) 2011-06-29
US20080191020A1 (en) 2008-08-14
CN101606167A (en) 2009-12-16
AU2008213887A1 (en) 2008-08-14
US20080191039A1 (en) 2008-08-14
WO2008095228A1 (en) 2008-08-14
US20080191016A1 (en) 2008-08-14
US20080191024A1 (en) 2008-08-14
JP2010518496A (en) 2010-05-27
US7878404B2 (en) 2011-02-01
US7905405B2 (en) 2011-03-15
US20080192004A1 (en) 2008-08-14
US20080191037A1 (en) 2008-08-14
US8416188B2 (en) 2013-04-09
US20080192234A1 (en) 2008-08-14
US20110084141A1 (en) 2011-04-14
US8204307B2 (en) 2012-06-19
US8107732B2 (en) 2012-01-31
JP4986186B2 (en) 2012-07-25
US20080193054A1 (en) 2008-08-14
WO2008095232A1 (en) 2008-08-14
US8118234B2 (en) 2012-02-21
US7604182B2 (en) 2009-10-20
US20130075482A1 (en) 2013-03-28
US20080193053A1 (en) 2008-08-14
US20100001083A1 (en) 2010-01-07
US7793855B2 (en) 2010-09-14
AU2008213886A1 (en) 2008-08-14
US20080191019A1 (en) 2008-08-14
US20080191017A1 (en) 2008-08-14
AU2008213886B2 (en) 2010-04-22
US20080191038A1 (en) 2008-08-14
US20080193030A1 (en) 2008-08-14
CA2675689A1 (en) 2008-08-14
WO2008095231A1 (en) 2008-08-14
US8016204B2 (en) 2011-09-13
US20080192022A1 (en) 2008-08-14
US8011595B2 (en) 2011-09-06
CN101636709A (en) 2010-01-27
US20080193007A1 (en) 2008-08-14
US20080273010A1 (en) 2008-11-06
US8006912B2 (en) 2011-08-30
EP2118819A1 (en) 2009-11-18
US20080191036A1 (en) 2008-08-14
US20080193045A1 (en) 2008-08-14
EP2132615A1 (en) 2009-12-16
US8107733B2 (en) 2012-01-31
US20080191021A1 (en) 2008-08-14
US7905406B2 (en) 2011-03-15
US20080191018A1 (en) 2008-08-14
WO2008095227A1 (en) 2008-08-14
US20080193044A1 (en) 2008-08-14
CA2675693A1 (en) 2008-08-14
US7905423B2 (en) 2011-03-15
US8320678B2 (en) 2012-11-27
EP2118723A1 (en) 2009-11-18
EP2132615A4 (en) 2012-02-08
US20120006906A1 (en) 2012-01-12
CN101606167B (en) 2012-04-04
US7913923B2 (en) 2011-03-29
US8028925B2 (en) 2011-10-04
WO2008122070A1 (en) 2008-10-16
EP2118819A4 (en) 2011-05-04
US20080192006A1 (en) 2008-08-14

Similar Documents

Publication Publication Date Title
US7878404B2 (en) Bar code reading method
US6627870B1 (en) Sensing device with interchangeable nibs
US7891253B2 (en) Capacitive force sensor
US7649523B2 (en) Method of estimating position of writing nib relative to an optical sensor
AU2004201007B2 (en) Coded Surface
US20090079692A1 (en) Interactive digital clippings
EP1567975B1 (en) Rotationally symmetric tags
AU2008238595B2 (en) Sensing device having capacitive force sensor
AU2003262335B2 (en) Sensing device with interchangeable nibs
AU2001216798A1 (en) Code sensor attachment for pen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08700344

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2675689

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08700344

Country of ref document: EP

Kind code of ref document: A1