US7389310B1 - Supercomputing environment for duplicate detection on web-scale data - Google Patents
- Publication number
- US7389310B1 (application US12/045,406, US4540608A)
- Authority
- US
- United States
- Prior art keywords
- document
- nodes
- data packets
- node
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
- Y10S707/99954—Version management
Drawings
- FIG. 1 illustrates a supercomputing environment, according to an exemplary embodiment.
- FIG. 2 depicts a method for duplicate detection on web-scale data in a supercomputing environment, according to an exemplary embodiment.
- As illustrated in FIG. 1, the supercomputing environment 100 includes a plurality of nodes 101. Nodes 101 may be computer apparatuses capable of performing computational tasks; each of nodes 101 may therefore include at least one processor, storage media, memory, cache associated with any processor included thereon, and any additional equipment needed for operation. It is noted that the present invention is not limited to any particular number of nodes, as more or fewer nodes than illustrated are applicable to exemplary embodiments of the present invention.
- Such programs, when recorded on computer-readable storage media, may be readily stored and distributed. The storage medium, as it is read by a computer, may enable the method for duplicate detection on web-scale data in a supercomputing environment, in accordance with an exemplary embodiment of the present invention.
Abstract
A scale-out supercomputing environment includes a plurality of interconnected nodes arranged in a three-dimensional cubic grid and configured to perform a method of duplicate detection. The method includes at least computing a fingerprint of at least one document in the supercomputing environment to generate data packets from the at least one document and to generate a fixed size tuple of information from the at least one document, distributing the data packets to each node of the plurality of nodes to ensure all elements of the fixed size tuple fit into memory of the plurality of nodes, applying localized detection techniques to data packets on each node of the plurality of nodes to remove data packet duplicates, redistributing the data packets to each node of the plurality of nodes based on the document fingerprint, and performing a global merge of results of the localized detection techniques.
Description
This application is a continuation of U.S. patent application Ser. No. 11/939,378, filed Nov. 13, 2007, the contents of which are incorporated herein by reference thereto.
IBM® and Blue Gene® are registered trademarks of International Business Machines Corporation, Armonk, N.Y., U.S.A. Other names used herein may be registered trademarks, trademarks, or product names of International Business Machines Corporation or other companies.
1. Technical Field
This invention generally relates to duplicate documents. Specifically, this invention relates to efficient duplicate detection on web-scale data in supercomputing environments.
2. Description of Background
Identifying duplicate records is typically termed duplicate detection. Duplicate detection is a key operation when dealing with large volumes of data, and especially when integrating data from multiple sources. Web-scale data is data that may be used within Internet resources such as web-sites, servers, or similar resources. There are multiple reasons for the presence of duplicate data on the Internet, including mirroring, versioning, different formats (e.g., html format, portable document format, etc.), user copies, backups, and error pages (e.g., soft 404 errors). The duplicate data results in a significant portion of the Internet having duplicate content.
A scale-out supercomputing environment includes a plurality of interconnected nodes arranged in a three-dimensional cubic grid and configured to perform a method of duplicate detection. The method includes computing a fingerprint of at least one document to generate data packets from the at least one document and to generate a fixed size tuple of information from the at least one document, distributing the data packets to each node of the plurality of nodes to ensure all elements of the fixed size tuple fit into memory of the plurality of nodes, applying localized detection techniques to data packets on each node of the plurality of nodes to remove data packet duplicates, redistributing the data packets to each node of the plurality of nodes based on the document fingerprint, reapplying the localized detection techniques on each node to the redistributed packets to remove exact data packet duplicates, and performing a global merge of results of the localized detection techniques in a distributed fashion in the supercomputing environment such that an entire corpus of web-scale data is represented based on document duplication. According to the method, the data packets are associated with elements of the fixed size tuple, the fixed size tuple includes at least a document identifier, a document fingerprint, and a document quality measurement, and the data packets are redistributed by dynamically allocating data packets to nodes to ensure all elements of the fixed size tuple fit into memory of the plurality of nodes.
Additional features and advantages are realized through the techniques of the exemplary embodiments described herein. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the detailed description and to the drawings.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The detailed description explains an exemplary embodiment, together with advantages and features, by way of example with reference to the drawings.
According to an exemplary embodiment, a solution has been achieved which significantly decreases the computational time associated with duplicate detection on web-scale data. The decrease in computational time results in more available time for other tasks. According to an exemplary embodiment, a method for duplicate detection may be performed by a supercomputer.
Supercomputers may be divided into two sub-groups: scale-up and scale-out. Scale-up generally refers to computers that are increased in size to make them more powerful than most other available computers. This increase in power is used for computation-intensive activities and warrants the term supercomputer if the power increase is relatively large compared to conventionally available equipment. In contrast, scale-out describes a computer system in which increased power is derived from sharing data and computational tasks among many nodes. Thus, increases in performance are realized in comparison to a single machine.
Turning back to FIG. 1 , each node of the plurality of nodes may be interconnected via connection 102. For example, connection 102 may be a relatively fast connection system employing switches, hubs, and/or other fast networking equipment. For example, a torus interconnection network, as may be implemented in a BLUE GENE parallel supercomputer, may be used for node interconnections. According to a torus network, nodes are arranged in a three-dimensional cubic grid in which each node is connected to its six nearest neighbors with high-speed dedicated links. Torus networks provide high-bandwidth nearest-neighbor connectivity while also allowing construction of a network without edges.
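The nearest-neighbor wiring of such a torus can be illustrated with a short sketch; the grid dimensions below are illustrative only and are not taken from any particular BLUE GENE configuration:

```python
def torus_neighbors(x, y, z, dim=4):
    """Return the six nearest neighbors of node (x, y, z) in a
    dim x dim x dim 3D torus. Coordinates wrap around, so the grid
    has no edges and even a 'corner' node keeps six high-speed links."""
    return [
        ((x + 1) % dim, y, z), ((x - 1) % dim, y, z),
        (x, (y + 1) % dim, z), (x, (y - 1) % dim, z),
        (x, y, (z + 1) % dim), (x, y, (z - 1) % dim),
    ]
```

For example, `torus_neighbors(0, 0, 0)` still yields six distinct neighbors, because the node on the boundary wraps to the opposite face of the grid.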
Turning back to FIG. 1, because each node of the plurality of nodes 101 is interconnected via fast networking equipment, each node may share resources across connection 102. For example, each node may share information contained on storage media, may share instructions, and/or may divide computational tasks among all nodes. The connection 102 therefore promotes a scale-out configuration of the plurality of nodes 101: additional computational power is derived from the combined computational power of all nodes sharing computational tasks across connection 102 (i.e., the nodes are arranged in a scale-out supercomputing fashion).
It is noted that while the cache of each node may not be directly and globally shared, data is shared between nodes, and the logical-to-physical mapping of information is routed around to all nodes in the supercomputing environment 100. Each node may communicate with each other node through the connection 102, eliminating the need for additional wiring. Furthermore, a unique file system, such as a general parallel file system (GPFS), may be employed in the supercomputing environment. For example, the unique file system would allow sequential access to electronic documents stored within the plurality of nodes 101.
Therefore, as described above, an exemplary embodiment of the present invention provides a supercomputing environment where computational tasks may be shared across a plurality of nodes, thereby enabling a relatively faster computational time compared to a single node. Hereinafter, a method for duplicate detection on web-scale data in a supercomputing environment is described in detail with reference to FIG. 2.
As illustrated in FIG. 2, the method 200 includes computing a fingerprint of a document at block 201. Computing a fingerprint of the document includes reading data from the document (e.g., an electronic document) to generate data packets from the document and generating a three-element tuple (a fixed-size collection of elements) for the document.
With regard to generating data packets from the document, it is noted that an electronic document may include information or data associated with its content. This data may be separated into packets and associated with different elements of the three-element tuple. The three-element tuple includes a document identification representation (document ID, or DocId), an identifier (fingerprint), and a measure of a document's significance (document quality). Thus, any element of the three-element tuple may be used to sort and identify data packets. For example, if a plurality of documents are being processed, the document ID of each packet allows easy identification of the document a particular packet belongs to. Further, a fingerprint for the document may also be used to identify a document's data packets. Thus, if all tuples with a particular document fingerprint are assigned to one node of a supercomputing environment, that node may perform duplicate detection without communication with other nodes in the supercomputing environment. Additionally, the document quality may be used to identify an order or priority of the document compared to other documents.
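A minimal sketch of fingerprint and tuple generation follows. The MD5 hash and the numeric quality score are illustrative assumptions; the patent does not name a specific fingerprinting algorithm or quality measure.

```python
import hashlib
from collections import namedtuple

# The fixed-size three-element tuple: (DocId, fingerprint, quality).
DocTuple = namedtuple("DocTuple", ["doc_id", "fingerprint", "quality"])

def compute_tuple(doc_id, content, quality):
    """Fingerprint a document's content and package the three-element tuple.
    MD5 is an illustrative stand-in for the unspecified fingerprint scheme."""
    fingerprint = int(hashlib.md5(content.encode("utf-8")).hexdigest(), 16)
    return DocTuple(doc_id, fingerprint, quality)
```

Two documents with identical content yield identical fingerprints, which is what later lets a single node detect the duplication locally.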
The method 200 further includes distributing data at block 202. If the data is separated into data packets, the data packets may be distributed based on an element of the three-element tuple and/or to ensure all tuples fit into memory. At this portion of method 200, the data packets may be distributed among nodes of a supercomputing environment based upon the fingerprint of the document the packets belong to. Distributing the data may therefore include dynamically allocating nodes to ensure all tuples fit in memory of nodes. Upon distribution, the data packets are stored in the memory of a corresponding node. Alternatively, the data packets may be distributed in a random fashion. For example, the data packets may be dynamically allocated to nodes to ensure all tuples fit into memory, without taking the document fingerprint into consideration.
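A simple way to realize fingerprint-based distribution is a modulo partition, sketched below. The node count is illustrative, and the dynamic memory-aware allocation described above is not modeled.

```python
from collections import defaultdict

def assign_node(fingerprint, num_nodes):
    """Route a data packet to a node by its document fingerprint, so that
    every packet sharing a fingerprint lands on the same node."""
    return fingerprint % num_nodes

# Illustrative distribution of (doc_id, fingerprint) packets across 8 nodes:
# documents 1 and 2 share a fingerprint, so they are routed together.
packets = [(1, 0x2A), (2, 0x2A), (3, 0x1F)]
per_node = defaultdict(list)
for doc_id, fp in packets:
    per_node[assign_node(fp, 8)].append((doc_id, fp))
```

Because co-fingerprinted packets land together, each node can later deduplicate without any cross-node communication.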
The method 200 further includes applying localized detection techniques at block 203. The localized detection techniques may be applied to data packets that have been distributed. For example, the nodes corresponding to data packets may apply these localized detection techniques. The localized detection techniques include sorting data packets by fingerprint and removing duplicate data packets based on a comparison to locate the duplicates within a corresponding node. It is noted that blocks 202 and 203 are optional in exemplary embodiments. For example, as will be described below, blocks 204 and 205 provide sufficient duplicate detection to accurately detect duplicate documents.
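The localized detection at block 203 can be sketched as a sort followed by an adjacent-duplicate scan. The Packet structure is a hypothetical stand-in for the patent's data packets.

```python
from collections import namedtuple

# Hypothetical packet layout: one field per tuple element.
Packet = namedtuple("Packet", ["doc_id", "fingerprint", "quality"])

def local_dedupe(packets):
    """Localized detection on a single node: sort packets by fingerprint so
    duplicates become adjacent, then keep one packet per fingerprint."""
    kept = []
    for p in sorted(packets, key=lambda p: p.fingerprint):
        if not kept or p.fingerprint != kept[-1].fingerprint:
            kept.append(p)
    return kept
```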
The method 200 further includes re-distributing data at block 204. Re-distributing the data includes shuffling data packets among nodes based on the fingerprint of the respective document, and dynamically allocating nodes to ensure all tuples fit into the memory of the nodes. Because the data packets have now been redistributed based on fingerprint, additional removal of duplicate packets may be performed. Upon redistribution, the data packets are stored in the memory of a corresponding node.
The method 200 further includes applying (or re-applying) localized detection techniques at block 205. The localized detection techniques may be applied to the re-distributed data packets, and include sorting data packets based on the fingerprint of the respective document and further sorting by document ID. Exact duplicates of data packets may be removed at each node independent of communication with other nodes. For example, because data packets have been redistributed based on document fingerprints and sorted within the allocated nodes, each node may perform duplicate detection relatively quickly and remove any duplicates from memory. The duplicate detection may be based on comparisons of data packets according to elements of the three-element tuple. Thus similar documents may be allocated to the same node, further increasing the speed of duplicate detection. Upon removal of duplicates, only unique data packets (i.e., documents) should remain in the nodes. Therefore, even if blocks 202 and 203 are omitted, fast duplicate detection may be performed with blocks 204 and 205.
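The detection at block 205 can be sketched as a sort on (fingerprint, document ID) followed by an adjacent comparison, so that exact duplicates sit next to each other and only the first of each run survives. The Packet structure is a hypothetical stand-in for the patent's data packets.

```python
from collections import namedtuple

# Hypothetical packet layout (field names are assumptions).
Packet = namedtuple("Packet", ["doc_id", "fingerprint", "quality"])

def remove_exact_duplicates(packets):
    """Sort by (fingerprint, doc_id); packets that agree on both fields are
    exact duplicates, so only the first packet of each run is kept."""
    kept = []
    for p in sorted(packets, key=lambda p: (p.fingerprint, p.doc_id)):
        if not kept or (p.fingerprint, p.doc_id) != (kept[-1].fingerprint,
                                                     kept[-1].doc_id):
            kept.append(p)
    return kept
```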
The method 200 further includes merging data at block 206. The merging includes performing a global merge of the results of the localized detection techniques in a distributed fashion in the supercomputing environment. Thus, if documents have been duplicated, by globally merging the results, duplicates are detected and the results are made available.
The results may be organized into sets of duplicated and non-duplicated documents. According to an exemplary embodiment, “unique documents” are documents with a single copy, “document groups” are a set of duplicate documents with the same fingerprint, and “master documents” are the highest ranked documents within a given document group. As such, for an entire corpus of documents, the sets of unique documents and master documents form a complete list of non-duplicate documents. Alternatively, for an entire corpus of documents, a set of duplicated documents may be provided. For example, if a listing of duplicated documents is provided, all non-duplicated documents may be inferred from a listing of all documents.
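The classification into unique documents, document groups, and master documents can be sketched as follows. The Doc structure and the quality-based master selection mirror the definitions above, but the field layout is an assumption.

```python
from collections import defaultdict, namedtuple

# Hypothetical document record (field names are assumptions).
Doc = namedtuple("Doc", ["doc_id", "fingerprint", "quality"])

def global_merge(docs):
    """Organize deduplication results: documents with a unique fingerprint
    are 'unique documents'; documents sharing a fingerprint form a
    'document group', whose highest-quality member is its 'master'."""
    groups = defaultdict(list)
    for d in docs:
        groups[d.fingerprint].append(d)
    unique, doc_groups, masters = [], [], []
    for members in groups.values():
        if len(members) == 1:
            unique.append(members[0])
        else:
            doc_groups.append(members)
            masters.append(max(members, key=lambda d: d.quality))
    return unique, doc_groups, masters
```

The unique documents together with the masters then form the complete list of non-duplicate documents for the corpus.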
It is noted that the above method may be implemented in a supercomputing environment as described with reference to FIG. 1. The supercomputing environment may provide a unique file system allowing fast sequential access to electronic documents stored thereon. This may reduce the overhead of disk reads and writes, such that the computation time itself consumes only a fraction of the overall time. For example, if GPFS is employed in the supercomputing environment, on a 6-billion document corpus, a turnaround time of less than about one hour has been achieved in practice of an exemplary embodiment of the present invention.
It is further noted that portions or the entirety of the method may be executed as instructions in a processor of a computer system. Thus, the present invention may be implemented in software, for example, as any suitable computer program. For example, a program in accordance with the present invention may be a computer program product causing a computer to execute the example method described herein.
The computer program product may include a computer-readable medium having computer program logic or code portions embodied thereon for enabling a processor of a computer apparatus to perform one or more functions in accordance with one or more of the example methodologies described above. The computer program logic may thus cause the processor to perform one or more of the example methodologies, or one or more functions of a given methodology described herein.
The computer-readable storage medium may be a built-in medium installed inside a computer main body or a removable medium arranged so that it can be separated from the computer main body. Examples of the built-in medium include, but are not limited to, rewriteable non-volatile memories, such as RAMs, ROMs, flash memories, and hard disks. Examples of a removable medium may include, but are not limited to, optical storage media such as CD-ROMs and DVDs; magneto-optical storage media such as MOs; magnetic storage media such as floppy disks (trademark), cassette tapes, and removable hard disks; media with a built-in rewriteable non-volatile memory such as memory cards; and media with a built-in ROM, such as ROM cassettes.
Further, such programs, when recorded on computer-readable storage media, may be readily stored and distributed. The storage medium, as it is read by a computer, may enable the method for duplicate detection on web-scale data in a supercomputing environment, in accordance with an exemplary embodiment of the present invention.
While an exemplary embodiment has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.
Claims (5)
1. A scale-out supercomputing system, comprising:
a plurality of interconnected nodes, wherein each node of the plurality of interconnected nodes includes a computer processor, arranged in a three-dimensional cubic grid and configured to perform a method of duplicate detection, the method comprising:
computing a fingerprint of at least one document to generate data packets from the at least one document and to generate a fixed size tuple of information from the at least one document, wherein,
the data packets are associated with elements of the fixed size tuple, and
the fixed size tuple includes at least a document identifier, a document fingerprint, and a document quality measurement;
distributing the data packets to each node of the plurality of nodes to ensure all elements of the fixed size tuple fit into memory of the plurality of nodes;
applying localized detection techniques to data packets on each node of the plurality of nodes to remove data packet duplicates;
redistributing the data packets to each node of the plurality of nodes based on the document fingerprint, wherein,
the data packets are redistributed by dynamically allocating data packets to nodes to ensure all elements of the fixed size tuple fit into memory of the plurality of nodes;
reapplying the localized detection techniques on each node to the redistributed packets to remove exact data packet duplicates; and
performing a global merge of results of the localized detection techniques in a distributed fashion in the supercomputing system such that an entire corpus of web-scale data is represented based on document duplication.
2. The supercomputing system of claim 1, further comprising a unique computer file system allowing for sequential access to a plurality of documents for duplicate detection.
3. The supercomputing system of claim 1, wherein the entire corpus of web-scale data is represented by at least two lists, the at least two lists including:
documents with a single copy; and
highest ranked documents within a given document group, the given document group including a set of duplicate documents with the same fingerprint.
4. The supercomputing system of claim 1, wherein the entire corpus of web-scale data is represented by one list including all duplicated documents.
5. The supercomputing system of claim 1, wherein applying localized detection techniques includes:
sorting data packets by an associated document fingerprint;
comparing data packets to locate duplicate data packets; and
removing the located duplicate data packets from the node on which the duplicate data packets reside.
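The redistribution step recited in claim 1, allocating data packets to nodes based on the document fingerprint so that all copies of a document land on the same node, can be sketched as follows. The modulo-hash placement and the node count below are illustrative assumptions; the claim requires only that allocation be dynamic and fingerprint-based so the tuples fit into node memory.

```python
NUM_NODES = 8  # illustrative node count, not from the patent

def node_for(fingerprint, num_nodes=NUM_NODES):
    """Map a document fingerprint to a node index deterministically."""
    return fingerprint % num_nodes

def redistribute(packets, num_nodes=NUM_NODES):
    """Bucket (document_id, fingerprint, quality) packets by target node."""
    buckets = [[] for _ in range(num_nodes)]
    for packet in packets:
        buckets[node_for(packet[1], num_nodes)].append(packet)
    return buckets

# Packets sharing a fingerprint always map to the same bucket, which is
# what lets each node then remove duplicates independently (claim 1,
# "reapplying the localized detection techniques on each node").
packets = [(1, 0x10, 0.9), (2, 0x10, 0.2), (3, 0x11, 0.6)]
buckets = redistribute(packets)
```

Any deterministic function of the fingerprint gives the same guarantee; a real deployment would pick one that also balances load across the grid.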
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/045,406 US7389310B1 (en) | 2007-11-13 | 2008-03-10 | Supercomputing environment for duplicate detection on web-scale data |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/939,378 US7363329B1 (en) | 2007-11-13 | 2007-11-13 | Method for duplicate detection on web-scale data in supercomputing environments |
US12/045,406 US7389310B1 (en) | 2007-11-13 | 2008-03-10 | Supercomputing environment for duplicate detection on web-scale data |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/939,378 Continuation US7363329B1 (en) | 2007-11-13 | 2007-11-13 | Method for duplicate detection on web-scale data in supercomputing environments |
Publications (1)
Publication Number | Publication Date |
---|---|
US7389310B1 true US7389310B1 (en) | 2008-06-17 |
Family
ID=39310268
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/939,378 Expired - Fee Related US7363329B1 (en) | 2007-11-13 | 2007-11-13 | Method for duplicate detection on web-scale data in supercomputing environments |
US12/045,406 Expired - Fee Related US7389310B1 (en) | 2007-11-13 | 2008-03-10 | Supercomputing environment for duplicate detection on web-scale data |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/939,378 Expired - Fee Related US7363329B1 (en) | 2007-11-13 | 2007-11-13 | Method for duplicate detection on web-scale data in supercomputing environments |
Country Status (1)
Country | Link |
---|---|
US (2) | US7363329B1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060230149A1 (en) * | 2005-04-07 | 2006-10-12 | Cluster Resources, Inc. | On-Demand Access to Compute Resources |
US9015324B2 (en) | 2005-03-16 | 2015-04-21 | Adaptive Computing Enterprises, Inc. | System and method of brokering cloud computing resources |
US9231886B2 (en) | 2005-03-16 | 2016-01-05 | Adaptive Computing Enterprises, Inc. | Simple integration of an on-demand compute environment |
US10333862B2 (en) | 2005-03-16 | 2019-06-25 | Iii Holdings 12, Llc | Reserving resources in an on-demand compute environment |
US11467883B2 (en) | 2004-03-13 | 2022-10-11 | Iii Holdings 12, Llc | Co-allocating a reservation spanning different compute resources types |
US11494235B2 (en) | 2004-11-08 | 2022-11-08 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11522952B2 (en) | 2007-09-24 | 2022-12-06 | The Research Foundation For The State University Of New York | Automatic clustering for self-organizing grids |
US11526304B2 (en) | 2009-10-30 | 2022-12-13 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11630704B2 (en) | 2004-08-20 | 2023-04-18 | Iii Holdings 12, Llc | System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information |
US11650857B2 (en) | 2006-03-16 | 2023-05-16 | Iii Holdings 12, Llc | System and method for managing a hybrid computer environment |
US11652706B2 (en) | 2004-06-18 | 2023-05-16 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11960937B2 (en) | 2022-03-17 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8594392B2 (en) * | 2009-11-18 | 2013-11-26 | Yahoo! Inc. | Media identification system for efficient matching of media items having common content |
US8462781B2 (en) | 2011-04-06 | 2013-06-11 | Anue Systems, Inc. | Systems and methods for in-line removal of duplicate network packets |
TWI420333B (en) * | 2011-08-10 | 2013-12-21 | Inventec Corp | A distributed de-duplication system and the method therefore |
CN102298633B (en) * | 2011-09-08 | 2013-05-29 | 厦门市美亚柏科信息股份有限公司 | Method and system for investigating repeated data in distributed mass data |
CN104112005B (en) * | 2014-07-15 | 2017-05-10 | 电子科技大学 | Distributed mass fingerprint identification method |
US10044625B2 (en) | 2014-11-25 | 2018-08-07 | Keysight Technologies Singapore (Holdings) Pte Ltd | Hash level load balancing for deduplication of network packets |
US10142263B2 (en) | 2017-02-21 | 2018-11-27 | Keysight Technologies Singapore (Holdings) Pte Ltd | Packet deduplication for network packet monitoring in virtual processing environments |
CN113259470B (en) * | 2021-06-03 | 2021-09-24 | 长视科技股份有限公司 | Data synchronization method and data synchronization system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913208A (en) | 1996-07-09 | 1999-06-15 | International Business Machines Corporation | Identifying duplicate documents from search results without comparing document content |
US6658423B1 (en) | 2001-01-24 | 2003-12-02 | Google, Inc. | Detecting duplicate and near-duplicate files |
2007
- 2007-11-13 US US11/939,378 patent/US7363329B1/en not_active Expired - Fee Related
2008
- 2008-03-10 US US12/045,406 patent/US7389310B1/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913208A (en) | 1996-07-09 | 1999-06-15 | International Business Machines Corporation | Identifying duplicate documents from search results without comparing document content |
US6658423B1 (en) | 2001-01-24 | 2003-12-02 | Google, Inc. | Detecting duplicate and near-duplicate files |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11467883B2 (en) | 2004-03-13 | 2022-10-11 | Iii Holdings 12, Llc | Co-allocating a reservation spanning different compute resources types |
US11652706B2 (en) | 2004-06-18 | 2023-05-16 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11630704B2 (en) | 2004-08-20 | 2023-04-18 | Iii Holdings 12, Llc | System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information |
US11494235B2 (en) | 2004-11-08 | 2022-11-08 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11886915B2 (en) | 2004-11-08 | 2024-01-30 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11861404B2 (en) | 2004-11-08 | 2024-01-02 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11762694B2 (en) | 2004-11-08 | 2023-09-19 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11709709B2 (en) | 2004-11-08 | 2023-07-25 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11656907B2 (en) | 2004-11-08 | 2023-05-23 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537434B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537435B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11134022B2 (en) | 2005-03-16 | 2021-09-28 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US10333862B2 (en) | 2005-03-16 | 2019-06-25 | Iii Holdings 12, Llc | Reserving resources in an on-demand compute environment |
US9015324B2 (en) | 2005-03-16 | 2015-04-21 | Adaptive Computing Enterprises, Inc. | System and method of brokering cloud computing resources |
US9231886B2 (en) | 2005-03-16 | 2016-01-05 | Adaptive Computing Enterprises, Inc. | Simple integration of an on-demand compute environment |
US11658916B2 (en) | 2005-03-16 | 2023-05-23 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US10608949B2 (en) | 2005-03-16 | 2020-03-31 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US11356385B2 (en) | 2005-03-16 | 2022-06-07 | Iii Holdings 12, Llc | On-demand compute environment |
US10277531B2 (en) | 2005-04-07 | 2019-04-30 | Iii Holdings 2, Llc | On-demand access to compute resources |
US20060230149A1 (en) * | 2005-04-07 | 2006-10-12 | Cluster Resources, Inc. | On-Demand Access to Compute Resources |
US9075657B2 (en) * | 2005-04-07 | 2015-07-07 | Adaptive Computing Enterprises, Inc. | On-demand access to compute resources |
US11533274B2 (en) | 2005-04-07 | 2022-12-20 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11496415B2 (en) | 2005-04-07 | 2022-11-08 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11765101B2 (en) | 2005-04-07 | 2023-09-19 | Iii Holdings 12, Llc | On-demand access to compute resources |
US10986037B2 (en) | 2005-04-07 | 2021-04-20 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11522811B2 (en) | 2005-04-07 | 2022-12-06 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11831564B2 (en) | 2005-04-07 | 2023-11-28 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11650857B2 (en) | 2006-03-16 | 2023-05-16 | Iii Holdings 12, Llc | System and method for managing a hybrid computer environment |
US11522952B2 (en) | 2007-09-24 | 2022-12-06 | The Research Foundation For The State University Of New York | Automatic clustering for self-organizing grids |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11526304B2 (en) | 2009-10-30 | 2022-12-13 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11960937B2 (en) | 2022-03-17 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
Also Published As
Publication number | Publication date |
---|---|
US7363329B1 (en) | 2008-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7389310B1 (en) | Supercomputing environment for duplicate detection on web-scale data | |
US10372559B2 (en) | Managing a redundant computerized database using a replicated database cache | |
US9372762B2 (en) | Systems and methods for restoring application data | |
US7890626B1 (en) | High availability cluster server for enterprise data management | |
JP5539683B2 (en) | Scalable secondary storage system and method | |
US8370833B2 (en) | Method and system for implementing a virtual storage pool in a virtual environment | |
US11314420B2 (en) | Data replica control | |
US20140222953A1 (en) | Reliable and Scalable Image Transfer For Data Centers With Low Connectivity Using Redundancy Detection | |
US20120192207A1 (en) | Pipeline Across Isolated Computing Environments | |
US8099553B2 (en) | Refactoring virtual data storage hierarchies | |
US7987325B1 (en) | Method and apparatus for implementing a storage lifecycle based on a hierarchy of storage destinations | |
CN102142032A (en) | Method and system for reading and writing data of distributed file system | |
US9177034B2 (en) | Searchable data in an object storage system | |
CN117377941A (en) | Generating a dataset using an approximate baseline | |
US20120054429A1 (en) | Method and apparatus for optimizing data allocation | |
US20120072394A1 (en) | Determining database record content changes | |
Vallath | Oracle real application clusters | |
EP1208432B1 (en) | System and method for logging transaction records in a computer system | |
CN108536822A (en) | Data migration method, device, system and storage medium | |
Khattak et al. | Enhancing integrity technique using distributed query operation | |
US20230351040A1 (en) | Methods and Systems Directed to Distributed Personal Data Management | |
Paris | Voting with bystanders | |
GB2378789A (en) | Removal of duplicates from large data sets | |
Kaseb et al. | Redundant independent files (RIF): a technique for reducing storage and resources in big data replication | |
US6877108B2 (en) | Method and apparatus for providing error isolation in a multi-domain computer system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
REMI | Maintenance fee reminder mailed | ||
FPAY | Fee payment |
Year of fee payment: 4 |
SULP | Surcharge for late payment | ||
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20160617 |