Publication number: US 20060230024 A1
Publication type: Application
Application number: US 11/101,667
Publication date: 12 Oct 2006
Filing date: 8 Apr 2005
Priority date: 8 Apr 2005
Also published as: US 8140499
Inventors: Yang Lei, Hasan Muhammad
Original Assignee: International Business Machines Corporation
Export Citation: BiBTeX, EndNote, RefMan
External Links: USPTO, USPTO Assignment, Espacenet
Method and apparatus for a context based cache infrastructure to enable subset query over a cached object
US 20060230024 A1
Abstract
A method, an apparatus, and computer instructions are provided for a context based cache infrastructure to enable subset query over a cached object. Responsive to detecting a query to a root context of a context tree, the tree is traversed for a parent context of the subcontext corresponding to the name and value pair identified by a user in the query. If the parent context caches all query results, the query results are iterated and the remaining name and value pairs are filtered out. However, if the parent context does not cache all query results, the traversing step is repeated for the next parent context of the subcontext until the root context is encountered. If the root context is encountered, a query is issued to the database for the name and value pair and the result of the database query is cached in a new context.
Images(5)
Claims(20)
1. A method in a data processing system for a context based infrastructure to enable subset query over a cached object, the method comprising:
detecting a query to a root context of a context tree from a user, wherein the query includes a name and value pair;
responsive to detecting the query, traversing the context tree for a parent context of a subcontext corresponding to the name and value pair; and
determining if the parent context caches all query results.
2. The method of claim 1, further comprising:
if the parent context does not cache all query results, repeating the traversing step for the next parent context of the subcontext until a root context is encountered;
responsive to encountering the root context, issuing a query to the database for the name and value pair; and
caching a result of the database query in a new context.
3. The method of claim 1, further comprising:
if the parent context caches all query results, iterating each query result and filtering out remaining name and value pairs.
4. The method of claim 1, wherein the root context includes a set of objects corresponding to a bean type.
5. The method of claim 4, wherein the subcontext includes a subset of objects filtered based on the name and value pair.
6. The method of claim 1, wherein the context tree includes a root context and a set of subcontexts, wherein each subcontext includes one or more subcontexts.
7. The method of claim 1, wherein the context tree includes a root cache context for each bean type.
8. A data processing system comprising:
a bus;
a memory connected to the bus, wherein a set of instructions are located in the memory; and
a processor connected to the bus, wherein the processor executes the set of instructions to detect a query to a root context of a context tree from a user, wherein the query includes a name and value pair; traverse the context tree for a parent context of a subcontext corresponding to the name and value pair responsive to detecting the query; and determine if the parent context caches all query results.
9. The data processing system of claim 8, wherein the processor further executes the set of instructions to repeat the traversing step for the next parent context of the subcontext until a root context is encountered if the parent context does not cache all query results; issue a query to the database for the name and value pair responsive to encountering a root context; and cache a result of the database query in a new context.
10. The data processing system of claim 8, wherein the processor further executes the set of instructions to iterate each query result and filter out remaining name and value pairs if the parent context caches all query results.
11. The data processing system of claim 8, wherein the root context includes a set of objects corresponding to a bean type.
12. The data processing system of claim 11, wherein the subcontext includes a subset of objects filtered based on the name and value pair.
13. The data processing system of claim 8, wherein the context tree includes a root context and a set of subcontexts, wherein each subcontext includes one or more subcontexts.
14. The data processing system of claim 8, wherein the context tree includes a root cache context for each bean type.
15. A computer program product in a computer readable medium for a context based infrastructure to enable subset query over a cached object, the computer program product comprising:
first instructions for detecting a query to a root context of a context tree from a user, wherein the query includes a name and value pair;
second instructions for traversing the context tree for a parent context of a subcontext corresponding to the name and value pair responsive to detecting the query; and
third instructions for determining if the parent context caches all query results.
16. The computer program product of claim 15, further comprising:
fourth instructions for repeating the traversing step for the next parent context of the subcontext until a root context is encountered if the parent context does not cache all query results;
fifth instructions for issuing a query to the database for the name and value pair responsive to encountering a root context; and
sixth instructions for caching a result of the database query in a new context.
17. The computer program product of claim 15, further comprising:
seventh instructions for iterating each query result and filtering out remaining name and value pairs if the parent context caches all query results.
18. The computer program product of claim 15, wherein the root context includes a set of objects corresponding to a bean type.
19. The computer program product of claim 15, wherein the subcontext includes a subset of objects filtered based on the name and value pair.
20. The computer program product of claim 15, wherein the context tree includes a root context and a set of subcontexts, wherein each subcontext includes one or more subcontexts.
Description
    BACKGROUND OF THE INVENTION
  • [0001]
    1. Technical Field
  • [0002]
    The present invention relates to an improved data processing system. In particular, the present invention relates to cached objects returned from a database query. Still more particularly, the present invention relates to a context based cache infrastructure that enables a subset query over a cached object returned from a database query in a data processing system.
  • [0003]
    2. Description of Related Art
  • [0004]
    In the current enterprise JavaBeans™ (EJB) specification, lifecycle methods are provided for managing an entity bean's lifecycle. Examples of lifecycle methods include ejbCreate, which manages the creation of entity beans; ejbStore, which manages update of entity beans; and ejbRemove, which manages removal of entity beans. An entity bean is an enterprise JavaBean™ that has a physical data representation in a data store, for example, a row in a relational database table. Enterprise JavaBean™ or J2EE is a product available from Sun Microsystems, Inc.
  • [0005]
    In addition to lifecycle methods, the enterprise JavaBeans™ specification provides ejbFind and ejbSelect methods to query entity beans that satisfy a search condition. For applications that seldom update their data, it is more efficient to cache the data locally rather than query the database for each request, since database queries affect application performance.
  • [0006]
    Currently, query results may be cached and a user may search the query results by a certain criteria. For example, a catalog may have a “product” field and a “type” field, and a user may search by product, such as product=“electronics” or product=“books”. Since the catalog is seldom updated, the query results may be cached by the criteria, such that when the user performs the same search, the result is returned from the cached object instead of the database, thus improving the search response time. If query results are cached without context, for each query, data may be returned if and only if it is an exact match.
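    The exact-match limitation described above can be sketched in Java (class and key names here are illustrative, not from the patent): a cache keyed by the literal criteria string returns a hit only when the identical criteria were cached before.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: query results cached without context,
// keyed by the exact criteria string.
public class ExactMatchCache {
    private final Map<String, List<String>> cache = new HashMap<>();

    public void put(String criteria, List<String> results) {
        cache.put(criteria, results);
    }

    // Returns null unless the identical criteria string was cached before;
    // a narrower query such as "product=books,type=bestsellers" misses
    // even though its results are a subset of the cached "product=books".
    public List<String> get(String criteria) {
        return cache.get(criteria);
    }

    public static void main(String[] args) {
        ExactMatchCache c = new ExactMatchCache();
        c.put("product=books", Arrays.asList("Good Story", "Great Book"));
        System.out.println(c.get("product=books"));                    // exact match: hit
        System.out.println(c.get("product=books,type=bestsellers"));   // subset: miss (null)
    }
}
```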
  • [0007]
    Currently, no mechanism exists that allows a search to be performed on a subset of the existing cached query results. For example, a user may wish to search the query results returned by product=“books” for type=“bestsellers”. If all the “books” are already cached, it is more efficient to iterate over the cached “books” results and filter them to retrieve the “bestsellers”, rather than performing a separate database search based on both the product and the type.
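    Assuming the cached rows are available in memory, the iterate-and-filter idea might look like this in Java (the CatalogEntry record and its field names are hypothetical):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: filter already-cached "books" results in memory
// for type == "bestsellers" instead of issuing a second database query.
public class SubsetFilter {
    public record CatalogEntry(String name, String product, String type) {}

    public static List<CatalogEntry> filterByType(List<CatalogEntry> cached, String type) {
        return cached.stream()
                     .filter(e -> e.type().equals(type))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<CatalogEntry> books = List.of(
            new CatalogEntry("Good Story", "books", "bestsellers"),
            new CatalogEntry("Plain Tale", "books", "softcover"));
        // Only the bestseller survives the in-memory filter.
        System.out.println(filterByType(books, "bestsellers"));
    }
}
```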
  • [0008]
    In addition, no existing mechanism sets up query results in a way that makes it easy for a user to iterate and filter them. Therefore, it would be advantageous to have an improved method for a context based cache infrastructure that enables subset query over a cached object, such that database queries may be minimized to improve search performance.
  • BRIEF SUMMARY OF THE INVENTION
  • [0009]
    The present invention provides a method, an apparatus, and computer instructions for a context based infrastructure to enable subset query over a cached object. The mechanism of the present invention detects a query to a root context of a context tree from a user, wherein the query includes a name and value pair. Responsive to detecting the query, the mechanism traverses the context tree for a parent context of a subcontext corresponding to the name and value pair, and determines if the parent context caches all query results.
  • [0010]
    If the parent context does not cache all query results, the mechanism repeats the traversing step for the next parent context of the subcontext until a root context is encountered. When the root context is encountered, the mechanism issues a query to the database for the name and value pair, and caches the result of the database query in a new context.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • [0011]
    The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
  • [0012]
    FIG. 1 is a pictorial representation of a network of data processing systems in which the present invention may be implemented in accordance with a preferred embodiment of the present invention;
  • [0013]
    FIG. 2 is a block diagram of a data processing system that may be implemented as a server in which the present invention may be implemented in accordance with a preferred embodiment of the present invention;
  • [0014]
    FIG. 3 is a block diagram illustrating a data processing system in which the present invention may be implemented in accordance with a preferred embodiment of the present invention;
  • [0015]
    FIG. 4 is a diagram illustrating an exemplary context tree cached by the mechanism of the present invention for a query result in accordance with an illustrative embodiment of the present invention;
  • [0016]
    FIG. 5 is a diagram illustrating data structures representing root context 400, subcontexts 402, and 404 in FIG. 4 in accordance with an illustrative embodiment of the present invention; and
  • [0017]
    FIG. 6 is a flowchart of an exemplary process for context based cache infrastructure to enable subset query over a cached object in accordance with an illustrative embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0018]
    With reference now to the figures, FIG. 1 depicts a pictorial representation of a network of data processing systems in which the present invention may be implemented. Network data processing system 100 is a network of computers in which the present invention may be implemented. Network data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system 100. Network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.
  • [0019]
    In the depicted example, server 104 is connected to network 102 along with storage unit 106. In addition, clients 108, 110, and 112 are connected to network 102. These clients 108, 110, and 112 may be, for example, personal computers or network computers. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 108-112. Clients 108, 110, and 112 are clients to server 104. Network data processing system 100 may include additional servers, clients, and other devices not shown. In the depicted example, network data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, government, educational and other computer systems that route data and messages. Of course, network data processing system 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN). FIG. 1 is intended as an example, and not as an architectural limitation for the present invention.
  • [0020]
    Referring to FIG. 2, a block diagram of a data processing system that may be implemented as a server, such as server 104 in FIG. 1, is depicted in accordance with a preferred embodiment of the present invention. Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors 202 and 204 connected to system bus 206. Alternatively, a single processor system may be employed. Also connected to system bus 206 is memory controller/cache 208, which provides an interface to local memory 209. I/O bus bridge 210 is connected to system bus 206 and provides an interface to I/O bus 212. Memory controller/cache 208 and I/O bus bridge 210 may be integrated as depicted.
  • [0021]
    Peripheral component interconnect (PCI) bus bridge 214 connected to I/O bus 212 provides an interface to PCI local bus 216. A number of modems may be connected to PCI local bus 216. Typical PCI bus implementations will support four PCI expansion slots or add-in connectors. Communications links to clients 108-112 in FIG. 1 may be provided through modem 218 and network adapter 220 connected to PCI local bus 216 through add-in connectors.
  • [0022]
    Additional PCI bus bridges 222 and 224 provide interfaces for additional PCI local buses 226 and 228, from which additional modems or network adapters may be supported. In this manner, data processing system 200 allows connections to multiple network computers. A memory-mapped graphics adapter 230 and hard disk 232 may also be connected to I/O bus 212 as depicted, either directly or indirectly.
  • [0023]
    Those of ordinary skill in the art will appreciate that the hardware depicted in FIG. 2 may vary. For example, other peripheral devices, such as optical disk drives and the like, also may be used in addition to or in place of the hardware depicted. The depicted example is not meant to imply architectural limitations with respect to the present invention.
  • [0024]
    The data processing system depicted in FIG. 2 may be, for example, an IBM eServer pSeries system, a product of International Business Machines Corporation in Armonk, N.Y., running the Advanced Interactive Executive (AIX) operating system or LINUX operating system.
  • [0025]
    With reference now to FIG. 3, a block diagram illustrating a data processing system is depicted in which the present invention may be implemented. Data processing system 300 is an example of a client computer. Data processing system 300 employs a peripheral component interconnect (PCI) local bus architecture. Although the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may be used. Processor 302 and main memory 304 are connected to PCI local bus 306 through PCI Bridge 308. PCI Bridge 308 also may include an integrated memory controller and cache memory for processor 302. Additional connections to PCI local bus 306 may be made through direct component interconnection or through add-in boards. In the depicted example, local area network (LAN) adapter 310, small computer system interface (SCSI) host bus adapter 312, and expansion bus interface 314 are connected to PCI local bus 306 by direct component connection. In contrast, audio adapter 316, graphics adapter 318, and audio/video adapter 319 are connected to PCI local bus 306 by add-in boards inserted into expansion slots. Expansion bus interface 314 provides a connection for a keyboard and mouse adapter 320, modem 322, and additional memory 324. SCSI host bus adapter 312 provides a connection for hard disk drive 326, tape drive 328, and CD-ROM drive 330. Typical PCI local bus implementations will support three or four PCI expansion slots or add-in connectors.
  • [0026]
    An operating system runs on processor 302 and is used to coordinate and provide control of various components within data processing system 300 in FIG. 3. The operating system may be a commercially available operating system, such as Windows XP, which is available from Microsoft Corporation. An object oriented programming system such as Java may run in conjunction with the operating system and provide calls to the operating system from Java programs or applications executing on data processing system 300. “Java” is a trademark of Sun Microsystems, Inc. Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as hard disk drive 326, and may be loaded into main memory 304 for execution by processor 302.
  • [0027]
    Those of ordinary skill in the art will appreciate that the hardware in FIG. 3 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash read-only memory (ROM), equivalent nonvolatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIG. 3. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
  • [0028]
    As another example, data processing system 300 may be a stand-alone system configured to be bootable without relying on some type of network communication interface. As a further example, data processing system 300 may be a personal digital assistant (PDA) device, which is configured with ROM and/or flash ROM in order to provide non-volatile memory for storing operating system files and/or user-generated data.
  • [0029]
    The depicted example in FIG. 3 and above-described examples are not meant to imply architectural limitations. For example, data processing system 300 also may be a notebook computer or hand held computer in addition to taking the form of a PDA. Data processing system 300 also may be a kiosk or a Web appliance.
  • [0030]
    The processes and mechanisms of the present invention may be implemented as computer instructions executed by processor 302 in data processing system 300 in FIG. 3, or processors 202 and 204 in data processing system 200 in FIG. 2.
  • [0031]
    The present invention provides a method, an apparatus, and computer instructions for a context based cache infrastructure to enable subset query over a cached object. The present invention provides a mechanism that enables in-memory or cached object query by constructing the cache as a context tree. The context tree includes a root cache context, ‘/’, for each EJB type. The root cache context can hold objects that belong to the EJB type without any filtering. For example, a root cache context may hold the entire catalog data returned from a catalog.findAll( ) query.
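    As a rough Java sketch of the cache-as-context-tree idea (class and method names are illustrative, not the patent's actual implementation), each context carries a path, an optional set of cached results, and named subcontexts:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a cache context node: one root context '/'
// per bean type, each context optionally holding cached objects
// and named child subcontexts.
public class CacheContext {
    final String path;
    final Map<String, CacheContext> children = new HashMap<>();
    List<String> cachedResults;   // null means nothing is cached at this level
    CacheContext parent;

    CacheContext(String path) { this.path = path; }

    // Returns (creating on demand) the child subcontext for one path segment.
    CacheContext child(String segment) {
        CacheContext c = children.computeIfAbsent(segment,
            s -> new CacheContext(path.equals("/") ? "/" + s : path + "/" + s));
        c.parent = this;
        return c;
    }

    public static void main(String[] args) {
        // One root cache context per bean type, e.g. for "catalog";
        // it may hold the unfiltered result of a findAll()-style query.
        CacheContext root = new CacheContext("/");
        root.cachedResults = List.of("entire catalog data");
        System.out.println(root.child("product").child("books").path);
    }
}
```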
  • [0032]
    Each root cache context may include subcontexts, which indicate detailed filtering of the cached results of the current root cache context by a group of field name/field value pairs. For example, an EJB type “catalog” may include a “product” field and a “type” field, and a root cache context ‘/’ may include a subcontext ‘/product/books’, which holds objects returned from a catalog.findbyProduct(“book”) query. Subcontext ‘/product/books’ may in turn include its own subcontext ‘/product/books/type/bestsellers’, which holds objects returned from a catalog.findByProductAndType(“books”, “bestsellers”) query.
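    The mapping from a group of field name/field value pairs to a subcontext path can be sketched as follows (the helper name is hypothetical):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: build a context path from ordered
// field name/field value pairs, e.g.
// {product=books}, {type=bestsellers} -> "/product/books/type/bestsellers".
public class ContextPath {
    public static String toPath(List<Map.Entry<String, String>> pairs) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> p : pairs) {
            sb.append('/').append(p.getKey()).append('/').append(p.getValue());
        }
        // No pairs at all means the unfiltered root cache context '/'.
        return sb.length() == 0 ? "/" : sb.toString();
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> pairs = List.of(
            Map.entry("product", "books"),
            Map.entry("type", "bestsellers"));
        System.out.println(toPath(pairs));
    }
}
```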
  • [0033]
    When a query is detected by the mechanism of the present invention, a findContext( ) method is called on the root cache context with field name and field value pairs, for example, {“product”, “book”} {“type”, “bestsellers”}. In turn, a context at the level of ‘/product/book/type/bestsellers’ is returned. The mechanism of the present invention then traverses the parents of the ‘/product/book/type/bestsellers’ context until it reaches the root cache context, to identify the nearest context that has cached the query results.
  • [0034]
    In the above example, the mechanism of the present invention traverses first to subcontext ‘/product/book’, and then to root cache context ‘/’. If a parent context that has cached query results is found, the mechanism of the present invention iterates the cached results of that upper level and filters on the remaining field name and field value pairs, that is, the original field name and value pairs excluding those the upper level context represents. However, if no such parent context is found, the mechanism of the present invention issues a query to the database and caches the result at the new context level.
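    The upward traversal toward the nearest caching ancestor might be sketched like this (illustrative classes; the real mechanism operates on EJB cache contexts):

```java
import java.util.List;

// Hypothetical sketch: starting from the subcontext for the full
// name/value path, walk parent links until a context that actually
// caches results is found; null means the root was passed without a
// hit, so the database must be queried.
public class NearestCached {
    static class Ctx {
        final String path;
        final Ctx parent;
        List<String> cached;   // non-null if this context caches results
        Ctx(String path, Ctx parent) { this.path = path; this.parent = parent; }
    }

    static Ctx nearestCachedAncestor(Ctx leaf) {
        for (Ctx c = leaf.parent; c != null; c = c.parent) {
            if (c.cached != null) return c;   // iterate and filter from here
        }
        return null;                          // no ancestor caches: go to the DB
    }

    public static void main(String[] args) {
        Ctx root = new Ctx("/", null);
        root.cached = List.of("entire catalog");
        Ctx books = new Ctx("/product/books", root);
        Ctx best = new Ctx("/product/books/type/bestsellers", books);
        System.out.println(nearestCachedAncestor(best).path);
    }
}
```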
  • [0035]
    Turning now to FIG. 4, a diagram illustrating an exemplary context tree cached by the mechanism of the present invention for a query result is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 4, for each EJB type, the mechanism of the present invention creates a root cache context. In this example, root context ‘/’ 400 includes all catalog data returned from a Catalog.findAll( ) query.
  • [0036]
    Root cache context 400 has subcontexts that indicate detailed filtering of cached results by a group of field name/field value pairs. In this example, root cache context 400 has subcontext ‘/product/books’ 402, which holds objects filtered from a Catalog.findByProduct(“book”) subset query. In turn, subcontext ‘/product/books’ 402 has a subcontext ‘/product/books/type/bestsellers’ 404, which holds objects filtered from a Catalog.findByProductAndType(“books”,“bestseller”) subset query.
  • [0037]
    Turning now to FIG. 5, a diagram illustrating data structures representing root context 400, subcontexts 402, and 404 in FIG. 4 is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 5, catalog 500 includes two fields, product 502 and type 504. Catalog 500 represents root cache context 400 in FIG. 4.
  • [0038]
    Product field 502 has a set of fields, including books 506, CDs 508, and magazines 510. Books 506 represents subcontext ‘/product/books’ 402 in FIG. 4. In addition, type field 504 has a set of fields, including bestsellers 512, hard cover 514, and soft cover 516. Bestsellers 512 represents ‘/product/books/type/bestsellers’ 404 in FIG. 4. Bestsellers 512 includes a number of entries, including Good Story 518 and Great Book 520. These entries are returned when the subset query Catalog.findByProductAndType(“books”,“bestsellers”) is performed.
  • [0039]
    Turning now to FIG. 6, a flowchart of an exemplary process for context based cache infrastructure to enable subset query over a cached object is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 6, the process begins when a user issues a query to the root cache context of the context tree with the field name and value pair (step 600). An example of the field name and field value pair may be {“product”, “book”} {“type”, “bestsellers”}.
  • [0040]
    Next, the mechanism of the present invention traverses to the parent of the subcontext according to the field name and value pair (step 602). A determination is then made by the mechanism as to whether the parent context caches all query results (step 604). If the parent context has all query results, the mechanism of the present invention iterates the cached results of the parent context and filters out the remaining field name and value pairs (step 608). The process then terminates.
  • [0041]
    However, if the parent context does not have all query results, the mechanism of the present invention makes a determination as to whether the parent context is the root context, that is, whether the top of the context tree has been reached (step 606). If the parent context is not the root context, the mechanism traverses to the next parent context up the context tree (step 610). However, if the parent context is the root context, the mechanism of the present invention issues a query to the database and caches the query result in a new context (step 612), and the process terminates thereafter.
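    The flowchart steps above (600-612) can be combined into a single Java sketch, with all names illustrative and the database represented by a supplier callback:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;
import java.util.stream.Collectors;

// Hypothetical sketch of the FIG. 6 process: find the nearest context
// (up to the root) that caches results and filter it in memory;
// otherwise query the database and cache under a new context.
public class SubsetQuery {
    static class Ctx {
        final Ctx parent;
        final boolean isRoot;
        List<Map<String, String>> cached;   // rows as field-name -> value maps
        Ctx(Ctx parent, boolean isRoot) { this.parent = parent; this.isRoot = isRoot; }
    }

    static List<Map<String, String>> query(Ctx leaf,
                                           Map<String, String> pairs,
                                           Supplier<List<Map<String, String>>> database) {
        for (Ctx c = leaf; c != null; c = c.parent) {
            if (c.cached != null) {                       // step 604: results cached here
                return c.cached.stream()                  // step 608: iterate and filter
                        .filter(row -> pairs.entrySet().stream()
                                .allMatch(p -> p.getValue().equals(row.get(p.getKey()))))
                        .collect(Collectors.toList());
            }
            if (c.isRoot) break;                          // step 606: root reached, no cache
        }
        List<Map<String, String>> fresh = database.get(); // step 612: issue database query
        leaf.cached = fresh;                              // cache result in the new context
        return fresh;
    }

    public static void main(String[] args) {
        Ctx root = new Ctx(null, true);
        root.cached = List.of(
            Map.of("product", "books", "name", "Good Story"),
            Map.of("product", "CDs", "name", "Loud Album"));
        Ctx leaf = new Ctx(root, false);
        // Subset query answered from the root's cache, no DB call needed.
        System.out.println(query(leaf, Map.of("product", "books"), List::of));
    }
}
```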
  • [0042]
    In summary, the present invention provides a context based infrastructure to enable subset query over a cached object. By using the mechanism of the present invention, a user may now iterate and filter query results. In addition, database queries may now be minimized to improve search performance.
  • [0043]
    It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, CD-ROMs, DVD-ROMs, and transmission-type media, such as digital and analog communications links, wired or wireless communications links using transmission forms, such as, for example, radio frequency and light wave transmissions. The computer readable media may take the form of coded formats that are decoded for actual use in a particular data processing system.
  • [0044]
    The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US5842219 * | 14 Mar 1996 | 24 Nov 1998 | International Business Machines Corporation | Method and system for providing a multiple property searching capability within an object-oriented distributed computing network
US5864819 * | 8 Nov 1996 | 26 Jan 1999 | International Business Machines Corporation | Internal window object tree method for representing graphical user interface applications for speech navigation
US5890151 * | 9 May 1997 | 30 Mar 1999 | International Business Machines Corporation | Method and system for performing partial-sum queries on a data cube
US6145056 * | 8 Jun 1998 | 7 Nov 2000 | Compaq Computer Corporation | Method and apparatus for caching the results of function applications with dynamic, fine-grained dependencies
US6208993 * | 22 Jul 1999 | 27 Mar 2001 | Ori Software Development Ltd. | Method for organizing directories
US6421683 * | 31 Mar 1999 | 16 Jul 2002 | Verizon Laboratories Inc. | Method and product for performing data transfer in a computer system
US6535970 * | 4 Jan 2000 | 18 Mar 2003 | International Business Machines Corporation | Method and apparatus for enhanced performance caching for path names
US6704736 * | 28 Jun 2000 | 9 Mar 2004 | Microsoft Corporation | Method and apparatus for information transformation and exchange in a relational database environment
US6735593 * | 9 Nov 1999 | 11 May 2004 | Simon Guy Williams | Systems and methods for storing data
US6748374 * | 7 Dec 1998 | 8 Jun 2004 | Oracle International Corporation | Method for generating a relational database query statement using one or more templates corresponding to search conditions in an expression tree
US6799184 * | 30 Jan 2002 | 28 Sep 2004 | Sybase, Inc. | Relational database system providing XML query support
US6868525 * | 26 May 2000 | 15 Mar 2005 | Alberti Anemometer LLC | Computer graphic display visualization system and method
US6928466 * | 28 Sep 2000 | 9 Aug 2005 | Emc Corporation | Method and system for identifying memory component identifiers associated with data
US6934699 * | 1 Sep 1999 | 23 Aug 2005 | International Business Machines Corporation | System and method for loading a cache with query results
US6950815 * | 23 Apr 2002 | 27 Sep 2005 | International Business Machines Corporation | Content management system and methodology featuring query conversion capability for efficient searching
US7020644 * | 27 Aug 2002 | 28 Mar 2006 | Kevin Wade Jameson | Collection installable knowledge
US7047242 * | 31 Mar 1999 | 16 May 2006 | Verizon Laboratories Inc. | Weighted term ranking for on-line query tool
US7130839 * | 29 May 2001 | 31 Oct 2006 | Sun Microsystems, Inc. | Method and system for grouping entries in a directory server by group memberships defined by roles
US7181438 * | 30 May 2000 | 20 Feb 2007 | Alberti Anemometer, LLC | Database access system
US7219091 * | 29 Dec 2003 | 15 May 2007 | At&T Corp. | Method and system for pattern matching having holistic twig joins
US7467131 * | 30 Sep 2003 | 16 Dec 2008 | Google Inc. | Method and system for query data caching and optimization in a search engine system
US20030018898 * | 23 Jul 2001 | 23 Jan 2003 | Lection David B. | Method, system, and computer-program product for providing selective access to certain child nodes of a document object model (DOM)
US20030065874 * | 23 Nov 2001 | 3 Apr 2003 | Marron Pedro Jose | LDAP-based distributed cache technology for XML
US20030195870 * | 15 Apr 2002 | 16 Oct 2003 | International Business Machines Corporation | System and method for performing lookups across namespace domains using universal resource locators
US20030212664 * | 10 May 2002 | 13 Nov 2003 | Martin Breining | Querying markup language data sources using a relational query processor
US20040059719 * | 23 Sep 2002 | 25 Mar 2004 | Rajeev Gupta | Methods, computer programs and apparatus for caching directory queries
US20040128615 * | 27 Dec 2002 | 1 Jul 2004 | International Business Machines Corporation | Indexing and querying semi-structured documents
US20040168169 * | 3 Nov 2003 | 26 Aug 2004 | Christophe Ebro | Lookup facility in distributed computer systems
US20040230584 * | 14 May 2003 | 18 Nov 2004 | International Business Machines Corporation | Object oriented query root leaf inheritance to relational join translator method, system, article of manufacture, and computer program product
US20060112090 * | 7 Mar 2005 | 25 May 2006 | Sihem Amer-Yahia | Adaptive processing of top-k queries in nested-structure arbitrary markup language such as XML
US20060190355 * | 12 Apr 2006 | 24 Aug 2006 | Microsoft Corporation | System and Method for Designing and Operating an Electronic Store
US20060224610 * | 8 Jun 2006 | 5 Oct 2006 | Microsoft Corporation | Electronic Inking Process
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7587400 | 1 Apr 2005 | 8 Sep 2009 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set for transparent session migration
US7613710 * | 1 Apr 2005 | 3 Nov 2009 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set
US7634465 * | 28 Jul 2006 | 15 Dec 2009 | Microsoft Corporation | Indexing and caching strategy for local queries
US7743333 | 1 Apr 2005 | 22 Jun 2010 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set for scrollable cursors
US8244741 | 17 Jul 2009 | 14 Aug 2012 | Qliktech International Ab | Method and apparatus for extracting information from a database
US20060036616 * | 1 Apr 2005 | 16 Feb 2006 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set for scrollable cursors
US20060036617 * | 1 Apr 2005 | 16 Feb 2006 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set for transparent session migration
US20060059176 * | 1 Apr 2005 | 16 Mar 2006 | Oracle International Corporation | Suspending a result set and continuing from a suspended result set
US20070078848 * | 28 Jul 2006 | 5 Apr 2007 | Microsoft Corporation | Indexing and caching strategy for local queries
US20080222129 * | 5 Mar 2007 | 11 Sep 2008 | Komatsu Jeffrey G | Inheritance of attribute values in relational database queries
EP2146292A1 * | 3 Jul 2008 | 20 Jan 2010 | QlikTech International AB | Method and apparatus for extracting information from a database
Classifications
U.S. Classification: 1/1, 707/999.003
International Classification: G06F17/30
Cooperative Classification: G06F17/3048, G06F17/30607
European Classification: G06F17/30S8T, G06F17/30S4P4C
Legal Events
Date | Code | Event | Description
29 Apr 2005 | AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEI, YANG;MUHAMMAD, HASAN;REEL/FRAME:016185/0355; Effective date: 20050407
3 Aug 2015 | FPAY | Fee payment | Year of fee payment: 4