WO2016076831A1 - System and method for augmenting a search query - Google Patents

System and method for augmenting a search query

Info

Publication number
WO2016076831A1
WO2016076831A1 (PCT/US2014/064886)
Authority
WO
WIPO (PCT)
Prior art keywords
search query
visual element
search
query
user
Prior art date
Application number
PCT/US2014/064886
Other languages
French (fr)
Inventor
Aravind Musuluri
Original Assignee
Aravind Musuluri
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aravind Musuluri filed Critical Aravind Musuluri
Priority to PCT/US2014/064886 priority Critical patent/WO2016076831A1/en
Publication of WO2016076831A1 publication Critical patent/WO2016076831A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results


Abstract

A system and means for augmenting a search query by identifying at least one visual element for the search query are disclosed.

Description

SYSTEM AND METHOD FOR AUGMENTING A
SEARCH QUERY
TECHNICAL FIELD
[0001] The present disclosure generally relates to search engines and a method of retrieving search results based on a search query. More particularly, the disclosure relates to augmenting a search query by identifying at least one pertinent visual element for the search query.
CROSS REFERENCES
[0002] This international application is a continuation-in-part under PCT Rule 49bis.1(d) and 35 USC 120 of U.S. patent application Ser. No. 12/897,500, which is the national stage of international application No. PCT/US2010/051357, filed Oct. 4, 2010, now published as "SYSTEM AND METHOD FOR BLOCK SEGMENTING, IDENTIFYING AND INDEXING VISUAL ELEMENTS, AND SEARCHING DOCUMENTS", which claims priority benefit of U.S. Provisional Patent Application 61/247,973, filed Oct. 2, 2009, claiming priority under PCT Rule 49bis.1(d) and 35 USC 120; the entire contents of all the listed applications are herein incorporated by reference in their entirety.
[0003] This international application is a patent of addition under PCT Rule 49bis.1(c) of U.S. patent application Ser. No. 12/897,500, which is the national stage of international application No. PCT/US2010/051357, filed Oct. 4, 2010, now published as "SYSTEM AND METHOD FOR BLOCK SEGMENTING, IDENTIFYING AND INDEXING VISUAL ELEMENTS, AND SEARCHING DOCUMENTS", which claims priority benefit of U.S. Provisional Patent Application 61/247,973, filed Oct. 2, 2009, claiming priority under PCT Rule 49bis.1(c); the entire contents of all the listed applications are herein incorporated by reference in their entirety.
BACKGROUND
[0004] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
[0005] Search engines assist users in retrieving relevant data from data sources. The data source herein may refer to data and/or document(s) on the Internet, intranet, storage devices, and so on. In order to use a search engine, a user seeking information on a desired topic generally inputs a search query consisting of keyword(s) or phrase(s) relevant to the topic into the search interface of the search engine. Once the search query is received, the search engine identifies documents in a data source that are relevant to the search query and displays a report with a prioritized list of links pointing to relevant documents containing the search keywords.
[0006] Known search engines like Google®, Bing®, Yahoo®, etc. have provided a variety of functionalities to improve the user experience and speed up the search process. One such functionality is providing search query suggestions as the user incrementally types in the search query. These search query suggestions may be available as a list or drop-down list, and the user may select one of the search query suggestions to initiate the search. The user can also ignore the search query suggestions and continue to input the search query. While these search query suggestions are very useful to users, there is a need for additional improvements that not only further speed up the search but also improve the quality of the results.
SUMMARY
[0007] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the disclosure or delineate the scope of the disclosure. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
[0008] According to the aspects related herein, the present disclosure relates to means of augmenting a search query by automatically identifying at least one visual element for the search query.
[0009] The visual elements in accordance with the present disclosure may include, but are not limited to, paragraph, table, list, menu, fixed width text, key/value, graph/chart, question/answer, timeline, interactive data and combinations thereof.
[0010] In one aspect of the present disclosure, the means comprises (1) means to receive a search query or a portion of the search query comprising keyword(s); (2) means to identify at least one visual element pertinent to the search query or a portion of the search query; and (3) means to return the search query with the identified visual element(s).
[0011] In a preferred embodiment, the search query visual element(s) may be identified on the basis of historical search data. The historical data may be selected from the group comprising but not limited to historical search queries of the corresponding user, historical search queries of a group of users or historical search queries of all the users.
[0012] In another embodiment, the search query visual element(s) may be identified on the basis of the category of the query.
[0013] In a third embodiment, the search query visual element(s) may be identified on the basis of the length of the search query.
[0014] In accordance with one or more preferred embodiments, a system comprising a search engine unit is provided. The search engine unit may comprise one or more logics configured to perform the functions and operations associated with the above disclosed means.
[0015] In accordance with one or more preferred embodiments, a computer program product executable in a memory of a search engine unit is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. Throughout the disclosure, like elements are represented by like reference numerals, which are given by way of illustration only and thus are not limitative of the various embodiments.
[0017] Other objects and advantages of the present disclosure will become apparent to those skilled in the art upon reading the following detailed description of the preferred embodiments, in conjunction with the accompanying drawings, wherein:
[0018] FIG. 1 is a block diagram illustrating an exemplary search environment in accordance with an embodiment of the present disclosure.
[0019] FIG. 2 is a block diagram of an exemplary computing device of FIG. 1.
[0020] FIG. 3 is a high level flowchart of an exemplary means for augmenting a search query with visual element(s) in accordance with one or more embodiments.
[0021] FIG. 4 depicts a portion of exemplary historical search queries having the same keyword that are made available to the search engine unit of FIG. 1.
[0022] FIG. 5 is a flowchart of a method for augmenting a search query with visual element(s) based on historical data in accordance with one or more embodiments.
[0023] FIG. 6 depicts a portion of exemplary categories and subcategories and their corresponding predefined visual element(s) that are made available to the search engine unit of FIG. 1.
[0024] FIG. 7 is a flowchart of a method for augmenting a search query with visual element(s) based on the query keyword(s) category in accordance with one or more embodiments.
[0025] FIG. 8 depicts exemplary predefined visual element(s) for a search query based on the count of keywords in the search query that are made available to the search engine unit of FIG. 1.
[0026] FIG. 9 is a flowchart of a method for augmenting a search query with visual element(s) based on the count of keyword(s) in the search query in accordance with one or more embodiments.
[0027] Features, elements, and aspects that are referenced by the same numerals in different figures represent the same, equivalent, or similar features, elements, or aspects, in accordance with one or more embodiments.
DETAILED DESCRIPTION
[0028] It is to be understood that the present disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The present disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
[0029] The use of "including", "comprising" or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item. Further, the use of terms "first", "second", and "third", and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another.
[0030] The disclosure described here is equally applicable to queries from users or from systems.
[0031] FIG. 1 is an exemplary search environment 100 in accordance with the present disclosure. It will be understood and appreciated by those of ordinary skill in the art that the computing system architecture 100 shown in FIG. 1 is merely an example of one suitable computing system and is not intended to suggest any limitation as to the scope of use or functionality of the present disclosure. Neither should the computing system architecture 100 be interpreted as having any dependency or requirement related to any single module/component or combination of modules/components illustrated therein.
[0032] The system 100 comprises a search engine unit 110, a client 120 and a storage unit 140. The search engine unit 110, the client 120 and the storage unit 140 all communicate over a network 130.
[0033] The network 130 can include any type of network known in the art or future-developed. In this regard, the network 130 may be an Ethernet, a local area network (LAN), or a wide area network (WAN), e.g., the Internet, or a combination of networks.
[0034] The search engine unit 110 may be a dedicated or shared server, including but not limited to any type of application server, database server, or file server, and combinations thereof. The search engine unit 110 and the client 120 may include, but are not limited to, a computer, handheld unit, mobile unit, consumer electronic unit, or the like.
[0035] The exemplary search engine unit 110 comprises a query visual element identification logic 111, a historical query identification logic 112, a category query identification logic 113 and a length query identification logic 114.
[0036] The historical query identification logic 112 may be configured to identify the historical search queries of the corresponding user, historical search queries of a group of users or historical search queries of all the users. An example of a portion of historical queries along with the visual element(s) is discussed under FIG. 4. The category query identification logic 113 may be configured to identify the category of the query. An example of a portion of existing categories and their predefined visual elements is discussed under FIG. 6. The length query identification logic 114 may be configured to identify the count of keywords in the query. An example of keyword counts and their predefined visual elements is discussed under FIG. 8. The query visual element identification logic 111, upon receiving a search query, identifies the most pertinent visual element(s) for the query based on the historical query identification logic 112 and/or the category query identification logic 113 and/or the length query identification logic 114 and returns the search query augmented with the visual element(s).
[0037] The storage unit 140 is configured to store information associated with search results, historical search queries, user data, categories or the like. In various embodiments, such information may include, without limitation, domains, URLs, webpages, websites, indexes, visual element data, historical search queries, counts of similar historical queries, user data, historical queries executed by the user, list of queries that fall within a category, visual element(s) mapped to a category, information associated therewith, and the like. In embodiments, the storage unit 140 is configured to be searchable for one or more of the items stored in association therewith. It will be understood and appreciated by those of ordinary skill in the art that the information stored in association with the storage unit 140 may be configurable and may include any information relevant to search results, historical search queries, user data, categories, or the like. The content and volume of such information are not intended to limit the scope of embodiments of the present disclosure in any way. Further, though illustrated as a single, independent component, the storage unit 140 may, in fact, be a plurality of storage units, for instance a database cluster, portions of which may reside on the search engine unit 110, the client 120, another external computing device (not shown), and/or any combination thereof. Moreover, the storage unit 140 may be included within the search engine unit 110 or client 120 as a computer- storage medium. The single unit depictions are meant for clarity, not to limit the scope of embodiments in any form.
[0038] A user 122, through the client logic 121 on the client 120, may enter a search query consisting of keyword(s) which may identify the type of information that the user is interested in retrieving. The client logic 121 may comprise, for example, an Internet browser; however, other types of client logic 121 for interfacing with the user 122 and for communicating with the search engine unit 110 may be used in other embodiments of the present disclosure. The client logic 121 transmits the user search query to the search engine unit 110 via the network 130. Upon receiving the user search query, the search engine unit 110 examines the storage unit 140 and automatically assigns visual element(s) to the user search query. The search engine unit 110 may further compile a prioritized list of all the documents containing all or some of the keyword(s) in the identified visual element type(s) and return the list to the client logic 121, which displays the results to the user 122 in a window.
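By way of illustration only, the following Python sketch shows one way such a prioritized list could be compiled, with keyword matching restricted to the identified visual element types. The application does not prescribe a data model or scoring function; the Document class, its fields, and the function names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Hypothetical document record: text is stored per visual element type."""
    url: str
    text_by_visual_element: dict = field(default_factory=dict)

def score_document(doc, keywords, visual_elements):
    """Count keyword occurrences, but only inside the identified visual element types."""
    score = 0
    for element in visual_elements:
        text = doc.text_by_visual_element.get(element, "").lower()
        score += sum(text.count(kw.lower()) for kw in keywords)
    return score

def compile_results(docs, keywords, visual_elements):
    """Return documents containing some of the keyword(s) in the identified
    visual element type(s), prioritized by score (cf. paragraph [0038])."""
    scored = [(score_document(d, keywords, visual_elements), d) for d in docs]
    return [d for s, d in sorted(scored, key=lambda p: p[0], reverse=True) if s > 0]

docs = [Document("a.html", {"table": "diabetes symptoms table"}),
        Document("b.html", {"paragraph": "diabetes overview"})]
print([d.url for d in compile_results(docs, ["Diabetes"], ["table", "list"])])
# ['a.html'] - only the document matching inside a table/list element is kept
```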
[0039] Note that the visual element(s) may be assigned implicitly, i.e., the visual element(s) may be hidden from the user rather than shown.
[0040] FIG. 2 is an exemplary search engine unit 110 in accordance with the present disclosure. It should be noted, however, that embodiments are not limited to implementation on such computing devices, but may be implemented on any of a variety of different types of computing units within the scope of embodiments hereof. The search engine unit 110 (as shown in FIG. 1) is only one example of a suitable computing/search environment and it is not intended to suggest any limitation as to the scope of use or functionality of the disclosure.
[0041] In the exemplary embodiment, the search engine unit 110 includes a bus 206, a processor 201, memory 202, network device 203, input device 204, and an output device 205. Bus 206 may include a path that permits communication among the components of the search engine unit 110.
[0042] The memory 202 stores the query visual element identification logic 111, the historical query identification logic 112, the category query identification logic 113 and the length query identification logic 114.
[0043] The memory 202 may be any type of computer memory known in the art or future-developed for electronically storing data and/or logic, including volatile and non-volatile memory. In this regard, memory 202 can include random access memory (RAM), read-only memory (ROM), flash memory, any magnetic computer storage unit, including hard disks, floppy discs, or magnetic tapes, and optical discs.
[0044] The processor 201 comprises processing hardware for interpreting or executing tasks or instructions stored in memory 202. Note that the processor 201 may be a microprocessor, a digital processor, or other type of circuitry configured to run and/or execute instructions.
[0045] The network device 203 may be any type of network unit (e.g., a modem) known in the art or future-developed for communicating over a network 130 (FIG. 1). In this regard, the search engine unit 110 (FIG. 1) communicates with the storage unit 140 (FIG. 1) and the client 120 (FIG. 1) over the network 130 (FIG. 1) via the network device 203.
[0046] The input device 204 is any type of input unit known in the art or future-developed for receiving data. As an example, the input unit 204 may be a keyboard, a mouse, a touch screen, a serial port, a scanner, a camera, or a microphone.
[0047] The output device 205 may be any type of output unit known in the art or future-developed for displaying or outputting data. As an example, the output device 205 may be a liquid crystal display (LCD) or other type of video display unit, a speaker, or a printer.
[0048] Note that the disclosure may also be practiced in a distributed computing environment where tasks or instructions of search engine unit 110 (FIG. 1) are performed by multiple computing units communicatively coupled to the network.
[0049] Further note that the search engine unit 110 (FIG. 1) components may be implemented by software, hardware, firmware or any combination thereof. In the exemplary search engine unit 110, depicted by FIG. 1, all the components are implemented by software and stored in memory 202.
[0050] FIG. 3 is a high level flowchart of an exemplary means for augmenting a search query with visual element(s) in accordance with one or more embodiments. In step 301, the search engine unit 110 (FIG. 1) may receive a search query or a portion of the search query. In step 302, the search engine unit 110 (FIG. 1) may identify pertinent visual element(s) for the search query. Once the pertinent visual element(s) are identified, in step 303, the search engine returns the search query with the augmented visual element(s).
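As an illustrative sketch only, the three steps of FIG. 3 might be expressed in Python as follows. The disclosure leaves the combination of the identification logics open; augment_query and the strategy callables are hypothetical names standing in for logics 112-114, and the first-match ordering is merely one possible combination strategy.

```python
def augment_query(query, strategies):
    """Step 301: receive the query; step 302: identify pertinent visual
    element(s) using the supplied identification strategies; step 303:
    return the query augmented with the visual element(s)."""
    visual_elements = []
    for strategy in strategies:  # e.g. historical, category, length based logics
        visual_elements = strategy(query)
        if visual_elements:      # first strategy that yields elements wins
            break
    return {"query": query, "visual_elements": visual_elements}

# Usage with a trivial stand-in strategy:
print(augment_query("Diabetes", [lambda q: ["table", "list"]]))
# {'query': 'Diabetes', 'visual_elements': ['table', 'list']}
```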
[0051] FIG. 4 illustrates an exemplary portion of historical search queries in accordance with the present disclosure. The queries consist of both keyword(s) and visual element(s). For simplicity's sake, the depicted portion of historical search queries all comprise the same keyword, i.e., "Diabetes". Further, for simplicity, assume that these are the only historical search queries with the keyword "Diabetes". The historical search queries may be stored in the storage unit 140 (FIG. 1) and made available to the historical query identification logic 112 (FIG. 1). The historical query identification logic 112 (FIG. 1) may determine that queries 401, comprising the visual element combination table and list with three hits, are the most popular, followed by queries 402, comprising the visual element question/answer with two hits, as the second most popular among queries comprising the keyword "Diabetes".
[0052] Thus, for an exemplary user search query "Diabetes" and exemplary historical search queries as shown in FIG. 4, the historical query identification logic 112 (FIG. 1) may identify the visual elements of the historical queries 401, table and list, as the visual elements of the user search query.
[0053] FIG. 5 is a flowchart illustrating an exemplary method to identify visual element(s) for a search query in accordance with the present disclosure. In step 501, the query visual element identification logic 111 (FIG. 1) of the search engine unit 110 (FIG. 1) may receive a search query or a portion of a search query from the user and forward the search query to the historical query identification logic 112 (FIG. 1). In step 502, the historical query identification logic 112 (FIG. 1) matches the user search query with the historical search queries from the storage unit 140 (FIG. 1) with the same keyword(s). In step 503, the historical query identification logic 112 (FIG. 1) identifies the visual element(s) of the most popular search query among the matching historical search queries as the visual element(s) for the user search query. In step 504, the user search query is augmented with the identified visual element(s) and returned to the user.
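A minimal Python sketch of this historical approach follows. The example hit counts mirror the FIG. 4 discussion, but the in-memory list of (keywords, visual elements) tuples is an assumed storage format used here purely for illustration.

```python
from collections import Counter

# Hypothetical stand-in for historical queries held in storage unit 140:
# each entry pairs the query keyword(s) with the visual element(s) used.
HISTORICAL_QUERIES = [
    ("diabetes", ("table", "list")),        # queries 401: three hits
    ("diabetes", ("table", "list")),
    ("diabetes", ("table", "list")),
    ("diabetes", ("question/answer",)),     # queries 402: two hits
    ("diabetes", ("question/answer",)),
]

def historical_visual_elements(query, history=HISTORICAL_QUERIES):
    """Steps 502-503: match historical queries with the same keyword(s) and
    return the visual element(s) of the most popular matching query."""
    matches = [elements for keywords, elements in history
               if keywords == query.lower()]
    if not matches:
        return []
    most_popular, _hits = Counter(matches).most_common(1)[0]
    return list(most_popular)

print(historical_visual_elements("Diabetes"))  # ['table', 'list']
```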
[0054] FIG. 6 illustrates an exemplary portion of categories and subcategories in accordance with the present disclosure. Also, depicted are the visual element(s) that are predefined with each category and subcategory. For simplicity, assume that the sub-categories shown in FIG. 6 are the only sub-categories for the "Health" category. The category, subcategories and their associated visual element(s) may be stored in the storage unit 140 (FIG. 1) and made available to the category query identification logic 113 (FIG. 1). Thus, for an exemplary search query "Diabetes", the category query identification logic 113 (FIG. 1) may identify the category "Health"; subcategory "Diseases" and the corresponding predefined visual element(s) question/answer and list as the visual elements of the search query.
[0055] FIG. 7 is a flowchart illustrating another exemplary method to identify visual element(s) in accordance with the present disclosure. In step 701, the query visual element identification logic 111 (FIG. 1) of the search engine unit 110 (FIG. 1) may receive a search query or a portion of the search query from the user and forward the search query to the category query identification logic 113 (FIG. 1). In step 702, the category query identification logic 113 (FIG. 1) may identify, from the various existing categories present in the storage unit 140 (FIG. 1), the closest category under which the user query may fall and subsequently identify the visual element(s) of that closest category as the visual element(s) of the user query. In step 703, the category query identification logic 113 (FIG. 1) appends the identified visual element(s) to the search query and returns the search query along with the identified visual element(s) to the user.
[0056] In one embodiment, if the category query identification logic 113 (FIG. 1) could not identify the category of the user search query, the category query identification logic 113 (FIG. 1) may assign visual element(s) associated with a default category as the visual element(s) of the user search query.
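The category-based approach could be sketched in Python as below. The Health/Diseases entry and its visual elements follow the FIG. 6 example; the keyword-to-category mapping, the membership test, and the default category's visual element are assumptions made here for illustration, since the disclosure does not prescribe a classifier.

```python
# Hypothetical category data as it might be held in storage unit 140.
CATEGORY_VISUAL_ELEMENTS = {
    ("Health", "Diseases"): ["question/answer", "list"],  # per the FIG. 6 example
    ("Default", ""): ["paragraph"],                       # assumed default category
}
CATEGORY_KEYWORDS = {
    ("Health", "Diseases"): {"diabetes", "asthma", "influenza"},  # assumed terms
}

def category_visual_elements(query):
    """Step 702: identify the closest category for the query; step 703 /
    paragraph [0056]: fall back to a default category when no match is found."""
    terms = set(query.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if terms & keywords:
            return CATEGORY_VISUAL_ELEMENTS[category]
    return CATEGORY_VISUAL_ELEMENTS[("Default", "")]

print(category_visual_elements("Diabetes"))         # ['question/answer', 'list']
print(category_visual_elements("Quantum gravity"))  # ['paragraph'] (default)
```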
[0057] FIG. 8 depicts exemplary predefined visual element(s) for a search query based on the count of keywords in the search query in accordance with the present disclosure. The length query identification logic 114 (FIG. 1) computes the count of keywords in a search query and based on the count assigns the predefined visual element(s). Note that certain frequently occurring stop words, such as the, is, a, an, at, of and on may be ignored while computing the count of keywords in the search query.
[0058] Thus, for an exemplary user search query "Symptoms of Diabetes", the length query identification logic 114 (FIG. 1) may identify the word "of" within the user search query as a stop word, compute the keyword count as two and, from the table depicted in FIG. 8, identify table, key/value and question/answer as the visual elements of the user search query.
[0059] FIG. 9 is a flowchart illustrating yet another exemplary method to identify visual element(s) in accordance with the present disclosure. In step 901, the query visual element identification logic 111 (FIG. 1) of the search engine unit 110 (FIG. 1) may receive a search query or a portion of the search query from the user and forwards the search query to the length query identification logic 114 (FIG. 1). In step 902, the length query identification logic 114 (FIG. 1) counts the number of keywords present in the search query. In step 903, visual element(s) are assigned to the user search query based on the count of keyword(s) in the user query. In step 904, the length query identification logic 114 (FIG. 1) appends the identified visual element(s) to the search query and returns the search query along with the identified visual element(s) to the user.
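A Python sketch of the length-based approach follows. The stop-word list comes from paragraph [0057] and the mapping for a count of two from the [0058] example; the mappings for other counts are invented here purely for illustration, since FIG. 8 itself is not reproduced in this text.

```python
STOP_WORDS = {"the", "is", "a", "an", "at", "of", "on"}  # from paragraph [0057]

# Count-to-visual-element mapping in the spirit of FIG. 8; only the entry for
# a count of two is taken from the [0058] example, the others are assumptions.
COUNT_VISUAL_ELEMENTS = {
    1: ["list", "paragraph"],
    2: ["table", "key/value", "question/answer"],
    3: ["question/answer"],
}

def length_visual_elements(query):
    """Steps 902-903: count the keywords (ignoring stop words) and assign the
    predefined visual element(s) for that count."""
    count = sum(1 for word in query.lower().split() if word not in STOP_WORDS)
    return COUNT_VISUAL_ELEMENTS.get(count, [])

print(length_visual_elements("Symptoms of Diabetes"))
# ['table', 'key/value', 'question/answer'] - "of" is ignored, count is two
```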
[0060] The claimed subject matter has been provided here with reference to one or more features or embodiments. Those skilled in the art will recognize and appreciate that, despite the detailed nature of the exemplary embodiments provided here, changes and modifications may be applied to said embodiments without limiting or departing from the generally intended scope. These and various other adaptations and combinations of the embodiments provided here are within the scope of the disclosed subject matter as defined by the claims and their full set of equivalents.

Claims

1. A means for augmenting a search query by automatically identifying at least one visual element for the search query.
2. The means for augmenting a search query as in claim 1, said means comprising (1) means to receive a search query or a portion of a search query comprising keyword(s) (2) means to identify at least one visual element pertinent to the search query or a portion of the search query (3) means to return the search query or a portion of the search query with the identified visual element(s).
3. The means as in claim 1, wherein the visual element is selected from the group comprising paragraph, table, list, menu, fixed width text, key/value, graph/chart, question/answer, timeline, interactive data and combinations thereof.
4. The means to identify at least one visual element as in claim 2, wherein said means comprises identifying the visual element based on the historical data.
5. The means to identify at least one visual element as in claim 2, wherein said means comprises identifying the visual element based on the category of the search query.
6. The means to identify at least one visual element as in claim 2, wherein said means comprises identifying the visual element based on the length of the search query.
7. A method for augmenting a search query by automatically identifying at least one visual element for the search query, the method comprising:
receiving a search query or a portion of the search query comprising keyword(s);
identifying at least one visual element pertinent to the search query or a portion of the search query; and
returning the search query or a portion of the search query with the identified visual element(s).
8. The method of claim 7, wherein the visual element is selected from the group comprising paragraph, table, list, menu, fixed width text, key/value, graph/chart, question/answer, timeline, interactive data and combinations thereof.
9. The method of identifying at least one visual element as in claim 7, wherein said method comprises identifying the visual element based on the historical data.
10. The method of identifying at least one visual element as in claim 7, wherein said method comprises identifying the visual element based on the category of the search query.
11. The method of identifying at least one visual element as in claim 7, wherein said method comprises identifying the visual element based on the length of the search query.
12. A system comprising a search engine unit, wherein the search engine unit is configured to receive a search query and comprises logics configured to automatically identify at least one visual element for the search query.
PCT/US2014/064886 2014-11-10 2014-11-10 System and method for augmenting a search query WO2016076831A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/064886 WO2016076831A1 (en) 2014-11-10 2014-11-10 System and method for augmenting a search query

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2014/064886 WO2016076831A1 (en) 2014-11-10 2014-11-10 System and method for augmenting a search query

Publications (1)

Publication Number Publication Date
WO2016076831A1 (en) 2016-05-19

Family

ID=55954755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/064886 WO2016076831A1 (en) 2014-11-10 2014-11-10 System and method for augmenting a search query

Country Status (1)

Country Link
WO (1) WO2016076831A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070233692A1 (en) * 2006-04-03 2007-10-04 Lisa Steven G System, methods and applications for embedded internet searching and result display
US20080235207A1 (en) * 2007-03-21 2008-09-25 Kathrin Berkner Coarse-to-fine navigation through paginated documents retrieved by a text search engine

Similar Documents

Publication Publication Date Title
US11176124B2 (en) Managing a search
US9703874B2 (en) System and method for presenting search extract title
US8498984B1 (en) Categorization of search results
US9418128B2 (en) Linking documents with entities, actions and applications
JP5661200B2 (en) Providing search information
US10540408B2 (en) System and method for constructing search results
US9189565B2 (en) Managing tag clouds
US10783195B2 (en) System and method for constructing search results
US9208236B2 (en) Presenting search results based upon subject-versions
JP2019514124A (en) System and method for providing visualizable result lists
US10102272B2 (en) System and method for ranking documents
US10269080B2 (en) Method and apparatus for providing a response to an input post on a social page of a brand
CN104050183A (en) Content matching result prompting method and device for browser input frame
US10255379B2 (en) System and method for displaying timeline search results
US10521461B2 (en) System and method for augmenting a search query
KR20190109628A (en) Method for providing personalized article contents and apparatus for the same
US10579660B2 (en) System and method for augmenting search results
WO2016076831A1 (en) System and method for augmenting a search query
US10606904B2 (en) System and method for providing contextual information in a document
US20150286727A1 (en) System and method for enhancing user experience in a search environment
US20160350413A1 (en) System and method for enhancing user experience in a search environment
US11238052B2 (en) Refining a search request to a content provider
US10949437B2 (en) System and method for variable presentation semantics of search results in a search environment
US20150347535A1 (en) System and method for displaying table search results

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14905785

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14905785

Country of ref document: EP

Kind code of ref document: A1