WO2001016694A1 - Automatic conversion between sets of text urls and cohesive scenes of visual urls - Google Patents


Info

Publication number: WO2001016694A1
Authority: WIPO (PCT)
Application number: PCT/US2000/024067
Other languages: French (fr)
Other versions: WO2001016694A9 (en)
Inventors: Brian Backus, Nathaniel Kushman
Original Assignee: Ububu, Inc. (application filed by Ububu, Inc.)
Priority application: AU73419/00A (AU7341900A)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/74 Browsing; Visualisation therefor
    • G06F 16/748 Hypervideo
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/954 Navigation, e.g. using categorised browsing
    • G06F 16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/9558 Details of hyperlinks; Management of linked annotations

Definitions

  • the present invention relates generally to the field of data representation and more specifically to the representation of URLs and other file references.
  • In graphical user interfaces (GUIs), icons represent visual file references, which are created by converting corresponding textual file references.
  • the area on the display screen where icons are grouped is referred to as the desktop because the icons are intended to represent real objects on a real desktop.
  • the icons are usually presented as visually unrelated to each other. Each icon sits in its own screen space with no relevance to the adjacent icons.
  • conversion of textual references to files and directories into visual references is typically purely functional, with no consideration for visual cues and context.
  • Web browsers in contrast, do take into account visual effects when presenting file references such as URLs to the users.
  • images can be used to represent hyperlinks to URLs.
  • existing tools offer no capability for automatically associating URLs and other file references with meaningful visual objects and for integrating the resulting visual objects into a contextually relevant cohesive scene.
  • textual file references are automatically converted into visual file references by providing a conversion interface enabling a user to identify a set of textual file references, creating a set of visually-linked objects corresponding to the set of textual file references identified by the user, and integrating the set of visually-linked objects into a cohesive scene.
  • visually-linked objects are converted into textual file references by receiving a request to convert a cohesive scene of visually-linked objects into a set of textual file references and creating a set of textual file references corresponding to visually-linked objects within the cohesive scene.
  • a conversion between cohesive scenes of visually-linked objects is performed by providing a conversion interface enabling a user to identify a set of visually-linked objects within a first cohesive scene, extracting the identified set of visually-linked objects from the first cohesive scene, and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
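The two conversion directions summarized above can be outlined in code. This is an illustrative sketch only; every name in it (text_refs_to_scene, scene_to_text_refs, pick_object_for, the dictionary layout) is an assumption for exposition, not the patent's actual implementation.

```python
# Hypothetical sketch of the forward and reverse conversions described above.

def pick_object_for(ref, theme):
    """Placeholder: a real system would choose a graphic that visually
    matches the reference (e.g., a rose for a flower-related URL)."""
    return f"{theme}-default-object"

def text_refs_to_scene(text_refs, theme="planet"):
    """Convert a user-identified set of textual file references
    (e.g., URLs) into visually-linked objects in one cohesive scene."""
    objects = [{"link": ref, "model": pick_object_for(ref, theme)}
               for ref in text_refs]
    return {"theme": theme, "objects": objects}

def scene_to_text_refs(scene):
    """Convert a cohesive scene of visually-linked objects back into
    the set of textual file references embedded in its objects."""
    return [obj["link"] for obj in scene["objects"]]
```

A round trip through both functions recovers the original set of references, which is the symmetry the three embodiments above describe.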
  • Figure 1 is a block diagram of one embodiment of a system in which automatic conversion between textual file references and visual file references can be performed;
  • Figure 2 is a block diagram of one embodiment for an architecture of a computer system
  • Figure 3 is a block diagram of one embodiment for a data manipulation and display architecture
  • Figure 4 is a block diagram of one embodiment for the data representation of a cohesive scene or a repertoire
  • Figure 5 is a flow diagram of one embodiment of a process for automatically converting a set of textual file references into a cohesive scene of visually-linked objects;
  • Figures 6A and 6B show an exemplary conversion interface, according to one embodiment of the present invention;
  • Figures 7A-7D are display windows of exemplary cohesive scenes of visually linked objects in which hierarchical structures of initial sets of textual file references are maintained, according to some embodiments of the present invention.
  • Figure 8A-8E show exemplary user interfaces illustrating a process of adding visually-linked objects to a cohesive scene, according to one embodiment of the present invention
  • Figure 9 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects into a set of textual file references.
  • Figure 10 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects of a first cohesive scene into a set of visually-linked objects of a second cohesive scene.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Various general-purpose machines may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below.
  • the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
  • Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • The term "file references" includes references to various files (e.g., text files, data files, program files, directory files, etc.), to various collections of files (e.g., folders, computer applications, streaming content, updated content, etc.), or to other file references.
  • File references also include references to documents and resources on public and private networks (e.g., the World Wide Web), including URLs and other similar references.
  • The term "visually-linked objects" refers to graphic objects (e.g., images, video clips, 3-D graphics, 2-D graphics, etc.) that contain a link to a file reference (e.g., a file address such as a URL).
  • the link may be embedded within the visually-linked object at the time such object is downloaded or may be added to the downloaded object by the automatic conversion system.
  • Activating a visually-linked object may open an application, access or launch a web page, open a file, or perform any other suitable action. For instance, activating a visually-linked object containing a link to a document opens the document, and activating a visually-linked object containing a link to a program executable file executes the program.
  • activating a visually-linked object containing a link to a URL may automatically initiate the Internet connection and open the web browser, thereby allowing the user to automatically access the content referred to by the URL.
  • Other functionality of the visually-linked objects will become apparent by reference to the drawings and by reading the description that follows.
  • The term "repertoire" refers to a grouping of visually-linked objects within a cohesive scene. It should be noted that a repertoire may also recursively contain other repertoires in addition to visually-linked objects.
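The definitions above suggest a simple recursive data model: an object carries a graphic plus an embedded link, and a repertoire holds objects and, recursively, other repertoires. The following sketch is an illustration of that model; the class and field names are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class VisuallyLinkedObject:
    graphic: str   # e.g., path to an image, video clip, or 3-D model
    link: str      # embedded file reference, e.g., a URL

@dataclass
class Repertoire:
    name: str
    # A repertoire groups visually-linked objects and, recursively,
    # other repertoires.
    members: List[Union["Repertoire", VisuallyLinkedObject]] = field(default_factory=list)
```

For example, a "planet" repertoire may contain a "city" repertoire that in turn contains individual visually-linked objects.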
  • the present invention enables automatic conversion between textual file references and visual file references.
  • a visually-linked data manipulation and display system provides a conversion interface that enables a user to identify a set of textual file references, creates a set of visually-linked objects that corresponds to the set of textual file references identified by the user, and integrates the set of visually-linked objects into a cohesive scene.
  • a cohesive scene of visually-linked objects is automatically converted into a set of textual file references upon receiving a conversion request from the user.
  • visually-linked objects of one cohesive scene are automatically converted into visually-linked objects of a different cohesive scene.
  • textual file references include file references in any type of textual form including but not limited to a binary form.
  • FIG. 1 illustrates one embodiment of a system in which automatic conversion between textual file references and visual file references can be performed.
  • system 100 represents a networked visually-linked data manipulation and display system which consists of clients 106, 108 connected via wide area network (WAN) 112 to server 102.
  • Server 102 is connected to mass storage device 104.
  • Mass storage device 104 may be any suitable storage medium such as, for example, read only memory (ROM), random access memory (RAM), EPROM's, EEPROM's, magnetic optical discs, or any type of medium suitable for storing electronic data.
  • a user may access and download visually-linked objects from server 102 onto client 106. Additionally, the user may download the visually-linked objects from another client 108. Alternatively, the visually-linked objects may be downloaded onto client 106 in response to the user's request to convert a set of textual file references into a cohesive scene of visually-linked objects.
  • Activating a visually-linked object causes programs within client 106 to be activated to open an application, access or launch a web page, open a file, or perform any other suitable action.
  • the basic application used to build and modify the visually-linked objects, together with the objects themselves, is maintained and accessed on server 102.
  • the basic application may be downloaded to client 106.
  • the basic application may be downloaded and initiated on client 106 if the user accesses a visually-linked object for the first time, or alternatively, in response to the user request for conversion. After the basic application is initiated, the visually-linked objects will be downloaded to client 106 and displayed upon client 106 display.
  • the visually-linked objects are integrated into a cohesive scene using a visual metaphor.
  • the visual metaphor is a real world visual metaphor.
  • the repertoires and/or the visually-linked objects included in the cohesive scene may be represented as planets, solar systems, galaxies, universes, cities, buildings, floors, rooms, etc.
  • other repertoires and visually-linked objects may be used such as, for example, a house containing rooms with the rooms containing visually-linked objects.
  • system 100 uses a non-real world metaphor to create a cohesive scene of repertoires and visually-linked objects.
  • any fanciful repertoire may be used for the placement of visually-linked objects, which may have geometric shapes (e.g., cubes, spheres, pyramids, etc.) or other non-real world representations.
  • the visually-linked objects are displayed, together with windows containing a set of graphical tools, on client 106.
  • the graphical tools allow the user to modify the repertoires and visually-linked objects.
  • In one embodiment, the real world metaphor is, specifically, a planet theme.
  • However, any other visual metaphor, real world or non-real world, may be used by the basic application without loss of generality.
  • a wizard guides the user through the process of converting textual file references (e.g., bookmarks) into visually-linked objects.
  • the visually-linked objects refer to the visual representations of links to sites, files, folders, and the like that may take the form of buildings or cities on the planet.
  • the user's planet may contain visually-linked objects with links to, for example, bookmarks, sponsor (or branded) sites, and objects with links to planets of other users (e.g., a user of client 108).
  • the user may place the visually-linked objects anywhere on the planet.
  • the user may publish the planet with visually-linked objects on server 102 for access by other users. The access may be available only to a defined group of people (e.g., a group of students doing research for school related projects) or to the public in general.
  • the application graphical tools are part of the basic application that may be downloaded over WAN 112.
  • the basic application enables the manipulation and modification of visually-linked objects and their attributes such as size, position, color, texture, and embedded links.
  • Client 106 does not need to be connected to WAN 112 to build, manipulate, or move the visually- linked objects.
  • the visually-linked objects may be manipulated in a three dimensional (3-D) manner.
  • the planet may have a motion either around an axis or on a plane. That is, the view of the planet may be altered by rotating the planet on its axis, zooming in or out, expanding the view to include a solar system, contracting the view to a single building, or the like.
  • a planet automatically rotates on its axis whenever it is in full-planet view.
  • the planet is rotated and manipulated only at the direction of the user.
  • solar systems, galaxies, and universes may rotate around an axis.
  • FIG. 2 is a block diagram of one embodiment for a computer system 200 suitable for use with the present invention.
  • Computer system 200 may be used in various capacities with the present invention.
  • computer system 200 may be used as a server 102 or as a client 106, 108.
  • computer system 200 includes CPU 202 connected via bus 215 to a variety of memory structures and input/output 210.
  • the memory structures may include, for example, read only memory (ROM) 204, random access memory (RAM) 206, and/or non-volatile memory 208.
  • CPU 202 is also connected via bus 215 to network interface 212.
  • Network interface 212 is used to communicate between computer system 200 (e.g., server 102) and a variety of other computer terminals (including clients 106 and 108).
  • Network interface 212 may be connected to WAN 112 by any of a variety of means such as, for example, a telephone connection via modem, a DSL line, or the like.
  • Figure 3 is a block diagram of one embodiment of a basic application, which includes GUI graphical tools.
  • Application 300 is connected to network interface 212 and local disk 350. Application 300 may be contained within
  • application 300 may be downloaded to client 106 when a user accesses a visually-linked object or a web site with a selection of visually-linked objects. Alternatively, application 300 may be downloaded to client 106 upon the user's request to convert textual file references into visual file references. Controller 320 contains software routines to build and modify visually-linked objects and repertoires. Once application 300 is downloaded and initially launched, controller 320 instructs resource manager 330 to download the visually-linked objects from server 102 or from another client 108. Resource manager 330 first checks local disk 350 to determine if the object is saved locally on client 106. If not, resource manager 330 downloads visually-linked objects via network interface 212 to local disk 350.
  • Resource manager 330 transfers the visually-linked objects via controller 320 to scene renderer 325.
  • Scene renderer 325 integrates the visually-linked objects into a cohesive scene.
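The resource manager's cache-first lookup described above can be sketched as follows. This is a minimal illustration, assuming a flat on-disk cache keyed by object identifier; the function and directory names are invented, and the download step is injected as a callable so the logic is independent of any particular network layer.

```python
import os

# Hypothetical local cache directory standing in for local disk 350.
LOCAL_CACHE = "local_disk_cache"

def fetch_object(object_id, download, cache_dir=LOCAL_CACHE):
    """Return the bytes of a visually-linked object, preferring the
    locally saved copy; `download` is a callable(object_id) -> bytes
    used only on a cache miss (standing in for network interface 212)."""
    path = os.path.join(cache_dir, object_id)
    if os.path.exists(path):               # object already saved locally?
        with open(path, "rb") as f:
            return f.read()
    data = download(object_id)             # miss: fetch from server/peer
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "wb") as f:            # save to local disk for reuse
        f.write(data)
    return data
```

On a second request for the same object, the downloader is never invoked, which matches the disk-then-network order the description gives.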
  • The visually-linked objects are placed on a planet or any other appropriate surface for a cohesive scene.
  • the planet is made the current view and added to a repertoire of planets.
  • Scene renderer 325 uses the planet and visually-linked objects to create the display.
  • the repertoire of planets is visually represented as a solar system.
  • the repertoire, together with the visually-linked objects, is saved in RAM 206 or non-volatile memory 208 for use when needed.
  • FIG. 4 is a block diagram of one embodiment for the data representation of a downloaded world 400.
  • the downloaded world 400 may be the entire cohesive scene or a portion of the cohesive scene such as a repertoire.
  • world 400 includes repertoire 450 and one or more visually-linked objects 460.
  • Repertoire 450 includes world meta data 405 and 3-D model world data 410.
  • World 400 is downloaded by resource manager 330 and saved in local disk 350.
  • any repertoire 450 may be downloaded.
  • multiple visually-linked objects 460 may be downloaded.
  • repertoire 450 and visually-linked objects 460 may be downloaded together.
  • visually-linked objects 460 may be downloaded separate from repertoire 450.
  • 3-D model world data 410 and 3-D object data 420 contain the graphical renderings of world meta data 405 and object meta data 415 respectively.
  • World meta data 405 contains the data used by scene renderer 325 to build the planet for display.
  • world meta data 405 contains the name of world 400, a list of visually-linked objects 460, and a reference to a 3-D model world data 410.
  • the list of visually-linked objects 460 contains a set of pointers in which each pointer points to an individual visually-linked object 460 associated with repertoire 450.
  • each pointer is an identification to separate visually-linked objects 460.
  • World meta data 405 may point to a number of object meta data 415.
  • world meta data 405 contains a position for each visually-linked object on a planet.
  • World meta data 405 contains a pointer to the 3-D model world data 410 and each object meta data 415 contains a pointer to 3-D object data 420.
  • the representation within 3-D model world data 410 is the actual graphical data used by scene renderer 325 to display the planet repertoire, and the representation within 3-D object data 420 is the actual graphical data used by scene renderer 325 to display each visually-linked object.
  • each visually-linked object within 3-D object data 420 may be a JPEG or GIF image.
  • scene renderer 325 uses the data representation of world 400, together with information contained within preferences 340 to create graphical representations for a particular display.
  • 3-D model world data 410 and 3-D object data 420 may be three-dimensional representations. In an alternate embodiment, either or both may be two-dimensional representations.
  • In one embodiment, the graphics (410 and 420) cannot be changed by the user. In an alternate embodiment, the graphics may be changed by the user.
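The downloaded-world records described above (world meta data pointing to a 3-D world model and to a list of object meta data, each of which points to its own 3-D object data) can be illustrated with plain records. The field names below are assumptions inferred from Figure 4's description, not the patent's actual schema.

```python
# Illustrative layout of one downloaded world 400.

object_meta = {                      # object meta data 415
    "name": "bookstore",
    "link": "http://www.example.com",    # embedded file reference (hypothetical URL)
    "model_3d": "building.3ds",          # pointer to 3-D object data 420
    "position": (42.0, -71.0),           # placement of the object on the planet
}

world_meta = {                       # world meta data 405
    "name": "my-planet",
    "model_3d": "planet.3ds",            # pointer to 3-D model world data 410
    "objects": [object_meta],            # list of visually-linked objects 460
}
```

The scene renderer would walk `world_meta`, load the referenced world model, then load and place each object's model at its stored position.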
  • Figure 5 is a flow diagram of one embodiment of a process for automatically converting a set of textual file references into a cohesive scene of visually-linked objects.
  • the process is performed by processing logic, which may comprise hardware, software, or a combination of both.
  • Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
  • the process begins with providing a conversion interface which enables a user to identify a set of textual file references to be converted (processing block 504).
  • the conversion interface allows the user to enter file references manually. For instance, the user may input a list of textual URLs that the user wants to convert into visual URLs.
  • the conversion interface enables the user to specify a file containing textual file references.
  • the user may want to convert the URLs contained in the Bookmark file stored by the Netscape Navigator or Netscape Communicator web browser, or the URLs contained in the Favorites file stored by the Microsoft Internet Explorer web browser.
  • the user may want to convert the URLs of web pages referred to in a particular web site.
  • the user may specify the URL of this web site (i.e., the URL of its home web page). The home web page is then searched for references to other web pages to create an initial set of file references.
  • each web page referred to in the initial set is searched for references to other web pages, thereby creating subsets of file references that are associated with each file reference in the initial web site.
  • the search may continue until no more references are found or until the number of file references exceeds the limit specified by the user or defined programmatically.
  • the conversion interface may enable the user to select a subset of the textual file references contained in the file that is specified by the user. It should be noted that any other user interface techniques known in the art may be used to enable the user to identify file references to be converted.
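The reference-gathering walk described above (start from a home page, follow references level by level until none remain or a limit is exceeded) is a breadth-first traversal. The sketch below assumes a `get_links` callable that fetches a page and extracts its URLs; the function name and signature are invented for illustration.

```python
from collections import deque

def collect_references(home_url, get_links, limit=100):
    """Starting from a site's home page, gather references to other
    web pages until no more are found or the user-defined (or
    programmatically defined) limit is reached."""
    found, queue, seen = [], deque([home_url]), {home_url}
    while queue and len(found) < limit:
        url = queue.popleft()
        found.append(url)
        for ref in get_links(url):       # search this page for references
            if ref not in seen:          # avoid revisiting pages
                seen.add(ref)
                queue.append(ref)
    return found
```

Each level of the traversal naturally yields the subsets of file references associated with the page that referred to them, which is the hierarchy the next step stores for later use.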
  • basic application 300 identifies hierarchies in the set of textual file references specified by the user using any conversion interface and stores the identified hierarchical structure for subsequent use.
  • Figures 6A and 6B illustrate exemplary conversion interfaces that enable the user to identify a set of textual file references to be converted into a cohesive scene of visually-linked objects.
  • the conversion interface allows the user to enter URLs into text boxes 602 that are set up in a list format.
  • a set of textual URLs is already entered into text boxes 602, either manually or loaded from a file (e.g., the Bookmark file or the Favorites file), and the user can select a subset of URLs to be converted by clicking in corresponding check boxes 654.
  • processing logic receives a user request to convert the set of textual file references into a cohesive scene of visually-linked objects.
  • the user request includes a particular theme (e.g., a planet theme) to be used for the cohesive scene.
  • the theme may be selected from a list of graphical representations of various themes presented to the user.
  • basic application 300 uses a default theme when performing the user request or assigns a certain theme to the user request according to the personal information provided by the user, e.g., user age, area of interest, geographic location, etc.
  • the user request identifies visually-linked objects to be used to convert the textual file references.
  • the user may specify a particular set of visually-linked objects from a list of visually-linked object sets displayed to the user, or the user may select (from a list of individual visually-linked objects or a list of visually-linked object sets) a visually-linked object for each textual file reference to be converted.
  • processing logic creates a set of visually-linked objects corresponding to the set of textual file references.
  • basic application 300 uses the objects specified in the client request.
  • a default set of visually-linked objects may be used. If the user specifies a particular theme (e.g., a planet theme) to be used for the cohesive scene, a default set of visually-linked objects for this particular cohesive scene is used.
  • basic application 300 intelligently assigns visually-linked objects to the set of textual file references.
  • the visually-linked objects may be assigned based on the user's personal information, or based on visual association with the content referred to by the file references or with the text of the file references themselves (e.g., if the textual URL is www.flowers.com, a visually-linked object represented as a rose is selected).
  • the set of textual file references may contain hierarchical groups.
  • the Microsoft Internet Explorer web browser allows users to group their Favorites URLs into folders of URLs and folders of folders of URLs.
  • a group may be formed by a subset of web pages that are referred to in a higher-level web page.
  • the hierarchical structure contained in the textual file references is preserved and incorporated into the created visually-linked objects as illustrated below by Figures 7A - 7D.
  • textual file references may be associated with importance indicators reflecting the degree of importance of the references to the user.
  • the size of a visually-linked object may be determined based on the above indicator. It should be noted that any other characteristics contained in the set of textual references may be preserved and incorporated into the created set of visually-linked objects.
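The intelligent assignment and importance-based sizing described above can be sketched with a keyword table. Both the mapping entries and the linear sizing rule below are invented for illustration; a real embodiment could use any association technique.

```python
# Hypothetical keyword-to-object hints; only "flowers" -> rose comes
# from the description, the other entries are made-up examples.
OBJECT_HINTS = {"flowers": "rose", "books": "bookshelf", "music": "guitar"}

def assign_object(url, default="building"):
    """Pick a visually-linked object by textual association with the URL."""
    for keyword, obj in OBJECT_HINTS.items():
        if keyword in url:
            return obj
    return default

def object_size(importance, base=1.0):
    """Scale an object's displayed size by the reference's importance
    indicator (an assumed linear rule)."""
    return base * (1 + importance)
```

A URL with no matching keyword falls back to the theme's default object, and a more important reference simply renders larger.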
  • the created set of visually-linked objects is integrated into a cohesive scene.
  • the resulting cohesive scene may be two-dimensional or three-dimensional.
  • a real world visual metaphor is used to integrate visually-linked objects and repertoire(s) into a cohesive scene.
  • the repertoire and the visually-linked objects may be represented as planets, solar systems, galaxies, clusters, universes, cities, buildings, floors, rooms, etc.
  • any non-real world visual metaphor may be used as discussed in more detail above.
  • graphical tools may be used to modify the visually-linked objects. For instance, visual characteristics of the objects may be modified to reflect the importance of each object to the user.
  • the created cohesive scene of visually-linked objects can be emailed to a different user or saved on server 102 for access by other users.
  • the hierarchies contained in the set of textual file references are maintained in the resulting cohesive scene of visually-linked objects.
  • visually-linked objects and repertoires are represented as naturally occurring hierarchies using the hierarchical structure contained in the set of textual references.
  • the hierarchies may be represented as land masses, cities, buildings, floors and rooms, or as clusters of people and individual persons.
  • Figures 7A-7D illustrate display windows of exemplary cohesive scenes of visually-linked objects in which hierarchical structures of initial sets of textual file references are maintained.
  • In Figure 7A, an integrated cohesive scene produced by the conversion process is shown, in which visually-linked objects are represented as buildings and repertoires of buildings are represented as cities.
  • In Figure 7B, an integrated cohesive scene produced by the conversion process includes visually-linked objects that are represented as buildings, repertoires of buildings that are represented as cities, and repertoires of cities that are represented as planets.
  • the next hierarchical level is represented as solar systems.
  • Each solar system contains a set of planets, with each planet including a set of cities, and each city including a set of buildings which are each visually-linked objects.
  • some of the shown planets are visually-linked objects rather than repertoires.
  • In Figure 7D, a different example of an integrated cohesive scene produced by the conversion process is shown.
  • visually-linked objects are represented as people and repertoires of people are represented as clusters of people.
  • Cohesive scenes of visually-linked objects may be two-dimensional or three-dimensional and may be realistic or fanciful.
  • Each cohesive scene produced by the conversion process provides meaningful graphical representation of the information frequently used by the user.
  • drag and drop graphical tools are utilized to add other visually-linked objects to the cohesive scene produced by the conversion process or to a new cohesive scene.
  • a visually-linked object is selected from a template of visually-linked objects by the user.
  • the graphical tools allow the user to drag the visually-linked object to the existing cohesive scene or to the image of the cohesive scene being created (or to an icon representing the cohesive scene being created) and to drop the visually-linked object into the cohesive scene.
  • the graphical tools enable the user to select a textual file reference (e.g., a URL), drag it to a desired object and drop it into the object, thereby converting the textual file reference into a visually-linked object.
  • the graphical tools are used to drag a visually-linked object which is already included in the cohesive scene to a textual file reference, thereby relinking the visually-linked object to a different file address. It should be noted that various other drag and drop techniques to add, modify and delete visually-linked objects and repertoires can be used with the present invention without loss of generality.
  • Figures 8A-8E are exemplary user interfaces illustrating a process of adding a visually-linked object to a cohesive scene using drag and drop graphical tools, according to one embodiment of the present invention.
  • cohesive scene 800 that is being created and template 802 that includes a list of objects are shown.
  • Cohesive scene 800 includes visually-linked objects that are represented as fruits and repertoires of fruits that are represented as bowls of fruits.
  • the user selects desired object 804 by clicking on it.
  • the user drags object 804 to cohesive scene 800 and further to bowl 806.
  • the user drops object 804 into bowl 806, thereby adding a visually-linked object to cohesive scene 800.

Conversion of Visually-Linked Objects
  • a set of visually-linked objects may be automatically converted into a set of textual file references.
  • Figure 9 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects into a set of textual file references. The process is performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
  • the process begins with providing a conversion interface that enables a user to identify a set of visually-linked objects to be converted (processing block 904).
  • the entire cohesive scene of visually-linked objects is converted.
  • the conversion interface may allow the user to specify the name of the cohesive scene being converted or to point to the cohesive scene being converted.
  • sets of visually-linked objects within one or more cohesive scenes may be converted.
  • the conversion interface may allow the user to specify the objects individually. It should be noted that any user interface techniques known in the art may be used to enable the user to identify visually-linked objects to be converted.
  • processing logic receives a user request to convert the set of visually-linked objects into a set of textual file references.
  • the user request specifies a desired format for the converted set of textual references.
  • Some examples of formats may include plain text lists of references, Bookmark files, Favorites files, etc.
  • a default format may be used for conversion or basic application 300 may select the format based on relevant user information (e.g., the type of the user's web browser).
  • processing logic creates a set of textual file references corresponding to the set of visually-linked objects.
  • the set of visually-linked objects contains hierarchical groups.
  • the hierarchical structure contained in the set of visually-linked objects is preserved and incorporated into the created set of textual file references.
  • other characteristics of the visually-linked objects (e.g., their degree of importance to the user) may be preserved and incorporated into the created set of textual file references.
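The object-to-text conversion of Figure 9 can be sketched as follows. This is a hedged, minimal illustration: the tree representation, function name, and indented plain-text output format are assumptions for the example, not the patent's implementation; the point is that the hierarchy of repertoires is preserved in the textual output.

```python
# Hypothetical sketch: convert a tree of visually-linked objects into an
# indented plain-text list of file references, preserving hierarchy.
# Items are (name, url) leaves or (name, children_list) repertoires.

def objects_to_text(items, depth=0):
    """Walk the object tree and emit one indented text line per item."""
    lines = []
    for name, payload in items:
        if isinstance(payload, list):          # a repertoire: recurse
            lines.append("  " * depth + name + "/")
            lines.extend(objects_to_text(payload, depth + 1))
        else:                                  # a visually-linked object
            lines.append("  " * depth + f"{name}: {payload}")
    return lines

scene = [
    ("News", [("CNN", "http://www.cnn.com"),
              ("BBC", "http://www.bbc.co.uk")]),
    ("Search", "http://www.google.com"),
]
print("\n".join(objects_to_text(scene)))
```

A real implementation would instead emit a Bookmark or Favorites file in the format requested by the user, but the recursive walk that carries the hierarchy across the conversion is the same.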
  • Figure 10 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects of a first cohesive scene into a set of visually-linked objects of a second cohesive scene.
  • the process is performed by processing logic, which may comprise hardware, software, or a combination of both.
  • Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
  • the process begins with providing a conversion interface which enables a user to identify a set of visually-linked objects to be converted (processing block 1004).
  • all visually-linked objects of the first cohesive scene are converted.
  • the conversion interface may allow the user to specify the name of the cohesive scene being converted or to point to the cohesive scene being converted. In alternate embodiments, only a portion of the visually-linked objects of the first cohesive scene may be converted. In these embodiments, the conversion interface may allow the user to specify the objects individually. It should be noted that any user interface techniques known in the art may be used to enable the user to identify visually-linked objects to be converted.
  • processing logic extracts the set of visually-linked objects specified by the user from the first cohesive scene (processing block 1006) and converts this set into a set of visually-linked objects of a second cohesive scene (processing block 1008).
  • Each of the two cohesive scenes may be two-dimensional or three-dimensional and may be realistic or fanciful, real-world or non-real world.
  • graphical representations and other attributes pertaining to the visually-linked objects are changed during the conversion process.
  • the set of visually-linked objects of the first cohesive scene contains hierarchical groups. In this embodiment, the hierarchical structure contained in the set being converted is preserved and incorporated into the created set of visually-linked objects of the second cohesive scene.
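The scene-to-scene conversion of Figure 10 can be sketched as a re-theming pass: only the graphical attributes change, while embedded links and any hierarchical grouping are preserved. The dictionary layout, mapping table (fruit theme to planet theme), and all names below are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch: convert visually-linked objects of a first cohesive
# scene (fruit theme) into objects of a second cohesive scene (planet
# theme). Links and hierarchy are carried over unchanged.

FRUIT_TO_PLANET = {"apple": "red_planet", "pear": "green_planet",
                   "bowl": "solar_system"}

def convert_scene(objects, mapping):
    converted = []
    for obj in objects:
        new_obj = dict(obj)                    # keep link and other data
        new_obj["graphic"] = mapping.get(obj["graphic"], "default_planet")
        if "children" in obj:                  # hierarchical group: recurse
            new_obj["children"] = convert_scene(obj["children"], mapping)
        converted.append(new_obj)
    return converted

first_scene = [{"graphic": "bowl", "link": None, "children": [
    {"graphic": "apple", "link": "http://www.cnn.com"}]}]
second_scene = convert_scene(first_scene, FRUIT_TO_PLANET)
```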

Abstract

Methods and systems for automatic conversion between textual file references and visual file references are described. According to one aspect of the present invention, textual file references are automatically converted into visual file references by providing a conversion interface enabling a user to identify a set of textual file references (504), creating a set of visually-linked objects (460) corresponding to the set of textual file references identified by the user (508), and integrating the set of visually-linked objects into a cohesive scene (510, 800).

Description

AUTOMATIC CONVERSION BETWEEN SETS OF TEXT URLS AND COHESIVE SCENES
OF VISUAL URLS
This application claims the benefit of U.S. Provisional Application Nos. 60/151,672 filed August 31, 1999, and 60/152,141 filed August 31, 1999. This application is also related to U.S. Patent Application Serial No. 09/540,860, filed March 31, 2000 and U.S. Patent Application Serial No. 09/540,433, filed March 31, 2000.
FIELD OF THE INVENTION
The present invention relates generally to the field of data representation and more specifically to the representation of URLs and other file references.
COPYRIGHT NOTICE/PERMISSION
A portion of the disclosure of this patent document may contain material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
With the increasing popularity of computing and the use of the Internet in many fields, communicating with computers and similar devices in a simple and convenient manner has become extremely important. Graphical user interfaces (GUIs) take advantage of the computer's graphics capability to simplify the interaction between users and devices. For instance, such GUI features as icons provide users with a convenient way of executing a command or opening a window by merely clicking on an icon. The icons represent visual file references which are created by converting corresponding textual file references. The area on the display screen where icons are grouped is referred to as the desktop because the icons are intended to represent real objects on a real desktop. The icons are usually presented as visually unrelated to each other. Each icon sits in its own screen space with no relevance to the adjacent icons. Similarly, the conversion of textual references to files and directories into visual references is typically purely functional, with no consideration for visual cues and context.
Web browsers, in contrast, do take into account visual effects when presenting file references such as URLs to the users. Currently, images can be used to represent hyperlinks to URLs. However, existing tools offer no capability for automatically associating URLs and other file references with meaningful visual objects and for integrating the resulting visual objects into a contextually relevant cohesive scene.
SUMMARY OF THE INVENTION
Methods and systems for automatic conversion between textual file references and visual file references are described. In one embodiment, textual file references are automatically converted into visual file references by providing a conversion interface enabling a user to identify a set of textual file references, creating a set of visually-linked objects corresponding to the set of textual file references identified by the user, and integrating the set of visually-linked objects into a cohesive scene.
In another embodiment, visually-linked objects are converted into textual file references by receiving a request to convert a cohesive scene of visually-linked objects into a set of textual file references and creating a set of textual file references corresponding to visually-linked objects within the cohesive scene.
In yet another embodiment, a conversion between cohesive scenes of visually-linked objects is performed by providing a conversion interface enabling a user to identify a set of visually-linked objects within a first cohesive scene, extracting the identified set of visually-linked objects from the first cohesive scene, and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, features and advantages of the present invention will be apparent to one skilled in the art in light of the following detailed description in which:
Figure 1 is a block diagram of one embodiment of a system in which automatic conversion between textual file references and visual file references can be performed;
Figure 2 is a block diagram of one embodiment for an architecture of a computer system;
Figure 3 is a block diagram of one embodiment for a data manipulation and display architecture;
Figure 4 is a block diagram of one embodiment for the data representation of a cohesive scene or a repertoire;
Figure 5 is a flow diagram of one embodiment of a process for automatically converting a set of textual file references into a cohesive scene of visually-linked objects;
Figures 6A and 6B show an exemplary conversion interface, according to one embodiment of the present invention;
Figures 7A-7D are display windows of exemplary cohesive scenes of visually linked objects in which hierarchical structures of initial sets of textual file references are maintained, according to some embodiments of the present invention;
Figures 8A-8E show exemplary user interfaces illustrating a process of adding visually-linked objects to a cohesive scene, according to one embodiment of the present invention;
Figure 9 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects into a set of textual file references; and
Figure 10 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects of a first cohesive scene into a set of visually-linked objects of a second cohesive scene.
DETAILED DESCRIPTION
Methods and systems for automatic conversion between textual file references and visual file references are described. In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Some portions of the detailed description that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory in the form of a computer program. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
Definitions
The term "automatic" refers to a process which is performed without requiring any assistance from the user.
The term "file references" includes references to various files (e.g., text files, data files, program files, directory files, etc.), to various collections of files (e.g., folders, computer applications, streaming content, updated content, etc.), or to other file references. In addition, file references include references to documents and resources on public and private networks (e.g., World Wide Web), including URLs and other similar references.
The term "visually-linked objects" (also referred to herein as "visual file references") refers to graphic objects (e.g., images, video clips, 3-D graphics, 2-D graphics, etc.) that contain a link to a file reference (e.g., a file address such as a URL). The link may be embedded within the visually-linked object at the time such object is downloaded or may be added to the downloaded object by the automatic conversion system. Activating a visually-linked object may open an application, access or launch a web page, open a file, or perform any other suitable action. For instance, activating a visually-linked object containing a link to a document opens the document, and activating a visually-linked object containing a link to a program executable file executes the program. If the user has not opened a connection to the Internet, activating a visually-linked object containing a link to a URL may automatically initiate the Internet connection and open the web browser, thereby allowing the user to automatically access the content referred to by the URL. Other functionality of the visually-linked objects will become apparent by reference to the drawings and by reading the description that follows.
The term "repertoire" refers to a grouping of visually-linked objects within a cohesive scene. It should be noted that a repertoire may also recursively contain other repertoires in addition to visually-linked objects.
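The two definitions above can be sketched as a pair of classes. This is a hedged illustration only: the class and attribute names are assumptions, and the patent does not prescribe any particular implementation. It shows the essential shape: a visually-linked object pairs a graphic with an embedded link, and a repertoire recursively contains objects and other repertoires.

```python
# Hypothetical sketch of a visually-linked object and a repertoire.
import webbrowser

class VisuallyLinkedObject:
    def __init__(self, graphic, link):
        self.graphic = graphic   # e.g., an image or 3-D model reference
        self.link = link         # e.g., a URL or other file reference

    def activate(self):
        # For a URL link, activating the object opens it in the browser;
        # a real system would dispatch on the link type (file, app, URL).
        webbrowser.open(self.link)

class Repertoire:
    def __init__(self, name, members=()):
        self.name = name
        self.members = list(members)   # objects and/or nested repertoires

    def count_objects(self):
        """Count visually-linked objects, descending into nested repertoires."""
        return sum(m.count_objects() if isinstance(m, Repertoire) else 1
                   for m in self.members)
```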
Overview
The present invention enables automatic conversion between textual file references and visual file references.
In one embodiment, a visually-linked data manipulation and display system provides a conversion interface that enables a user to identify a set of textual file references, creates a set of visually-linked objects that corresponds to the set of textual file references identified by the user, and integrates the set of visually-linked objects into a cohesive scene. In another embodiment, a cohesive scene of visually-linked objects is automatically converted into a set of textual file references upon receiving a conversion request from the user. In yet another embodiment, visually-linked objects of one cohesive scene are automatically converted into visually-linked objects of a different cohesive scene. It should be noted that textual file references include file references in any type of textual form including but not limited to a binary form.
An Exemplary System
Figure 1 illustrates one embodiment of a system in which automatic conversion between textual file references and visual file references can be performed. Referring to Figure 1, system 100 represents a networked visually-linked data manipulation and display system which consists of clients 106, 108 connected via wide area network (WAN) 112 to server 102. Server 102 is connected to mass storage device 104. Mass storage device 104 may be any suitable storage medium such as, for example, read only memory (ROM), random access memory (RAM), EPROMs, EEPROMs, magnetic-optical discs, or any type of medium suitable for storing electronic data. In an alternate embodiment, wide area network (WAN) 112 may be a local area network (LAN).
In one embodiment, a user may access and download visually-linked objects from server 102 onto client 106. Additionally, the user may download the visually-linked objects from another client 108. Alternatively, the visually-linked objects may be downloaded onto client 106 in response to the user request to convert a set of textual file references into a cohesive scene of visually-linked objects. Activating a visually-linked object (e.g., by clicking on the visually-linked object) causes programs within the client 106 to be activated to open an application, access or launch a web page, open a file, or perform any other suitable action. In one embodiment, the basic application to build and modify the visually-linked objects, together with the objects themselves, is maintained and accessed on server 102.
In an alternate embodiment, the basic application, together with the objects themselves, may be downloaded to client 106. In this embodiment, the basic application may be downloaded and initiated on client 106 if the user accesses a visually-linked object for the first time, or alternatively, in response to the user request for conversion. After the basic application is initiated, the visually-linked objects will be downloaded to client 106 and displayed upon client 106 display.
The visually-linked objects are integrated into a cohesive scene using a visual metaphor. In one embodiment, the visual metaphor is a real world visual metaphor. The repertoires and/or the visually-linked objects included in the cohesive scene may be represented as planets, solar systems, galaxies, universes, cities, buildings, floors, rooms, etc. In this embodiment, other repertoires and visually-linked objects may be used such as, for example, a house containing rooms with the rooms containing visually-linked objects.
In another embodiment, system 100 uses a non-real world metaphor to create a cohesive scene of repertoires and visually-linked objects. For instance, any fanciful repertoire may be used for the placement of visually-linked objects which may have geometric shapes (e.g., cubes, spheres, pyramids, etc.) or other non-real world representations.
In one embodiment, the visually-linked objects are displayed, together with windows containing a set of graphical tools, on client 106. The graphical tools allow the user to modify the repertoires and visually-linked objects. In the description that follows, the real world metaphor (specifically, a planet theme) will be used. However, any other visual metaphor (real world or non-real world) may be used by the basic application without loss of generality.
In one embodiment, a wizard guides the user through the process of converting textual file references (e.g., bookmarks) into visually-linked objects. The visually-linked objects refer to the visual representations of links to sites, files, folders, and the like that may take the form of buildings or cities on the planet. The user's planet may contain visually-linked objects with links to, for example, bookmarks, sponsor (or branded) sites, and objects with links to planets of other users (e.g., a user of client 108). The user may place the visually-linked objects anywhere on the planet. In one embodiment, the user may publish the planet with visually-linked objects on server 102 for access by other users. The access may be available only to a defined group of people (e.g., a group of students doing research for school related projects) or to the public in general.
The application graphical tools are part of the basic application that may be downloaded over WAN 112. The basic application enables the manipulation and modification of visually-linked objects and their attributes such as size, position, color, texture, and embedded links. Client 106 does not need to be connected to WAN 112 to build, manipulate, or move the visually-linked objects.
In one embodiment, after the visually-linked objects are created, they may be manipulated in a three dimensional (3-D) manner. For example, the planet may have a motion either around an axis or on a plane. That is, the view of the planet may be altered by rotating the planet on its axis, zooming in or out, expanding the view to include a solar system, contracting the view to a single building, or the like. In one embodiment, a planet automatically rotates on its axis whenever it is in full-planet view. In an alternate embodiment, the planet is rotated and manipulated only at the direction of the user. Similarly, solar systems, galaxies, and universes may rotate around an axis.
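The rotation described above amounts to transforming the scene about an axis each frame. A minimal sketch, assuming a y-up coordinate system with the planet's axis along y (the patent does not specify a coordinate convention):

```python
# Illustrative sketch: spin a point on the planet about the vertical axis.
import math

def rotate_y(point, angle_deg):
    """Rotate an (x, y, z) point about the y axis by angle_deg degrees.
    Applying this each frame to the scene produces the automatic
    full-planet rotation described above."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```

Zooming in or out, or expanding the view to a solar system, would similarly be a change of the viewing transform rather than of the objects themselves.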
Figure 2 is a block diagram of one embodiment for a computer system 200 suitable for use with the present invention. Computer system 200 may be used in various capacities with the present invention. For example, computer system 200 may be used as a server 102 or as a client 106, 108. Referring to Figure 2, computer system 200 includes CPU 202 connected via bus 215 to a variety of memory structures and input/output 210. The memory structures may include, for example, read only memory (ROM) 204, random access memory (RAM) 206, and /or non- volatile memory 208. In one embodiment, CPU 202 is also connected via bus 215 to network interface 212. Network interface 212 is used to communicate between computer system 200 (e.g., server 102) and a variety of other computer terminals (including clients 106 and 108). Network interface 212 may be connected to WAN 112 by any of a variety of means such as, for example, a telephone connection via modem, a DSL line, or the like.
Figure 3 is a block diagram of one embodiment for a basic application 300. Referring to Figure 3, basic application 300 includes graphical tool (GUI) 305, undo 315, controller 320, scene renderer 325, resource manager 330, sound manager 335, and preferences 340. Application 300 is connected to network interface 212 and local disk 350. Application 300 may be contained within RAM 206 or non-volatile memory 208. In one embodiment, application 300 may be downloaded to client 106 when a user accesses a visually-linked object or a web site with a selection of visually-linked objects. Alternatively, application 300 may be downloaded to client 106 upon the user request to convert textual file references into visual file references.
Controller 320 contains software routines to build and modify visually-linked objects and repertoires. Once application 300 is downloaded and initially launched, controller 320 instructs resource manager 330 to download the visually-linked objects from server 102 or from another client 108. Resource manager 330 first checks local disk 350 to determine if the object is saved locally on client 106. If not, resource manager 330 downloads visually-linked objects via network interface 212 to local disk 350. Resource manager 330 transfers the visually-linked objects via controller 320 to scene renderer 325. Scene renderer 325 integrates the visually-linked objects into a cohesive scene. In one embodiment, a planet (or any other appropriate surface for a cohesive scene) is also downloaded to client 106. The visually-linked objects are placed on the planet. The planet is made the current view and added to a repertoire of planets. Scene renderer 325 uses the planet and visually-linked objects to create the display. In one embodiment, the repertoire of planets is visually represented as a solar system. The repertoire, together with the visually-linked objects, is saved in RAM 206 or non-volatile memory 208 for use when needed. In one embodiment, as a user moves a pointer over a visually-linked object, sound manager 335 produces a sound associated with the object. Undo 315 is used to undo an action by the user and GUI 305 is used to enable the user to identify textual file references to be converted, modify the visually-linked objects and interact with the system (e.g., rotate planet, change view, etc.).
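The check-cache-then-download behavior attributed to resource manager 330 (consult local disk 350 first, fetch over network interface 212 only on a miss) can be sketched as follows. The function names, cache layout, and `fetch` callback are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of resource manager 330's download path.
import os

def load_object(object_id, cache_dir, fetch):
    """Return object data, preferring the local cache (local disk 350).

    `fetch` stands in for a network download via network interface 212.
    """
    path = os.path.join(cache_dir, object_id)
    if os.path.exists(path):              # object already saved locally
        with open(path, "rb") as f:
            return f.read()
    data = fetch(object_id)               # download over the network
    with open(path, "wb") as f:           # cache for subsequent use
        f.write(data)
    return data
```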
Figure 4 is a block diagram of one embodiment for the data representation of a downloaded world 400. The downloaded world 400 may be the entire cohesive scene or a portion of the cohesive scene such as a repertoire. Referring to Figure 4, world 400 includes repertoire 450 and one or more visually-linked objects 460. Repertoire 450 includes world meta data 405 and 3-D model world data 410. World 400 is downloaded by resource manager 330 and saved in local disk 350. In one embodiment, any repertoire 450 may be downloaded. In one embodiment, multiple visually-linked objects 460 may be downloaded. In one embodiment, repertoire 450 and visually-linked objects 460 may be downloaded together. In an alternate embodiment, visually-linked objects 460 may be downloaded separate from repertoire 450. In one embodiment, 3-D model world data 410 and 3-D object data 420 contain the graphical renderings of world meta data 405 and object meta data 415 respectively. World meta data 405 contains the data used by scene renderer 325 to build the planet for display. In one embodiment, world meta data 405 contains the name of world 400, a list of visually-linked objects 460, and a reference to a 3-D model world data 410. The list of visually-linked objects 460 contains a set of pointers in which each pointer points to an individual visually-linked object 460 associated with repertoire 450. In one embodiment, each pointer is an identifier for a separate visually-linked object 460. World meta data 405 may point to a number of object meta data 415. In addition, world meta data 405 contains a position for each visually-linked object on a planet. World meta data 405 contains a pointer to the 3-D model world data 410 and each object meta data 415 contains a pointer to 3-D object data 420.
The representation within 3-D model world data 410 is the actual graphical data used by scene renderer 325 to display the planet repertoire, and each visually-linked object contained within 3-D object data 420 is the actual graphical data used by scene renderer 325 to display the visually-linked object. For example, each visually-linked object within 3-D object data 420 may be a JPEG or GIF image.
In one embodiment, scene renderer 325 uses the data representation of world 400, together with information contained within preferences 340, to create graphical representations for a particular display. In one embodiment, 3-D model world data 410 and 3-D object data 420 may be three-dimensional representations. In an alternate embodiment, either or both may be two-dimensional representations.
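Following the description of Figure 4, the layout of a downloaded world 400 might look like the structure below. This is a hedged sketch in plain Python data: the key names and file names are illustrative assumptions, chosen only to mirror the relationships stated above (world meta data names the world, lists object meta data with positions, and points to the 3-D model data).

```python
# Illustrative data layout for a downloaded world 400 (repertoire 450
# plus its visually-linked objects 460); all names are assumptions.
world_400 = {
    "world_meta_data": {                        # world meta data 405
        "name": "my planet",
        "objects": [                            # pointers to object meta data 415
            {"object_meta_data": {
                 "link": "http://www.cnn.com",  # embedded file reference
                 "3d_object_data": "tree.model" # pointer to 3-D object data 420
             },
             "position": (10.0, 42.5)},         # placement on the planet
        ],
        "3d_model_world_data": "planet.model",  # pointer to 3-D model world data 410
    },
}
```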
In one embodiment, graphics (410 and 420) cannot be changed by the user. In an alternate embodiment, graphics may be changed by the user.
Conversion of Textual File References
Figure 5 is a flow diagram of one embodiment of a process for automatically converting a set of textual file references into a cohesive scene of visually-linked objects. The process is performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
Referring to Figure 5, the process begins with providing a conversion interface which enables a user to identify a set of textual file references to be converted (processing block 504). In one embodiment, the conversion interface allows the user to enter file references manually. For instance, the user may input a list of textual URLs that the user wants to convert into visual URLs.
In an alternate embodiment, the conversion interface enables the user to specify a file containing textual file references. For example, the user may want to convert the URLs contained in the Bookmark file stored by the Netscape Navigator or Netscape Communicator web browser, or the URLs contained in the Favorites file stored by the Microsoft Internet Explorer web browser. In another example, the user may want to convert the URLs of web pages referred to in a particular web site. In this example, the user may specify the URL of this web site (i.e., the URL of its home web page). The home web page is then searched for references to other web pages to create an initial set of file references. Subsequently, each web page referred to in the initial set is searched for references to other web pages, thereby creating subsets of file references that are associated with each file reference in the initial web site. The search may continue until no more references are found or until the number of file references exceeds the limit specified by the user or defined programmatically. In yet another embodiment, the conversion interface may enable the user to select a subset of the textual file references contained in the file that is specified by the user. It should be noted that any other user interface techniques known in the art may be used to enable the user to identify file references to be converted.
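The web-site traversal described above, starting from a home page, collecting referenced pages level by level, and stopping when no new references are found or a limit is reached, is a breadth-first search. A minimal sketch, where `get_links` is a stand-in assumption for fetching a page and extracting its hyperlinks:

```python
# Hedged sketch of collecting file references from a web site, as
# described above. `get_links(url)` is assumed to return the URLs
# referenced by the page at `url`.
from collections import deque

def collect_references(home_url, get_links, limit=100):
    """Breadth-first collection of URLs reachable from home_url,
    capped at `limit` references."""
    seen, queue = {home_url}, deque([home_url])
    while queue and len(seen) < limit:
        url = queue.popleft()
        for ref in get_links(url):
            if ref not in seen and len(seen) < limit:
                seen.add(ref)
                queue.append(ref)
    return seen
```

The first level of the search corresponds to the "initial set of file references" above, and deeper levels to the per-reference subsets.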
In one embodiment, basic application 300 identifies hierarchies in the set of textual file references specified by the user using any conversion interface and stores the identified hierarchical structure for subsequent use. Figures 6A and 6B illustrate exemplary conversion interfaces that enable the user to identify a set of textual file references to be converted into a cohesive scene of visually-linked objects. Referring to Figure 6A, the conversion interface allows the user to enter URLs into text boxes 602 that are set up in a list format. Referring to Figure 6B, a set of textual URLs is already entered into text boxes 602, either manually or loaded from a file (e.g., the Bookmark file or the Favorites file), and the user can select a subset of URLs to be converted by clicking in corresponding check boxes 654.
Returning to Figure 5, at processing block 506, processing logic receives a user request to convert the set of textual file references into a cohesive scene of visually-linked objects. In one embodiment, the user request includes a particular theme (e.g., a planet theme) to be used for the cohesive scene. The theme may be selected from a list of graphical representations of various themes presented to the user. In alternate embodiments, basic application 300 uses a default theme when performing the user request or assigns a certain theme to the user request according to the personal information provided by the user, e.g., user age, area of interest, geographic location, etc.
In one embodiment, the user request identifies visually-linked objects to be used to convert the textual file references. The user may specify a particular set of visually-linked objects from a list of visually-linked object sets displayed to the user, or the user may select (from a list of individual visually-linked objects or a list of visually-linked object sets) a visually-linked object for each textual file reference to be converted.
At processing block 508, processing logic creates a set of visually-linked objects corresponding to the set of textual file references. In one embodiment, basic application 300 uses the objects specified in the user request. In an alternate embodiment, a default set of visually-linked objects may be used. If the user specifies a particular theme (e.g., a planet theme) to be used for the cohesive scene, a default set of visually-linked objects for this particular cohesive scene is used. In yet another embodiment, basic application 300 intelligently assigns visually-linked objects to the set of textual file references. The visually-linked objects may be assigned based on the user personal information, or based on visual association with the content referred to by the file references or with the text of the file references themselves (e.g., if the textual URL is www.flowers.com, a visually-linked object represented as a rose is selected).
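The "intelligent assignment" based on the text of the file references might be sketched as a simple keyword lookup; the object catalogue and keyword rules below are illustrative assumptions, not part of the patent.

```python
# Assumed keyword-to-object rules, e.g. a URL mentioning "flower" gets a rose.
OBJECT_RULES = [
    ("flower", "rose"),
    ("book", "library building"),
    ("news", "newspaper stand"),
]
DEFAULT_OBJECT = "generic planet"

def assign_object(url):
    """Pick a visually-linked object whose appearance matches the URL text."""
    lowered = url.lower()
    for keyword, obj in OBJECT_RULES:
        if keyword in lowered:
            return obj
    return DEFAULT_OBJECT   # fall back to the theme's default object
```

A richer implementation could inspect the content referred to by each reference, or consult the user's personal information, as the embodiments above describe.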
As discussed above, the set of textual file references may contain hierarchical groups. For instance, the Microsoft Internet Explorer web browser allows users to group their Favorites URLs into folders of URLs and folders of folders of URLs. In another example, in which textual file references to web pages associated with a particular web site are converted into visual file references to these web pages, a group may be formed by a subset of web pages that are referred to in a higher-level web page. In one embodiment, the hierarchical structure contained in the textual file references is preserved and incorporated into the created visually-linked objects as illustrated below by Figures 7A - 7D. In one embodiment, textual file references may be associated with importance indicators reflecting the degree of importance of the references to the user. These indicators are utilized when creating and/or placing the visually-linked objects. For example, the size of a visually-linked object may be determined based on the above indicator. It should be noted that any other characteristics contained in the set of textual references may be preserved and incorporated into the created set of visually-linked objects.
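Hierarchy preservation and importance-driven sizing can be illustrated together. Here bookmark folders are modelled as nested dicts (folder name to subtree, URL string to an importance indicator from 1 to 5); this format and the sizing rule are assumptions made for the sketch. Each URL becomes an object whose nesting depth mirrors the folder structure and whose size reflects its importance indicator.

```python
def to_objects(folder, depth=0):
    """Flatten a nested bookmark tree into visually-linked object records."""
    objects = []
    for name, value in folder.items():
        if isinstance(value, dict):        # sub-folder: recurse one level deeper
            objects.extend(to_objects(value, depth + 1))
        else:                              # URL with an importance indicator
            objects.append({
                "url": name,
                "level": depth,            # preserved hierarchy level
                "size": 10 * value,        # object size driven by importance
            })
    return objects
```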
At processing block 510, the created set of visually-linked objects is integrated into a cohesive scene. The resulting cohesive scene may be two-dimensional or three-dimensional. In one embodiment, a real world visual metaphor is used to integrate visually-linked objects and repertoire(s) into a cohesive scene. The repertoire and the visually-linked objects may be represented as planets, solar systems, galaxies, clusters, universes, cities, buildings, floors, rooms, etc. In alternate embodiments, any non-real world visual metaphor may be used as discussed in more detail above.
In one embodiment, graphical tools may be used to modify the visually-linked objects. For instance, visual characteristics of the objects may be modified to reflect the importance of each object to the user. In one embodiment, the created cohesive scene of visually-linked objects can be emailed to a different user or saved on server 102 for access by other users.
In one embodiment, the hierarchies contained in the set of textual file references are maintained in the resulting cohesive scene of visually-linked objects. In one embodiment, visually-linked objects and repertoires are represented as naturally occurring hierarchies using the hierarchical structure contained in the set of textual references. For instance, the hierarchies may be represented as land masses, cities, buildings, floors and rooms, or as clusters of people and individual persons.
Figures 7A - 7D illustrate display windows of exemplary cohesive scenes of visually-linked objects in which hierarchical structures of initial sets of textual file references are maintained. Referring to Figure 7A, an integrated cohesive scene produced by the conversion process is shown, in which visually-linked objects are represented as buildings and repertoires of buildings are represented as cities. Referring to Figure 7B, an integrated cohesive scene produced by the conversion process includes visually-linked objects that are represented as buildings, repertoires of buildings that are represented as cities, and repertoires of cities that are represented as planets. Referring to Figure 7C, in one embodiment, the next hierarchical level is represented as solar systems. Each solar system contains a set of planets, with each planet including a set of cities, and each city including a set of buildings which are each visually-linked objects. In an alternate embodiment, some of the shown planets are visually-linked objects rather than repertoires. Referring to Figure 7D, a different example of an integrated cohesive scene produced by the conversion process is shown. In this exemplary cohesive scene, visually-linked objects are represented as people and repertoires of people are represented as clusters of people.
Cohesive scenes of visually-linked objects may be two-dimensional or three-dimensional and may be realistic or fanciful. Each cohesive scene produced by the conversion process provides meaningful graphical representation of the information frequently used by the user.
In some embodiments, drag and drop graphical tools are utilized to add other visually-linked objects to the cohesive scene produced by the conversion process or to a new cohesive scene. In one embodiment, a visually-linked object is selected from a template of visually-linked objects by the user. The graphical tools allow the user to drag the visually-linked object to the existing cohesive scene or to the image of the cohesive scene being created (or to an icon representing the cohesive scene being created) and to drop the visually-linked object into the cohesive scene. These actions result in the automatic integration of the visually-linked object into the cohesive scene. In an alternate embodiment, the graphical tools enable the user to select a textual file reference (e.g., a URL), drag it to a desired object and drop it into the object, thereby converting the textual file reference into a visually-linked object. In yet another embodiment, the graphical tools are used to drag a visually-linked object which is already included in the cohesive scene to a textual file reference, thereby relinking the visually-linked object to a different file address. It should be noted that various other drag and drop techniques to add, modify and delete visually-linked objects and repertoires can be used with the present invention without loss of generality.
Figures 8A - 8E are exemplary user interfaces illustrating a process of adding a visually-linked object to a cohesive scene using drag and drop graphical tools, according to one embodiment of the present invention. Referring to Figure 8A, cohesive scene 800 that is being created and template 802 that includes a list of objects are shown. Cohesive scene 800 includes visually-linked objects that are represented as fruits and repertoires of fruits that are represented as bowls of fruits. Referring to Figure 8B, the user selects desired object 804 by clicking on it. Referring to Figures 8C and 8D, the user drags object 804 to cohesive scene 800 and further to bowl 806. Referring to Figure 8E, the user drops object 804 into bowl 806, thereby adding a visually-linked object to cohesive scene 800.

Conversion of Visually-Linked Objects
In one embodiment, a set of visually-linked objects may be automatically converted into a set of textual file references. Figure 9 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects into a set of textual file references. The process is performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
Referring to Figure 9, the process begins with providing a conversion interface that enables a user to identify a set of visually-linked objects to be converted (processing block 904). In some embodiments, the entire cohesive scene of visually-linked objects is converted. In these embodiments, the conversion interface may allow the user to specify the name of the cohesive scene being converted or to point to the cohesive scene being converted. In alternate embodiments, sets of visually-linked objects within one or more cohesive scenes may be converted. In these embodiments, the conversion interface may allow the user to specify the objects individually. It should be noted that any user interface techniques known in the art may be used to enable the user to identify visually-linked objects to be converted.
At processing block 906, processing logic receives a user request to convert the set of visually-linked objects into a set of textual file references. In one embodiment, the user request specifies a desired format for the converted set of textual references. Some examples of formats may include plain text lists of references, Bookmark files, Favorites files, etc. In alternate embodiments, a default format may be used for conversion or basic application 300 may select the format based on relevant user information (e.g., the type of the user's web browser).
At processing block 908, processing logic creates a set of textual file references corresponding to the set of visually-linked objects. In one embodiment, the set of visually-linked objects contains hierarchical groups. In this embodiment, the hierarchical structure contained in the set of visually-linked objects is preserved and incorporated into the created set of textual file references. In addition, other characteristics of the visually-linked objects (e.g., their degree of importance to the user) may be preserved and incorporated into the created set of textual file references.
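The reverse conversion at processing block 908 can be sketched as a traversal that serialises a repertoire tree of visually-linked objects into a plain-text list of references, one of the output formats mentioned above. Indentation encoding the preserved hierarchy is an assumed convention for this sketch; a Bookmark or Favorites file writer would follow the same traversal with different output syntax.

```python
def to_text_list(node, indent=0):
    """Serialise a scene tree into an indented plain-text reference list."""
    lines = []
    if "url" in node:                          # leaf: a visually-linked object
        lines.append("  " * indent + node["url"])
    for child in node.get("children", []):     # repertoire: recurse deeper
        lines.extend(to_text_list(child, indent + 1))
    return lines
```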
In an alternate embodiment, automatic conversion between cohesive scenes of visually-linked objects is provided. Figure 10 is a flow diagram of one embodiment of a process for automatically converting a set of visually-linked objects of a first cohesive scene into a set of visually-linked objects of a second cohesive scene. The process is performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may be either in the computer system of client 106 or server 102, or partially or entirely in a separate device and/or system(s).
Referring to Figure 10, the process begins with providing a conversion interface which enables a user to identify a set of visually-linked objects to be converted (processing block 1004). In some embodiments, all visually-linked objects of the first cohesive scene are converted. In these embodiments, the conversion interface may allow the user to specify the name of the cohesive scene being converted or to point to the cohesive scene being converted. In alternate embodiments, only a portion of the visually-linked objects of the first cohesive scene may be converted. In these embodiments, the conversion interface may allow the user to specify the objects individually. It should be noted that any user interface techniques known in the art may be used to enable the user to identify visually-linked objects to be converted.
Further, processing logic extracts the set of visually-linked objects specified by the user from the first cohesive scene (processing block 1006) and converts this set into a set of visually-linked objects of a second cohesive scene (processing block 1008). Each of the two cohesive scenes may be two-dimensional or three-dimensional and may be realistic or fanciful, real-world or non-real-world. In some embodiments, graphical representations and other attributes pertaining to the visually-linked objects are changed during the conversion process. In one embodiment, the set of visually-linked objects of the first cohesive scene contains hierarchical groups. In this embodiment, the hierarchical structure contained in the set being converted is preserved and incorporated into the created set of visually-linked objects of the second cohesive scene.
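Scene-to-scene conversion can be sketched as swapping each object's graphical representation from one theme's vocabulary to another's while its hierarchy level stays fixed. The two theme tables below are illustrative assumptions, not part of the patent.

```python
# Assumed theme vocabularies: hierarchy level -> graphical representation.
SPACE_THEME = {0: "planet", 1: "city", 2: "building"}
FRUIT_THEME = {0: "orchard", 1: "bowl", 2: "fruit"}

def convert_scene(objects, target_theme):
    """objects: list of {'url': ..., 'level': ...}; returns re-themed copies."""
    return [
        {**obj, "representation": target_theme[obj["level"]]}
        for obj in objects
    ]
```

Links and hierarchy levels survive the conversion unchanged; only the representation attribute (and, in a fuller implementation, other theme-specific attributes) is replaced.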
The specific arrangements and methods herein are merely illustrative of the principles of this invention. Numerous modifications in form and detail may be made by those skilled in the art without departing from the true spirit and scope of the invention.

Claims

What is claimed is:
1. A method for automatically converting textual file references into visual file references, the method comprising: creating a set of visually-linked objects corresponding to a set of textual file references identified by a user using a conversion interface; and integrating the set of visually-linked objects into a cohesive scene.
2. The method of claim 1 wherein the set of visually-linked objects is integrated using a real world visual metaphor as a cohesive scene.
3. The method of claim 1 wherein the set of visually-linked objects is integrated using a non-real world visual metaphor as a cohesive scene.
4. The method of claim 1 wherein the cohesive scene is two-dimensional.
5. The method of claim 1 wherein the cohesive scene is three-dimensional.
6. The method of claim 1 wherein the conversion interface enables the user to enter the set of textual file references.
7. The method of claim 1 wherein the conversion interface enables the user to specify a file containing a plurality of textual file references.
8. The method of claim 7 wherein the file is a URL of a source web page and the plurality of textual file references are recursively URLs of web pages referred to by the source web page.
9. The method of claim 7 wherein the conversion interface further enables the user to select the set of textual file references from the plurality of textual file references.
10. The method of claim 2 wherein the real world visual metaphor is represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
11. The method of claim 1 wherein creating the set of visually-linked objects is performed in response to user selection of graphical representations for the set of visually-linked objects.
12. The method of claim 1 wherein the set of visually-linked objects is created using a default set of graphical representations.
13. The method of claim 1 wherein creating the set of visually-linked objects includes selecting graphical representations for the set of visually-linked objects using user personal information.
14. The method of claim 1 wherein creating the set of visually-linked objects includes selecting graphical representations for the set of visually-linked objects using visual association with either content referred to by the textual file references or the textual file references.
15. The method of claim 1 further comprising: preserving a hierarchical structure contained in the set of textual file references when creating the set of corresponding visually-linked objects; and representing the set of visually-linked objects as visual hierarchies using the preserved hierarchical structure.
16. The method of claim 15 wherein the visual hierarchies are represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
17. The method of claim 1 further comprising adding a visually-linked object to the cohesive scene using drag and drop graphical tools.
18. The method of claim 1 further comprising modifying attributes pertaining to the visually-linked objects within the cohesive scene to reflect user perspective regarding the visually-linked objects.
19. A method for automatically converting visually-linked objects into textual file references, the method comprising: receiving a request to convert a set of visually-linked objects into a set of textual file references; and creating the set of textual file references corresponding to visually- linked objects within the cohesive scene.
20. The method of claim 19 further comprising: preserving a hierarchical structure contained within the cohesive scene of visually-linked objects; and incorporating the preserved hierarchical structure into the set of textual file references.
21. A method for automatically converting between cohesive scenes of visually-linked objects, the method comprising: extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
22. The method of claim 21 further comprising preserving a hierarchical structure contained within the set of visually-linked objects of the first cohesive scene when converting the extracted set of visually-linked objects into the second cohesive scene of visually-linked objects.
23. A system for automatically converting textual file references into visual file references, the system comprising: means for creating a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and means for integrating the set of visually-linked objects into a cohesive scene.
24. A system for automatically converting visual file references into textual file references, the system comprising: means for receiving a request to convert a set of visually-linked objects into a set of textual file references; and means for creating the set of textual file references corresponding to visually-linked objects within the cohesive scene.
25. A system for automatically converting between cohesive scenes of visually-linked objects, the system comprising: means for extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and means for converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
26. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising:
creating a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and integrating the set of visually-linked objects into a cohesive scene.
27. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising: receiving a request to convert a set of visually-linked objects into a set of textual file references; and creating the set of textual file references corresponding to visually-linked objects within the cohesive scene.
28. A machine-readable medium that provides instructions, which when executed by a processor, cause the processor to perform operations comprising:
extracting a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and converting the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
29. An apparatus for automatically converting textual file references into visual file references, the apparatus comprising: a controller to create a set of visually-linked objects identified by a user using a conversion interface corresponding to the set of textual file references identified by the user; and a scene renderer to integrate the set of visually-linked objects into a cohesive scene.
30. The apparatus of claim 29 wherein the set of visually-linked objects is integrated using a real world visual metaphor as a cohesive scene.
31. The apparatus of claim 29 wherein the set of visually-linked objects is integrated using a non-real world visual metaphor as a cohesive scene.
32. The apparatus of claim 29 wherein the cohesive scene is two- dimensional.
33. The apparatus of claim 29 wherein the cohesive scene is three- dimensional.
34. The apparatus of claim 29 wherein the graphical user interface enables the user to enter the set of textual file references.
35. The apparatus of claim 29 wherein the graphical user interface enables the user to specify a file containing a plurality of textual file references.
36. The apparatus of claim 35 wherein the file is a URL of a source web page and the plurality of textual file references are recursively URLs of web pages referred to by the source web page.
37. The apparatus of claim 35 wherein the graphical user interface further enables the user to select the set of textual file references from the plurality of textual file references.
38. The apparatus of claim 30 wherein the real world visual metaphor is represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
39. The apparatus of claim 29 further comprising a user interface to enable the user to select graphical representations for the set of visually-linked objects.
40. The apparatus of claim 29 wherein the set of visually-linked objects is created using a default set of graphical representations.
41. The apparatus of claim 29 wherein the controller is capable of selecting graphical representations for the set of visually-linked objects using user personal information.
42. The apparatus of claim 29 wherein the controller is capable of selecting graphical representations for the set of visually-linked objects using visual association with content referred to by the textual file references.
43. The apparatus of claim 29 wherein the controller is capable of preserving a hierarchical structure contained in the set of textual file references when creating the set of corresponding visually-linked objects, and the scene renderer is capable of representing the set of visually-linked objects as visual hierarchies using the preserved hierarchical structure.
44. The apparatus of claim 43 wherein the visual hierarchies are represented as planets, solar systems, galaxies, clusters, universes, land masses, cities, buildings, floors, and rooms.
45. The apparatus of claim 29 further comprising drag and drop graphical tools to add a visually-linked object to the cohesive scene.
46. The apparatus of claim 29 further comprising graphical tools to modify attributes pertaining to the visually-linked objects within the cohesive scene to reflect user perspective regarding the visually-linked objects.
47. An apparatus for automatically converting visually-linked objects into textual file references, the apparatus comprising: a resource manager to receive a request to convert a cohesive scene of visually-linked objects into a set of textual file references; and a controller to create the set of textual file references corresponding to visually-linked objects within the cohesive scene.
48. The apparatus of claim 47 wherein the controller is capable of preserving a hierarchical structure contained within the cohesive scene of visually-linked objects, and incorporating the preserved hierarchical structure into the set of textual file references.
49. An apparatus for automatically converting between cohesive scenes of visually-linked objects, the apparatus comprising: a controller to extract a set of visually-linked objects identified by a user using a conversion interface from a first cohesive scene of visually-linked objects; and a scene renderer to convert the extracted set of visually-linked objects into a set of visually-linked objects of a second cohesive scene.
50. The apparatus of claim 49 wherein the controller is capable of preserving a hierarchical structure contained within the set of visually-linked objects of the first cohesive scene when converting the extracted set of visually-linked objects into the second cohesive scene of visually-linked objects.
PCT/US2000/024067 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls WO2001016694A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU73419/00A AU7341900A (en) 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US15167299P 1999-08-31 1999-08-31
US15214199P 1999-08-31 1999-08-31
US60/151,672 1999-08-31
US60/152,141 1999-08-31
US54043300A 2000-03-31 2000-03-31
US54086000A 2000-03-31 2000-03-31
US09/540,433 2000-03-31
US09/540,860 2000-03-31
US65167100A 2000-08-30 2000-08-30
US09/651,671 2000-08-30

Publications (2)

Publication Number Publication Date
WO2001016694A1 true WO2001016694A1 (en) 2001-03-08
WO2001016694A9 WO2001016694A9 (en) 2001-10-18

Family

ID=27538405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/024067 WO2001016694A1 (en) 1999-08-31 2000-08-31 Automatic conversion between sets of text urls and cohesive scenes of visual urls

Country Status (2)

Country Link
AU (1) AU7341900A (en)
WO (1) WO2001016694A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5528735A (en) * 1993-03-23 1996-06-18 Silicon Graphics Inc. Method and apparatus for displaying data within a three-dimensional information landscape
US5835094A (en) * 1996-12-31 1998-11-10 Compaq Computer Corporation Three-dimensional computer environment
US5877775A (en) * 1996-08-08 1999-03-02 Theisen; Karen E. Method of generating a 3-D representation of a hierarchical data structure
US6069630A (en) * 1997-08-22 2000-05-30 International Business Machines Corporation Data processing system and method for creating a link map
US6094196A (en) * 1997-07-03 2000-07-25 International Business Machines Corporation Interaction spheres of three-dimensional objects in three-dimensional workspace displays


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10225584B2 (en) 1999-08-03 2019-03-05 Videoshare Llc Systems and methods for sharing video with advertisements over a network
US10362341B2 (en) 1999-08-03 2019-07-23 Videoshare, Llc Systems and methods for sharing video with advertisements over a network
US10523729B2 (en) 2000-03-09 2019-12-31 Videoshare, Llc Sharing a streaming video
US7987492B2 (en) 2000-03-09 2011-07-26 Gad Liwerant Sharing a streaming video
US10277654B2 (en) 2000-03-09 2019-04-30 Videoshare, Llc Sharing a streaming video
US9378191B1 (en) 2012-02-03 2016-06-28 Google Inc. Promoting content
US10061751B1 (en) 2012-02-03 2018-08-28 Google Llc Promoting content
US9471551B1 (en) 2012-02-03 2016-10-18 Google Inc. Promoting content
US9304985B1 (en) 2012-02-03 2016-04-05 Google Inc. Promoting content
US8712850B1 (en) 2012-02-03 2014-04-29 Google Inc. Promoting content
US10579709B2 (en) 2012-02-03 2020-03-03 Google Llc Promoting content
US10061493B2 (en) 2013-04-04 2018-08-28 Jung Hwan Park Method and device for creating and editing object-inserted images
EP2983078A4 (en) * 2013-04-04 2016-12-21 Jung Hwan Park Method and apparatus for creating and editing image into which object is inserted
US10824313B2 (en) 2013-04-04 2020-11-03 P.J. Factory Co., Ltd. Method and device for creating and editing object-inserted images

Also Published As

Publication number Publication date
WO2001016694A9 (en) 2001-10-18
AU7341900A (en) 2001-03-26


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/18-18/18, DRAWINGS, REPLACED BY NEW PAGES 1/18-18/18; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP