WO1995029444A1 - Memory access controller - Google Patents
Memory access controller
- Publication number
- WO1995029444A1 (PCT/JP1995/000810)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- information
- storage medium
- access
- processing device
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99956—File allocation
Definitions
- the present invention relates to an information storage processing device, and is particularly suitable for application to a network storage server based on the client/server model.
- conventionally, a jukebox with built-in magnetic disks and magneto-optical disks is used with a workstation.
- part of the operating system on the workstation that connects to it functions as a network file system server, comprising device drivers, its own file system, and file device management software.
- this network file system server uses software that allows the storage of another computer connected to the network to be used, via the network, as if it were the storage of the local computer. Specifically, the storage of the other computer is mounted on a directory of the local computer via the network. This allows users to connect large-capacity network storage to their own computers and use it via the network.
- the first characteristic of this network file system server is that, while exploiting the high speed of the magnetic disk file server, it lowers the unit price of storage.
- all removable storage media in the jukebox (hereinafter referred to as removable media) are managed as one volume. This means that the system collectively manages the volume of removable media as expansion space on the magnetic disk, and users cannot use individual removable media freely.
- the NFS server has a storage configuration in which semiconductor memories, magnetic disks, optical disks, magneto-optical disks, and the like are hierarchized by access speed, and Hierarchical Storage Management (HSM) software manages the resources of the server.
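As an illustrative aside, the hierarchy-by-access-speed that HSM manages can be sketched as follows; the tier names, capacities, and access-frequency thresholds below are hypothetical, not taken from the patent.

```python
# Sketch of HSM tiering: storages ordered fastest to slowest, with
# (name, capacity in MB, access time in ms). Hot files go to fast,
# small tiers; cold files to slow, large tiers. All values are assumed.
TIERS = [
    ("semiconductor_memory", 64, 0.001),
    ("hard_disk", 1024, 10.0),
    ("magneto_optical_jukebox", 65536, 8000.0),
]

def choose_tier(accesses_per_day: int) -> str:
    """Pick a storage tier for a file from its access frequency."""
    if accesses_per_day >= 100:
        return TIERS[0][0]   # most frequently accessed
    if accesses_per_day >= 1:
        return TIERS[1][0]   # moderately accessed
    return TIERS[2][0]       # rarely accessed

print(choose_tier(500))
print(choose_tier(0))
```

The thresholds stand in for whatever migration policy the HSM software applies; the point is only that placement follows access frequency, matching capacity against speed.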
- the file data of each client tends to increase on the online storage, but the files that are accessed infrequently are also kept on the online storage. As a result, the usage efficiency of online storage is reduced.
- since the removable storage media are collectively managed by the system, there is a problem that a client cannot explicitly handle its own removable storage media in the system.
- the present invention has been made in view of the above points, and an object of the present invention is to propose an information storage processing device that can eliminate unfairness among users and improve user convenience.
Disclosure of the Invention
- the resources (2, 3, 4, 5, 6, 7, 8, 9) of information storage processing are divided and dynamically allocated to the user,
- Various kinds of storage (2, 6, 7, 14) consisting of storage resources in information storage resources (2 to 9) are integrated by a file system and hierarchized according to speed and characteristics.
- the file system is constructed on the hierarchical multi-type storage (2, 6, 7, 14), and the user can access arbitrary files without distinguishing between the types of storage (2, 6, 7, 14) or between file systems.
- FIG. 1 is a block diagram showing a schematic configuration of an embodiment of an information storage processing device according to the present invention.
- FIG. 2 is a block diagram showing the configuration of the information storage processing device.
- FIG. 3 is a schematic diagram illustrating the division of resources in the information storage processing device.
- FIG. 4 is a schematic diagram for explaining a software management module for divided resources in the information storage processing device.
- FIG. 5 is a schematic diagram for explaining the integration of various types of multiple file systems in the information storage processing device.
- FIG. 6 is a schematic diagram for explaining offline file management in the information storage processing device.
- FIG. 7 is a schematic diagram for explaining online and offline storage media management in the information storage processing device.
- FIG. 8 is a block diagram showing an automatic labeling mechanism in the information storage processing device.
- FIG. 9 is a schematic diagram for explaining automatic adjustment of the online capacity in the information storage processing device.
- FIG. 10 is a schematic diagram used for general description of a multiple file system according to another embodiment.
- FIG. 11 is a block diagram for explaining staging of a file in the information storage processing device.
- FIG. 12 is a schematic diagram for explaining the management of the removable storage in the information storage processing device.
- FIG. 13 is a flowchart illustrating an information storage processing procedure in the information storage processing device.
- FIG. 14 is a flowchart showing the garden manager generation processing procedure of FIG.
- FIG. 15 is a schematic diagram showing the path map table created in FIG.
- FIG. 16 is a flowchart showing the storage manager generation processing procedure of FIG.
- FIG. 17 is a flowchart showing the mount processing procedure of FIG.
- FIG. 18 is a flowchart showing the read/write access processing procedure of FIG.
- FIG. 19 is a flowchart showing the cache check processing procedure of FIG.
- FIG. 20 is a schematic diagram showing the cache data management list used in FIG.
- Fig. 21 is a flow chart showing the processing procedure of the garden file system (GFS) in Fig. 18.
- FIG. 22 is a flowchart showing the procedure for generating the media manager shown in FIG. 21.
- FIG. 23 is a flowchart showing the offline processing procedure of FIG.
- FIG. 24 is a schematic diagram illustrating the in-media management table of FIG.
- FIG. 25 is a flowchart showing the read cache processing procedure of FIG.
- FIG. 26 is a chart showing a media access management table used in FIGS. 19 and 25.
- FIG. 27 is a flowchart illustrating the write cache processing procedure of FIG.
- FIG. 28 is a flowchart showing the actual write processing procedure of FIG.
- FIG. 29 is a flowchart showing the procedure of the media ejection process in FIG. 28.
- FIG. 30 is a flowchart showing the other file system processing procedures in FIG.
BEST MODE FOR CARRYING OUT THE INVENTION
- reference numeral 1 denotes an information storage processing device as a whole. As a first storage it has an autochanger 2 in which a magneto-optical disk memory 2B is housed, and as second and third storages it has a hard disk memory 3 and a semiconductor memory 4 comprising semiconductor random access memory (RAM). The operating system, management software, and the like are stored in internal memory and executed by a central processing unit (CPU) 5.
- data can be written to or read from a storage area of a magneto-optical disk and a semiconductor memory device.
- a device called a jukebox can be applied as the autochanger 2.
- the information storage processing device 1 includes a plurality of types of storage whose access times for writing or reading increase in sequence, that is, the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B, which are managed via the bus 6 as one storage 7 as a whole.
- the CPU 5 transfers data to be written or read to and from the storage 7 via the network interface 8 to a network constituted by, for example, Ethernet, FDDI (fiber distributed data interface), or the like.
- the information storage processor 1 stores the data from the clients of the file system in the magneto-optical disk memory 2B, which is the first storage, by means of the autochanger 2. The magneto-optical disk DMO on which the data is stored can then be taken out by the attachment/detachment device 2A and kept in external storage by the operator, so that data can be stored outside.
- the CPU 5 causes the label printer 13 to print onto a label LBL the access information assigned to the extracted magneto-optical disk DMO.
- the printed label LBL is supplied to the operator, who attaches it to the removed magneto-optical disk DMO before storing the disk.
- when the information storage processor 1 receives, from the clients 10, 11, and 12, an access request for data stored on an externally stored magneto-optical disk DMO, the access information is notified to the operator, so that the operator can easily locate the corresponding magneto-optical disk DMO and mount it in the attachment/detachment device 2A of the autochanger 2.
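The labeling-and-lookup cycle just described can be sketched roughly as below; the registry structure and all identifiers are hypothetical illustrations, not the patent's implementation.

```python
# Sketch of the automatic labeling workflow: when a disk is ejected, the
# server records the access information printed on its label; when a client
# later requests that data, the operator is told which labelled disk to mount.

labels = {}  # hypothetical registry: disk id -> access info on its label

def eject(disk_id: str, access_info: str) -> str:
    """Record access info for an ejected disk and return the label text."""
    labels[disk_id] = access_info
    return f"LBL {disk_id}: {access_info}"

def locate(access_info: str) -> str:
    """Find which externally stored disk the operator must re-mount."""
    for disk_id, info in labels.items():
        if info == access_info:
            return disk_id
    raise KeyError(access_info)

print(eject("DMO1", "project-X archive"))
print(locate("project-X archive"))
```

The printed label carries the same access information as the registry entry, which is what lets the operator match a client's request to a physical disk.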
- the storage is divided into working sets of a plurality of resources, as shown by the dividing lines La, Lb, and Lc in FIG. 3, and each working set of divided resources is dynamically (i.e., changeably over time) allocated to a distinct software entity (hereinafter called a garden) that manages it.
- the storage capacities of the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B are divided by the dividing lines La, Lb, and Lc, respectively, so that the semiconductor memory 4 is cut out into four memory area portions AR11 to AR14, the hard disk memory 3 into four memory area portions AR21 to AR24, and the magneto-optical disk memory 2B into four memory area portions AR31 to AR34.
- the server manager S VM is a module that manages the entire resources of the information storage device 1, and generates a plurality of gardens GDN1 to GDN3 that manage the working set of the divided resources.
- the resources of the information storage processor 1 are divided and dynamically assigned to the gardens GDN1, GDN2, ..., GDNK as working sets of resources.
- to the first garden GDN1, the memory areas AR11, AR21, and AR31 of the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B are assigned; to the second garden GDN2, the memory areas AR12, AR22, and AR32 are assigned; and to the Kth garden GDNK, the memory areas AR1K, AR2K, and AR3K are assigned.
- one garden manager GDM is provided for each garden GDN, and a media manager MDM1, MDM2, ... is provided for each slot secured as part of a resource set in the magneto-optical disk memory 2B of the autochanger 2.
- the working sets of divided resources have a one-to-one relationship with the assigned gardens GDN1, GDN2, ..., GDNK, whereby each garden GDN1, GDN2, ..., GDNK has a working set of resources that it can use exclusively of the other gardens.
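A minimal sketch of this division into exclusive per-garden working sets, assuming hypothetical capacities and share fractions (none of these values come from the patent):

```python
# Sketch of the server manager's division of each storage along "dividing
# lines" into memory areas, one area per storage per garden, so that every
# garden gets an exclusive working set (cf. Fig. 3). Values are illustrative.

def divide(total: int, shares: list[float]) -> list[int]:
    """Split a storage capacity into portions given fractional shares."""
    return [int(total * s) for s in shares]

storages = {"semiconductor": 64, "hard_disk": 1024, "magneto_optical": 65536}
shares = [0.5, 0.3, 0.2]  # per-garden fractions; they need not be uniform

working_sets = {
    f"GDN{i + 1}": {name: divide(cap, shares)[i] for name, cap in storages.items()}
    for i in range(len(shares))
}
print(working_sets["GDN1"])
```

Because each capacity portion appears in exactly one garden's working set, the one-to-one, mutually exclusive relationship described above holds by construction.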
- the information storage processing device 1 of this embodiment hierarchizes, by access speed and/or capacity, the storage resources of the storage 7 comprising the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B of the autochanger 2.
- small-capacity high-speed storage such as the semiconductor memory 4 is arranged as the most frequently accessed storage resource; medium-capacity medium-speed storage such as the hard disk memory 3 is allocated as the second most frequently accessed storage resource; and large-capacity low-speed storage such as the magneto-optical disk memory 2B of the autochanger 2 is arranged as the third most frequently accessed storage resource.
- the layering based on the access speed and/or capacity of the storage resources is applied per working set of resources, i.e., per resource division area described above with reference to FIG. 3. The management of each garden GDN1, GDN2, ..., the entity that manages its resource set, is performed by the garden manager GDM, which exists singly in each garden, on the working set of its own resources. This is done as shown in FIG.
- the working set of resources managed by each garden GDN1, GDN2, ..., GDNK includes the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B of the autochanger 2.
- a server file system SFS is introduced as the framework for integrating them, and is constructed by the storage manager SGM that exists singly in each garden GDN. Therefore, the server file system SFS has a file management structure unique to the information storage processing device 1.
- the working sets of storage resources split by the server manager SVM are dynamically allocated (i.e., changeably over time) to the gardens GDN generated by the server manager SVM, and thus the storage manager SGM can build a file system on hierarchical, multi-type storage.
- the working sets of resources are dynamically allocated to the gardens GDN1, GDN2, ..., GDNK, and each garden stratifies its storage resources through the garden manager GDM.
- the garden manager GDM calls the storage manager SGM, and the called storage manager SGM constructs the server file system SFS on the hierarchical storage.
- the server file system SFS has a tree structure including a root a and leaves b, c, and d.
- the nodes e, f, and d holding files are physically placed as online storage by the storage manager SGM in the working set of hierarchized resources, that is, in the memory areas allocated from the divisions of the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B.
- a client file system CFS is built on the storage of each client, such as the PC clients 10 and 11 and the workstation client 12.
- the server file system SFS constructed on the storage 7 of the information storage processing device 1 is connected to the client file system CFS.
- the server file system SFS can be imported as part of the client file system CFS via the network interface 8 (Fig. 2). That is, by mounting the server file system SFS at node g, the file tree structure of the client file system CFS becomes an expanded tree structure that includes the file tree structure of the server file system SFS.
- the client file system CFS is thus extended by the server file system SFS, and the user (the client in the client-server model) can access the server file system SFS as part of the client file system CFS. Therefore, the user of the information storage processor 1 can access files in the server file system SFS through the client file system CFS without being aware of the physical storage location within the hierarchized storage.
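The transparent mount can be sketched as follows; the dict-based path tables and all names are a hypothetical simplification assumed for illustration, not the patent's actual file-system structures.

```python
# Sketch of mounting the server file system SFS at a node of the client file
# system CFS: path lookup crosses the mount point without the user seeing
# where the file physically lives.

cfs = {"/home": "dir", "/home/docs": "dir"}                 # client file system CFS
sfs = {"/a": "dir", "/a/b": "dir", "/a/f": "file-on-DMO3"}  # server file system SFS

mounts = {"/home/net": sfs}  # SFS imported at a node of the CFS (cf. node g)

def resolve(path: str):
    """Look a path up, following a mount point if one prefixes it."""
    for mount_point, remote in mounts.items():
        if path.startswith(mount_point):
            return remote.get(path[len(mount_point):] or "/")
    return cfs.get(path)

print(resolve("/home/net/a/f"))  # reaches a file stored on the server side
```

From the caller's point of view, `/home/net/a/f` is just another path in its own tree; the mount table is what silently redirects the lookup to the server's tree.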
- the tree structure of the server file system SFS shown in Fig. 5 is constructed in storage 7 (hereinafter referred to as online storage) whose access is controlled while always connected to the CPU 5.
- the autochanger 2, which constitutes part of the storage 7, has an online storage unit ONS constituted by the magneto-optical disks housed in it as storage media, and an offline storage unit OFS constituted by magneto-optical disks that are removable storage media kept externally via the attachment/detachment device 2A.
- the nodes of the tree structure of the server file system SFS are constructed on the magneto-optical disks DMO1 to DMO4 constituting the online storage unit ONS, and on the magneto-optical disks constituting the offline storage unit OFS, the offline file systems OFFS1 and OFFS2, which have the same tree structure as the server file system SFS, are configured.
- nodes j and k of the offline file systems OFFS1 and OFFS2 are connected to nodes f and d of the server file system SFS; thus the offline file systems OFFS1 and OFFS2 function as part of the server file system SFS and hence as part of the client file system CFS, and as a result the client file system CFS is extended accordingly.
- the client file system CFS is built on the storage of the client computer, and the nodes of the server file system SFS are built on the removable media of the online storage unit ONS, the magneto-optical disks DMO1 to DMO4 (Fig. 6).
- node a of the server file system SFS is placed on the magneto-optical disk DMO1, nodes b and c on the magneto-optical disk DMO2, node f on the magneto-optical disk DMO3, and node d on the magneto-optical disk DMO4.
- on the magneto-optical disk DMO11 in the offline storage unit OFS, the offline file system OFFS1, connected via node j to node f of the server file system SFS, is constructed.
- on the magneto-optical disk DMO12 in the offline storage unit OFS, the offline file system OFFS2, connected via node k to node d of the server file system SFS, is constructed.
- two virtual storage media VM1 and VM2 are installed in the online storage unit ONS. The first virtual storage medium VM1 provides a means for connecting the magneto-optical disk DMO3, on which node f of the server file system SFS is located, with the external magneto-optical disk DMO11, on which node j of the offline file system OFFS1 is located.
- the second virtual storage medium VM2 provides a means for connecting the magneto-optical disk DMO4, on which node d of the server file system SFS is located in the online storage unit ONS, with the external magneto-optical disk DMO12, on which node k of the offline file system OFFS2 is located.
- as its contents, the virtual storage medium VM1 stores the directory information (without the file entities) of the offline file system OFFS1 built on the magneto-optical disk DMO11 of the offline storage unit OFS, which is connected to node f present on the magneto-optical disk DMO3 of the online storage unit ONS.
- the virtual storage medium VM2 stores the directory information (without the file entities) of the offline file system OFFS2 built on the magneto-optical disk DMO12 of the offline storage unit OFS, which is connected to node d present on the magneto-optical disk DMO4 of the online storage unit ONS.
- the directory information of the file systems built on the media of the offline storage unit OFS is managed in the form of virtual storage media on the online storage unit ONS.
- the file system space can be expanded from online space to offline space.
- the server file system SFS in the online space and the offline file systems OFFS1 and OFFS2 in the offline space can be handled as if they were logically one file system residing on the online storage unit as a whole. Therefore, a user of the information storage processing device 1 can treat an offline file existing on the offline storage unit OFS as if it were a file existing on the online storage unit ONS.
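One way to picture a virtual storage medium is as a directory-only stub. The class below is purely illustrative — its name, methods, and error behavior are assumptions, not the patent's implementation.

```python
# Sketch of a virtual storage medium: it holds only the directory information
# of an offline file system (no file entities), so offline files appear in
# listings as if they were online, but reading one requires the real disk.

class VirtualMedium:
    def __init__(self, offline_disk: str, directory: list[str]):
        self.offline_disk = offline_disk  # e.g. the externally stored DMO11
        self.directory = directory        # directory info only, no file data

    def listdir(self) -> list[str]:
        return self.directory             # browsable while the disk is offline

    def read(self, name: str):
        # The entity is offline; the real disk must be mounted first.
        raise FileNotFoundError(f"{name} is on offline disk {self.offline_disk}")

vm1 = VirtualMedium("DMO11", ["j/report.txt", "j/data.bin"])
print(vm1.listdir())
```

Listing succeeds while reading fails, which mirrors the split the text describes: directory information online, file entities offline.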
- the user manages the storage media on the online storage unit ONS, where the server file system SFS is located, and the offline storage media on the offline storage unit OFS. This includes operations by the client in the client-server model.
- clients A, B, and C, like the clients of Fig. 6, have as the storage media of the online storage unit ONS, on which the client file system CFS shown in Fig. 5 is arranged, the magneto-optical disks DMO21, DMO22, DMO23, DMO24, DMO25, and DMO26.
- the magneto-optical disks DMO11 and DMO12 are used as the storage media of the offline storage unit OFS on which the offline file systems OFFS1 and OFFS2 of Fig. 5 are arranged.
- clients A and B have the magneto-optical disks DMO31, DMO32, and DMO33 in external storage.
- for client B, the magneto-optical disk DMO32 has already been removed from the online storage unit ONS in the direction of the offline storage unit OFS, and the virtual storage medium VM22 exists as information representing that ejection operation.
- for clients A and B, the virtual storage media VM21 and VM23 are provided in order to insert the magneto-optical disks DMO31 and DMO33 in the direction from the offline storage unit OFS to the online storage unit ONS.
- for clients A and C, the virtual storage media VM31 and VM32 exist in order to take out the magneto-optical disks DMO22 and DMO25 in the direction from the online storage unit ONS to the offline storage unit OFS.
- the connection information of the nodes j (Fig. 5) arranged on those media is also written to the virtual storage media formed in the online storage unit ONS.
- when the magneto-optical disk DMO22 is removed from the online storage unit ONS by client A, it is represented as the virtual storage medium VM31 and managed by client A as offline storage media of the offline storage unit OFS.
- for client B, the relationship between the virtual storage medium VM22 of the online storage unit ONS and the magneto-optical disk DMO32 of the offline storage unit OFS is in the same state as described above.
- the logical connection relationship between the server file system SFS located on the online storage media and the offline file systems OFFS1 and OFFS2 located on the offline storage media, as well as the correlation among the online media, the virtual storage media, and the offline storage media, are maintained and managed on the online media, so that online and offline storage media can be managed in the ejection direction, from online to offline.
- when client B inserts the offline storage medium DMO33, the virtual storage medium VM23 is replaced with the entity of the storage medium DMO33, and the offline file system arranged on the storage medium DMO33 becomes part of the server file system SFS located on the online storage unit ONS.
- the online storage media and the offline storage media can likewise be managed in the insertion direction, from offline to online.
- the label printer 13 constitutes an automatic labeling mechanism together with the attachment/detachment device 2A of the autochanger 2, as shown in FIG. 8.
- in the autochanger 2, a central processing unit (CPU) 21, according to the software modules arranged in a memory 22, controls a robotics unit 26 and a magneto-optical recording/reproducing drive 27 via an internal bus 23, an IO controller 24, and a bus 25.
- the robotics unit 26 and the magneto-optical recording/reproducing drive 27, which address the plurality of removable magneto-optical disks contained in a cartridge 28, constitute the magneto-optical disk memory 2B for writing or reading entity information. By controlling the robotics unit 26, a magneto-optical disk 30 inserted by the client through the media insertion/ejection slot 31 is housed in the cartridge 28 or taken out to the outside.
- the software module of each manager described above with reference to FIG. 4 is arranged in the memory 22 and executed by the CPU 21.
- the server file system SFS in FIG. 5 is stored in a magneto-optical disk as removable storage media in the cartridge 28.
- the operation of the software modules and the hardware when a client accesses the file system described above in Fig. 5 will be described, taking as examples the server file system SFS built on the working set of resources owned by a garden GDN, and the offline file systems OFFS1 and OFFS2 arranged on offline storage media.
- if a client makes an access request to node f of the server file system SFS (Fig. 5), the garden manager GDM finds the file within the management information of the file system that it owns and manages, and passes the management information of that file to the storage manager SGM (Fig. 4).
- the storage manager SGM reads, from the file management information, the slot number in the cartridge 28 assigned to the magneto-optical disk on which the file exists, and calls the corresponding media manager MDM. The called media manager MDM, based on the information on the storage medium holding the file, controls the robotics unit 26 through the IO controller 24 and the bus 25, and, by moving the corresponding storage medium stored in the cartridge 28 into the drive 27, accesses the node f requested by the client.
- the magneto-optical disk memory 2B thus completes the access operation in response to the access request from the client.
- for an access to a file on offline media, the garden manager GDM finds the file requested by the client within the management information of the file system held and managed by itself, and passes the management information for that file to the storage manager SGM. The storage manager SGM recognizes from the file management information that the file is located on offline media. This is executed by the storage manager SGM examining the information about the virtual media stored in the memory 22 as described above with reference to FIG.
- the storage manager SGM returns the storage media information (electronic label) held on the virtual storage medium VM21 to the garden manager GDM.
- the garden manager GDM communicates the storage media information of the electronic label to the client.
- based on the storage media information, the client searches the external storage for the corresponding magneto-optical disk DMO31, an offline storage medium of the offline storage unit OFS, and inserts it into the online storage unit ONS in place of the virtual storage medium VM21.
- the garden manager GDM updates the management information so that the magneto-optical disk DMO31, which had been managed as a virtual storage medium, is handled as actual media of the online storage unit ONS.
- the storage manager SGM calls the media manager MDM in order to move the inserted magneto-optical disk DMO31 to an empty slot of the cartridge 28.
- the media manager MDM moves the magneto-optical disk DMO31 that has just been inserted into the specified slot of the cartridge 28. Thereafter, node k is accessed by the same procedure as the access to node f described above.
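The three-stage access flow described above (garden manager, storage manager, media manager) can be sketched end to end as below; every table, name, and value is a hypothetical stand-in for the patent's management information, not its actual implementation.

```python
# Sketch of the online access path: the garden manager GDM finds the file's
# management info, the storage manager SGM reads the cartridge slot number,
# and the media manager MDM moves the disk into the drive.

file_table = {"f": {"disk": "DMO3", "slot": 3}}  # GDM's file management info
drive = {"loaded": None}                          # recording/reproducing drive 27

def media_manager_load(slot: int, disk: str) -> None:
    """Robotics moves the disk from its cartridge slot into the drive."""
    drive["loaded"] = (slot, disk)

def access(node: str) -> str:
    info = file_table[node]                 # garden manager GDM: find the file
    slot = info["slot"]                     # storage manager SGM: read slot no.
    media_manager_load(slot, info["disk"])  # media manager MDM: load the media
    return f"accessed {node} on {drive['loaded'][1]}"

print(access("f"))
```

An offline access would add one step before this flow: replacing the virtual storage medium with the real disk, after which the same three stages run unchanged.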
- the label printer 13 is a dot-impact printer installed inside the autochanger 2 and connected to the internal bus 23 by internal wiring, so that it can be controlled by the CPU 21 of the autochanger 2.
- the label is attached to the surface of the media as visual information, so that management of the offline storage media by the client becomes easy, and the client's operation of transferring offline storage media back to online storage can be executed easily and reliably.
- a working set of resources managed by the garden GDN1 is assigned to client A; it comprises the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B, which constitute the storage resources of the online storage unit ONS, and the magneto-optical disks in external storage, which constitute the offline storage unit OFS. The storage capacity of the garden GDN1 can be expressed as a value corresponding to the area of intersection between the band-shaped part representing the garden GDN1 and the bands representing the storage areas of the semiconductor memory 4, the hard disk memory 3, the magneto-optical disk memory 2B, and the external storage.
- the storage capacities of the gardens GDN1, GDN2, and GDN3 in the semiconductor memory 4, the hard disk memory 3, the magneto-optical disk memory 2B, and the external storage are displayed as (QA1, QA2, QA3, QA4), (QB1, QB2, QB3, QB4), and (QC1, QC2, QC3, QC4), respectively.
- the area ratios of the capacity indications QA1, QB1, and QC1 of the gardens GDN1, GDN2, and GDN3 differ, which means that the capacity of the semiconductor memory 4 allocated to the working set of resources of each garden is not uniform. The same applies to the division of the storage resources of the hard disk memory 3 and the magneto-optical disk memory 2B.
- each of the gardens GDN1, GDN2, and GDN3 tiers the storage of the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B in the working set of its own resources based on access time and storage capacity, and on the tiered storage resources (QA1, QA2, QA3), (QB1, QB2, QB3), and (QC1, QC2, QC3), respectively, the server file system SFS described in Fig. 5 is constructed.
- similarly, the offline file systems OFFS1 and OFFS2 described in Fig. 5 are constructed on the offline storage.
- the storage tiering and the file system construction are performed by the software modules consisting of the garden manager GDM, storage manager SGM, and media manager MDM of each garden GDN1, GDN2, and GDN3. Further, these software modules change, over time, the online storage capacity of the hierarchical storage resources (QA1, QA2, QA3), (QB1, QB2, QB3), and (QC1, QC2, QC3). This is done according to the access frequency and file sizes of the files in the server file system SFS for each client.
- by dynamically changing (over time) the capacities of the storage resources QA1, QB1, and QC1 of the semiconductor memory 4, the storage resources QA2, QB2, and QC2 of the hard disk memory 3, and the storage resources QA3, QB3, and QC3 of the magneto-optical disk memory 2B, which form the working sets of resources of the gardens GDN1, GDN2, and GDN3, according to the file access characteristics of each garden's client, resources can be allocated fairly and efficiently to each client.
- the information storage processor 1 automatically adjusts the online storage capacity to allocate resources to each client fairly and efficiently, and also as described above with reference to FIG.
- the online storage capacity itself can be dynamically increased.
- a server file system SFS is arranged on the online storage media in the online storage section NS of FIG. 7, that is, on the magneto-optical disks DM021 to DM026.
- offline storage media in the offline storage section OFS in FIG. 7, that is, the magneto-optical disks DM031 to DM033, are provided with the offline file systems OFFS1 and OFFS2. Therefore, in FIG. 9, the server file system SFS is placed on the working sets QA3, QB3, and QC3 of the magneto-optical disk storage resources of the magneto-optical disk memory 2B as online storage media, and the offline file systems OFFS1 and OFFS2 are located on the working sets QA4, QB4, and QC4 of the magneto-optical disk storage resources as offline storage media.
- even after the client has taken online media offline, the garden managers GDM1, GDM2, and GDM3 of the gardens GDN1, GDN2, and GDN3, through the software modules of the storage manager SGM and the media manager MDM, automatically select the infrequently accessed files built on the capacities QA3, QB3, and QC3 of the magneto-optical disk memory 2B of the autochanger 2, and set them as candidates for the offline file systems OFFS1 and OFFS2.
- the file system space is extended offline, and both online and offline storage media can be managed, so that offline files that do not fit in the client's online storage can still be manipulated online.
- online storage capacity can be dynamically increased by expanding client-driven online and offline management.
- the online storage capacity is automatically adjusted for each garden GDN based on online and offline media management, maintaining the integrity of the file system space extended offline. By doing so, it is possible to provide each client having a garden GDN with a file system whose storage capacity can be expanded without practical limitation.
- the CPU 5 of the information storage processing device 1 starts initialization by the main routine RT0 in FIG. 13, waits in step SP1 for a garden generation request to arrive at the information storage processing device 1 from any client via the network 9, and, when a positive result is obtained, generates a garden manager in step SP2. At this time, the garden manager executes the management processing procedure shown in FIG. 14: it creates a path map table as shown in FIG. 15 in step SP11, generates the storage manager shown in FIG. 4 in step SP12, and ends the process in step SP13.
- the path map table TBL1 is provided for each file system; it has information about each node constituting the server file system SFS, the offline file systems OFFS1 and OFFS2 shown in FIG. 5 or FIG. 10, and the storage A file system SAFS and storage B file system SBFS, and is stored in the hard disk memory 3.
- the path map table TBL1 stores a file system type as a header for an entry.
- as this file system type, besides the garden file system according to the present invention, there are other file systems such as the UNIX user file system.
- Each entry of the path map table TBL 1 is composed of a plurality of entry tables that represent "normal file”, "directory”, or “mount point” as a file type.
- when “file” is stored as the file type, the entry table holds file storage information such as the storage media number; the directory or mount point entry table TBL3, as shown in FIG. 15, stores, in addition to “directory” or “mount point”, a “path name” and a “pointer to another path map table” as other information.
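The table structure described here (TBL1 with a file-system-type header, and per-entry tables for files, directories, and mount points) can be modeled roughly as follows; all field names are illustrative assumptions, not values taken from the patent figures.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entry:
    file_type: str                        # "file", "directory", or "mount point"
    path_name: str
    storage_media_number: Optional[int] = None     # file storage information
    linked_table: Optional["PathMapTable"] = None  # pointer for mount points

@dataclass
class PathMapTable:
    file_system_type: str                 # header of the entry, e.g. "GFS"
    entries: List[Entry] = field(default_factory=list)

# A mount point in the server file system pointing at another table:
offline_tbl = PathMapTable("GFS")
sfs_tbl = PathMapTable("GFS")
sfs_tbl.entries.append(Entry("mount point", "/sfs/j", linked_table=offline_tbl))
print(sfs_tbl.entries[0].file_type)  # mount point
```

Keeping the "pointer to another path map table" as an object reference mirrors how a mount-point entry redirects path resolution into a separate file system's table.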
- when the garden manager GDM of the gardens GDN1 to GDNK in FIG. 4 is formed, the CPU 5 generates a storage manager in step SP12.
- the storage manager enters a management processing routine RT2 as shown in FIG. 16, waits for an access request to arrive at step SP21, and determines in step SP22 whether or not the access request is a mount request.
- the CPU 5 moves to a mount processing subroutine RT3 and executes a mount processing procedure as shown in FIG.
- the CPU 5 first proceeds to step SP31, where it specifies the file system type of the path map table of the mount destination (for example, the nodes j and k of the offline file systems OFFS1 and OFFS2 in FIG. 5) and sets the file system; in the next step SP32, it sets the mount point (TBL3) for the file type of the path map table (FIG. 15), and then returns from step SP33 to the storage manager processing routine (FIG. 16).
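The two mount steps (SP31: identify the file system type of the mount destination; SP32: record a mount-point entry in the path map table) can be sketched as below. The dictionary layout and function signature are simplified assumptions for illustration only.

```python
def mount(path_map: dict, mount_path: str, target_fs_type: str,
          target_table: dict) -> None:
    """Record a mount-point entry (TBL3-style) in a path map table."""
    # SP31: specify the file system type of the mount destination.
    # SP32: set the mount point entry with path name and table pointer.
    path_map[mount_path] = {"file_type": "mount point",
                            "path_name": mount_path,
                            "fs_type": target_fs_type,
                            "other_table": target_table}

sfs: dict = {}
offs1 = {"fs_type": "GFS"}          # stand-in for the offline file system's table
mount(sfs, "/sfs/j", "GFS", offs1)
print(sfs["/sfs/j"]["file_type"])   # mount point
```

Path lookup that reaches such an entry would continue resolution in `other_table`, which is what lets the offline file systems appear inside the server file system's tree.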
- if a negative result is obtained in step SP22, this means that the access request is not a mount request but a read or write request, and the CPU 5 moves to the read/write access processing subroutine RT4.
- upon entering the read/write access processing subroutine RT4, the CPU 5 first, in step SP41, looks up the file designated by the client in the path map table (FIG. 15), that is, examines the file system type (FIG. 15(A)) and the file type, file or mount point (FIG. 15(B) or (C)), and then executes the cache check subroutine RT5.
- the cache check subroutine RT5 executes a process of checking whether or not the file data to be stored in the magneto-optical disk memory 2B has been read to the hard disk memory 3.
- as shown in FIG. 19, the CPU 5 determines in step SP51 whether or not the current access request is a read access, and if it is a read access, determines in step SP52 whether or not the corresponding queue is present in the cache data management list.
- the cache data management list is stored in the hard disk memory 3 and, as shown in FIG. 20, holds cache blocks CB1, CB2, … corresponding to all the slots of the magneto-optical disk memory 2B, each connected in a queue.
- this cache data management list is managed by the LRU (least-recently-used) method: the most recently accessed block is placed at the head of the queue, that is, as the leftmost block CB1 in FIG. 20, so that blocks flow toward the tail as they age.
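The LRU queue behavior described for the cache data management list can be sketched as follows, with `collections.OrderedDict` standing in for the linked queue of cache blocks CB1, CB2, …; the class and method names are invented for illustration.

```python
from collections import OrderedDict

class CacheDataList:
    """LRU-managed queue: head = most recently accessed block."""
    def __init__(self) -> None:
        self._queue: "OrderedDict[int, bytes]" = OrderedDict()

    def touch(self, block_no: int, data: bytes = b"") -> None:
        """Place the accessed block at the head of the queue (as in SP53)."""
        if block_no in self._queue:
            data = self._queue.pop(block_no)
        self._queue[block_no] = data
        self._queue.move_to_end(block_no, last=False)  # move to the head

    def evict_tail(self) -> int:
        """Remove the least recently used block (as in SP91/SP101)."""
        block_no, _ = self._queue.popitem(last=True)
        return block_no

lru = CacheDataList()
for b in (1, 2, 3):
    lru.touch(b)
lru.touch(1)              # block 1 becomes the head again
print(lru.evict_tail())   # 2  (the oldest access is evicted first)
```

This is the same policy the text describes: a cache hit requeues the block at the head, and eviction for new writes takes blocks from the tail.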
- if an affirmative result is obtained in step SP52, the CPU 5 proceeds to step SP53, executes a queuing process that places the found queue at the head of the read cache management list, reads the file data of the block subjected to that queuing process, and then returns from step SP59 to the read/write access processing routine RT4 (FIG. 18).
- when a negative result is obtained in step SP52, the CPU 5 proceeds to step SP55 and performs cache miss processing, recording the fact of the cache miss in the internal memory 6, then moves to step SP54 and ends the process. Further, if a negative result is obtained in step SP51 described above, this means that the current access request is in the write mode; the CPU 5 proceeds to step SP56, writes the file data to be stored to the hard disk memory 3, stores in the internal memory 6 the fact that the access request is a write access, then proceeds to step SP54 described above and ends the process.
- after the cache check processing is completed, the CPU 5 returns to the read/write access processing subroutine RT4 (FIG. 18) and determines in step SP42 whether or not a cache hit has occurred.
- if so, it returns in step SP43 to the storage management processing routine (FIG. 16).
- if a negative result is obtained in step SP42, the CPU 5 proceeds to step SP44 and determines, from the file system type in the path map table (FIG. 15), whether the access request that arrived in the above-mentioned step SP21 (FIG. 16) concerns file data managed by the garden file system (GFS). If a positive result is obtained, it enters the garden file system processing routine RT6.
- as shown in FIG. 21, the CPU 5 determines in step SP61 whether or not the disk has been accessed for the first time; when a positive result is obtained, the CPU 5 proceeds to step SP62, where the media manager shown in FIG. 4 is generated, and then returns from step SP63 to the read/write access processing routine RT4 (FIG. 18).
- the CPU 5 executes the reading process of the media access management table (FIG. 26) in step SP71 and then determines in step SP72 whether or not the relevant medium is offline; when a positive result is obtained, the processing of the offline processing subroutine RT8 is performed, and then the process returns to step SP71.
- in step SP72, the CPU 5 determines whether or not a media number stored in the media access management table TBL31 matches the media number of the magneto-optical disk in which the file to be accessed is stored.
- the media number of the magneto-optical disk storing the file to be accessed can be known from the storage media number in the file storage information of the path map table TBL1.
- in step SP81, a process of outputting the media label of the in-media management table TBL11 (FIG. 24) to the client is executed.
- the in-media management table TBL11 is provided for each storage medium (that is, each magneto-optical disk of the magneto-optical disk memory 2B). As shown in FIG. 24(A), this table TBL11 contains the “media label” and management information about the “file number” and “node pointer” of each file stored on the storage medium.
- a node table TBL 12 as shown in FIG. 24 (B) is stored at the position indicated by the node pointer of each file number in the in-media management table TBL 11 of each storage medium.
- each node table TBL12 stores “owner”, “permission”, “access time”, “change time”, a “single block pointer” (which, as shown in FIG. 24(C), points to data that is the file entity), and a “double block pointer”.
- after step SP81, the CPU 5 prompts the user, in the next step SP82, to insert the designated medium into the attaching/detaching device 2A of the autochanger 2.
- the flow returns to step SP71 of the media manager processing routine RT7 (FIG. 22) via step SP83.
- the user can determine the target storage medium based on the label attached to the medium. If a negative result is obtained in step SP72 of the media manager processing routine RT7 (FIG. 22), this means that the file requested for access is not offline, that is, the media is online (the media is inserted in the slot).
- the CPU 5 moves to step SP73, where it reads the node indicated by the node pointer of the file number in the in-media management table (FIG. 24) into the node table TBL12, and then determines in step SP74 whether a read command is necessary.
- if a positive result is obtained in step SP74, the CPU 5 executes the processing of the read cache processing routine RT9.
- upon entering the read cache processing routine RT9, the CPU 5 executes, in step SP91, a process of removing the block connected to the end of the queue of the cache data management list (for a plurality of blocks as required); in step SP92, it executes the process of reading (reading ahead) file data from the block indicated by the block pointer of the i-th node table; in step SP93, it connects the queue containing the file data just read to the head of the cache data management list; and in the next step SP94, after executing the update processing of the access count data and staging information of the media access management table TBL31 (FIG. 26), it returns via step SP95 to the media manager subroutine RT7 (FIG. 22).
- the media access management table TBL31 stores, for each slot number of the entire storage, an “access count” (the access history, that is, the number of accesses), “staging information” (for example, information indicating an access time), and media remaining-capacity information; the CPU 5 uses the staging information and the access count information to execute the update process in step SP94.
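A per-slot table like TBL31, with the access count, staging information, and remaining-capacity data described above, can be sketched as follows. The record and method names are illustrative assumptions.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class SlotRecord:
    media_number: Optional[int] = None   # None = no medium in the slot
    access_count: int = 0                # access history (number of accesses)
    last_access: float = 0.0             # staging information (access time)
    remaining_capacity: int = 0          # remaining-capacity data

class MediaAccessTable:
    """One row per slot number of the entire storage, like TBL31."""
    def __init__(self, slots: int) -> None:
        self.rows = {n: SlotRecord() for n in range(1, slots + 1)}

    def record_access(self, slot: int) -> None:
        # Update the access count and staging information (as in step SP94).
        row = self.rows[slot]
        row.access_count += 1
        row.last_access = time.time()

tbl31 = MediaAccessTable(slots=8)
tbl31.rows[1].media_number = 42
tbl31.record_access(1)
print(tbl31.rows[1].access_count)  # 1
```

The same rows then serve three later decisions in the text: finding free media by remaining capacity, finding empty slots by `media_number is None`, and choosing ejection victims by the lowest access count.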
- upon entering the write cache subroutine RT10, as shown in FIG. 27, the CPU 5 executes, in step SP101, a process of removing the block connected to the end of the cache data management list (FIG. 20) (for a plurality of blocks if necessary), and in the next write preparation routine RT11, prepares to write the file data to the block indicated by the block pointer of the i-th node table TBL12 (FIG. 24(B)) of the file concerned.
- as shown in FIG. 28, the CPU 5 determines in step SP111 whether or not the specified medium has free space.
- this determination confirms whether or not the file data to be written can be written to the specified medium, that is, the magneto-optical disk, given its remaining space.
- a check is performed using the data stored for each slot number as the remaining capacity data of the media access management table TBL31 (FIG. 26).
- if a positive result is obtained in step SP111, this means that the file data to be written by the current access request can be written to the magneto-optical disk without any shortfall; at this time, the CPU 5 moves to step SP112, updates the corresponding node table TBL12 of the in-media management table (FIG. 24) and the entry table TBL2 or TBL3 of the path map table TBL1 (FIG. 15), and then returns in step SP113 to the write cache processing routine RT10 (FIG. 27).
- if a negative result is obtained in step SP111 above, this means that the designated storage medium, that is, the magneto-optical disk, has no room for storing the information data requested to be accessed.
- the CPU 5 goes to step SP114 and determines whether or not any empty medium is present in the magneto-optical disk memory 2B in the autochanger 2.
- the CPU 5 searches, from the management information of the media access management table TBL31 (FIG. 26), for a slot number whose remaining capacity data is large and to which no garden is allocated.
- if an affirmative result is obtained in step SP114, this means that a free medium has been found in the autochanger 2, and the CPU 5 proceeds to step SP115.
- in step SP115, processing for creating management information for the storage medium in the in-media management table TBL11 (FIG. 24) is executed; in step SP116, the node table of the in-media management table TBL11 (FIG. 24) and the entry table TBL2 or TBL3 of the path map table TBL1 are updated, so that the preparation for writing the requested information data to the medium available in the autochanger 2 is completed.
- the CPU 5 returns to the above-described write cache processing routine RT10 (FIG. 27) through step SP113.
- if a negative result is obtained in step SP114, this means that there is no empty storage medium in the autochanger 2; at this time, the CPU 5 proceeds to step SP117 and determines whether or not the media access management table TBL31 has a free slot.
- the determination in step SP117 is a process of searching the media number column of the media access management table TBL31 (FIG. 26) for a slot number for which no data is stored. If a positive result is obtained, the slot of that slot number has no storage medium, that is, no magneto-optical disk, inserted; at this time, the CPU 5 moves to step SP118, outputs to the client via the network interface 8 information indicating that a blank medium, that is, a magneto-optical disk on which no information is written, should be inserted into the slot of the corresponding slot number, and then waits in step SP119 for the new magneto-optical disk to be inserted.
- after the new magneto-optical disk is inserted, the CPU 5 updates the node table TBL12 of the in-media management table TBL11 and the entry table TBL2 or TBL3 of the path map table TBL1, and returns from step SP113 to the write cache processing routine RT10 (FIG. 27).
- if a negative result is obtained in step SP117 described above, this means that there is no free slot in the autochanger; at this time, the CPU 5 enters the media discharge processing subroutine RT12, performing a media ejection process such as pulling some of the magneto-optical disks out of the autochanger 2 and thereby creating an empty slot in the autochanger 2, and then returns to the write cache processing routine RT10 (FIG. 27) through the above steps SP118, SP119, SP116, and SP113.
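The write-preparation cascade just described (SP111: room on the target medium; SP114: another free medium; SP117: an empty slot for a blank medium; otherwise media discharge) can be sketched as one function. The classes, exception, and the way the ejection branch is modeled are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

class BlankMediaNeeded(Exception):
    """Signal to the client that a blank medium should be inserted (SP118)."""

@dataclass
class Medium:
    free_space: int
    garden: Optional[str] = None   # None = not allocated to any garden

def prepare_write(target: Medium, media: List[Medium],
                  slots: Dict[int, Optional[Medium]]) -> Medium:
    if target.free_space > 0:                 # SP111: room on the target
        return target
    for m in media:                           # SP114: search for a free medium
        if m.garden is None and m.free_space > 0:
            return m
    for slot_no, m in slots.items():          # SP117: search for an empty slot
        if m is None:
            raise BlankMediaNeeded(slot_no)
    # No empty slot either: eject the least-used medium (subroutine RT12);
    # modeled here by simply clearing the first medium's allocation.
    victim = media[0]
    victim.garden, victim.free_space = None, 100   # assumed capacity
    return victim

full = Medium(free_space=0, garden="GDN1")
spare = Medium(free_space=50)
print(prepare_write(full, [full, spare], {1: full, 2: spare}) is spare)  # True
```

Each branch corresponds to one of the steps in the text, with the exception standing in for the message sent to the client over the network interface 8.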
- upon entering the media discharge processing subroutine RT12, as shown in FIG. 29, the CPU 5 selects in step SP121, from the media access count and staging information in the media access management table TBL31 (FIG. 26), the storage medium with the least access (in other words, the storage medium accessed with low frequency), and executes the processing of step SP122 for that storage medium; if it cannot be discharged, the CPU 5 repeats step SP121 to select the next least accessed storage medium and executes step SP122 again.
- if a negative result is obtained in step SP122, this means that the selected storage medium can be discharged; at this time, the CPU 5 proceeds to step SP123, looks at the cache data management list (FIG. 20), and, after the file data cached for that storage medium has been flushed, performs the discharge and print processing from the next step SP124.
- the CPU 5 executes flush processing such as writing the file data of the cached blocks stored on the hard disk to the corresponding storage medium of the magneto-optical disk memory.
- in step SP124, the CPU 5 determines from the in-media management table TBL11 (FIG. 24) whether or not a media label has already been written for the storage medium of the corresponding slot number.
- if not, a media label is created in step SP125 and printed in step SP126 on the label printer 13 (FIG. 2) through the printer interface. This allows the client to attach the label to the cartridge of the magneto-optical disk so that it can be distinguished from other magneto-optical disks even when it goes offline.
- in step SP127, the media label is set in the media label field of the in-media management table TBL11 (FIG. 24), and then the media discharge processing subroutine RT12 (FIG. 29) is completed via step SP128.
- if an affirmative result is obtained in step SP124 described above, this means that the media label has already been written to the in-media management table TBL11, that is, a label is already attached to the cartridge of the magneto-optical disk.
- the CPU 5 jumps from step SP125 to SP127 and returns to the write preparation processing routine RT11 (FIG. 28) described above from step SP128.
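The discharge sequence (SP121: pick the least-accessed medium; SP123: flush its cached data; SP124 to SP127: print and record a label only if the cartridge has none) can be sketched as follows; the table layout and helper callbacks are hypothetical.

```python
from typing import Callable, Dict

def discharge_one(table: Dict[int, dict],
                  flush: Callable[[int], None],
                  print_label: Callable[[int], None]) -> int:
    """Eject the slot with the smallest access count; return its slot number."""
    # SP121: least-accessed slot first (access count, then staging time).
    slot = min(table, key=lambda s: (table[s]["access_count"],
                                     table[s]["last_access"]))
    flush(slot)                          # SP123: flush cached file data
    if not table[slot]["has_label"]:     # SP124: label already written?
        print_label(slot)                # SP125/SP126: create and print label
        table[slot]["has_label"] = True  # SP127: record it in the table
    return slot

tbl = {1: {"access_count": 9, "last_access": 5.0, "has_label": True},
       2: {"access_count": 1, "last_access": 2.0, "has_label": False}}
ejected = discharge_one(tbl, flush=lambda s: None,
                        print_label=lambda s: None)
print(ejected)  # 2
```

Checking the label flag before printing matches the SP124/SP125 jump in the text: an already-labeled cartridge skips straight to updating the table.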
- in step SP102, a process of allocating a cache block from the cache data management list for the data requested to be written and connecting that cache block to the head of the queue is executed; in the next step SP103, the staging information of the media access management table TBL31 (FIG. 26) is updated, and the write cache processing subroutine RT10 is terminated.
- when the write cache processing ends, the processing returns to the above-described media manager processing subroutine RT7.
- the garden file system (GFS) processing routine RT6 (FIG. 21) is terminated via step SP75, and the processing returns to the read/write access processing routine RT4.
- when the CPU 5 determines in step SP61 of the garden file system (GFS) processing routine RT6 (FIG. 21) that the disk access is not the first one, it executes the media manager processing, and then returns in step SP63 to the above-mentioned read/write access processing routine RT4 (FIG. 18).
- in step SP43, the processing returns to the storage manager processing routine RT2 (FIG. 16).
- upon entering the other file system processing subroutine RT14, the CPU 5, in step SP141, changes the path to the path name of the file system held in the garden file system (GFS) path map table TBL1.
- the process then proceeds to step SP143, where it is determined from the media access management table TBL31 whether the file requested to be accessed relates to an offline file.
- if a negative result is obtained, this means that online processing needs to be performed; at this time, the CPU 5 returns from step SP144 to the above read/write access processing subroutine RT4 (FIG. 18).
- when the CPU 5 has completed the processing of the read/write access processing subroutine RT4, it returns via step SP43 to the above-described storage manager processing.
- the CPU 5 waits for a new access request to be generated at step SP21.
- the operations described above can thus be executed by the CPU 5 in accordance with the processing procedures described above with reference to the figures.
- FIG. 10 shows another embodiment of the present invention.
- in the information storage processing device 1, storage configured by a write-once optical disk memory is connected to the node b of the server file system SFS.
- by mounting, on the node b, the node h of the storage A file system SAFS built on that storage, the files in the storage A file system SAFS are handled as part of the server file system SFS.
- similarly, by mounting, on the node c of the server file system SFS, the node i of the storage B file system SBFS built on storage configured by an optical disk memory, the files in the storage B file system SBFS are handled as part of the server file system SFS.
- in the information storage processing device 1, in addition to the magneto-optical disk storage, file systems built on other types of storage, that is, on the write-once disk memory and the optical disk memory, can be mounted, and users can access any file system built on these different types of storage through the client file system CFS without being aware of the storage type. That is, the nodes b, c, d, and f of the tree structure in the server file system SFS serve as mount points for file systems having file management structures other than the server file system SFS and the client file system CFS.
- the storage manager SGM called from the garden manager GDM performs the mounting.
- because the storage manager SGM has a module that understands the structure of the file system to be combined, various file systems built on various types of storage can be handled as extensions of the server file system SFS.
- for example, the storage A file system SAFS is built on storage such as a write-once optical disk, and by mounting its node h on the node b in the server file system SFS, the files in the storage A file system SAFS can be handled as part of the server file system SFS.
- similarly, by mounting the node i of the storage B file system SBFS, built on storage such as an optical disk, on a node in the server file system SFS, the storage B file system SBFS can be treated as an extension of the server file system SFS.
- in this way, the information storage processing device 1 can integrate file systems constructed on various types of storage, and the client, without needing to be aware of the storage, can access files in multiple file systems, such as the server file system SFS, the storage A file system SAFS, and the storage B file system SBFS, through the client file system CFS as extensions of the client file system CFS.
- FIG. 11 shows another embodiment of the present invention.
- in the case of FIG. 11, the information storage processing device layers the storage from the client storage unit CLS to the offline storage unit OFS by storage access speed.
- the diagram is equivalent to FIG. 9, but the feature here is that each storage is treated as a cache: the storage closest to the client (the storage with the fastest access speed for the client), the client storage CLS, is the primary cache, and, in descending order of access speed, the semiconductor memory 4 is called the secondary cache, the hard disk memory 3 the tertiary cache, and the magneto-optical disk memory 2B of the autochanger 2 the quaternary cache.
- This is an extension of the concept equivalent to a primary cache and a secondary cache for the main memory (semiconductor memory) of a microprocessor to storage including offline.
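The four-level (plus offline) cache idea can be sketched as a simple restaging rule: a file moves one tier toward the primary cache when it is hot and one tier toward offline when it is cold. The tier list matches the text; the thresholds and function name are illustrative assumptions, not values from the patent.

```python
TIERS = ["CLS", "semiconductor", "hard disk", "autochanger", "offline"]

def restage(tier: int, accesses_per_day: float) -> int:
    """Return the new tier index for a file (0 = primary cache, CLS)."""
    if accesses_per_day > 10 and tier > 0:
        return tier - 1   # hot file: stage toward the primary cache
    if accesses_per_day < 1 and tier < len(TIERS) - 1:
        return tier + 1   # cold file: stage toward offline storage (OFS)
    return tier

tier = TIERS.index("autochanger")
tier = restage(tier, accesses_per_day=25)
print(TIERS[tier])  # hard disk
```

Applying this rule periodically per file reproduces the behavior described next: frequently accessed files migrate toward the primary cache, and rarely accessed files eventually reach the offline storage.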
- the client accesses files in file systems such as the server file system SFS and the offline file systems OFFS1 and OFFS2 built on the layered storage; in the case of FIG. 11, the storage manager SGM, which is unique to each garden GDN, automatically stages each file according to its access frequency, either to a high-speed cache such as the client storage CLS or the semiconductor memory 4 (eventually to the primary cache) or to a low-speed cache such as the autochanger 2 (eventually to the offline storage OFS). This improves access performance for frequently accessed files and allows the hierarchical storage to be used effectively.
- the files in the server file system SFS accessed by the client are placed in the magneto-optical disk memory 2B of the autochanger 2 in FIG.
- a file moves between the caches over time due to staging, and the storage manager SGM collects statistical information such as the client's accesses to the file, the storage in which the file is currently staged, and the time the file has resided in each cache.
- based on this statistical information, the user of the information storage processing device 1 can analyze the operating status of the information storage processing device 1 and the usage status of its resources by applications, and, based on the analysis results, can tune the distribution of resources and the processing speed of the applications and the information storage processing device 1.
- in the information storage processing device 1, this statistical information is managed by the garden manager GDM unique to each garden GDN; by applying online and offline media management, files created by the client on the online storage that are accessed very infrequently are automatically organized as offline files, and the online media on which those files are placed can be discharged offline by the procedure described above.
- the process of organizing files as offline files can be realized by managing the slots of the magneto-optical disks as removable storage in the autochanger 2, as shown in FIG. 12. FIG. 12 shows the capacity of the magneto-optical disk memory 2B in the storage resources QA3, QB3, and QC3 of each garden GDN1, GDN2, and GDN3 in FIG. 9, a capacity proportional to the number of slots holding the cartridges (FIG. 8).
- clients A, B, and C having the gardens GDN1, GDN2, and GDN3 currently have the slots (SL1, SL2), (SL3 to SL6), and (SL7 to SL9), respectively, with a magneto-optical disk inserted in each slot as a removable storage medium.
- the maximum number of slots of the auto changer 2 having the magneto-optical disk memory 2B is 8 slots of the slots SL1 to SL8, and there is no empty slot at present.
- when the server manager SVM responds to an access request of client C for an offline storage medium, that is, the magneto-optical disk DM042, the garden manager GDM that holds online media on which infrequently accessed files of the server file system SFS are located automatically migrates the relevant online media to offline media.
- the garden GDN1 of client A moves the magneto-optical disk that was the online storage medium in slot SL2 to the outside as an offline storage medium.
- as a result, the offline storage medium DM042 can be brought online into the garden GDN3 of client C and can function as the online storage medium of slot SL9.
- client A's garden GDN1 then has only one slot SL1, while client C's garden GDN3 has the three slots SL7, SL8, and SL9; the slot allocation changes dynamically.
- by dynamically changing, according to the client's file access frequency and file size, the number of slots of the magneto-optical disk autochanger 2 that the gardens GDN1 and GDN3 of clients A and C hold, frequently accessed client files can remain online.
- while the slots of the magneto-optical disk autochanger 2 belonging to the gardens GDN1 and GDN3 of clients A and C are dynamically changed, this does not apply to the slots owned by the garden GDN2 of client B, which are assigned by the user as slots that can be used exclusively by the garden GDN2 of client B.
- a magneto-optical disk as a removable storage medium is inserted in slots SL3 and SL4 of the garden GDN2 of client B, but slots SL5 and SL6 are empty slots in which no removable storage medium is inserted.
- the slots SL5 and SL6 are not allocated to the gardens GDN1 and GDN3 of the other clients A and C; the garden GDN2 of client B occupies the slots SL5 and SL6.
- the client B can exclusively use the slots SL5 and SL6 as necessary, regardless of the frequency of file access.
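The contrast between dynamically reallocated slots and user-locked exclusive slots can be sketched with a small manager class; the API below is hypothetical and not taken from the patent.

```python
from typing import Dict, Set

class SlotManager:
    """Slots move between gardens with demand, unless locked by the user."""
    def __init__(self) -> None:
        self.owner: Dict[str, str] = {}   # slot -> garden
        self.locked: Set[str] = set()

    def assign(self, slot: str, garden: str, locked: bool = False) -> None:
        self.owner[slot] = garden
        if locked:
            self.locked.add(slot)         # exclusive use, like SL5/SL6

    def reassign(self, slot: str, garden: str) -> bool:
        """Move a slot to another garden unless it is locked."""
        if slot in self.locked:
            return False
        self.owner[slot] = garden
        return True

mgr = SlotManager()
mgr.assign("SL2", "GDN1")
mgr.assign("SL5", "GDN2", locked=True)
print(mgr.reassign("SL2", "GDN3"))  # True  (dynamic reallocation)
print(mgr.reassign("SL5", "GDN1"))  # False (exclusive to GDN2)
```

The lock set models the user-assigned exclusive slots: they are simply skipped by the demand-driven reallocation, regardless of access frequency.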
- in the embodiment described above, the information storage processing device 1 layers, with access speed and storage capacity as the hierarchy conditions, storage consisting of the semiconductor memory 4, the hard disk memory 3, and the magneto-optical disk memory 2B having the attaching/detaching device 2A; however, the storage and the hierarchy conditions are not limited to these, and storage such as a write-once optical disk that can be written once and read many times, a read-only optical disk, and a tape streamer for sequential access may be applied, with hierarchies formed based on the characteristics of the applied storage.
- according to the present invention as described above, by adopting a device configuration having a storage hierarchy including offline storage that cannot be accessed online, constructing by software a storage and file management system that includes the offline storage, and providing it as storage including offline storage media, it is possible to eliminate unfairness to users, improve user-friendliness, and realize an information storage processing device whose capacity for storing data can be increased without practical limitation.
- when the client accesses a file that does not exist online, the system can notify the client that the offline medium in which the file exists should be introduced.
- media information consisting of an electronic label is automatically written on the medium, and at the same time, by attaching a visually identifiable label, the user can manage the offline media and bring them online more easily and reliably.
- the online storage capacity can be increased by a factor of 3 to 10 by automatically ejecting files having a low access frequency to offline storage.
- useless files of the client can be automatically organized offline.
- the directories of files automatically dumped to offline media are managed online as virtual media and can be brought online on demand.
- by managing file media including offline media and using the online storage capacity efficiently, it is possible to realize files of practically unlimited capacity.
- the file system unique to each storage can be mounted in the internal unique file system, which allows clients to access the various file systems unique to each storage as extensions of the internal file system.
- The user can operate the server based on collected statistical information, analyzing the status and usage of the server resources covered by the aggregation, and can tune application and server performance based on the analysis results.
- Each file is placed in the storage hierarchy according to its access frequency and size.
- The system automatically reserves and releases the magneto-optical disk autochanger slots allocated to clients, adjusting each client's online capacity so that frequently accessed client files are kept online.
- A client can lock a slot for exclusive use, dedicating it to a specific application regardless of file access frequency. A locked slot can also be shared with other clients that use the same application, such as a database.
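The slot allocation and locking described in the two bullets above can be sketched as follows. Slot counts, client names, and the exclusive-lock semantics shown are assumptions for illustration, not the patent's definitive mechanism.

```python
class SlotManager:
    """Sketch of autochanger slot allocation with per-client locking."""

    def __init__(self, total_slots):
        self.free = set(range(total_slots))
        self.assigned = {}  # slot -> client (adjustable allocation)
        self.locked = {}    # slot -> client (exclusive use)

    def reserve(self, client):
        # Ordinary allocation; may later be reclaimed when the system
        # adjusts each client's online capacity.
        slot = self.free.pop()
        self.assigned[slot] = client
        return slot

    def lock(self, client, slot):
        # Exclusive use: the slot stays with this client regardless
        # of how often its files are accessed.
        if self.assigned.get(slot) != client:
            raise PermissionError("client does not hold this slot")
        self.locked[slot] = client

    def release(self, slot):
        if slot in self.locked:
            raise PermissionError("locked slots must be unlocked first")
        self.assigned.pop(slot, None)
        self.free.add(slot)
```

A locked slot is simply exempt from the automatic capacity adjustment, which is what lets a client pin storage for a specific application such as a shared database.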
- The information storage processing device can serve various clients requiring large-capacity storage, for example for storing electronic publishing data, securities data, and catalog publishing data.
- The information storage processing device can be used for document image processing, for example for storing copies of government-office documents or drawings related to patents.
- The information storage processing device can also be used in tasks such as game software development.
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/557,124 US5784646A (en) | 1994-04-25 | 1995-04-25 | Hierarchical data storage processing apparatus for partitioning resource across the storage hierarchy |
EP95916045A EP0706129A4 (en) | 1994-04-25 | 1995-04-25 | INFORMATION STORAGE PROCESSOR |
JP52752995A JP3796551B2 (ja) | 1994-04-25 | 1995-04-25 | 情報記憶処理装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP11052894 | 1994-04-25 | ||
JP6/110528 | 1994-04-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995029444A1 true WO1995029444A1 (fr) | 1995-11-02 |
Family
ID=14538098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1995/000810 WO1995029444A1 (fr) | 1994-04-25 | 1995-04-25 | Controleur d'acces memoire |
Country Status (6)
Country | Link |
---|---|
US (2) | US5784646A (ja) |
EP (1) | EP0706129A4 (ja) |
JP (1) | JP3796551B2 (ja) |
KR (1) | KR100380878B1 (ja) |
CN (1) | CN1093289C (ja) |
WO (1) | WO1995029444A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007110931A1 (ja) * | 2006-03-28 | 2007-10-04 | Fujitsu Limited | 名前空間複製プログラム、名前空間複製装置、名前空間複製方法 |
CN102333123A (zh) * | 2011-10-08 | 2012-01-25 | 北京星网锐捷网络技术有限公司 | 文件存储方法、设备、查找方法、设备和网络设备 |
Families Citing this family (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW344178B (en) * | 1996-12-16 | 1998-11-01 | Toshiba Co Ltd | Information presentation device and method |
US5960446A (en) * | 1997-07-11 | 1999-09-28 | International Business Machines Corporation | Parallel file system and method with allocation map |
US6366953B2 (en) * | 1997-08-06 | 2002-04-02 | Sony Corporation | System and method for recording a compressed audio program distributed from an information center |
JPH11110140A (ja) * | 1997-10-03 | 1999-04-23 | Matsushita Electric Ind Co Ltd | データ記録装置および方法 |
US6205476B1 (en) * | 1998-05-05 | 2001-03-20 | International Business Machines Corporation | Client—server system with central application management allowing an administrator to configure end user applications by executing them in the context of users and groups |
US7165152B2 (en) * | 1998-06-30 | 2007-01-16 | Emc Corporation | Method and apparatus for managing access to storage devices in a storage system with access control |
WO2000004483A2 (en) | 1998-07-15 | 2000-01-27 | Imation Corp. | Hierarchical data storage management |
US6751604B2 (en) * | 1999-01-06 | 2004-06-15 | Hewlett-Packard Development Company, L.P. | Method of displaying temporal and storage media relationships of file names protected on removable storage media |
JP3601995B2 (ja) * | 1999-03-01 | 2004-12-15 | パイオニア株式会社 | 情報記録媒体のストッカー及びチェンジャー |
US6324062B1 (en) | 1999-04-02 | 2001-11-27 | Unisys Corporation | Modular packaging configuration and system and method of use for a computer system adapted for operating multiple operating systems in different partitions |
JP4151197B2 (ja) * | 1999-09-06 | 2008-09-17 | ソニー株式会社 | 記録再生装置及び記録再生方法 |
US8024419B2 (en) * | 2000-05-12 | 2011-09-20 | Sony Corporation | Method and system for remote access of personal music |
US6829678B1 (en) * | 2000-07-18 | 2004-12-07 | International Business Machines Corporation | System for determining the order and frequency in which space is allocated on individual storage devices |
US6981005B1 (en) * | 2000-08-24 | 2005-12-27 | Microsoft Corporation | Partial migration of an object to another storage location in a computer system |
FR2816090B1 (fr) * | 2000-10-26 | 2003-01-10 | Schlumberger Systems & Service | Dispositif de partage de fichiers dans un dispositif a circuit integre |
JP4622101B2 (ja) * | 2000-12-27 | 2011-02-02 | ソニー株式会社 | 情報処理装置、情報処理装置の情報処理方法および情報処理システム |
US8458754B2 (en) | 2001-01-22 | 2013-06-04 | Sony Computer Entertainment Inc. | Method and system for providing instant start multimedia content |
US6990667B2 (en) | 2001-01-29 | 2006-01-24 | Adaptec, Inc. | Server-independent object positioning for load balancing drives and servers |
US20020116283A1 (en) * | 2001-02-20 | 2002-08-22 | Masayuki Chatani | System and method for transfer of disc ownership based on disc and user identification |
US7228342B2 (en) | 2001-02-20 | 2007-06-05 | Sony Computer Entertainment America Inc. | System for utilizing an incentive point system based on disc and user identification |
US6901446B2 (en) * | 2001-02-28 | 2005-05-31 | Microsoft Corp. | System and method for describing and automatically managing resources |
US7315887B1 (en) * | 2001-04-11 | 2008-01-01 | Alcatel Lucent | Facilitating integration of communications network equipment inventory management |
US6950824B1 (en) * | 2001-05-30 | 2005-09-27 | Cryptek, Inc. | Virtual data labeling and policy manager system and method |
US6931411B1 (en) * | 2001-05-30 | 2005-08-16 | Cryptek, Inc. | Virtual data labeling system and method |
US7200609B2 (en) * | 2001-07-19 | 2007-04-03 | Emc Corporation | Attribute based resource allocation |
US6775745B1 (en) * | 2001-09-07 | 2004-08-10 | Roxio, Inc. | Method and apparatus for hybrid data caching mechanism |
US7136883B2 (en) * | 2001-09-08 | 2006-11-14 | Siemens Medial Solutions Health Services Corporation | System for managing object storage and retrieval in partitioned storage media |
JP4168626B2 (ja) * | 2001-12-06 | 2008-10-22 | 株式会社日立製作所 | 記憶装置間のファイル移行方法 |
US7024427B2 (en) * | 2001-12-19 | 2006-04-04 | Emc Corporation | Virtual file system |
US20040083370A1 (en) * | 2002-09-13 | 2004-04-29 | Sun Microsystems, Inc., A Delaware Corporation | Rights maintenance in a rights locker system for digital content access control |
US7913312B2 (en) * | 2002-09-13 | 2011-03-22 | Oracle America, Inc. | Embedded content requests in a rights locker system for digital content access control |
US9489150B2 (en) | 2003-08-14 | 2016-11-08 | Dell International L.L.C. | System and method for transferring data between different raid data storage types for current data and replay data |
WO2005017737A2 (en) * | 2003-08-14 | 2005-02-24 | Compellent Technologies | Virtual disk drive system and method |
US20050086430A1 (en) * | 2003-10-17 | 2005-04-21 | International Business Machines Corporation | Method, system, and program for designating a storage group preference order |
US7814131B1 (en) * | 2004-02-02 | 2010-10-12 | Network Appliance, Inc. | Aliasing of exported paths in a storage system |
US7177883B2 (en) * | 2004-07-15 | 2007-02-13 | Hitachi, Ltd. | Method and apparatus for hierarchical storage management based on data value and user interest |
US7788299B2 (en) * | 2004-11-03 | 2010-08-31 | Spectra Logic Corporation | File formatting on a non-tape media operable with a streaming protocol |
JP2006302052A (ja) * | 2005-04-22 | 2006-11-02 | Hitachi Ltd | ストレージシステムおよびデジタル放送システム |
US20070083482A1 (en) * | 2005-10-08 | 2007-04-12 | Unmesh Rathi | Multiple quality of service file system |
JP2007148982A (ja) * | 2005-11-30 | 2007-06-14 | Hitachi Ltd | ストレージシステム及びその管理方法 |
KR100803598B1 (ko) * | 2005-12-28 | 2008-02-19 | 삼성전자주식회사 | 화상 형성 장치에 연결된 외부 저장매체의 이미지 파일들을관리하는 방법 및 장치 |
US20070261045A1 (en) * | 2006-05-05 | 2007-11-08 | Dell Products L.P. | Method and system of configuring a directory service for installing software applications |
US8095576B2 (en) * | 2006-11-06 | 2012-01-10 | Panasonic Corporation | Recording device |
US20080147974A1 (en) * | 2006-12-18 | 2008-06-19 | Yahoo! Inc. | Multi-level caching system |
US8996409B2 (en) | 2007-06-06 | 2015-03-31 | Sony Computer Entertainment Inc. | Management of online trading services using mediated communications |
US9483405B2 (en) | 2007-09-20 | 2016-11-01 | Sony Interactive Entertainment Inc. | Simplified run-time program translation for emulating complex processor pipelines |
KR20090045980A (ko) * | 2007-11-05 | 2009-05-11 | 엘지전자 주식회사 | 광디스크 드라이브 및 이를 이용한 광고 및 서비스 시스템 |
US7836030B2 (en) * | 2007-11-13 | 2010-11-16 | International Business Machines Corporation | Data library optimization |
US8447421B2 (en) * | 2008-08-19 | 2013-05-21 | Sony Computer Entertainment Inc. | Traffic-based media selection |
US8290604B2 (en) * | 2008-08-19 | 2012-10-16 | Sony Computer Entertainment America Llc | Audience-condition based media selection |
US10325266B2 (en) | 2009-05-28 | 2019-06-18 | Sony Interactive Entertainment America Llc | Rewarding classes of purchasers |
US20110016182A1 (en) | 2009-07-20 | 2011-01-20 | Adam Harris | Managing Gifts of Digital Media |
US8874628B1 (en) * | 2009-10-15 | 2014-10-28 | Symantec Corporation | Systems and methods for projecting hierarchical storage management functions |
US8433759B2 (en) | 2010-05-24 | 2013-04-30 | Sony Computer Entertainment America Llc | Direction-conscious information sharing |
US8504487B2 (en) | 2010-09-21 | 2013-08-06 | Sony Computer Entertainment America Llc | Evolution of a user interface based on learned idiosyncrasies and collected data of a user |
US8484219B2 (en) | 2010-09-21 | 2013-07-09 | Sony Computer Entertainment America Llc | Developing a knowledge base associated with a user that facilitates evolution of an intelligent user interface |
US8762668B2 (en) * | 2010-11-18 | 2014-06-24 | Hitachi, Ltd. | Multipath switching over multiple storage systems |
US9105178B2 (en) | 2012-12-03 | 2015-08-11 | Sony Computer Entertainment Inc. | Remote dynamic configuration of telemetry reporting through regular expressions |
US10891370B2 (en) * | 2016-11-23 | 2021-01-12 | Blackberry Limited | Path-based access control for message-based operating systems |
US10423334B2 (en) | 2017-01-03 | 2019-09-24 | International Business Machines Corporation | Predetermined placement for tape cartridges in an automated data storage library |
WO2020180045A1 (en) * | 2019-03-07 | 2020-09-10 | Samsung Electronics Co., Ltd. | Electronic device and method for utilizing memory space thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03171238A (ja) * | 1989-08-29 | 1991-07-24 | Microsoft Corp | 設置可能なファイルシステムにおいてダイナミックボリュームトラッキングを行う方法及び装置 |
JPH0659957A (ja) * | 1991-06-27 | 1994-03-04 | Digital Equip Corp <Dec> | データを記憶するファイル・システム及び記憶空間を割り当てる方法 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4207609A (en) * | 1978-05-08 | 1980-06-10 | International Business Machines Corporation | Method and means for path independent device reservation and reconnection in a multi-CPU and shared device access system |
US4396984A (en) * | 1981-03-06 | 1983-08-02 | International Business Machines Corporation | Peripheral systems employing multipathing, path and access grouping |
US4455605A (en) * | 1981-07-23 | 1984-06-19 | International Business Machines Corporation | Method for establishing variable path group associations and affiliations between "non-static" MP systems and shared devices |
US4974156A (en) * | 1988-05-05 | 1990-11-27 | International Business Machines | Multi-level peripheral data storage hierarchy with independent access to all levels of the hierarchy |
US4987533A (en) * | 1988-05-05 | 1991-01-22 | International Business Machines Corporation | Method of managing data in a data storage hierarchy and a data storage hierarchy therefor with removal of the least recently mounted medium |
JPH0215482A (ja) * | 1988-07-04 | 1990-01-19 | Naotake Asano | フロツピーデイスクラベル印字装置 |
EP0389151A3 (en) * | 1989-03-22 | 1992-06-03 | International Business Machines Corporation | System and method for partitioned cache memory management |
US5214768A (en) * | 1989-11-01 | 1993-05-25 | E-Systems, Inc. | Mass data storage library |
US5239650A (en) * | 1990-05-21 | 1993-08-24 | International Business Machines Corporation | Preemptive demount in an automated storage library |
US5121483A (en) * | 1990-05-21 | 1992-06-09 | International Business Machines Corporation | Virtual drives in an automated storage library |
US5050229A (en) * | 1990-06-05 | 1991-09-17 | Eastman Kodak Company | Method and apparatus for thinning alphanumeric characters for optical character recognition |
US5459848A (en) * | 1991-03-07 | 1995-10-17 | Fujitsu Limited | Library apparatus with access frequency based addressing |
JPH06309200A (ja) * | 1991-04-10 | 1994-11-04 | Internatl Business Mach Corp <Ibm> | ボリュームからオブジェクトを読取る方法、並びに階層式記憶システム及び情報処理システム |
JPH0540582A (ja) * | 1991-08-07 | 1993-02-19 | Shikoku Nippon Denki Software Kk | フアイル処理装置 |
JPH05189286A (ja) * | 1992-01-09 | 1993-07-30 | Hitachi Ltd | ディスクキャッシュ制御システムおよび制御方法 |
JPH05274200A (ja) * | 1992-03-25 | 1993-10-22 | Fujitsu Ltd | パス名検索におけるネームキャッシュ機構のパージ制御方法 |
US5418971A (en) * | 1992-04-20 | 1995-05-23 | International Business Machines Corporation | System and method for ordering commands in an automatic volume placement library |
JP3543974B2 (ja) * | 1992-05-29 | 2004-07-21 | キヤノン株式会社 | 情報処理装置および情報処理方法 |
JPH06110809A (ja) * | 1992-09-30 | 1994-04-22 | Toshiba Corp | ネットワークファイルシステムのマウント方式 |
US5572694A (en) * | 1992-11-25 | 1996-11-05 | Fujitsu Limited | Virtual system for detecting access paths belonging to same group from plurality of access paths to reach device designated by command with reference to table |
US5479581A (en) * | 1993-12-30 | 1995-12-26 | Storage Technology Corporation | Multiple library media exchange system and method |
-
1995
- 1995-04-25 WO PCT/JP1995/000810 patent/WO1995029444A1/ja not_active Application Discontinuation
- 1995-04-25 US US08/557,124 patent/US5784646A/en not_active Expired - Lifetime
- 1995-04-25 CN CN95190342A patent/CN1093289C/zh not_active Expired - Fee Related
- 1995-04-25 KR KR1019950705925A patent/KR100380878B1/ko not_active IP Right Cessation
- 1995-04-25 EP EP95916045A patent/EP0706129A4/en not_active Withdrawn
- 1995-04-25 JP JP52752995A patent/JP3796551B2/ja not_active Expired - Fee Related
-
1998
- 1998-07-20 US US09/119,506 patent/US6085262A/en not_active Expired - Lifetime
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03171238A (ja) * | 1989-08-29 | 1991-07-24 | Microsoft Corp | 設置可能なファイルシステムにおいてダイナミックボリュームトラッキングを行う方法及び装置 |
JPH0659957A (ja) * | 1991-06-27 | 1994-03-04 | Digital Equip Corp <Dec> | データを記憶するファイル・システム及び記憶空間を割り当てる方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0706129A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007110931A1 (ja) * | 2006-03-28 | 2007-10-04 | Fujitsu Limited | 名前空間複製プログラム、名前空間複製装置、名前空間複製方法 |
JP4699516B2 (ja) * | 2006-03-28 | 2011-06-15 | 富士通株式会社 | 名前空間複製プログラム、名前空間複製装置、名前空間複製方法 |
CN102333123A (zh) * | 2011-10-08 | 2012-01-25 | 北京星网锐捷网络技术有限公司 | 文件存储方法、设备、查找方法、设备和网络设备 |
Also Published As
Publication number | Publication date |
---|---|
US6085262A (en) | 2000-07-04 |
KR100380878B1 (ko) | 2003-08-02 |
US5784646A (en) | 1998-07-21 |
KR960703480A (ko) | 1996-08-17 |
EP0706129A1 (en) | 1996-04-10 |
EP0706129A4 (en) | 1999-09-22 |
CN1127560A (zh) | 1996-07-24 |
CN1093289C (zh) | 2002-10-23 |
JP3796551B2 (ja) | 2006-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1995029444A1 (fr) | Controleur d'acces memoire | |
JP4771615B2 (ja) | 仮想記憶システム | |
US5805864A (en) | Virtual integrated cartridge loader for virtual tape storage system | |
EP1384135B1 (en) | Logical view and access to physical storage in modular data and storage management system | |
US7246207B2 (en) | System and method for dynamically performing storage operations in a computer network | |
JP3630408B2 (ja) | データ記憶装置ライブラリ | |
US7206905B2 (en) | Storage system and method of configuring the storage system | |
US8572330B2 (en) | Systems and methods for granular resource management in a storage network | |
US7409509B2 (en) | Dynamic storage device pooling in a computer system | |
US6604165B1 (en) | Library control device for logically dividing and controlling library device and method thereof | |
US20020091828A1 (en) | Computer system and a method of assigning a storage device to a computer | |
WO2000002125A2 (en) | Method for selectively storing redundant copies of virtual volume data on physical data storage cartridges | |
WO2005010757A1 (ja) | ファイル管理方法及び情報処理装置 | |
US5940849A (en) | Information memory apparatus and library apparatus using a single magnetic tape shared with a plurality of tasks | |
US6260006B1 (en) | System and method for multi-volume tape library | |
US5787446A (en) | Sub-volume with floating storage space | |
AU673021B2 (en) | Management apparatus for volume-medium correspondence information for use in dual file system | |
JPH0667811A (ja) | 多重化ディスク制御装置 | |
JPH07271524A (ja) | データ格納装置 | |
JPH0934791A (ja) | 情報記憶装置 | |
JPH08110839A (ja) | 集合型データ処理装置 | |
JPH09265416A (ja) | 階層的情報管理方法及びその実施装置 | |
JPH10222930A (ja) | 光学的記憶システム及びコピー処理プログラムを記録したコンピュータ読み取り可能な記録媒体 | |
JPH10124388A (ja) | 記憶制御装置 | |
JPH09330263A (ja) | データアーカイブシステム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 95190342.X Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1995916045 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 08557124 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019950705925 Country of ref document: KR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 1995916045 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1995916045 Country of ref document: EP |