With recent advances in technology, solid state drives (SSDs) and other high-performance storage devices that do not require mechanical operation have been developed.
With the emergence of various technologies such as information lifecycle management (ILM), in which new information is stored on a high-speed disk and old information is stored on a low-speed, low-price disk, as well as tiering and virtualization, it has become common for various kinds of disks to be mixed in a storage apparatus. For this reason, for example, a fast-access storage device such as an SSD and a hard disk drive (HDD) having a lower access speed often coexist in a storage apparatus.
FIG. 13 is a diagram illustrating a storage system 101 according to the related art.
In the storage system 101 as illustrated in FIG. 13, for example, a plurality of storage apparatuses 110 and 120 are communicably connected to a server 102 via paths 130-1 to 130-6.
The server 102 may be an information processing apparatus which writes or reads data to or from a volume of the connected storage apparatuses 110 and 120. For example, the server 102 issues a data access request, such as a read or write command, to a volume of the storage apparatuses 110 and 120. The storage apparatuses 110 and 120 access the volume in accordance with the data access request and respond to the server 102. The server 102 includes, for example, a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), which are not illustrated. Further, the server 102 includes a multi-path driver unit 103, logical unit number (LUN) information 107-1 to 107-6, and host bus adapters (HBAs) 109-1 to 109-6.
The multi-path driver unit 103 switches the access path and retries the input/output (IO) when an IO timeout occurs, under a multi-path environment in which a plurality of paths to the storage apparatuses 110 and 120 are present. The IO timeout represents an event in which a timeout time (referred to as a timeout value), which is the maximum wait time from when an IO command is issued until a response to the IO command is returned, is exceeded.
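The retry behavior described above can be sketched as follows. This is a minimal illustration under assumed conditions, not the actual multi-path driver: the function names, path identifiers, and latency values are all hypothetical.

```python
# Minimal sketch of multi-path IO retry on timeout.
# All names and values are hypothetical, not from the actual driver.

class IOTimeoutError(Exception):
    """Raised when no response returns within the timeout value."""

# Simulated response latency per path, in seconds (hypothetical values).
PATH_LATENCY = {"130-1": 90.0, "130-2": 5.0, "130-3": 5.0}

def issue_io(path, command, timeout):
    """Simulate issuing one IO command on one path: if the simulated
    latency exceeds the timeout value, the IO times out."""
    if PATH_LATENCY[path] > timeout:
        raise IOTimeoutError(f"{command} on path {path} exceeded {timeout}s")
    return f"{command} completed via path {path}"

def multipath_io(paths, command, timeout):
    """Issue the IO on the first path; on an IO timeout, switch the
    access path and reissue the same command."""
    last_error = None
    for path in paths:
        try:
            return issue_io(path, command, timeout)
        except IOTimeoutError as err:
            last_error = err  # timeout: switch to the next path and retry
    raise last_error  # every available path timed out
```

Note that, as discussed later in this section, each retry must first wait out the full timeout time before the path is switched.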
The LUN information 107-1 to 107-6 is information about the LUNs of the storage apparatuses 110 and 120 which are connected to the server 102. For each LUN, for example, the disk type of the LUN (e.g., “SSD”) and apparatus identification information (e.g., “A4000-6A0298”) are stored in the LUN information 107-1 to 107-6.
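As an illustration, one entry of the LUN information can be modeled as a record holding the disk type and the apparatus identification information. The field names below are hypothetical; only the example values are taken from this description.

```python
from dataclasses import dataclass

@dataclass
class LUNInfo:
    """One entry of the server-side LUN information.
    Field names are illustrative, not from the actual driver."""
    disk_type: str     # e.g., "SSD", "HDD", "Nearline"
    apparatus_id: str  # e.g., "A4000-6A0298" (apparatus name plus ID)

# Example entry for the SSD in the storage apparatus named "A4000".
lun_info_1 = LUNInfo(disk_type="SSD", apparatus_id="A4000-6A0298")
```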
The HBAs 109-1 to 109-6 are adapters which connect the server 102 to external devices such as, for example, the storage apparatuses 110 and 120. Examples of the HBAs 109-1 to 109-6 include a SCSI adapter, a fiber channel (FC) adapter, and a serial ATA adapter. Alternatively, the HBAs 109-1 to 109-6 may be devices which connect apparatuses based on IDE, Ethernet (registered trademark), FireWire (registered trademark), universal serial bus (USB), or the like.
The storage apparatus 110 provides a storage area for the server 102 and is, for example, a RAID apparatus. In the example of FIG. 13, the storage apparatus 110 has “A4000” as an apparatus name and “6A0298” as an ID. The storage apparatus 110 includes controller modules (CMs) 111-1 and 111-2 and disks 113-1 to 113-4.
The CM 111-1 includes, for example, a CPU, a memory, and a cache memory, which are not illustrated, in addition to channel adapters (CAs) 112-1 and 112-2, and performs IO processing.
The CM 111-2 includes, for example, a CPU, a memory, and a cache memory, which are not illustrated, in addition to CAs 112-3 and 112-4, and performs IO processing.
In this configuration, the CM 111-1 and the CM 111-2 have the same configuration. Further, the CAs 112-1 to 112-4 have the same configuration.
The CAs 112-1 to 112-4 are interface controllers communicably connected to the server 102 and may be, for example, fiber channel adapters. The CAs 112-1, 112-2, and 112-4 are respectively connected to the HBAs 109-1 to 109-3 of the server 102 via the respective paths 130-1 to 130-3.
The paths 130-1 to 130-3 connect the server 102 to the storage apparatus 110, and may be, for example, fiber channels.
The disks 113-1 to 113-4 are storage apparatuses able to store information and include, for example, a near-line drive (hereinafter referred to as a nearline) 113-1, HDDs 113-2 and 113-3, and an SSD 113-4. Strictly speaking, the SSD is not a disk, but for convenience of explanation, the SSD is also handled as a disk, like the nearline and the HDDs.
The nearline 113-1 is a storage drive which uses a disc, to which a magnetic substance has been applied, as a recording medium and moves a magnetic head to read and write information from and to the disc, which is rotating at high speed. The nearline 113-1 generally has a capacity larger than that of the HDDs 113-2 and 113-3, but has a lower access speed.
The HDDs 113-2 and 113-3 are each a storage drive which uses a disc to which a magnetic substance has been applied as a recording medium and moves a magnetic head to read and write information from and to the disc, which is rotating at high speed.
The SSD 113-4 is a storage drive which uses semiconductor memory as a storage medium and is also referred to as a silicon disk drive or a semiconductor disk drive. Since the SSD 113-4, unlike the nearline 113-1 and the HDDs 113-2 and 113-3, generally has no head seek time caused by the movement of a magnetic head, the SSD 113-4 may perform random access at a higher speed than the nearline 113-1 and the HDDs 113-2 and 113-3. However, since the SSD 113-4 uses semiconductor memory, the SSD 113-4 is generally more expensive than the nearline 113-1 or the HDDs 113-2 and 113-3.
The storage apparatus 120 provides a storage area for the server 102 and may be, for example, a RAID apparatus. In the example of FIG. 13, the storage apparatus 120 has “A3000” as an apparatus name and “4A0290” as an ID. The storage apparatus 120 includes CMs 121-1 and 121-2 and disks 123-1 and 123-2.
The CM 121-1 includes, for example, a CPU, a memory, and a cache memory, which are not illustrated, in addition to the CAs 122-1 and 122-2, and performs IO processing.
The CM 121-2 includes, for example, a CPU, a memory, and a cache memory, which are not illustrated, in addition to the CAs 122-3 and 122-4, and performs IO processing.
The CAs 122-1 to 122-4 are each an interface controller communicably connected to the server 102 and may be, for example, a fiber channel adapter. The CAs 122-1, 122-2, and 122-4 are respectively connected to the HBAs 109-4 to 109-6 of the server 102 via the respective paths 130-4 to 130-6.
Here, the CM 121-1 and the CM 121-2 have the same configuration. Further, the CAs 122-1 to 122-4 have the same configuration.
The paths 130-4 to 130-6 are each a path which connects the server 102 to the storage apparatus 120 and may be, for example, an FC.
The disks 123-1 and 123-2 are storage apparatuses able to store information and are, for example, the HDDs 123-1 and 123-2.
The HDDs 123-1 and 123-2 are each a storage drive which uses a disc to which a magnetic substance has been applied as a recording medium and moves a magnetic head to read and write information from and to the disc, which is rotating at high speed.
In the storage system 101, IO accesses to the storage apparatuses 110 and 120 are generated by the server 102.
In this configuration, the server 102 side sets a timeout time (hereinafter referred to as an “IO timeout time”), which is the maximum time to wait for a response from the storage apparatuses 110 and 120 after the server 102 issues an IO request.
When there is no response from the storage apparatuses 110 and 120 even after the IO timeout time set on the server 102 side has elapsed, due to, for example, a failure of the storage apparatuses 110 and 120, the server 102 determines that a timeout error has occurred and performs error processing.
The latency from when the server 102 issues an IO to the storage apparatuses 110 and 120 until the storage apparatuses 110 and 120 respond is, for example, the total sum of the IO request transmission time, the disk seek time, the read/write time, the IO transmission time, and the like.
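This latency breakdown can be expressed numerically as follows; all of the individual times are hypothetical example values, as the actual values depend on the apparatus and its load.

```python
# Illustrative latency breakdown for one IO, in seconds.
# All values are hypothetical examples, not measurements.
io_request_transmission = 0.001   # server -> storage apparatus
disk_seek = 0.008                 # magnetic head movement (zero for an SSD)
read_write = 0.004                # media access time
io_response_transmission = 0.001  # storage apparatus -> server

latency = (io_request_transmission + disk_seek
           + read_write + io_response_transmission)
assert latency < 60.0  # well below a 60-second IO timeout time
```

Under normal operation the total stays far below the IO timeout time; a timeout arises only when one of these components grows abnormally, for example the seek or read/write time under heavy load or failure.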
When the server 102 sets a short IO timeout time, error detection and path switching when the storage apparatuses 110 and 120 fail can be performed quickly. However, when the IO timeout time is short, a timeout is erroneously detected when the latency becomes longer than usual due to, for example, a temporary increase in the load of the storage apparatuses 110 and 120, even when the storage apparatuses 110 and 120 are operating normally.
The server 102 may set only one IO timeout time. For example, in the Solaris (registered trademark) standard driver, the default value of the IO timeout time is 60 seconds. To change the value, a user rewrites the definition and restarts the server 102.
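For reference, rewriting this definition on Solaris is commonly done through a tunable of the standard sd driver in the /etc/system file, followed by a reboot. The fragment below is a sketch based on the commonly documented mechanism, stated here as an assumption rather than as part of this description.

```
* /etc/system: shorten the sd driver IO timeout from the 60-second
* default to 30 seconds (takes effect after a reboot).
set sd:sd_io_time = 30
```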
As an example of the related art, see, for example, Japanese Patent Application Laid-Open No. 2006-235843.
Since the SSD 113-4 has a short access time, the IO timeout time for the SSD 113-4 may be set to a short value.
However, as described above, only one IO timeout value is set for the storage apparatuses 110 and 120 in the related art. Therefore, when the same IO timeout value is used for both the faster disks 113 and the slower disks 113, and the IO timeout time is set to less than 60 seconds to accommodate the SSD, an IO timeout may frequently occur in the low-speed disks.
Even when a path exists as a switching destination for the IO, since path switching is performed only after waiting for the timeout time, it may not be possible to immediately switch to another path and reissue the IO command.
An objective of one aspect of the present disclosure is to shorten the response time at the time of IO issuance to a storage apparatus.
Other objectives of the present application are not limited to the above objective, and may include actions and effects which are derived from each component illustrated in the embodiments for carrying out the disclosure described below and which cannot be obtained by the related art.