REAL WORLD DISK COMPARISONS

Robert C. Peckham
Computer Programming Services, Glendale, CA

ABSTRACT

Many computer users are interested in the actual data transfer rates achieved when real controllers and disks operate with a real operating system, doing real data transfers, as compared with the data transfer rates claimed in manufacturers' literature. Test programs were written to exercise the various operational parameters of a disk while doing the type of transfers that might be observed in real-world applications. These test programs were run on a wide variety of disks by approximately twenty DEC end-user sites. The disks tested ranged from the RX01 through the more common cartridge disks, on to some relatively large and exotic Winchester and memory disks, and even an Ethernet virtual disk. The results are presented in tabular form so that direct comparison is possible, and should be useful to anyone concerned with real-world disk performance.

INTRODUCTION

The authors (and, we discovered, a significant number of other people) were interested in the actual data transfer rates achieved when real controllers and disks operate through a real operating system, doing real data transfers. This paper presents the results of a group of test programs which were run on numerous disk and controller combinations.

All PDP-11 and VAX systems run disk-based operating systems, so the performance of the system disk, and of any auxiliary disks, has a major impact on overall system performance. Most sites have no realistic way to compare the price/performance characteristics of one disk against another, or the absolute performance of any disk in their system, before purchase. Many sites have had the disappointing experience of purchasing a disk based on salesmen's claims, or on printed "performance specifications", and subsequently discovering, to their chagrin, that those specifications meant relatively little in a "real world" environment.

The test sequence which produced the results reported in this paper consisted of a group of programs which created and manipulated files with a variety of file layouts, with the primary measurement being the elapsed time for the test.

The process of getting data to and from a disk involves both hardware and software. When a user program requests a disk operation, the operating system fields the request and eventually issues the necessary commands to the disk controller. The controller translates these commands into hardware instructions to the disk drive, which causes data to be read from the disk, transferred back through the controller, and eventually into memory, where it is available to the user program.

The first step is for the operating system to handle the request. The system may have to load the device handler, do a context switch, swap jobs, or do other housekeeping before it can actually start issuing commands to the disk controller. The time consumed in this process is called "system latency". Once commands are issued to the controller, the controller may have processing to do before it starts sending instructions to the disk. This delay is "controller latency".

Much of the time, a disk transfer will require that the disk head be moved to a track other than the current one. There is a significant delay involved in this process, since the head positioning system is an electro-mechanical device and responds relatively slowly. This "seek delay" is usually quite large and is often the overriding factor in disk access time. Once the head has been moved to the proper track, the system must wait until the desired sector moves under the head. This "rotational latency" is basically a function of the rotational speed of the disk.

Finally, the disk subsystem is ready to transfer data into memory. The upper limit on this process, the "peak transfer rate", is the speed at which bit cells pass under the disk head. This maximum rate may be degraded by delays in the bus DMA transfer system (i.e., if the bus cannot keep up with data coming off the disk). It is common to use techniques such as interleaving to ensure that as few disk rotations as possible are required to read the data once the beginning of the desired data has been found. In most cases, the data transfer rate is not a major contributor to the time it takes to access the desired data.
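To put these components in perspective, here is a back-of-the-envelope budget for a single random one-block access, written as a small FORTRAN fragment in the spirit of the test programs. The figures used (30 ms average seek, 3600 RPM, 500,000 bytes/sec transfer) are illustrative assumptions, not measurements from this study.

      PROGRAM ACCESS
C     Illustrative arithmetic only; all figures are assumptions.
      REAL SEEK, ROT, XFER
C     Average seek time, in milliseconds (assumed).
      SEEK = 30.0
C     Average rotational latency: half a revolution at 3600 RPM,
C     i.e. 0.5 * (60000 ms / 3600 rev) = 8.3 ms.
      ROT = 0.5 * (60000.0 / 3600.0)
C     Time to move one 512-byte block at 500,000 bytes/sec, in ms.
      XFER = (512.0 / 500000.0) * 1000.0
      WRITE (*,*) 'SEEK, ROT, XFER (MS):', SEEK, ROT, XFER
      WRITE (*,*) 'TOTAL MS PER ACCESS:', SEEK + ROT + XFER
      END

On these assumed figures, one access costs about 39 ms, of which roughly 30 ms is seek delay and only about 1 ms is the transfer itself; this is why seek performance dominates most of the results that follow.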
There are other places where time can be lost in this process, which might be categorized as "overhead". Interrupts that interfere with the system's disk-handling software may cause delays that result in extra rotational latencies. Interrupts and higher priority DMA devices may cause the controller-to-memory data transfer to fall behind the disk-to-controller transfer; if this delay is large enough, further rotational latencies may occur. Depending on the operating system file structure, more than one seek and read may be required to service a single user program data request. Systems that scatter portions of files around the disk may need to "collect" the scattered data needed by the user program, and they will probably need to read and update the various bookkeeping data areas to keep track of which disk areas are in use and which are available.

Because we wanted "real world" results, all disk activity in the test programs run by the various sites occurred through the normal operating system I/O system. Therefore, the various latencies discussed above are included in the test program run times. The test programs were written in FORTRAN, so the FORTRAN run-time library overhead is included as well; this would be typical of any system using higher order languages.

RT-11 OPERATING SYSTEM

The host operating system for the study was RT-11, single-job monitor, Version 5.0. RT-11 is characterized by fairly low I/O system overhead; however, most of the tests used named files on the test device. This means that an RT-11 file system was created on the disk being tested, and the overhead involved in opening and closing files is included in the test results.

Since directory processing is a significant part of most disk activity, a brief description of the RT-11 file system is in order. An RT-11 directory is kept in the low-numbered blocks of a disk, beginning in block 6. (Blocks 0 through 5 contain boot and identification information.) The directory is allocated, at disk initialization time, in two-block chunks called segments, up to a maximum of 31 (decimal) segments. The number of directory segments is either user-specified or is determined by the total number of blocks available on the device. Each directory segment may contain up to seventy-two file entries.
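The bookkeeping limits follow directly from that description. The FORTRAN fragment below is not part of the study; its constants simply restate the figures above to work out the capacity of a maximum-size directory.

      PROGRAM DIRCAP
C     Capacity of a full-size RT-11 directory, per the text above.
      INTEGER SEGS, ENTS, BLKS
C     Maximum number of two-block directory segments.
      SEGS = 31
C     At most seventy-two entries (files or empties) per segment.
      ENTS = SEGS * 72
C     Directory blocks used, starting at block 6.
      BLKS = SEGS * 2
      WRITE (*,*) 'MAXIMUM DIRECTORY ENTRIES:', ENTS
      WRITE (*,*) 'DIRECTORY OCCUPIES BLOCKS 6 THROUGH', 5 + BLKS
      END

A full 31-segment directory thus occupies blocks 6 through 67 and can describe at most 2,232 files and empty areas combined.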
RT-11 files are allocated as contiguous blocks from the available "empty" space on a disk. Each file requires one directory entry. In addition, each "empty" area requires a directory entry, so that at any time the full disk space is described in the directory, either as allocated to files or as "empty". The segments in the directory are connected in a forward-linked list, with files that have lower starting block numbers appearing in the directory before files with higher block numbers. Since the directory is organized by block number, a directory search by file name (the usual operation) proceeds sequentially from the front of the directory until either the file or the end of the directory is found.

RT-11 directory processing costs should be affected primarily by two hardware factors. First, because the directory is kept in the low-numbered blocks of the disk, a directory operation will usually require a seek; and since "new" files tend to be at the opposite end of the "used" portion of the disk from the directory, this seek will be longer than the "average" seek implied by the disk usage pattern. Second, once the directory has been read, there is the CPU time involved in searching through it.

In an attempt to make the effect of the directory organization on the study as uniform as possible, the tests were always run on an "empty" (i.e., freshly initialized) disk, with the file creation and manipulation performed by the test programs and distributed command files.

LIMITS ON THE APPLICABILITY

This study was aimed at measuring disk subsystem performance in the "real world". Since the authors use RT-11, the "real world" in this case was defined to be RT-11. In general, we were not trying to measure the maximum speed a disk could provide, nor were we trying to find optimum disk access techniques. The objective in the design of the test programs was to reproduce many of the circumstances encountered in the use of a disk in a normal interactive RT-11 environment. "Real time" RT-11 is, in any case, a good operating system environment for disk benchmarking.

The way a system is used at a site will affect hardware selection decisions, and the results of this study do not necessarily apply to all RT-11 environments. The authors are primarily interested in general purpose systems used for program development, word processing, and business support, with some multi-user activities. This type of environment tends to have many fairly small files, accessed in a more-or-less random fashion. The test programs were somewhat skewed toward this type of disk use and are not necessarily a good guide to other environments.

FACTORS NOT CONSIDERED

Disk speed is, obviously, not the only consideration when purchasing mass storage. Besides the major factors of capacity needs and the pocketbook, other items to consider are: maintenance (who, how, and at what cost?); DEC compatibility (emulation? special drivers?); file backup (onto what, and how long does it take?); reliability; and the vendor's "track record".

TEST PROGRAMS

This study used eleven programs that, combined with several different arrangements of data files, were designed to measure various aspects of disk performance, with the primary "result" measure being the wall clock time required to complete the test(s).

TEST1: Created, wrote, and closed 150 one-block files. Because the actual data transferred to the files was short, this test was primarily a seek test, particularly between the disk directory area and the data area. Actual data written was 76,800 bytes.
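The original test sources are not reproduced in this paper, but the heart of TEST1 would look something like the following FORTRAN sketch. The file names, unit number, and OPEN keywords are illustrative assumptions (and the units of RECL are compiler-dependent); the actual program may differ in detail.

      PROGRAM TEST1
C     Sketch only: create, write, and close 150 one-block files.
C     150 files * 512 bytes = 76,800 bytes of data written.
      INTEGER I
      CHARACTER*10 FNAME
      CHARACTER*512 BUF
      BUF = ' '
      DO 10 I = 1, 150
C        Build a file name such as TF001.DAT (illustrative).
         WRITE (FNAME, '(A2,I3.3,A4)') 'TF', I, '.DAT'
         OPEN (UNIT=1, FILE=FNAME, STATUS='NEW',
     1         ACCESS='DIRECT', RECL=512, FORM='UNFORMATTED')
         WRITE (1, REC=1) BUF
         CLOSE (UNIT=1)
   10 CONTINUE
      END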
TEST2: Created, wrote, and closed one 150-block file. This test was a measure of data transfer rate on a medium-size file. Actual data written was 76,800 bytes.

TEST3: Created, wrote, and closed 300 one-block files. This test was similar to TEST1; TEST3 created the data files used by the later test programs. Actual data written was 153,600 bytes.

TEST4: Using the 300 one-block files created by TEST3, this program pseudo-randomly selected a file; opened, read, and closed the file; modified one data element of the file; then opened, wrote, and closed the same file, until all 300 files had been processed. Actual data read and written was 307,200 bytes. This test primarily measured random block latency. The large number of directory operations means that the directory processing portion of the operating system was exercised. An effective data and directory caching scheme would speed up this process considerably by reducing the many seeks involved. (Tests 4, 4A, 4B, 5, and 5A were written anticipating that any effective directory and/or data caching scheme would significantly reduce run time.)
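How the files were "pseudo-randomly selected" is not spelled out above, so the following sketch substitutes a simple technique of its own: stepping through the 300 file numbers with a stride (113) that is coprime to 300, which visits every file exactly once in a scrambled order. File names and OPEN details are illustrative, as before.

      PROGRAM TEST4
C     Sketch only: visit all 300 one-block files in scrambled order;
C     read each file, modify one element, and write it back.
      INTEGER I, K
      CHARACTER*10 FNAME
      CHARACTER*512 BUF
      K = 0
      DO 10 I = 1, 300
C        GCD(113,300) = 1, so K cycles through 0..299 exactly once.
         K = MOD(K + 113, 300)
         WRITE (FNAME, '(A2,I3.3,A4)') 'TF', K + 1, '.DAT'
C        Open, read, and close the file.
         OPEN (UNIT=1, FILE=FNAME, STATUS='OLD',
     1         ACCESS='DIRECT', RECL=512, FORM='UNFORMATTED')
         READ (1, REC=1) BUF
         CLOSE (UNIT=1)
C        Modify one data element, then open, write, and close again.
         BUF(1:1) = 'X'
         OPEN (UNIT=1, FILE=FNAME, STATUS='OLD',
     1         ACCESS='DIRECT', RECL=512, FORM='UNFORMATTED')
         WRITE (1, REC=1) BUF
         CLOSE (UNIT=1)
   10 CONTINUE
      END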
TEST4A: Performed the same operations as TEST4, but the "write" sequence was performed on the null device, NL:. Any asymmetry between "read" and "write" operations may become apparent when the results are compared with TEST4. Actual data read and "written" was 307,200 bytes.

TEST4B: Similar to TEST4, except that the files were opened, read, and closed only. Actual data read was 153,600 bytes. This test looked for symmetry in read and write operations and was very seek-intensive. A large directory caching system would help significantly on this test.

TEST5: Using the 300 one-block files originally created by TEST3, this test sequentially opened, read, and closed each file, then opened, wrote, and closed the file on NL:. Actual data read and "written" was 307,200 bytes. This test was intended to look at the sequential performance of the disk. The large number of directory operations caused a large number of seeks and exercised the directory processing software. It was anticipated that a disk caching system would excel on this test.

TEST5A: Similar to TEST5, this program sequentially opened, read, and closed each of the 300 one-block files. Actual data read was 153,600 bytes.

ALTERNATE TESTS 4, 4A, 4B, 5, 5A: Two command files, SPACE2 and SPACE8, were used to insert two dummy files (2000-block and 8000-block, respectively) into the midst of the 300 data files in an attempt to increase the required seek distances. Tests 4, 4A, 4B, 5, and 5A were run both before and after this spacing-out of the data files.

TEST6: Created, wrote (sequentially), and closed one 300-block file. Actual data written was 153,600 bytes. The run time result is useful for comparison purposes, and the 300-block data file was also needed for TEST7.

TEST7: Read the 300-block file left by TEST6, then wrote the file on NL:. Actual data read and "written" was 307,200 bytes.

TEST8: Created, wrote, and closed one 800-block file. Actual data written was 409,600 bytes. This test compared data rates when writing a large formatted data file.

TEST9: Opened, read, and closed the 800-block file from TEST8. Actual data read was 409,600 bytes. Compare the TEST8 and TEST9 results to look at read/write symmetry.

TEST10: Created, wrote, and closed 1000 five-block files. Actual data written was 2,560,000 bytes. This test stressed the disk subsystem's sequential and random access capability; the transfer speed of the device should have been a relatively minor component.

TEST11: Read either the whole disk or 32000 blocks (whichever was smaller) sequentially in 16384-byte "chunks", computed the byte transfer rate, and reported it as a "Figure of Merit" (FOM). This test was designed to get a good idea of the disk's maximum effective read data transfer rate. The large data buffer minimized the operating system/device handler overhead and allowed the device to operate efficiently at high transfer rates. Because the reads were performed at the device level (i.e., with little system interference), there is no directory processing overhead, and the sequential nature of the reads reduced the latency to single-track seeks and the irreducible rotational latency of the device.

It turns out that the buffer size used in this test has a measurable effect on the result. For most disks, the optimal buffer would be as large as possible while being an even multiple of the number of disk blocks per track. The size used (32 512-byte blocks) is non-optimal for many devices, but its large size tends to reduce this effect.
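TEST11 issued its reads directly through the device handler; the simplified sketch below approximates it with 16384-byte direct-access reads from a large pre-existing file, which keeps the arithmetic visible even though it does not fully bypass the file system. SECNDS is the DEC run-time timing routine (seconds since midnight, less its argument); the file name, and the use of a file rather than the raw device, are assumptions.

      PROGRAM TEST11
C     Sketch only: 1000 reads of 16384 bytes (32 blocks) each,
C     i.e. 32000 blocks = 16,384,000 bytes, timed to give the FOM.
      INTEGER I
      REAL T0, ELAPSE, FOM
      CHARACTER*16384 BUF
      OPEN (UNIT=1, FILE='BIG.DAT', STATUS='OLD',
     1      ACCESS='DIRECT', RECL=16384, FORM='UNFORMATTED')
      T0 = SECNDS(0.0)
      DO 10 I = 1, 1000
         READ (1, REC=I) BUF
   10 CONTINUE
      ELAPSE = SECNDS(T0)
C     Figure of Merit: effective read rate in bytes per second.
      FOM = 16384000.0 / ELAPSE
      WRITE (*,*) 'FIGURE OF MERIT (BYTES/SEC):', FOM
      CLOSE (UNIT=1)
      END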
The results gathered in this study are summarized in Appendices A through I. Appendix A gives the manufacturers' specifications and ratings for the various systems and disks tested, where available; the study participants supplied this data. Appendix B is a summary of the systems tested; systems listed without a disk used RT-11's VM: memory disk. Appendix C gives the VM: results, which are useful for normalizing memory, CPU, and bus speed. Appendix D gives the results for floppy disks; note the almost total CPU independence. Appendix E gives the results for the smaller cartridge and Winchester disks, and Appendix F the results for the larger ones. Appendix G gives the results for disks not directly comparable due to CPU or bus type. Appendix H is an operating system comparison, included because the authors found it very interesting. Appendix I gives the results for "strange" pseudo-disks; note the outstanding performance of the Ethernet and bulk semiconductor "disks", and that bulk semiconductor performance is very dependent on the type of bus.

DISCUSSION

Not surprisingly, the disks with the fastest seek times generally did the best in the tests. This confirms "theoretical" expectations and corresponds to the authors' subjective experience.

An aspect of seek performance that can be important is the way the blocks of a disk are organized. In systems that emulate DEC controllers by partitioning a large disk into several smaller drives, the partitioning map can be important. For example, the three system combinations labeled I1, I2, and I3 (in Appendix B) have essentially the same hardware performance specifications, but the controller used in I1 partitions its disk's logical devices so that each platter in the drive is a logical device, while the controllers in I2 and I3 use groups of contiguous cylinders for each logical device. With the second method, the blocks of a logical device are closer to each other, resulting in better performance.

For the larger disks, several of the tests (4 through 5A) were run several times, with a reorganization of the disk between test series. The reorganization placed two large files in the midst of the test data files, which was intended to extend the travel distance for many of the seeks. In terms of measuring the differences between disk subsystems, this did not produce very interesting results: the relative performance of the various disks remained essentially unchanged, which indicates that none of the disks had a hardware caching system. These "spaced-out" results were omitted except in the operating system appendix (H), where the effect of software caching was obvious.

While this "spacing-out" procedure produced no meaningful comparison data, it did turn up a useful anomaly that can be seen in Appendix H. In almost all of the subsystems, the performance results after the SPACE2 command file were better than the results before SPACE2, and the results after the SPACE8 command file were slower than the SPACE2 results but faster than the initial ones. This is because the disk directory was squeezed after the two large "spacer" files were added: the reduction in directory processing after adding the two 2000-block files more than made up for the increased seek travel, while the increase in run time when two 8000-block spacer files replaced the two 2000-block files is consistent with increased seek travel. This vividly demonstrates the importance of apples-to-apples disk comparisons.

Another interesting observation is the performance of the DEC RA-80 system on an 11/24 (system N1). On TEST11, which is basically a maximum-speed sequential read, the RA-80 showed a remarkable transfer speed, but on the other tests, which involved more random access, the RA-80's results were more mundane and placed the system with the rest of the "high end" pack.

The CPUs used for the tests were the 11/2, 11/03, 11/23, 11/23+, 11/24, 11/34, and 11/73. All but the 11/24 and the 11/34 were Q-bus systems. CPU speed did make a difference, although there were relatively few examples of the same disk subsystem with different CPU and bus types, which does not provide much opportunity for direct comparisons. The two Unibus systems performed well, but since no Q-bus system used the same disk, direct comparisons were not possible. The exception was the Dataram bulk semiconductor disk, which gave outstanding performance on both the Q-bus and the Unibus; we note that it was three times as outstanding on the Unibus, which does say something about bus speed!

The TEST11 Figure of Merit (FOM) was actually a somewhat idealized maximum data rate (bytes/sec), obtained by bypassing the RT-11 high-level I/O facilities and using very large buffers. In general, however, the FOM was MUCH lower than published disk performance specifications. Some of the tests that manipulated numerous small formatted files produced average data rates of less than 5,000 bytes/sec, even with the fastest disks.
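To see why the small-file numbers come out so low, consider a back-of-the-envelope calculation in which the overhead figure is assumed rather than measured: if each one-block file carries roughly 100 ms of directory seeks, rotational latency, and open/close processing, the effective rate lands in the observed range.

      PROGRAM SMALLF
C     Illustrative arithmetic only; the 100 ms overhead is assumed.
      REAL RATE
C     512 bytes of data per file / 0.100 sec of overhead per file.
      RATE = 512.0 / 0.100
      WRITE (*,*) 'EFFECTIVE RATE (BYTES/SEC):', RATE
      END

That works out to roughly 5,100 bytes/sec before any actual data transfer time is counted, which is in the same range as the observed figures.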
Appendix H shows the results of running the test programs on various operating systems and CPUs while keeping the disk system the same. As one might expect, the more sophisticated systems tend to have lower transfer rates. There was not much difference between the RT-11 single-job monitor and the foreground/background monitor, while the XM monitor and TSX-Plus carried a significant overhead penalty. A very interesting result is shown by the "TSX-Plus with disk caching" column: for most of the tests, caching produces a significant improvement over TSX-Plus without caching. In most cases the caching system allows TSX-Plus to beat the single-job system, but in the raw throughput case of TEST11 the caching system overhead dramatically impedes performance.

The results of these disk comparisons demonstrate that the performance figures quoted by manufacturers cannot be used to calculate the actual performance of a disk-based system doing real disk I/O. On the other hand, the test results do correlate relatively well from a disk-to-disk comparison standpoint.

Although cost was not discussed in this paper, the authors think it worth noting that, to a large extent, the performance observed in these disk comparisons correlates relatively well with the cost of the disks and disk controllers exercised during this test program.

The authors wish to thank all of the DEC sites and personnel who volunteered to run these test programs. Running the programs required the exclusive use of the system, one or more initialized volumes, and several hours of work on the part of the participants. It represented a considerable amount of time and trouble, and the authors could not have done this work without the active and enthusiastic participation of the people involved.

1984 CONCLUSIONS

1. We did not test a disk with a hardware caching system. The TSX-Plus software caching performance indicates that an effective hardware caching system would be of significant value for many I/O loads.

2. The CPU and operating system can be more important than the disk subsystem. Depending on the operating system, a faster disk system may provide very little improvement.

3. The type of operating system affects the apparent disk performance. The more complex and capable operating systems provided lower data transfer rates than the RT-11 single-job monitor in our tests. DEC sells RT-11 as a "fast" operating system with low overhead compared to RSX and RSTS.

4. The variations we found suggest that selecting a disk system on any basis but a "test drive" is fairly risky. Vendor-published performance data is not a good indicator of actual disk performance in an operating computer system.

1987 CONCLUSIONS

1. Disk controllers with cache did produce very significant improvements in system performance where the disk subsystem was seek- and/or rotational-latency bound.

2. A site's unique hardware can have a significant effect on performance.

3. A "test drive" is still recommended.

4. In many "real world" situations, a "memory" disk is of little or no benefit over a disk with a "caching" controller.

5. Controller cache is better than system data cache.