[Awesome Performance] FIO: Flexible I/O tester for disk performance

FIO

Fio was originally written to save me the hassle of writing special test case programs when I wanted to test a specific workload, either for performance reasons or to find/reproduce a bug. The process of writing such a test app can be tiresome, especially if you have to do it often. Hence I needed a tool that would be able to simulate a given I/O workload without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number of processes or threads involved, and they can each be using their own way of generating I/O. You could have someone dirtying large amounts of memory in a memory-mapped file, or maybe several threads issuing reads using asynchronous I/O. fio needed to be flexible enough to simulate both of these cases, and many more.

Fio spawns a number of threads or processes doing a particular type of I/O action as specified by the user. fio takes a number of global parameters, each inherited by every job unless a parameter given to that job overrides the setting. The typical use of fio is to write a job file matching the I/O load one wants to simulate.
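As a minimal sketch, a job file puts shared parameters in a [global] section and defines one job per [name] section; the section and file names below are hypothetical:

```ini
; minimal-job.fio -- hypothetical example job file
[global]
ioengine=libaio   ; Linux native asynchronous I/O
direct=1          ; bypass the page cache
time_based
runtime=60        ; run each job for 60 seconds

[seq-read]        ; one job: sequential reads with a 1 MiB block size
rw=read
bs=1M
size=1G
filename=/tmp/fio-test.img
```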

Installation

# CentOS
$ yum install -y fio

# Debian, Ubuntu
$ apt install -y fio

Usages

Running fio

Running fio is normally the easiest part - you just give it the job file (or job files) as parameters:

$ fio [options] [jobfile] ...

and it will start doing what the jobfile tells it to do. You can give more than one job file on the command line, fio will serialize the running of those files. Internally that is the same as using the stonewall parameter described in the parameter section.

If the job file contains only one job, you may as well just give the parameters on the command line. The command line parameters are identical to the job parameters, with a few extra that control global parameters. For example, for the job file parameter iodepth=2, the mirror command line option would be --iodepth 2 or --iodepth=2. You can also use the command line for giving more than one job entry. For each --name option that fio sees, it will start a new job with that name. Command line entries following a --name entry will apply to that job, until there are no more entries or a new --name entry is seen. This is similar to the job file options, where each option applies to the current job until a new [] job entry is seen.
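For instance, a two-job run given entirely on the command line might look like the following (the file names and sizes here are illustrative, not taken from the configs below):

```shell
# Hypothetical example: two jobs defined entirely on the command line.
# Options after a --name apply to that job until the next --name appears.
fio --ioengine=libaio --direct=1 --time_based --runtime=30 \
    --name=rand-read --rw=randread --bs=4k   --size=1G --filename=/tmp/t1.img \
    --name=seq-write --rw=write    --bs=512k --size=1G --filename=/tmp/t2.img

# Reading the job file from standard input also works:
cat fio.conf.disk | fio -
```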

fio does not need to run as root, except if the files or devices specified in the job section require that. Some other options may also be restricted, such as memory locking, I/O scheduler switching, and decreasing the nice value.

If jobfile is specified as -, the job file will be read from standard input.

See "How fio works" in the fio documentation - https://fio.readthedocs.io/en/latest/fio_doc.html#how-fio-works to learn more.

Interpreting the output

Some important metrics in the output:

  • bw

    Bandwidth statistics based on samples. Same names as the xlat stats, but also includes the number of samples taken (samples) and an approximate percentage of total aggregate bandwidth this thread received in its group (per). This last value is only really useful if the threads in this group are on the same disk, since they are then competing for disk access.

  • iops

    IOPS statistics based on samples. Same names as bw.

See "Interpreting the output" in the fio documentation - https://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output to learn more.
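fio can also emit machine-readable results with --output-format=json, which is easier to post-process than the human-readable report. The Python sketch below extracts bandwidth and IOPS from an abbreviated, hypothetical sample of that JSON (in fio's JSON output, "bw" is reported in KiB/s):

```python
import json

# Abbreviated, hypothetical sample of `fio --output-format=json` output;
# a real report contains many more fields per job.
sample = """
{
  "jobs": [
    {"jobname": "randread-4k",
     "read":  {"bw": 47616, "iops": 11904.0},
     "write": {"bw": 0, "iops": 0.0}}
  ]
}
"""

data = json.loads(sample)
for job in data["jobs"]:
    # "bw" is in KiB/s; convert to MiB/s for readability.
    read_bw_mib = job["read"]["bw"] / 1024
    print(f'{job["jobname"]}: read {read_bw_mib:.1f} MiB/s, '
          f'{job["read"]["iops"]:.0f} IOPS')
```

Running the real thing as `fio --output-format=json fio.conf.disk > result.json` and feeding the file to a script like this makes it easy to track results across runs.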

Test disk device

Create or edit the fio.conf.disk configuration file.

# fio.conf.disk

[global]
ioengine=libaio
iodepth=128
time_based
direct=1
thread=1
group_reporting
randrepeat=0
norandommap
numjobs=32
timeout=6000
runtime=120

[randread-4k]
rw=randread
bs=4k
filename=/dev/vdb
rwmixread=100
stonewall

[randwrite-4k]
rw=randwrite
bs=4k
filename=/dev/vdb
stonewall

[read-512k]
rw=read
bs=512k
filename=/dev/vdb
stonewall

[write-512k]
rw=write
bs=512k
filename=/dev/vdb
stonewall

Run fio fio.conf.disk to test disk performance.

$ fio fio.conf.disk
randread-4k: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
randwrite-4k: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
read-512k: (g=2): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128
...
write-512k: (g=3): rw=write, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128
...
fio-3.19
Starting 128 threads
Jobs: 32 (f=32): [_(96),W(32)][4.0%][w=222MiB/s][w=443 IOPS][eta 03h:12m:00s]
randread-4k: (groupid=0, jobs=32): err= 0: pid=3426725: Wed Nov 17 20:56:13 2021
read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(5581MiB/120025msec)
slat (nsec): min=1283, max=228640k, avg=2685615.46, stdev=6012190.22
clat (msec): min=10, max=1288, avg=341.29, stdev=118.00
lat (msec): min=10, max=1289, avg=343.98, stdev=118.80
clat percentiles (msec):
| 1.00th=[ 70], 5.00th=[ 180], 10.00th=[ 211], 20.00th=[ 251],
| 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 330], 60.00th=[ 359],
| 70.00th=[ 388], 80.00th=[ 430], 90.00th=[ 489], 95.00th=[ 550],
| 99.00th=[ 684], 99.50th=[ 751], 99.90th=[ 919], 99.95th=[ 1003],
| 99.99th=[ 1133]
bw ( KiB/s): min=14374, max=118348, per=99.78%, avg=47509.90, stdev=473.74, samples=7648
iops : min= 3584, max=29578, avg=11876.11, stdev=118.43, samples=7648
lat (msec) : 20=0.01%, 50=0.34%, 100=1.16%, 250=18.68%, 500=70.66%
lat (msec) : 750=8.67%, 1000=0.43%, 2000=0.05%
cpu : usr=0.06%, sys=0.25%, ctx=1264741, majf=0, minf=4128
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=1428809,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
randwrite-4k: (groupid=1, jobs=32): err= 0: pid=3433113: Wed Nov 17 20:56:13 2021
write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(5546MiB/120027msec); 0 zone resets
slat (nsec): min=1190, max=253603k, avg=2702464.17, stdev=6005163.86
clat (msec): min=10, max=1312, avg=342.81, stdev=117.60
lat (msec): min=10, max=1322, avg=345.52, stdev=118.39
clat percentiles (msec):
| 1.00th=[ 110], 5.00th=[ 180], 10.00th=[ 211], 20.00th=[ 251],
| 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 330], 60.00th=[ 359],
| 70.00th=[ 388], 80.00th=[ 430], 90.00th=[ 493], 95.00th=[ 550],
| 99.00th=[ 684], 99.50th=[ 743], 99.90th=[ 961], 99.95th=[ 1062],
| 99.99th=[ 1167]
bw ( KiB/s): min=14798, max=106798, per=99.86%, avg=47246.76, stdev=464.02, samples=7648
iops : min= 3694, max=26697, avg=11809.98, stdev=116.01, samples=7648
lat (msec) : 20=0.01%, 50=0.15%, 100=0.68%, 250=19.28%, 500=70.51%
lat (msec) : 750=8.90%, 1000=0.40%, 2000=0.08%
cpu : usr=0.08%, sys=0.26%, ctx=1269905, majf=0, minf=32
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,1419651,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
read-512k: (groupid=2, jobs=32): err= 0: pid=3439498: Wed Nov 17 20:56:13 2021
read: IOPS=443, BW=222MiB/s (233MB/s)(26.1GiB/120314msec)
slat (usec): min=4, max=1521.5k, avg=71910.72, stdev=86011.65
clat (msec): min=228, max=15650, avg=8887.05, stdev=1962.66
lat (msec): min=267, max=15959, avg=8958.96, stdev=1967.34
clat percentiles (msec):
| 1.00th=[ 1435], 5.00th=[ 5873], 10.00th=[ 7013], 20.00th=[ 7819],
| 30.00th=[ 8221], 40.00th=[ 8658], 50.00th=[ 8926], 60.00th=[ 9329],
| 70.00th=[ 9731], 80.00th=[10268], 90.00th=[10939], 95.00th=[11745],
| 99.00th=[13355], 99.50th=[13892], 99.90th=[14697], 99.95th=[15100],
| 99.99th=[15368]
bw ( KiB/s): min=32703, max=648006, per=99.86%, avg=226951.41, stdev=3447.82, samples=7109
iops : min= 45, max= 1262, avg=440.60, stdev= 6.74, samples=7109
lat (msec) : 250=0.02%, 500=0.23%, 750=0.18%, 1000=0.21%, 2000=0.82%
lat (msec) : >=2000=98.53%
cpu : usr=0.01%, sys=0.06%, ctx=110802, majf=0, minf=519723
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=53407,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
write-512k: (groupid=3, jobs=32): err= 0: pid=3445908: Wed Nov 17 20:56:13 2021
write: IOPS=443, BW=222MiB/s (232MB/s)(26.0GiB/120361msec); 0 zone resets
slat (usec): min=16, max=1017.5k, avg=72029.58, stdev=71276.32
clat (msec): min=34, max=13151, avg=8858.97, stdev=1733.50
lat (msec): min=34, max=13301, avg=8931.01, stdev=1737.43
clat percentiles (msec):
| 1.00th=[ 1045], 5.00th=[ 5738], 10.00th=[ 7617], 20.00th=[ 8221],
| 30.00th=[ 8557], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9329],
| 70.00th=[ 9597], 80.00th=[ 9866], 90.00th=[10402], 95.00th=[10805],
| 99.00th=[11745], 99.50th=[11879], 99.90th=[12684], 99.95th=[12953],
| 99.99th=[13087]
bw ( KiB/s): min=36806, max=606274, per=100.00%, avg=227128.25, stdev=2963.38, samples=7087
iops : min= 60, max= 1177, avg=438.87, stdev= 5.80, samples=7087
lat (msec) : 50=0.04%, 100=0.08%, 250=0.11%, 500=0.28%, 750=0.23%
lat (msec) : 1000=0.22%, 2000=0.82%, >=2000=98.22%
cpu : usr=0.06%, sys=0.05%, ctx=118979, majf=0, minf=32
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,53323,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=5581MiB (5852MB), run=120025-120025msec

Run status group 1 (all jobs):
WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=5546MiB (5815MB), run=120027-120027msec

Run status group 2 (all jobs):
READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=26.1GiB (28.0GB), run=120314-120314msec

Run status group 3 (all jobs):
WRITE: bw=222MiB/s (232MB/s), 222MiB/s-222MiB/s (232MB/s-232MB/s), io=26.0GiB (27.0GB), run=120361-120361msec

Disk stats (read/write):
vdb: ios=1554368/1554036, merge=2825/305, ticks=60894783/60980848, in_queue=121875631, util=99.56%

Test disk file

Create or edit the fio.conf.file configuration file.

# fio.conf.file

[global]
ioengine=libaio
iodepth=128
time_based
direct=1
thread=1
group_reporting
randrepeat=0
norandommap
numjobs=32
timeout=6000
runtime=120
size=5G

[randread-4k]
rw=randread
bs=4k
filename=/root/test.img
rwmixread=100
stonewall

[randwrite-4k]
rw=randwrite
bs=4k
filename=/root/test.img
stonewall

[read-512k]
rw=read
bs=512k
filename=/root/test.img
stonewall

[write-512k]
rw=write
bs=512k
filename=/root/test.img
stonewall

Run fio fio.conf.file to test disk performance.

$ fio fio.conf.file
randread-4k: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
randwrite-4k: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
read-512k: (g=2): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128
...
write-512k: (g=3): rw=write, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128
...
fio-3.19
Starting 128 threads
randread-4k: Laying out IO file (1 file / 5120MiB)
Jobs: 30 (f=30): [_(96),W(12),_(1),W(6),_(1),W(12)][37.3%][w=360MiB/s][w=720 IOPS][eta 13m:30s]
randread-4k: (groupid=0, jobs=32): err= 0: pid=1535608: Thu Nov 18 09:11:15 2021
read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(5564MiB/120025msec)
slat (nsec): min=1534, max=168704k, avg=2694411.30, stdev=4645972.12
clat (msec): min=10, max=1069, avg=342.33, stdev=74.36
lat (msec): min=17, max=1118, avg=345.03, stdev=74.85
clat percentiles (msec):
| 1.00th=[ 107], 5.00th=[ 241], 10.00th=[ 262], 20.00th=[ 288],
| 30.00th=[ 309], 40.00th=[ 321], 50.00th=[ 342], 60.00th=[ 359],
| 70.00th=[ 380], 80.00th=[ 401], 90.00th=[ 430], 95.00th=[ 460],
| 99.00th=[ 531], 99.50th=[ 567], 99.90th=[ 718], 99.95th=[ 776],
| 99.99th=[ 902]
bw ( KiB/s): min=24114, max=117969, per=99.85%, avg=47395.97, stdev=305.48, samples=7648
iops : min= 6026, max=29481, avg=11848.53, stdev=76.36, samples=7648
lat (msec) : 20=0.01%, 50=0.04%, 100=0.89%, 250=5.86%, 500=91.40%
lat (msec) : 750=1.73%, 1000=0.07%, 2000=0.01%
cpu : usr=0.06%, sys=0.27%, ctx=1296227, majf=0, minf=4128
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=1424347,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
randwrite-4k: (groupid=1, jobs=32): err= 0: pid=1541968: Thu Nov 18 09:11:15 2021
write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(5527MiB/120027msec); 0 zone resets
slat (nsec): min=1603, max=169727k, avg=2712193.76, stdev=4974963.08
clat (usec): min=501, max=921119, avg=344609.13, stdev=67128.86
lat (usec): min=504, max=929592, avg=347321.59, stdev=67565.75
clat percentiles (msec):
| 1.00th=[ 178], 5.00th=[ 249], 10.00th=[ 271], 20.00th=[ 292],
| 30.00th=[ 309], 40.00th=[ 330], 50.00th=[ 342], 60.00th=[ 359],
| 70.00th=[ 380], 80.00th=[ 401], 90.00th=[ 430], 95.00th=[ 451],
| 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 676], 99.95th=[ 751],
| 99.99th=[ 852]
bw ( KiB/s): min=25024, max=86996, per=99.89%, avg=47100.88, stdev=261.59, samples=7648
iops : min= 6255, max=21748, avg=11775.01, stdev=65.40, samples=7648
lat (usec) : 750=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.09%
lat (msec) : 100=0.23%, 250=4.98%, 500=93.49%, 750=1.14%, 1000=0.05%
cpu : usr=0.07%, sys=0.28%, ctx=1285046, majf=0, minf=32
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,1414991,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
read-512k: (groupid=2, jobs=32): err= 0: pid=1548302: Thu Nov 18 09:11:15 2021
read: IOPS=443, BW=222MiB/s (233MB/s)(26.1GiB/120317msec)
slat (usec): min=9, max=930894, avg=71954.82, stdev=63916.45
clat (msec): min=121, max=13298, avg=8889.65, stdev=1711.42
lat (msec): min=259, max=13401, avg=8961.61, stdev=1714.61
clat percentiles (msec):
| 1.00th=[ 1401], 5.00th=[ 6141], 10.00th=[ 7483], 20.00th=[ 8154],
| 30.00th=[ 8490], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9329],
| 70.00th=[ 9597], 80.00th=[10000], 90.00th=[10537], 95.00th=[11073],
| 99.00th=[11745], 99.50th=[12013], 99.90th=[12818], 99.95th=[12953],
| 99.99th=[13221]
bw ( KiB/s): min=35829, max=499679, per=99.12%, avg=225160.35, stdev=2695.71, samples=7169
iops : min= 67, max= 975, avg=438.92, stdev= 5.27, samples=7169
lat (msec) : 250=0.03%, 500=0.21%, 750=0.22%, 1000=0.19%, 2000=0.84%
lat (msec) : >=2000=98.51%
cpu : usr=0.01%, sys=0.07%, ctx=121043, majf=0, minf=524320
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=53381,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
write-512k: (groupid=3, jobs=32): err= 0: pid=1550227: Thu Nov 18 09:11:15 2021
write: IOPS=442, BW=221MiB/s (232MB/s)(26.0GiB/120384msec); 0 zone resets
slat (usec): min=16, max=1135.3k, avg=72030.60, stdev=59737.86
clat (msec): min=79, max=14388, avg=8853.39, stdev=1881.03
lat (msec): min=79, max=14496, avg=8925.42, stdev=1887.27
clat percentiles (msec):
| 1.00th=[ 1003], 5.00th=[ 5537], 10.00th=[ 7416], 20.00th=[ 8020],
| 30.00th=[ 8423], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9329],
| 70.00th=[ 9597], 80.00th=[10000], 90.00th=[10671], 95.00th=[11208],
| 99.00th=[13221], 99.50th=[13624], 99.90th=[14160], 99.95th=[14160],
| 99.99th=[14295]
bw ( KiB/s): min=40950, max=624837, per=99.90%, avg=226555.80, stdev=2902.55, samples=7107
iops : min= 78, max= 1219, avg=441.72, stdev= 5.67, samples=7107
lat (msec) : 100=0.05%, 250=0.16%, 500=0.33%, 750=0.25%, 1000=0.22%
lat (msec) : 2000=0.84%, >=2000=98.16%
cpu : usr=0.06%, sys=0.05%, ctx=122810, majf=0, minf=32
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,53322,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=5564MiB (5834MB), run=120025-120025msec

Run status group 1 (all jobs):
WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=5527MiB (5796MB), run=120027-120027msec

Run status group 2 (all jobs):
READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=26.1GiB (27.0GB), run=120317-120317msec

Run status group 3 (all jobs):
WRITE: bw=221MiB/s (232MB/s), 221MiB/s-221MiB/s (232MB/s-232MB/s), io=26.0GiB (27.0GB), run=120384-120384msec

Disk stats (read/write):
vda: ios=1560424/1561420, merge=620/2869, ticks=60567966/61121545, in_queue=121689511, util=99.22%

FAQs

you need to specify size=

write-512k: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:1057, func=total_file_size, error=Invalid argument

Add size= to the configuration file to fix the issue.

[global]
# ...
size=5G

References

[1] axboe/fio: Flexible I/O Tester - https://github.com/axboe/fio

[2] Welcome to FIO’s documentation! — fio 3.27 documentation - https://fio.readthedocs.io/en/latest/index.html