This is a performance comparison of the three most useful protocols for network file shares on Linux with the latest software. I have run sequential and random benchmarks and tests with rsync. The main reason for this post is that I could not find a proper test that includes SSHFS.

Test Hardware

The server side is based on a Dell mainboard with an Intel i3-3220, a fairly old 2-core/4-thread CPU. It also does not support the AES-NI extensions (which would increase AES performance noticeably), so the encryption happens completely in software. As storage, two HDDs in BTRFS RAID1 were used; this does not make a difference though, because the tests are staged to almost always hit the cache on the server, so only the protocol performance counts. I installed Fedora 30 Server on it and updated it to the latest software versions.

The client is a quad-core desktop machine running Arch Linux, so it should not be a bottleneck. Everything was tested over a local Gigabit Ethernet network.

SSHFS

Relevant package/version: OpenSSH_8.0p1, OpenSSL 1.1.1c, sshfs 3.5.2

OpenSSH is probably running anyway on all servers, so this is by far the simplest setup: just install sshfs (FUSE based) on the clients and mount the share. It is also encrypted by default, with ChaCha20-Poly1305. As a second test I chose AES128, because it is the most popular cipher; disabling encryption is not possible (without patching ssh). Then I added some mount options (suggested here) for convenience and ended up with:

sshfs -o Ciphers=aes128-ctr -o Compression=no -o ServerAliveCountMax=2 -o ServerAliveInterval=15 /media/mountpoint

NFSv4

Relevant package/version: Linux Kernel 5.2.8

The plaintext setup is also easy: specify the exports, start the server and open the ports. I used these options on the server: (rw,async,all_squash,anonuid=1000,anongid=1000)

But getting encryption to work can be a nightmare: first, setting up Kerberos is more complicated than with the other solutions, and then there is dealing with idmap on both server and client(s)… After that you can choose from different security levels; I set sec=krb5p to encrypt all traffic for this test (most secure, slowest).

mount.nfs4 -v nas-server:/mnt/share /media/mountpoint

SMB

The setup is mostly done by installing, creating the user DB, adding a share to smb.conf and starting the smb service. Encryption is disabled by default; for the encrypted test I set

smb encrypt = required

on the server globally. It then uses AES128-CCM (visible in smbstatus). ID mapping on the client can simply be done as a mount option; the complete mount command I used:

mount -t cifs -o username=jk,password=xyz,uid=jk,gid=jk //nas-server/media /media/mountpoint

Test Methodology

The main test block was done with the Flexible I/O Tester (fio), written by Jens Axboe (the current maintainer of the Linux block layer). It has many options, so I made a short script to run reproducible tests:

fio -name=job-w -rw=write -size=2G -ioengine=libaio -iodepth=4 -bs=128k -direct=1 -filename=bench.file -output-format=normal,terse -output=$OUT/fio-write.log
fio -name=job-r -rw=read -size=2G -ioengine=libaio -iodepth=4 -bs=128k -direct=1 -filename=bench.file -output-format=normal,terse -output=$OUT/fio-read.log
fio -name=job-randw -rw=randwrite -size=2G -ioengine=libaio -iodepth=32 -bs=4k -direct=1 -filename=bench.file -output-format=normal,terse -output=$OUT/fio-randwrite.log
fio -name=job-randr -rw=randread -size=2G -ioengine=libaio -iodepth=32 -bs=4k -direct=1 -filename=bench.file -output-format=normal,terse -output=$OUT/fio-randread.log

The first two are classic sequential read/write tests with a 128 KB block size and a queue depth of 4. The last two are small 4 KB random reads/writes, but with a 32-deep queue.
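For reference, the NFS server options mentioned in the text would go into an /etc/exports entry roughly like the sketch below. The export path /mnt/share is taken from the mount command; the krb5p line is my assumption about how the encrypted variant would be configured, not something stated in the post:

```
# plaintext export with the options from the post
/mnt/share  *(rw,async,all_squash,anonuid=1000,anongid=1000)

# hypothetical Kerberos-encrypted variant (requires working krb5 and idmap setup)
/mnt/share  *(sec=krb5p,rw,async,all_squash,anonuid=1000,anongid=1000)
```

After editing the file, `exportfs -ra` reloads the export table without restarting the server.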
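A minimal smb.conf for the Samba side might look like the sketch below. The share name matches the //nas-server/media mount from the text and the `smb encrypt = required` setting is quoted from it, but the path and the remaining share options are my assumptions:

```ini
[global]
        # force encryption for the encrypted test run
        # (negotiates AES128-CCM here, visible in smbstatus)
        smb encrypt = required

[media]
        # hypothetical path; adjust to the actual storage location
        path = /srv/media
        valid users = jk
        read only = no
```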
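The four fio invocations can be bundled into a small wrapper for reproducible runs. This is a sketch rather than the author's original script: the `run_job` helper, the `$OUT`/`$MNT` defaults, the command log, and the skip when fio is not installed are my additions; the job parameters themselves are taken from the text.

```shell
#!/bin/sh
# Sketch of a reproducible benchmark wrapper around the four fio jobs.
# OUT (result directory) and MNT (mounted share) are assumed defaults,
# not from the original post.
OUT=${OUT:-./results}
MNT=${MNT:-.}
mkdir -p "$OUT"
: > "$OUT/commands.log"

run_job() {
    # $1 = job name, $2 = rw mode, $3 = block size, $4 = queue depth
    cmd="fio -name=$1 -rw=$2 -size=2G -ioengine=libaio -iodepth=$4 -bs=$3 \
-direct=1 -filename=$MNT/bench.file -output-format=normal,terse -output=$OUT/fio-$1.log"
    # record the exact command line for later reference
    echo "$cmd" >> "$OUT/commands.log"
    # only execute when fio is actually available
    if command -v fio >/dev/null 2>&1; then
        $cmd
    fi
}

run_job job-w     write     128k 4
run_job job-r     read      128k 4
run_job job-randw randwrite 4k   32
run_job job-randr randread  4k   32
```

Running it against each mounted share in turn (plaintext and encrypted) produces one log per job, and the terse output format makes the results easy to grep into a table afterwards.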