
NFS Performance on AWS

Last year, I wrote a blog post about the performance of various NFS solutions on AWS. Earlier this year, Amazon announced its own managed NFS solution, Elastic File System (EFS). I wanted to test EFS the same way and measure its performance against the other NFS solutions I have tested and used in the past. Hopefully these tests are helpful for AWS managed services providers looking to optimize their disk performance.

The Contenders


GlusterFS

GlusterFS is a simple-to-use NFS service we have used successfully on numerous projects. Overall, Gluster provides excellent reliability but slows down occasionally when dealing with large numbers of files in a single directory (10,000+) or with large files (100MB+).

For this test, Gluster will be running on two t2.small servers. Each server will have a 100GB General Purpose SSD EBS volume for data. The file system will be mounted with the GlusterFS client.
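A minimal sketch of that setup, assuming hypothetical hostnames (gluster1, gluster2) and a brick path of /data/brick on each server's EBS volume:

```shell
# On gluster1: peer with the second node and create a replicated volume
# across the two 100GB EBS-backed bricks (hostnames and paths are hypothetical).
gluster peer probe gluster2
gluster volume create gv0 replica 2 gluster1:/data/brick gluster2:/data/brick
gluster volume start gv0

# On the client: mount the volume with the GlusterFS FUSE client.
mount -t glusterfs gluster1:/gv0 /mnt/gluster
```

With replica 2, every file is written to both bricks, which is what gives Gluster its reliability at the cost of write throughput.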


Amazon EFS

To use EFS, log in to the AWS Management Console and create a file system that can be mounted on multiple EC2 instances (allowing simultaneous access). Provide the file system with a VPC ID and a name.

When using EFS, there is nothing to configure, which increases the simplicity factor. I will be mounting the file system with NFS4.
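As a sketch, mounting an EFS file system over NFSv4 from an EC2 instance looks like this (the file system ID, region, and mount point are hypothetical):

```shell
# Hypothetical file system ID and region; run on an EC2 instance
# inside the VPC the EFS file system was created in.
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

There is no server to size or storage to provision; the mount target is all AWS exposes.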

SoftNAS Cloud

SoftNAS Cloud is a software solution that runs on an EC2 instance. It is configured much like traditional NAS hardware, except it uses EBS volumes and S3 instead of physical hard drives.

SoftNAS Cloud will be running on an m3.medium instance with two 100GB General Purpose SSD drives in a RAID0. The file system will be mounted with NFSv4. We also enabled Pre-Warm for the volumes and a read cache on the ephemeral SSD.
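On the client side, the SoftNAS export is mounted like any other NFSv4 share (the instance address and export path below are hypothetical; the RAID0 pool and cache are configured through the SoftNAS web UI, not the command line):

```shell
# Hypothetical SoftNAS instance address and export path;
# mount over NFSv4 from the test client.
mkdir -p /mnt/softnas
mount -t nfs4 softnas.internal:/pool0/vol0 /mnt/softnas
```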

The Test

We will be running the test from a t2.small instance, using the command ‘iozone -T -t16 -r64k -s<Size> -S20480 -I -i0 -i1 -i2’. <Size> will be set to four different values: 512k, 2m, 8m, and 512m.

  • -i0 - Write/read tests.
  • -i1 - Rewrite/reread tests.
  • -i2 - Random write/random read tests.
  • -T - Use POSIX threads.
  • -t16 - The number of threads to run.
  • -r64k - Record length of 64KB.
  • -s<Size> - File size.
  • -S20480 - Cache size hint (20480KB).
  • -I - Direct I/O if possible. (Avoids the disk cache.)

Each test is performed against sixteen files simultaneously. The data presented below uses the average kBps.
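The full test matrix can be scripted as a loop over the four file sizes (this assumes iozone is installed and is run from the NFS mount being tested):

```shell
# Run the article's iozone test at each file size: 16 threads, 64KB records,
# 20480KB cache hint, direct I/O, with write/rewrite, read/reread,
# and random write/read passes.
for size in 512k 2m 8m 512m; do
    iozone -T -t 16 -r 64k -s "$size" -S 20480 -I -i 0 -i 1 -i 2
done
```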

The Data





[Chart: Random Write]

[Chart: Random Read]

The Results

Gluster had great read speeds with small files because of its built-in caching, but dropped off when files grew larger than the cache limit.

EFS had good, consistent speed across the tests.

SoftNAS Cloud had excellent read and write speed through all the sizes. The high throughput for reading was largely because SoftNAS Cloud uses Ephemeral Storage as a read cache.


At Scale

The data above gives you a good idea of a base throughput for these systems, but I wanted to try a larger scale test.

I increased the test instance to an m3.xlarge and ran iozone with a 2MB file size (our average) and 256 threads.

I left all EFS and SoftNAS Cloud settings the same.

This shows that both EFS and SoftNAS Cloud perform well at scale.



This time (something I didn't include in the last blog post) I thought I would add a breakdown of costs for each solution. Costs are shown for a month of 24x7 usage.


GlusterFS

Item                                Price
t2.small * 2                        $58
100GB General Purpose SSD EBS * 2   $20
Total                               $78



Amazon EFS

Item                                Price
EFS Storage 100GB                   $30
Total                               $30



SoftNAS Cloud

Item                                 Price
m3.medium with SoftNAS License * 2   $249
100GB General Purpose SSD EBS * 4    $40
Total                                $289


Note: SoftNAS has published a blog providing a detailed pricing comparison between these options.

EFS is priced for only what you use, which gives it a very low cost of entry.

With Gluster and SoftNAS Cloud, the majority of the cost is in the EC2 instances. This means two things.

  1. Since General Purpose SSD EBS is $0.10 per GB ($0.20 in a High Availability setup), storage cost grows at 2/3 the rate of EFS. At a certain capacity, these solutions become cheaper than EFS.

  2. These prices are without reservations. If you were to reserve the EC2 instances, you could save considerably on that expense.

[Chart: Estimated Monthly Cost]


So what conclusion do I draw from all of this data?

I think EFS will start to replace our use of Gluster in AWS environments, but for clients outside of AWS, Gluster remains a useful tool.

EFS is extremely fast and easy to set up, and it had consistent and reliable performance. Once it is generally released, it will probably become the default for most of our clients.

SoftNAS Cloud had by far the best performance. Its cost is higher at low capacities, but it is worth considering when latency is paramount. At higher capacities, the performance benefits and lower overall cost make SoftNAS Cloud a clear winner.


Date posted: July 27, 2015


Comments

I see that your comparison makes SoftNAS look like the best performer. But your pricing comparison is all over the board: none of the 3 solutions are configured apples to apples; they each have different instance sizes and different drive sizes. Why the differences in configs?

Thank you for the feedback.

I tried to be as apples to apples as possible. The test machine was the same across the board for each test. The differences were mostly based on the technology being used.

  • EFS doesn't use a host machine.
  • Gluster used two t2.smalls because I knew we would not max out the system resources.
  • I consulted with SoftNAS on their setup, and an m3.medium is the smallest they recommend (though it would run on smaller).

The differences in backends were one of the reasons I included a cost analysis. Most clients want either low cost or fast response, and this gives a good baseline for both.

I hope that answers your questions sufficiently.

If SoftNAS recommends an m3.medium, one should have compared Gluster on an m3.medium as well.

The fact that m3s weren't used for both Gluster and SoftNAS completely skews these results to the point of being meaningless.

These all seem like great solutions that would assist with my ability to cluster before I am able to make major code changes. My question, while I'm waiting for EFS preview access, is: are there any issues with NFS-based reliability across any of these platforms? A single master (with standby replication) app server with stability is worth more than clustered app servers with questionable stability... But if stability can be relied on, I'm ready to make a big change to our infrastructure!

EFS is still in preview mode. Is it safe to use in production?

No. Nothing with "preview", "beta", "nightly", etc. is a production-ready solution. Would you have issues using the EFS preview in production? We are talking about Amazon, so it's not likely. Would I recommend using EFS in production? Nope.

I wonder what the release-delay with EFS is all about.
Been in preview forever at this point...

Thanks for the update, very timely from my POV as I'm looking into options on AWS with regard to a NAS service.

We have run gluster for 12 months (migrating from a big name filer solution when we joined AWS), with generally acceptable uptime and performance. But there is a fair sized management overhead and it's missing some features we'd like. We have an added challenge around the legacy code and OS that we need to run (for the time being at least) making EFS more problematic....

I wonder to what extent this can be used as a benchmark of an enterprise workload, where we are talking about several TB or PB of data and large numbers of users.

A working set of 100GB at a few MB/s may not be that representative for a customer making such a decision.

What were the network connections for these storage components?

Since they are all TCP based, I wouldn't be surprised if the raw underlying performance wasn't any better than what iperf or netperf show. Have you ever used a file system test called "bonnie"?

Metal Toad is an Advanced AWS Consulting Partner. Learn more about our AWS Managed Services

About the Author

Nathan Wilkerson, VP of Engineering

Nathan started building computers, programming, and networking with a home IPX network at age 13. Since then he has had a love of all things computer, working in programming, system administration, DevOps, and cloud computing. Over the years he's enriched his knowledge of computers with hands-on experience and by earning his AWS Certified Solutions Architect – Professional certification.

Recently, Nathan has transitioned to a Cloud Operations Manager role. He helps clients and internal teams interface with the Cloud Team using the best practices of Kanban to ensure a speedy response and resolution to tickets.
