There is an interesting thread over on the AVS forums discussing the pros and cons of unRAID, FlexRAID and SnapRAID. Each has its advantages and disadvantages, and the following post summarises them nicely.
- unRAID and FlexRAID offer real-time parity
- both only offer a single parity drive solution at this stage (both have plans for dual parity setups in real-time but who gets there first is anyone’s guess)
- FlexRAID's real-time mode is not as stable as unRAID's (it does not handle anything that does not pre-allocate, and it does not play well with TeraCopy; the author is aware of this issue)
- Only unRAID offers simulated drive failures; all other solutions won't offer up your lost files until you do a full repair
- FlexRAID and unRAID both offer this functionality (optional in FlexRAID, but mandatory for real-time RAID)
- unRAID allows you to both view and operate on the individual drives that comprise the array without impacting realtime parity. Others only allow this functionality in Snapshot mode.
- FlexRAID and unRAID handle sharing via their interfaces
- SnapRAID and similar tools handle sharing via the underlying OS
- FlexRAID is the fastest for real-time parity
- unRAID can incorporate a cache drive
- I’m not sure how snapshot parity speed compares between FlexRAID and SnapRAID (but I am perfectly fine with SnapRAID’s performance)
- On community support, unRAID wins by a country mile: its community is the most active and very helpful
- The SnapRAID and FlexRAID communities are also helpful, but much smaller and less active
- FlexRAID has no limit on how many parity drives you can employ
- SnapRAID is limited to two, but has plans for three parity drives in the future (probably the distant future)
- unRAID does not offer multiple parity drives
- SnapRAID does checks on the block level
- FlexRAID does checks on the file level
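The single-parity schemes discussed above all rest on the same idea: a parity drive holds the XOR of the corresponding blocks on every data drive, so any one lost drive can be rebuilt from the survivors plus parity. A minimal sketch (illustrative only; the byte-sized "drives" and helper are hypothetical, not any product's actual code):

```python
# Sketch of single-drive XOR parity, the principle behind single-parity
# setups in unRAID, SnapRAID and FlexRAID. Tiny 4-byte "drives" stand in
# for real disks purely for illustration.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

drives = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_blocks(*drives)  # what the parity drive stores

# Simulate losing drive 1 and rebuilding it from the rest plus parity.
rebuilt = xor_blocks(drives[0], drives[2], parity)
assert rebuilt == drives[1]
```

This is also why a single parity drive can only survive one failure: with two drives gone, the single XOR equation no longer has a unique solution, which is what the dual-parity plans mentioned above address.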
Thanks to hdkhang for summarising.
There’s been a bit of confusion recently about FlexRAID and NZFS. Brahim, the main developer, is aware of this and has tried to explain the differences:
FlexRAID is a concept and not a product in itself. The core essence of that concept is flexibility and solving many of the shortcomings that plague current storage solutions.
RAID-F, also known as RAID over Filesystem, provides data protection and data pooling over existing file systems. It does that by overlaying its own lightweight, unifying filesystem on top of any file system that the user’s OS can operate over.
NZFS (“Not ZFS”) borrows a number of concepts from ZFS and its RAID suite. NZFS has two modes of operation: RAID under file system and RAID within file system.
In essence, FlexRAID will provide:
- RAID under filesystem (NZFS)
- RAID within filesystem (NZFS)
- RAID over filesystem (RAID-F)
The current FlexRAID implementation of storage pooling has several key advantages over everything else on the market or planned, including:
- Better power saving features (only the disk where the data resides needs to be active)
- Support for drives with existing data (FlexRAID never formats any drive)
- A drive taken from a FlexRAID pool is fully readable outside of the pool and on any other computer
- Snapshot RAID when real-time parity synchronization is not necessary
- Real-time RAID
- Ability to restore specific files instead of the whole disk
- Support for network drives in the storage pool
- Disk spanning for better protection level and utilization
- Multiple RAID engines including support for RAID∞
NZFS takes a different approach and provides pooling below the filesystem; each NZFS storage pool will need to be formatted with your preferred filesystem, be it NTFS, FAT, EXT, etc.
It’s called NZFS because it is not ZFS but will bring many of the ZFS features such as checksum, ZIL, de-dup, copy-on-write, etc. to both Windows and Linux. And, it will be powered by the FlexRAID’s RAID∞ engine.
Brahim has added a blog post explaining NZFS (Next-Generation Zion File System) and going into the typical data storage problem: Optimal Capacity vs Optimal Performance vs Optimal Protection.
NZFS can deal with the above limitations and “NZFS is implemented as a two part series:
- A completely independent RAID system that works with any file system (use your favorite file system on top of it)
- An optional File System designed to take greater advantage of the RAID system and provide advanced features such as dedup, copy-on-write, checksumming, self-healing, etc. The file system component is optional because existing file systems such as ReFS provide some of the features the NZFS file system provides, or the user might simply not need those extra features. NZFS does not try to put you into a box, unlike ZFS with its RAID-Z system.”
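The checksumming and self-healing mentioned above hinge on one mechanism: storing a per-block checksum alongside the data, so silent corruption can be pinpointed to a single block and repaired from redundancy. A hedged sketch of the detection half (this is not NZFS's or ZFS's actual code; block size and layout are illustrative assumptions):

```python
# Illustrative sketch of block-level checksumming, the prerequisite for
# ZFS-style self-healing. Block size is an arbitrary choice here.
import hashlib

BLOCK = 4096

def block_checksums(data: bytes) -> list[str]:
    """SHA-256 checksum per fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

original = bytes(10000)           # pretend file contents (three blocks)
sums = block_checksums(original)  # stored alongside the data

# Flip one byte in the second block to simulate bit rot on disk.
corrupted = bytearray(original)
corrupted[5000] ^= 0xFF

# Re-verify: only the damaged block is flagged, so only that block
# needs to be rebuilt from parity/redundancy.
bad = [i for i, (a, b) in
       enumerate(zip(sums, block_checksums(bytes(corrupted))))
       if a != b]
```

This is also the practical difference between SnapRAID's block-level checks and FlexRAID's file-level checks noted earlier: a block-level scheme can localise and repair damage without rehashing or restoring the whole file.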
The RAID system in NZFS has been designed so that it can implement all standard RAIDs and many non-standard RAIDs, and on top of this it supports Transparent RAID (tRAID). Transparent RAID is pitched as an improved version of unRAID that runs on any modern version of Windows or Linux.
Brahim has also included a clear diagram showing how NZFS supersedes unRAID.
Read the whole post here: A first look at NZFS and replacing unRAID with NZFS’s Transparent RAID (tRAID)
Most RAID-class NASes have supported iSCSI for some time. iSCSI (Internet Small Computer System Interface) has been around for a while; it was developed as a SAN (Storage Area Network) protocol.
You can think of iSCSI as a way to provide computers with the illusion of large volumes of direct-attached storage, while the storage actually sits in a NAS or, more usually, a larger storage farm somewhere on the network. The QNAP diagram below illustrates the concept.
SmallNetBuilder has a nice explanation of how iSCSI performance compares to SMB performance. You may be surprised by the results: NAS Too Slow? Try iSCSI – Setup-more, Features, Performance.
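For a sense of what "the illusion of direct-attached storage" looks like in practice, here is a hedged sketch of attaching an iSCSI LUN on a Linux client using the open-iscsi initiator. The portal address, target IQN and device name below are placeholders, not values from the article:

```shell
# Sketch: attach an iSCSI LUN from a NAS via the open-iscsi initiator.
# Portal IP, IQN and /dev/sdX are placeholders for your environment.

# 1. Discover targets exported by the NAS.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# 2. Log in to one of the discovered targets.
iscsiadm -m node -T iqn.2004-04.com.example:nas.lun0 -p 192.168.1.50 --login

# 3. The LUN now appears as a local block device; format and mount it
#    exactly as if it were direct-attached storage.
mkfs.ext4 /dev/sdX
mount /dev/sdX /mnt/iscsi
```

Note that, unlike an SMB share, the block device is owned by one initiator at a time; that single-client, block-level access is what makes the SMB-vs-iSCSI performance comparison interesting.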
Chris Mason has released experimental Btrfs extensions which enable the file system to natively support RAID 5 and 6 in addition to the existing RAID 0 and 1 support. Mason is lead developer of the file system, which has long been included in the Linux kernel but is still marked as experimental.
In an announcement regarding the new feature, he includes benchmark results obtained using two fast systems containing flash storage. In some of these tests, native Btrfs RAID runs two to three times faster than a multiple-device (MD) array created using mdadm. Mason addressed the MD array directly in some tests and set up a Btrfs partition on it in others.
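The two configurations being compared can be sketched as follows (device names are placeholders, and Btrfs RAID 5/6 was experimental at the time, so this is illustrative rather than a recommendation):

```shell
# Sketch: the two setups from the benchmark. /dev/sdb..sdd are placeholders.

# Native Btrfs RAID 5 across three devices (data and metadata profiles):
mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd

# Equivalent MD approach: build a RAID 5 array with mdadm,
# then put a single-device Btrfs filesystem on top of it.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.btrfs /dev/md0
```

In the native case Btrfs handles striping and parity itself, which is what lets it skip the MD layer that the second setup benchmarks against.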