QTS Hero... ZFS? What? When?

P3R
Guru
Posts: 12375
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: QTS Hero... ZFS? What? When?

Post by P3R » Fri May 22, 2020 3:42 am

QNAPDanielFL wrote:
Fri May 22, 2020 3:22 am
In the future you should be able to buy a license for most current NAS to switch your current NAS to QuTS Hero.
Do you expect that future to be Q2 as stated or do you expect a delay? If a delay, what would be the new expected release schedule?
RAID has never been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!

A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.

All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!

Jazz59
Starting out
Posts: 21
Joined: Wed Aug 22, 2012 4:55 pm

Re: QTS Hero... ZFS? What? When?

Post by Jazz59 » Fri May 22, 2020 6:29 am

Hi Daniel,

Will there be a trial period before buying the license? We may be disappointed with the performance if the hardware is not powerful enough, in which case we would return to QTS and have no use for the QuTS hero license.

Jazz59 8)
Qnap TVS-472XT 32GB | NIC QXG-10G2SF-CX4 | 3 HDD Seagate 6TB | 2 SSD NVMe Samsung 970 Evo 1TB | Drive QDA-A2MAR | 2 SSD Samsung 860 Evo 500GB

QNAPDanielFL
Easy as a breeze
Posts: 252
Joined: Fri Mar 31, 2017 7:09 am

Re: QTS Hero... ZFS? What? When?

Post by QNAPDanielFL » Fri May 22, 2020 6:44 am

I don't have the answer to either of those questions right now. I have not heard there is a delay but I can't say for sure. As for a trial, I don't know. But since switching to QuTS Hero requires a reinitialization, switching for a trial and then switching back can be problematic if you have a lot of data on the NAS.

QNAPDanielFL
Easy as a breeze
Posts: 252
Joined: Fri Mar 31, 2017 7:09 am

Re: QTS Hero... ZFS? What? When?

Post by QNAPDanielFL » Fri May 22, 2020 6:48 am

Trexx wrote:
Fri May 22, 2020 3:26 am
QNAPDanielFL wrote:
Fri May 22, 2020 3:22 am
We now have some NAS that come with QuTS Hero included. Those don't charge extra for QuTS Hero because it is already included.

In the future you should be able to buy a license for most current NAS to switch your current NAS to QuTS Hero.

I do expect a charge for this but it should only be a 1-time fee.
Hi Daniel,

Will that license be transferable? Someone might buy QuTS hero for their "backup" NAS, find that it isn't beefy enough to truly benefit from ZFS, and want to move that license to their primary NAS. Will that be possible (similar to some existing QNAP licenses that are tied to the number of active licenses in use rather than to a specific hardware device)?
Once the upgrade to QuTS Hero for the NAS you already own is released, I should have that answer. But until it is released I would not be confident I had the final answer.

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Fri May 22, 2020 9:02 am

QNAPDanielFL wrote:
Fri May 22, 2020 6:44 am
I don't have the answer to either of those questions right now. I have not heard there is a delay but I can't say for sure. As for a trial, I don't know. But since switching to QuTS Hero requires a reinitialization, switching for a trial and then switching back can be problematic if you have a lot of data on the NAS.
Good point. Although I am also wondering: if we are on a QuTS hero trial and the trial expires, how does that work? I expect users would have to reinitialize to go back to a usable regular QTS.

But what if users purchased a QuTS hero license? Can they go from the trial to the licensed version without having to reinitialize, or will they have to reinstall using the non-trial firmware and then enter the license with that version? I don't think that has been explained yet.

If you can answer that would be great; if not, that is fine. But eventually people will want to know how this process will work :'

Like Trexx, I am also wondering about the license being registered to a device. Can it be unregistered from the NAS it was initially registered to at a later time, so it can be registered to another NAS? If we later decide to purchase a replacement model, we would want to recycle the existing license instead of keeping it on the older, replaced model :' Otherwise we may end up having to purchase multiple QuTS hero licenses, which I'm not too keen on :S
QNAPDanielFL wrote:
Fri May 22, 2020 6:48 am
Once the upgrade to QuTS Hero for the NAS you already own is released, I should have that answer. But until it is released I would not be confident I had the final answer.
ok will be waiting for that answer :(
NAS
[Main Server] QNAP TS-877 w. 4tb [ 3x HGST Deskstar NAS (HDN724040ALE640) & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A w. 5x 2TB Samsung F3 (HD203WI) EXT4 Raid5
[Backup] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) single disks.
[^] QNAP TS-659 Pro II
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-228
[^] QNAP TS-128
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Asus AC68U Router|100dl/50ul MBPS FTTH Internet | Win10, WC PC-Intel i7 920 Ivy bridge desktop (1x 512gb Samsung 850 Pro SSD + 1x 4tb HGST Ultrastar 7K4000)


Guides & articles
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Fri May 29, 2020 1:11 pm

P3R wrote:
Wed May 20, 2020 3:54 am
Trexx wrote:
Wed May 20, 2020 2:38 am
Think about autopilot Sonarr, Radarr, Plex DVR, NZBGet, etc. all running in the background all without necessarily real-time/continuous user interaction.
Yes, there are several services installed, but for most users they're not very taxing or always actively in use. Most users are still limited by an internet connection with (relatively) low bandwidth, so for them the main sequential loads are interrupted by occasional short bursts of random access.

The above is nowhere close to having 8 concurrent streams, each pushing as much as possible.
Even the “how do I backup/restore my apps/settings/etc.” will be ugly.
Indeed.

And then the shock for some when they realize that RAID migration and adding disks to a RAID, which they've taken for granted for many years, no longer work.
I found this article:
Summary
So I hope this example clearly illustrates the issue at hand. With ZFS, you either need to buy all storage upfront or you will lose hard drives to redundancy you don't need, reducing the maximum storage capacity of your NAS.

You have to decide what your needs are. ZFS is an awesome file system that offers you way better data integrity protection than other file system + RAID solution combinations.

But implementing ZFS has a certain 'cost'. You must decide if ZFS is worth it for you.
https://louwrentius.com/the-hidden-cost ... e-nas.html
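The capacity trade-off the article describes can be sketched with a bit of arithmetic (my own illustration of its point, not code from the article): growing a pool by adding whole RAIDZ vdevs costs extra parity disks compared with buying one wide vdev upfront.

```python
# Usable data disks for a ZFS pool built from RAIDZ vdevs.
# Illustrates the article's "hidden cost": incremental growth by adding
# vdevs spends more disks on redundancy than one wide vdev bought upfront.

def usable_disks(vdevs):
    """vdevs: list of (width, parity) tuples, e.g. (6, 2) for 6-wide RAIDZ2."""
    return sum(width - parity for width, parity in vdevs)

# 12 disks bought upfront as a single 12-wide RAIDZ2: 10 data disks.
upfront = usable_disks([(12, 2)])

# Same 12 disks reached by starting with a 6-wide RAIDZ2 and later adding
# a second one: only 8 data disks -- two extra disks lost to parity.
incremental = usable_disks([(6, 2), (6, 2)])

print(upfront, incremental)  # 10 8
```

Whether that cost is acceptable is exactly the decision the article asks you to make.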


I think that covers most of the downsides, but I could be wrong. At least it's something others can reference in case ZFS doesn't fit their needs, given those caveats :'


And then you've got Linus Torvalds' ZFS hate comments :(
Linus Torvalds: Don't use ZFS. It’s that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me.

The benchmarks I’ve seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?
phoenix-
2020-01-09

There’s two separate versions of ZFS: Oracle ZFS and OpenZFS. They share a common ancestor (ZFSv28 from SUN), but they are no longer compatible, and should not be confused with each other.

Oracle ZFS is highly proprietary, only runs on Oracle Solaris, and requires big payments from Oracle to access. They actively maintain it, and there’s even some new features that have been added over the years (like native encryption).

OpenZFS is actively maintained via the OpenZFS repo, and is used by dozens of hardware and software companies around the world. Recently, the OpenZFS repo was rebased on the ZFS-on-Linux repo (previously, it used the Illumos repo). There’s a lot of development happening with OpenZFS, and a lot of features have been added since the split with ZFSv28. Yes, it’s licensed under the CDDL which means it will never be integrated directly into the Linux kernel source tree, but it’s part of the source trees for various other OSes (FreeBSD, Illumos, OpenSolaris, Delphix, TruOS) and hardware storage appliance vendors. And the ZFS-on-Linux version is actively maintained and works great on multiple Linux distros.

So, in this instance, Linus is talking out of his **, and really needs to shut up, lock himself in a room, and actually research what, exactly, ZFS, OpenZFS, and ZFS-on-Linux really are. He’s shot himself in the foot before, but this is beyond sad to see, and so very, very, very wrong that’s it’s embarrassing to the whole Linux dev community.

Maybe he should sit in on one of the monthly OpenZFS Dev Meetings. He’d learn a lot!
sj87-

Everything else revolved around licensing issues and how that is simply a no-go for the kernel, and therefore he won't care about ZFS in his role as a kernel maintainer. That probably also explains why his knowledge of ZFS's current state wasn't… well… up to date. He knows he cannot use it, and hence it isn't something he keeps an eye on.
Alfman-

I also think it's fair to say that Torvalds speaking ill of ZFS's technical merits actually stems from his prejudice against its ownership and incompatible license, rather than from ZFS lacking merit for Linux. It clearly has merit, and I think if he were being honest he would have to concede this. Ultimately I agree with his decision not to merge it, but it would have been nice if it could have been merged.
https://www.osnews.com/story/131149/lin ... t-use-zfs/



Zfs RaidZ expansion feature in development?

Matthew Ahrens
Jun 4, 2019

Alpha preview release of RAIDZ expansion is now available. Testing appreciated (on disposable pools only!)
https://twitter.com/OpenZFS/status/9210 ... 44448?s=09
Auzy-

The big drawcard for ZFS was meant to be RAID-Z. The big issue I found, however, was that you simply couldn't expand the RAIDs. If you had a NAS with 4 HDDs in RAID 5, for instance, you couldn't turn it into a 5-HDD RAID 5. I guess it makes sense for datacentres and servers, but it's impractical for home NASes and such.

The copy-on-write functionality is also available on other filesystems now, so it's not as unique as it used to be. It's a pity, because if you could expand the RAIDs within it (and Oracle hadn't bought Sun), I think it would easily have dominated the competition.
p13.-
RAID-Z expansion works now, and it's being adopted left and right. I believe FreeNAS now offers it by default.
Also, one killer ZFS feature for virtualization … zvols.

Personally, i don’t mind btrfs at all. I’ve had my share of troubles with it in the past, but it has gotten much better,
However … there is a lot of functionality that ZFS has that just isn’t present in btrfs. The main ones for me are … raid is broken (that is kind of a big deal!) and no raw device (zvol-like) functionality.
cb88-
2020-01-10

RAID-Z expansion does not work. You can add new vdevs, but you cannot expand existing RAIDZ vdevs; perhaps that is what you are thinking of. Nobody has committed any work on RAIDZ expansion in months, leaving it roughly in an alpha state.
Lennie-
I use ZFS only on servers for backup storage, etc.

RAID-Z expansion is not great, but it is OK; choose your vdevs wisely. 😉

As I understand it, someone is working on partially improving it.

It still won't be great, but it will be improved.
https://www.osnews.com/story/131149/lin ... t-use-zfs/



ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ
By: Matthew Ahrens
JUN 05, 2014

TL;DR: Choose a RAID-Z stripe width based on your IOPS needs and the amount of space you are willing to devote to parity information. If you need more IOPS, use fewer disks per stripe. If you need more usable space, use more disks per stripe. Trying to optimize your RAID-Z stripe width based on exact numbers is irrelevant in nearly all cases.

For best performance on random IOPS, use a small number of disks in each RAID-Z group. E.g., 3-wide RAIDZ1, 6-wide RAIDZ2, or 9-wide RAIDZ3 (all of which use one third of total storage for parity, in the ideal case of using large blocks). This is because RAID-Z spreads each logical block across all the devices (similar to RAID-3, in contrast with RAID-4/5/6). For even better performance, consider using mirroring.

For best reliability, use more parity (e.g. RAIDZ3 instead of RAIDZ1), and architect your groups to match your storage hardware. E.g, if you have 10 shelves of 24 disks each, you could use 24 RAIDZ3 groups, each with 10 disks - one from each shelf. This can tolerate any 3 whole shelves dying (or any 1 whole shelf dying plus any 2 other disks dying).

For best space efficiency, use a large number of disks in each RAID-Z group. Wider stripes never hurt space efficiency. (In certain exceptional cases, use at least 5, 6, or 11 disks for RAIDZ-1, -2, or -3 respectively; see below for more details.) When trading off between these concerns, it is useful to know how much it helps to vary the parameters.

For performance on random IOPS, each RAID-Z group has approximately the performance of a single disk in the group. To double your write IOPS, you would need to halve the number of disks in the RAID-Z group. To double your read IOPS, you would need to halve the number of "data" disks in the RAID-Z group (e.g. with RAIDZ-2, go from 12 to 7 disks). Note that streaming read performance is independent of RAIDZ configuration, because only the data is read. Streaming write performance is proportional to space efficiency.

For space efficiency, typically doubling the number of "data" disks will halve the amount of parity per MB of data (e.g. with RAIDZ-2, going from 7 to 12 disks will reduce the amount of parity information from 40% to 20%).
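The 40% and 20% figures above are just parity disks divided by data disks; a quick sanity check (my own arithmetic, not from the post):

```python
# Parity overhead per unit of data = parity disks / data disks,
# assuming large blocks and ignoring per-block padding.

def parity_per_data(width, parity):
    return parity / (width - parity)

print(parity_per_data(7, 2))   # 0.4 -- 7-wide RAIDZ2: 2 parity / 5 data
print(parity_per_data(12, 2))  # 0.2 -- 12-wide RAIDZ2: 2 parity / 10 data
```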

RAID-Z block layout

RAID-Z parity information is associated with each block, rather than with specific stripes as in RAID-4/5/6. Take for example a 5-wide RAIDZ-1. A 3-sector block will use one sector of parity plus 3 sectors of data (e.g. the yellow block at left in row 2). An 11-sector block will use 1 parity + 4 data + 1 parity + 4 data + 1 parity + 3 data (e.g. the blue block at left in rows 9-12). Note that if several blocks share what would traditionally be thought of as a single "stripe", there will be multiple parity blocks in that "stripe". RAID-Z also requires that each allocation be a multiple of (p+1), so that when it is freed it does not leave a free segment which is too small to be used (i.e. too small to fit even a single sector of data plus p parity sectors; e.g. the light blue block at left in rows 8-9 with 1 parity + 2 data + 1 padding). Therefore, RAID-Z requires a bit more space for parity and overhead than RAID-4/5/6.
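The allocation rules in that paragraph can be expressed as a small calculator (a sketch based on the description above, not the actual ZFS code):

```python
import math

def raidz_alloc_sectors(data_sectors, width, parity):
    """Sectors allocated for one block on a RAIDZ vdev, per the rules above:
    one parity sector per stripe of (width - parity) data sectors, with the
    total rounded up to a multiple of (parity + 1) to avoid unusably small
    free segments."""
    stripes = math.ceil(data_sectors / (width - parity))
    total = data_sectors + stripes * parity
    step = parity + 1
    return math.ceil(total / step) * step

# The examples from the text, on a 5-wide RAIDZ1:
print(raidz_alloc_sectors(3, 5, 1))   # 4  = 1 parity + 3 data
print(raidz_alloc_sectors(11, 5, 1))  # 14 = 3 parity + 11 data
print(raidz_alloc_sectors(2, 5, 1))   # 4  = 1 parity + 2 data + 1 padding
```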

A misunderstanding of this overhead has caused some people to recommend using "(2^n)+p" disks, where p is the number of parity "disks" (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that, for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true. The primary flaw with this recommendation is that it assumes you are using small blocks whose size is a power of 2. While some workloads (e.g. databases) do use 4KB or 8KB logical block sizes (i.e. recordsize=4K or 8K), these workloads benefit greatly from compression. At Delphix, we store Oracle, MS SQL Server, and PostgreSQL databases with LZ4 compression and typically see a 2-3x compression ratio. This compression is more beneficial than any RAID-Z sizing. Due to compression, the physical (allocated) block sizes are not powers of two; they are odd sizes like 3.5KB or 6KB. This means that we cannot rely on any exact fit of (compressed) block size to the RAID-Z group width.

To help understand where these (generally incorrect) recommendations come from, and what the hypothetical benefit would be if you were to use recordsize=8K and compression=off with various RAID-Z group widths, I have created a spreadsheet which shows how much space is used for parity+padding given various block sizes and RAID-Z group widths, for RAIDZ1, 2, or 3. You can see that there are a few cases where, if setting a small recordsize with 512b sector disks and not using compression, using (2^n+p) disks uses substantially less space than one less disk. However, more disks in the RAID-Z group is never worse for space efficiency.


To summarize: Use RAID-Z. Not too wide. Enable compression.

Further reading on RAID-Z:

Jeff Bonwick on the design of RAID-Z (2005)
Adam Leventhal on the math behind double-parity RAIDZ2 (2006), and the need for RAIDZ3 (2009).
https://www.delphix.com/blog/delphix-en ... love-raidz

ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner
We exhaustively tested ZFS and RAID performance on our Storage Hot Rod server.
JIM SALTER - 5/18/2020

Conclusions
If you're looking for raw, unbridled performance it's hard to argue against a properly-tuned pool of ZFS mirrors. RAID10 is the fastest per-disk conventional RAID topology in all metrics, and ZFS mirrors beat it resoundingly—sometimes by an order of magnitude—in every category tested, with the sole exception of 4KiB uncached reads.

ZFS' implementation of striped parity arrays—the RAIDz vdev type—is a bit more of a mixed bag. Although RAIDz2 decisively outperforms RAID6 on writes, it underperforms it significantly on 1MiB reads. If you're implementing a striped parity array, 1MiB is hopefully the blocksize you're targeting in the first place, since those arrays are particularly awful with small blocksizes.

When you add in the wealth of additional features ZFS offers—incredibly fast replication, per-dataset tuning, automatic data healing, high-performance inline compression, instant formatting, dynamic quota application, and more—we think it's difficult to justify any other choice for most general-purpose server applications.

ZFS still has more performance options to offer—we haven't yet covered the support vdev classes, LOG, CACHE, and SPECIAL. We'll cover those—and perhaps experiment with recordsize larger than 1MiB—in another fundamentals of storage chapter soon.
https://arstechnica.com/gadgets/2020/05 ... ne-winner/
joffe Smack-Fu Master, in training
MAY 18, 2020

Performance is all well and good, and ZFS ranges from "good enough" to "outstanding" in that category.

What makes this comparison apples to oranges is that ZFS has self-healing properties and on-disk consistency at all times. The hardware RAID configuration will happily corrupt your data without even knowing it.

Presumably, one of the main reasons anyone would consider a redundant array of disks is to be able to keep their files if a drive fails. If you care about your data, you should also consider the value of copy-on-write, checksums, and self-healing.

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Fri May 29, 2020 2:38 pm

Return to RAID: The Ars readers “What If?” edition
Readers requested RAID retests. Redundant? Ridiculous!
JIM SALTER - 5/27/2020

In earlier coverage pitting ZFS against Linux kernel RAID, some readers were concerned that we had missed some tricks for mdraid tuning. In particular, Louwrentius wanted us to retest mdadm with bitmaps disabled, and targetnovember thought that perhaps XFS might outperform ext4.

Write intent bitmaps are an mdraid feature that allows disks that have dropped off and re-entered the array to resync rather than rebuild from scratch. The "age" of the bitmap on the returning disk is used to determine what data has been written in its absence—which allows it to be updated with the new data only, rather than rebuilt from scratch.

XFS and ext4 are simply two different filesystems. Ext4 is the default root filesystem on most distributions, and XFS is an enterprise heavy-hitter most commonly seen in arrays in the hundreds or even thousands of tebibytes. We tested both this time, with bitmap support disabled.

Running the entire panoply of tests we used in earlier articles isn't trivial—the full suite, which tests a wide range of topologies, blocksizes, process numbers and I/O types, takes around 18 hours to complete. But we found the time to run some tests against the heavyweight topologies—that is to say, the ones with all eight disks active.

A note on today's results
The framework we used for the ZFS testing automatically destroys, builds, formats, and mounts arrays as well as running the actual tests. Our original mdadm tests were run individually and manually. To make sure we had the best apples-to-apples experience, we adapted the framework to function with mdadm.

During this adaptation, we discovered a problem with our 4KiB asynchronous write test. For ZFS, we used --numjobs=8 --iodepth=8 --size=512M. This creates eight separate files of 512MiB apiece, for the eight separate fio processes to work with. Unfortunately, this filesize is just small enough for mdraid to decide to commit the entire test in a single sequential batch, rather than actually doing 4GiB worth of random writes.

In order to get mdadm to cooperate, we needed to adjust upwards until we reached --size=2G—at which point mdadm's write throughput plummeted to less than 20 percent of its "burst" throughput when using smaller files. Unfortunately, this also extends the 4KiB asynchronous write test duration enormously—and even fio's --time_based option doesn't help, since in the first few hundred milliseconds, mdraid has already accepted the entire workload into its write buffer.
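For reference, the fio parameters under discussion would look roughly like this. This is a reconstruction from the numbers given in the article; the job name, ioengine, and other flags are my assumptions of typical values, not the authors' actual harness.

```python
# Builds the fio command line discussed above. Only --numjobs, --iodepth and
# --size come from the article; --name, --ioengine, --direct, --rw and --bs
# are assumed typical values for a 4KiB random-write test.

def fio_cmd(size, rw="randwrite", bs="4k", numjobs=8, iodepth=8):
    return ("fio --name=test --ioengine=libaio --direct=1 "
            f"--rw={rw} --bs={bs} --numjobs={numjobs} "
            f"--iodepth={iodepth} --size={size}")

print(fio_cmd("512M"))  # per-process files small enough for mdraid to batch
print(fio_cmd("2G"))    # larger files force genuinely random write behavior

# Total data touched is numjobs * size: 8 x 512MiB = 4 GiB.
print(8 * 512 // 1024, "GiB")
```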

Since our test results would otherwise be from slightly different fio configurations, we ran new tests for both ZFS and mdraid with default bitmaps enabled, in addition to the new --bitmap none and XFS filesystem tests.

RAIDz2 vs mdraid6
Although we're only testing eight-disk wide configurations today, we are testing both striped parity and striped mirror configurations. First, we'll compare our parity options—ZFS RAIDz2 and Linux mdraid6.

Conclusions
While disabling bitmap support does have some impact on mdraid6's and mdraid10's write performance, it's not night and day in our testing and does not materially alter either topology's relationship to its closest ZFS equivalent.

We don't recommend disabling bitmaps whether you care about that performance relationship to ZFS or not. Safety features are important, and mdraid is a little more fragile without bitmaps. There is an option for "external" bitmaps, which can be stored on a fast SSD, but we don't recommend that, either—we've seen quite a few complaints about problems with corrupt external bitmaps.

If your big criterion is performance, we can't recommend XFS over ext4, either. XFS trailed ext4 in nearly every test, sometimes significantly. Administrators with massive arrays—hundreds of tebibytes or more—may have other, more stability- and testing-related reasons to choose XFS. But hobbyists with a few disks are well served with either and, it seems, can get a little more performance out of ext4.
full article
https://arstechnica.com/gadgets/2020/05 ... f-edition/

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Sat May 30, 2020 11:37 am

latest release date news
QNAPDaniel

[+5]QNAP OFFICIAL SUPPORT
15 days ago

Our NAS with QuTS Hero built in have been released. But we don't yet have the ability to upgrade to QuTS Hero for our NAS that don't already come with it. I expect that to be available in about 1-2 months, but I don't know for sure.
sauce
https://www.reddit.com/r/qnap/comments/ ... e/fqm3tlp/

P3R
Guru
Posts: 12375
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: QTS Hero... ZFS? What? When?

Post by P3R » Sat May 30, 2020 8:07 pm

Still only testing 8 parallel sessions, so an extremely random load that is not typical of home and SMB usage. Home and SMB use would typically be one or a few sessions going full blast at a time and/or multiple low-bandwidth streams (streaming media and internet-throttled downloads), where RAIDZ1 (RAID 5), RAIDZ2 (RAID 6) or RAIDZ3 would be more suitable disk configurations.

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Mon Jun 01, 2020 3:11 am

FreeNAS System built on Consumer Hardware & NON-ECC Memory Follow Up after running For Two Years
https://www.youtube.com/watch?v=V6rUHSUe_PU

A ZFS-with-non-ECC-RAM scenario over a two-year period, hm :'


Why Scrubbing ZFS Without ECC RAM Probably Won't Corrupt Everything
https://www.youtube.com/watch?v=52x4PSxbjUg


Ryzen: Finding & Running 2666+ ECC. Or Build our own ECC? (17 Jul 2018 )
https://www.youtube.com/watch?v=1NxSZil8KS8

by Dr. Ian Cutress on May 4, 2020

An ECC Question
Also, to address the issue of ECC on AMD's Ryzen and Threadripper platforms. ECC lies in this region of 'it kind of works' but isn't validated. There is what's called 'unofficial support', which is different to 'official qualification'. Technically, none of the Ryzen and Threadripper CPUs are 'officially qualified' for ECC, however most of them (if not all) will exhibit unofficial support. This means that it might work, but AMD won't give you assistance for it. There are two caveats to this:

First, it requires motherboard support. Some vendors are designing their boards with ECC support, and some will formally qualify supporting ECC. Note that even if the vendor lists official support, you are in 'unofficial support' from AMD's perspective.


Secondly, there's the 'is it working' question. Sure you can have a CPU that unofficially supports ECC, and ECC memory in a motherboard that 'officially supports' ECC, and there are tools in the OS to determine that all the parts of the chain support it. But the next question is if it actually works - some software only checks the 'does it support ECC' flag, rather than actually testing for it. There are reports of users who, by most measures, have everything in the chain sorted and reported as working, but none of it is actually enabled. This could be down to specific drivers, or a BIOS issue. Some software might say 'ECC found, running, but not enabled', or words to that effect. Ultimately you need the ability to support ECC tracking, which often isn't supported natively on consumer grade motherboards. On server grade motherboards, it is.

It's a minefield, and your mileage may vary. Our recommendation here is that if you absolutely need an AMD CPU with ECC as a mission critical part of your build, go for EPYC.
https://www.anandtech.com/show/11891/be ... rkstations
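On Linux, one way to see whether ECC error reporting is actually running (as opposed to every component merely claiming support, as the article warns) is to check for EDAC memory controllers in sysfs. A hedged sketch; the path is the standard EDAC location, but the behavior is kernel- and hardware-dependent:

```python
# If the kernel's EDAC subsystem has registered no memory controllers,
# ECC error reporting is not active even when CPU, board and DIMMs all
# claim ECC support. Sketch only; kernel- and hardware-dependent.
import os

def ecc_reporting_active(edac_root="/sys/devices/system/edac/mc"):
    if not os.path.isdir(edac_root):
        return False
    return any(name.startswith("mc") for name in os.listdir(edac_root))

if __name__ == "__main__":
    print("ECC error reporting active:", ecc_reporting_active())
```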

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Tue Jun 16, 2020 10:34 pm

What Is ZFS?: A Brief Primer
https://www.youtube.com/watch?v=lsFDp-W1Ks0


can i plz have quts hero now :(

occamsrazor
Know my way around
Posts: 234
Joined: Tue Mar 30, 2010 8:30 pm

Re: QTS Hero... ZFS? What? When?

Post by occamsrazor » Tue Jun 16, 2020 10:48 pm

Moogle Stiltzkin wrote:
Tue Jun 16, 2020 10:34 pm
can i plz have quts hero now :(
No! :-) It’s Q3 now...

“The QuTS hero license program will be available from Q3 2020”
https://www.qnap.com/quts-hero/en/
TS-451 [4 x 10TB WD Reds in Raid-5]
TS-239 Pro II [2 x 3TB in Raid-0]
pfSense router and Ubiquiti Unifi switches
Mac Minis, MacBook Pro, iPhones

antik
Know my way around
Posts: 119
Joined: Mon May 18, 2015 2:51 pm

Re: QTS Hero... ZFS? What? When?

Post by antik » Tue Jun 16, 2020 11:36 pm

Or you can buy one of the available QuTS hero edition rack models and enjoy hero now :twisted:
QuTShero_NAS_edition.jpg
TVS-h1288X-W1250-48G (fw: h4.5.1.1472) + T3 card + QXG-10G1T + GTX 1050Ti + 4x 2,5“ 3,84TB SSD Samsung PM883 (OS, apps, VM's, RAID5) + 8x 10TB Seagate IronWolf Pro (RAID5).

TVS-871-16G (backup) (fw: 4.5.1.1480) + 8x 8TB Seagate IronWolf Pro (RAID5, backup and QVR Pro)
Network stuff: QHora-301W, QSW-804-4C, ASUS XG-U2008 and TP-Link TL-SG1008MP. Protected by 2x APC CYBERFORT II 700VA.

Moogle Stiltzkin
Ask me anything
Posts: 9389
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin » Thu Sep 10, 2020 5:02 pm

https://www.youtube.com/watch?v=MIjRBmzXSLE

@17:35

Hang on, so is QuTS hero for other models delayed to Q1 2021 now o-O ?

pkelecy
Starting out
Posts: 33
Joined: Fri Mar 09, 2018 11:56 pm

Re: QTS Hero... ZFS? What? When?

Post by pkelecy » Thu Sep 10, 2020 11:23 pm

It's available on the TS-h686/h886 which were just released:

https://www.amazon.com/QNAP-TS-h686-D16 ... B08FR8ZBJD
